paper_id: SP:561e202e2167aa602c815441e8ca52992d81b03b
[ "In this paper, the authors propose a model-based approach with representation balancing (RepB-SDE)to cope with the distribution shift of offline reinforcement learning. RepB-SDE learns a robust representation for the model learning process, which regularizes the distance between the data distribution and the discount stationary distribution of the target policy in the representation space. RepB-SDE adopts the estimation techniques of DualDICE and a novel point is that RepB-SDE plugs this trick into the model-based representation learning and proposes an effective model-based offline RL algorithm.\t\t\t\t\t" ]
One of the main challenges in offline and off-policy reinforcement learning is to cope with the distribution shift that arises from the mismatch between the target policy and the data collection policy. In this paper, we focus on a model-based approach, particularly on learning the representation for a robust model of the environment under the distribution shift, which was first studied as the Representation Balancing MDP (RepBM). Although this prior work has shown promising results, a number of shortcomings still hinder its applicability to practical tasks. In particular, we address the curse of horizon exhibited by RepBM, which causes it to reject most of the pre-collected data in long-term tasks. We present a new objective for model learning motivated by recent advances in the estimation of stationary distribution corrections. This effectively overcomes the aforementioned limitation of RepBM, while also extending naturally to continuous action spaces and stochastic policies. We also present an offline model-based policy optimization algorithm using this new objective, which yields state-of-the-art performance on a representative set of benchmark offline RL tasks.
[ { "affiliations": [], "name": "Byung-Jun Lee" }, { "affiliations": [], "name": "Jongmin Lee" }, { "affiliations": [], "name": "Kee-Eung Kim" } ]
[ { "authors": [ "Rishabh Agarwal", "Dale Schuurmans", "Mohammad Norouzi" ], "title": "An optimistic perspective on offline reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Arthur Argenson", "Gabriel Dulac-Arnold" ], "title": "Model-based offline planning", "venue": "arXiv preprint arXiv:2008.05556,", "year": 2020 }, { "authors": [ "Peter L Bartlett", "Shahar Mendelson" ], "title": "Rademacher and Gaussian complexities: Risk bounds and structural results", "venue": "Journal of Machine Learning Research,", "year": 2002 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Fernando Pereira" ], "title": "Analysis of representations for domain adaptation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2007 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Justin Fu", "Aviral Kumar", "Ofir Nachum", "George Tucker", "Sergey Levine" ], "title": "D4RL: Datasets for deep data-driven reinforcement learning", "venue": "arXiv preprint arXiv:2004.07219,", "year": 2020 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Carles Gelada", "Marc G Bellemare" ], "title": "Off-policy deep reinforcement learning by bootstrapping the covariate shift", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Arthur Gretton", "Karsten M Borgwardt", "Malte J Rasch", "Bernhard Schölkopf", "Alexander Smola" ], "title": "A kernel two-sample test", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Assaf Hallak", "Shie Mannor" ], "title": "Consistent on-line off-policy evaluation", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Wassily Hoeffding" ], "title": "Probability inequalities for sums of bounded random variables", "venue": "Journal of the American Statistical Association,", "year": 1963 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Modelbased policy optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nan Jiang", "Lihong Li" ], "title": "Doubly robust off-policy value evaluation for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Fredrik D Johansson", "Nathan Kallus", "Uri Shalit", "David Sontag" ], "title": "Learning weighted representations for generalization across designs", "venue": "arXiv preprint arXiv:1802.08598,", "year": 2018 }, { "authors": [ "Michael Kearns", "Satinder Singh" ], "title": "Near-optimal reinforcement learning in polynomial time", "venue": "Machine learning,", "year": 2002 }, { "authors": [ "Rahul Kidambi", "Aravind Rajeswaran", "Praneeth Netrapalli", "Thorsten Joachims" ], "title": 
"MOReL: Modelbased offline reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Aviral Kumar", "Justin Fu", "Matthew Soh", "George Tucker", "Sergey Levine" ], "title": "Stabilizing off-policy q-learning via bootstrapping error reduction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aviral Kumar", "Aurick Zhou", "George Tucker", "Sergey Levine" ], "title": "Conservative Q-learning for offline reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Byung-Jun Lee", "Jongmin Lee", "Peter Vrancx", "Dongho Kim", "Kee-Eung Kim" ], "title": "Batch reinforcement learning with hyperparameter gradients", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Sergey Levine", "Aviral Kumar", "George Tucker", "Justin Fu" ], "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems", "venue": "arXiv preprint arXiv:2005.01643,", "year": 2020 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Qiang Liu", "Lihong Li", "Ziyang Tang", "Dengyong Zhou" ], "title": "Breaking the curse of horizon: Infinitehorizon off-policy estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yao Liu", "Omer Gottesman", "Aniruddh Raghu", "Matthieu Komorowski", "Aldo A Faisal", "Finale DoshiVelez", "Emma Brunskill" ], "title": "Representation balancing MDPs for off-policy policy evaluation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yao Liu", "Adith Swaminathan", "Alekh Agarwal", "Emma Brunskill" ], "title": "Off-policy policy gradient with state distribution correction", "venue": "arXiv preprint arXiv:1904.08473,", "year": 2019 }, { "authors": [ "Tatsuya Matsushima", "Hiroki Furuta", "Yutaka Matsuo", "Ofir Nachum", "Shixiang Gu" ], "title": "Deploymentefficient reinforcement learning via model-based offline optimization", "venue": "arXiv preprint arXiv:2006.03647,", "year": 2020 }, { "authors": [ "Colin McDiarmid" ], "title": "On the method of bounded differences", "venue": "Surveys in combinatorics,", "year": 1989 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Ali Mousavi", "Lihong Li", "Qiang Liu", "Denny Zhou" ], "title": "Black-box off-policy estimation for infinitehorizon reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Alfred Müller" ], "title": "Integral probability metrics and their generating classes of functions", "venue": "Advances in Applied Probability,", "year": 1997 }, { "authors": [ "Ofir Nachum", "Yinlam Chow", "Bo Dai", "Lihong Li" ], "title": 
"DualDICE: Behavior-agnostic estimation of discounted stationary distribution corrections", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ofir Nachum", "Bo Dai", "Ilya Kostrikov", "Yinlam Chow", "Lihong Li", "Dale Schuurmans" ], "title": "AlgaeDICE: Policy gradient from arbitrary experience", "venue": "arXiv preprint arXiv:1912.02074,", "year": 2019 }, { "authors": [ "Xue Bin Peng", "Aviral Kumar", "Grace Zhang", "Sergey Levine" ], "title": "Advantage-weighted regression: Simple and scalable off-policy reinforcement learning", "venue": null, "year": 1910 }, { "authors": [ "Doina Precup" ], "title": "Eligibility traces for off-policy policy evaluation", "venue": "Computer Science Department Faculty Publication Series, pp", "year": 2000 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc V Le" ], "title": "Swish: a self-gated activation function", "venue": "arXiv preprint arXiv:1710.05941,", "year": 2017 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Uri Shalit", "Fredrik D Johansson", "David Sontag" ], "title": "Estimating individual treatment effect: generalization bounds and algorithms", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Bharath K Sriperumbudur", "Kenji Fukumizu", "Arthur Gretton", "Bernhard Schölkopf", "Gert RG Lanckriet" ], "title": "On integral probability metrics, φ-divergences and binary classification", "venue": "arXiv preprint arXiv:0901.2698,", "year": 2009 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Introduction to reinforcement learning, volume 135", "venue": "MIT press Cambridge,", "year": 1998 }, { "authors": [ "Richard S Sutton", "Hamid R Maei", "Csaba Szepesvári" ], "title": "A convergent o(n) temporal-difference algorithm for off-policy learning with linear function approximation", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "Phillip Swazinna", "Steffen Udluft", "Thomas Runkler" ], "title": "Overcoming model bias for robust offline deep reinforcement learning", "venue": "arXiv preprint arXiv:2008.05533,", "year": 2020 }, { "authors": [ "Ziyang Tang", "Yihao Feng", "Lihong Li", "Dengyong Zhou", "Qiang Liu" ], "title": "Doubly robust bias reduction in infinite horizon off-policy estimation", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Philip Thomas", "Emma Brunskill" ], "title": "Data-efficient off-policy policy evaluation for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Yifan Wu", "George Tucker", "Ofir Nachum" ], "title": "Behavior regularized offline reinforcement learning", "venue": "arXiv preprint arXiv:1911.11361,", "year": 2019 }, { "authors": [ "Tianhe Yu", "Garrett Thomas", "Lantao Yu", "Stefano Ermon", "James Zou", "Sergey Levine", "Chelsea Finn", "Tengyu Ma" ], "title": "MOPO: Model-based offline policy optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Rich Zemel", "Yu Wu", "Kevin 
Swersky", "Toni Pitassi", "Cynthia Dwork" ], "title": "Learning fair representations", "venue": null, "year": 2021 }, { "authors": [ "Liu" ], "title": "2018b), which is originally from the implementation of RLPy3", "venue": null, "year": 2018 }, { "authors": [ "Liu" ], "title": "2018b) to optimize the target policy for the HIV", "venue": null, "year": 2018 }, { "authors": [ "Liu" ], "title": "For the purpose of comparison, we minimize the L2 distance between the model prediction and the desired outcome from data, which corresponds to using a model of Gaussian predictive distribution with fixed variance. We standardized the inputs and outputs of the neural network and used Adam (Kingma & Ba, 2014) with a learning rate of 3× 10−4 for the optimization", "venue": null, "year": 2018 }, { "authors": [ "Liu" ], "title": "2018b) and use dot product kernel k(φ(s), φ(s̄)) = φ(s)>φ(s̄) for the OPE experiment, which is not universal but allows us to avoid search of kernel hyperparameters, such as length-scales. After the training, we generate another 200 trajectories (50 in case of HIV simulator), and rollout in both true and simulated (based on learned model) environments to evaluate models", "venue": null, "year": 2021 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) has accomplished remarkable results in a wide range of domains, but its successes were mostly based on a large number of online interactions with the environment. However, in many real-world tasks, exploratory online interactions are either very expensive or dangerous (e.g. robotics, autonomous driving, and healthcare), and applying a standard online RL would be impractical. Consequently, the ability to optimize RL agents reliably without online interactions has been considered as a key to practical deployment, which is the main goal of batch RL, also known as offline RL (Fujimoto et al., 2019; Levine et al., 2020).\nIn an offline RL algorithm, accurate policy evaluation and reliable policy improvement are both crucial for the successful training of the agent. Evaluating policies in offline RL is essentially an off-policy evaluation (OPE) task, which aims to evaluate the target policy given the dataset collected from the behavior policy. The difference between the target and the behavior policies causes a distribution shift in the estimation, which needs to be adequately addressed for accurate policy evaluation. OPE itself is one of the long-standing hard problems in RL (Sutton et al., 1998; 2009; Thomas & Brunskill, 2016; Hallak & Mannor, 2017).\nHowever, recent offline RL studies mainly focus on how to improve the policy conservatively while using a common policy evaluation technique without much considerations for the distribution shift, e.g. mean squared temporal difference error minimization or maximum-likelihood training of environment model (Fujimoto et al., 2019; Kumar et al., 2019; Yu et al., 2020). While conservative policy improvement helps the policy evaluation by reducing the off-policyness, we hypothesize that addressing the distribution shift explicitly during the policy evaluation can further improve the overall performance, since it can provide a better foundation for policy improvement.\nTo this end, we aim to explicitly address the distribution shift of the OPE estimator used in the offline RL algorithm. In particular, we focus on the model-based approach, where we train an\nenvironment model robust to the distribution shift. One of the notable prior works is Representation Balancing MDP (RepBM) (Liu et al., 2018b), which regularizes the representation learning of the model to be invariant between the distributions. However, despite the promising results by RepBM, its step-wise estimation of the distance between the distributions has a few drawbacks that limit the algorithm from being practical: not only it assumes a discrete-action task where the target policy is deterministic, but it also performs poorly in long-term tasks due to the curse of horizon of step-wise importance sampling (IS) estimators (Liu et al., 2018a).\nTo address these limitations, we present the Representation Balancing with Stationary Distribution Estimation (RepB-SDE) framework, where we aim to learn a balanced representation by regularizing, in the representation space, the distance between the data distribution and the discounted stationary distribution induced by the target policy. Motivated by the recent advances in estimating stationary distribution corrections, we present a new representation balancing objective to train a model of the environment that no longer suffers from the curse of horizon. 
We empirically show that the model trained with the RepB-SDE objective is robust to the distribution shift in the OPE task, particularly when the difference between the target and the behavior policies is large. We also introduce a model-based offline RL algorithm based on the RepB-SDE framework and report its performance on the D4RL benchmark (Fu et al., 2020), showing state-of-the-art results on a representative set of tasks." }, { "heading": "2 RELATED WORK", "text": "Learning balanced representation Learning a representation invariant to specific aspects of data is an established method for overcoming the distribution shift that arises in unsupervised domain adaptation (Ben-David et al., 2007; Zemel et al., 2013) and in causal inference from observational data (Shalit et al., 2017; Johansson et al., 2018). These works show that imposing a bound on the generalization error under the distribution shift leads to an objective that learns a balanced representation, under which the training and the test distributions look similar. RepBM (Liu et al., 2018b) can be seen as a direct extension to the sequential case, which encourages the representation to be invariant under the target and behavior policies at each timestep.

Stationary distribution correction estimation (DICE) Step-wise importance sampling (IS) estimators (Precup, 2000) compute importance weights by taking the product of per-step distribution ratios. Consequently, these methods suffer from variance that grows exponentially in the length of the trajectory, a phenomenon called the curse of horizon (Liu et al., 2018a). Recently, techniques for computing a stationary DIstribution Correction Estimation (DICE) have made remarkable progress, effectively addressing the curse of horizon (Liu et al., 2018a; Nachum et al., 2019a; Tang et al., 2020; Zhang et al., 2020; Mousavi et al., 2020). DICE has also been used to explicitly address the distribution shift in online model-free RL, by directly applying IS to the policy and action-value objectives (Liu et al., 2019; Gelada & Bellemare, 2019). We adopt one of these estimation techniques, DualDICE (Nachum et al., 2019a), to measure the distance between the stationary distribution and the data distribution in the representation space.

Offline reinforcement learning There are extensive studies on improving standard online model-free RL algorithms (Mnih et al., 2015; Lillicrap et al., 2016; Haarnoja et al., 2018) for stable learning in the offline setting. The main idea behind them is to conservatively improve the policy by (1) quantifying the uncertainty of the value function estimate, e.g. using bootstrapped ensembles (Kumar et al., 2019; Agarwal et al., 2020), and/or (2) constraining the optimized target policy to be close to the behavior policy (i.e. behavior regularization approaches) (Fujimoto et al., 2019; Kumar et al., 2019; Wu et al., 2019; Lee et al., 2020). A notable exception is AlgaeDICE (Nachum et al., 2019b), which implicitly uses DICE to keep the discounted stationary distribution induced by the target policy inside the data support, similar to this work.

On the other hand, Yu et al. (2020) argued that the model-based approach can be advantageous due to its ability to generalize predictions to states outside of the data support. They introduce MOPO (Yu et al., 2020), which uses truncated rollouts and penalized rewards for conservative policy improvement. 
MOReL (Kidambi et al., 2020) trains a state-action novelty detector and uses it to penalize rewards in the data-sparse region. Matsushima et al. (2020), MOOSE (Swazinna et al., 2020), and MBOP (Argenson & Dulac-Arnold, 2020) guide their policy optimization using the behavior policy, similar to the behavior regularization approaches.

Note that these aforementioned offline RL methods build on the standard approximate dynamic programming algorithm for action-value estimation (model-free) or on a maximum-likelihood environment model (model-based), without explicitly addressing the distribution shift in the estimator. In contrast, we augment the objective for model learning to obtain a robust model under the distribution shift, which, to the best of our knowledge, is the first such attempt in offline RL." }, { "heading": "3 PRELIMINARIES", "text": "A Markov Decision Process (MDP) is specified by a tuple $M = \langle S, A, T, R, d_0, \gamma \rangle$, consisting of state space $S$, action space $A$, transition function $T : S \times A \to \Delta(S)$, reward function $R : S \times A \to \Delta([0, r_{\max}])$, initial state distribution $d_0$, and discount rate $\gamma$. In this paper, we mainly focus on a continuous state space $S \subseteq \mathbb{R}^{d_s}$ and conduct experiments on both discrete action spaces $A = \{a_0, \dots, a_{n_a}\}$ and continuous action spaces $A \subseteq \mathbb{R}^{d_a}$. Given an MDP $M$ and a policy $\pi$, which is a (stochastic) mapping from state to action, a trajectory is generated in the form $s_0, a_0, r_0, s_1, a_1, r_1, \dots$, where $s_0 \sim d_0$ and, for each timestep $t \ge 0$, $a_t \sim \pi(s_t)$, $r_t \sim R(s_t, a_t)$, and $s_{t+1} \sim T(s_t, a_t)$. The goal of RL is to optimize or evaluate a policy based on the normalized expected discounted return: $R^\pi \triangleq (1-\gamma)\,\mathbb{E}_{M,\pi}\big[\sum_{t=0}^{\infty} \gamma^t r_t\big]$.

A useful and important concept throughout the paper is the discounted stationary distribution, which represents the long-term occupancy of states:
\[
d^\pi(s, a) \triangleq (1-\gamma) \sum_{t=0}^{\infty} \gamma^t \Pr(s_t = s, a_t = a \mid M, \pi).
\]
From the definition, it can be observed that $R^\pi = \mathbb{E}_{(s,a)\sim d^\pi}[r(s,a)]$.

Offline RL and off-policy evaluation In this paper, we focus on the offline RL problem, where the agent can only access a static dataset $D = \{(s_i, a_i, r_i, s'_i)\}_{i=1}^{N}$ for the maximization of $R^\pi$. We consider a behavior-agnostic setting where we do not have any knowledge of the data collection process. We denote the empirical distribution of the dataset by $d^D$.

Before improving the policy, we first aim to better evaluate $R^\pi$ given a target policy $\pi$ and a static dataset $D$, which corresponds to an off-policy evaluation (OPE) problem. We mainly focus on a model-based approach, where the algorithm first estimates the unknown dynamics $\hat T, \hat R$ using the dataset $D$. This defines an approximate MDP $\hat M = \langle S, A, \hat T, \hat R, d_0, \gamma \rangle$, with the approximate expected discounted return $\hat R^\pi \triangleq (1-\gamma)\,\mathbb{E}_{\hat M,\pi}\big[\sum_{t=0}^{\infty}\gamma^t r_t\big]$ obtained from $\hat M$. In this paper, we are interested in an MDP estimate $\hat M$ that can effectively reduce the policy evaluation error $|R^\pi - \hat R^\pi|$. In order to do so, we need to learn a good representation of the model that results in a small OPE error. We assume a bijective representation function $\phi : S \times A \to \mathcal{Z}$, where $\mathcal{Z} \subseteq \mathbb{R}^{d_z}$ is the representation space. We define the transition and the reward models in terms of the representation function $\phi$, i.e. $\hat T = \hat T_z \circ \phi$ and $\hat R = \hat R_z \circ \phi$. In practice, where we use a neural network for $\hat T$ and $\hat R$, $z$ can be chosen to be the output of an intermediate hidden layer, so that $\phi$ is represented by the lower layers and $\hat T_z, \hat R_z$ by the remaining upper layers. 
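As a concrete illustration of the identity $R^\pi = \mathbb{E}_{(s,a)\sim d^\pi}[r(s,a)]$ above, the following is a minimal NumPy sketch (our illustration, not from the paper) that estimates the normalized return of a stochastic policy on a toy tabular MDP in two equivalent ways: directly as a discounted return, and as the average reward over samples drawn from $d^\pi$, obtained by terminating a rollout at each step with probability $1-\gamma$:

```python
import numpy as np

# Toy check (not from the paper): R^pi equals the expected reward
# under the discounted stationary distribution d^pi(s, a).
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9
T = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # T[s, a] is a dist. over s'
R = rng.uniform(size=(n_states, n_actions))                       # deterministic rewards
pi = rng.dirichlet(np.ones(n_actions), size=n_states)             # stochastic policy
d0 = np.ones(n_states) / n_states

def rollout_return(horizon=200):
    s, ret = rng.choice(n_states, p=d0), 0.0
    for t in range(horizon):
        a = rng.choice(n_actions, p=pi[s])
        ret += gamma**t * R[s, a]
        s = rng.choice(n_states, p=T[s, a])
    return (1 - gamma) * ret  # normalized return

def sample_from_d_pi():
    # Follow pi, but stop at each step with probability 1 - gamma; the
    # stopping state-action pair is an exact sample from d^pi.
    s = rng.choice(n_states, p=d0)
    while True:
        a = rng.choice(n_actions, p=pi[s])
        if rng.random() > gamma:
            return s, a
        s = rng.choice(n_states, p=T[s, a])

n = 20000
print(np.mean([rollout_return() for _ in range(n)]))       # estimate of R^pi
print(np.mean([R[sample_from_d_pi()] for _ in range(n)]))  # E_{d^pi}[r], matches
```

Both estimates agree up to Monte Carlo error, which is why balancing representations under $d^\pi$, rather than under per-timestep distributions, suffices for evaluating $R^\pi$.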
We define $d^\pi_\phi(z)$ as the discounted stationary distribution on $\mathcal{Z}$ induced by $d^\pi(s,a)$ under the representation function $z = \phi(s,a)$, and similarly $d^D_\phi(z)$." }, { "heading": "4 REPRESENTATION BALANCING OFFLINE MODEL-BASED RL", "text": "" }, { "heading": "4.1 GENERALIZATION ERROR BOUND FOR MODEL-BASED OFF-POLICY EVALUATION", "text": "We aim to construct a model $\hat M$ from the dataset $D$ that can accurately evaluate the policy $\pi$, by minimizing a good upper bound of the policy evaluation error $|R^\pi - \hat R^\pi|$. We define the following point-wise model loss for notational convenience:
\[
\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a) = c_R\, D_{TV}\big(R(r|s,a)\,\big\|\,\hat R_z(r|\phi(s,a))\big) + c_T\, D_{TV}\big(T(s'|s,a)\,\big\|\,\hat T_z(s'|\phi(s,a))\big),
\]
where $c_R = 2(1-\gamma)$ and $c_T = 2\gamma r_{\max}$. Then, we start by restating the simulation lemma (Kearns & Singh, 2002) to bound the policy evaluation error in terms of the point-wise model loss. The proof is available in Appendix A.

Lemma 4.1. Given an MDP $M$ and its estimate $\hat M$ with a bijective representation function $\phi$, i.e. $(\hat T, \hat R) = \langle \hat T_z \circ \phi, \hat R_z \circ \phi \rangle$, the policy evaluation error of a policy $\pi$ can be bounded by:
\[
\big| R^\pi - \hat R^\pi \big| \le \mathbb{E}_{(s,a)\sim d^\pi}\big[\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a)\big]. \quad (1)
\]
Lemma 4.1 has a natural interpretation: if the model error is small in the states frequently visited by following the policy $\pi$, the resulting policy evaluation error will also be small. However, minimizing the RHS of Eq. (1) in the off-policy evaluation (OPE) task is generally intractable since the distribution $d^\pi$ is not directly accessible. Therefore, the common practice has been to construct a maximum-likelihood MDP using $D$ while ignoring the distribution shift, but its OPE performance is not guaranteed.

Instead, we will derive a tractable upper bound on the policy evaluation error by eliminating the direct dependence on $d^\pi$ in Eq. (1). To this end, we adopt a distance metric between the two distributions over representations, $d^\pi_\phi$ and $d^D_\phi$, that can bound their difference in expectations: the Integral Probability Metric (IPM) (Müller, 1997):
\[
\mathrm{IPM}_{\mathcal{G}}(p, q) = \sup_{g \in \mathcal{G}} \big| \mathbb{E}_{z\sim p}[g(z)] - \mathbb{E}_{z\sim q}[g(z)] \big|, \quad (2)
\]
where particular choices of $\mathcal{G}$ make the IPM equivalent to different well-known distances between distributions, e.g. the total variation distance or the Wasserstein distance (Sriperumbudur et al., 2009).

Theorem 4.2. Given an MDP $M$ and its estimate $\hat M$ with a bijective representation function $\phi$, i.e. $(\hat T, \hat R) = \langle \hat T_z \circ \phi, \hat R_z \circ \phi \rangle$, assume that there exists a constant $B_\phi > 0$ and a function class $\mathcal{G} \triangleq \{g : \mathcal{Z} \to \mathbb{R}\}$ such that $\frac{1}{B_\phi}\mathcal{E}_{\phi,\hat R_z,\hat T_z}(\phi^{-1}(\cdot)) \in \mathcal{G}$. Then, for any policy $\pi$,
\[
\big| R^\pi - \hat R^\pi \big| \le \mathbb{E}_{(s,a)\sim d^D}\big[\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a)\big] + B_\phi\, \mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi). \quad (3)
\]
This theorem is an adaptation of Lemma 1 of Shalit et al. (2017) to infinite-horizon model-based policy evaluation, and it follows directly from the definition of $\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi)$, since the IPM upper-bounds the difference in expectations of any function in $\mathcal{G}$. The first term in Eq. (3) corresponds to the fitness to the data following $d^D$, while the second term serves as a regularizer. To see this, note that minimizing only the second term would yield a near-constant representation function, which would hurt the first term since such a representation cannot distinguish states and actions well enough. This exhibits a natural trade-off between optimizing a model that fits the data better and learning a representation that is invariant with respect to $d^\pi_\phi$ and $d^D_\phi$.

Nevertheless, the RHS of Eq. (3) still cannot be evaluated naively due to its dependence on $d^\pi$ in estimating the IPM. 
We address this challenge via a change of variables, known as the DualDICE trick (Nachum et al., 2019a). Define $\nu : \mathcal{Z} \to \mathbb{R}$ as an arbitrary function of state-action representations that satisfies:
\[
\nu(\phi(s,a)) \triangleq g(\phi(s,a)) + \gamma\, \mathbb{E}_{s'\sim T(s,a),\, a'\sim\pi(s')}\big[\nu(\phi(s',a'))\big], \quad \forall (s,a) \in S\times A.
\]
Then we can rewrite the IPM as:
\[
\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi) = \sup_{g\in\mathcal{G}} \Big| \mathbb{E}_{(s,a)\sim d^\pi}[g(\phi(s,a))] - \mathbb{E}_{(s,a)\sim d^D}[g(\phi(s,a))] \Big| = \sup_{\nu\in\mathcal{F}} \Big| (1-\gamma)\, \mathbb{E}_{s\sim d_0,\, a\sim\pi(s)}\big[\nu(\phi(s,a))\big] - \mathbb{E}_{(s,a,s')\sim d^D,\, a'\sim\pi(s')}\big[\nu(\phi(s,a)) - \gamma\,\nu(\phi(s',a'))\big] \Big|, \quad (4)
\]
where
\[
\mathcal{F} = \Big\{ \nu : \nu(z) = \mathbb{E}_{T,\pi}\Big[\textstyle\sum_{t=0}^{\infty} \gamma^t g(\phi(s_t,a_t)) \,\Big|\, (s_0,a_0) = \phi^{-1}(z)\Big],\ g \in \mathcal{G} \Big\}.
\]
In other words, we are now taking a supremum over the new function class $\mathcal{F}$, which captures a function that returns the expected discounted sum of $g(\phi(s_t,a_t))$ following the policy $\pi$ in an MDP $M$, given an initial representation $z$. While it is now difficult to choose $\mathcal{F}$ from $\mathcal{G}$, Eq. (3) can still be kept valid by using a sufficiently rich function class for $\mathcal{F}$. In this work, we choose $\mathcal{F}$ to be the family of functions in the unit ball of a reproducing kernel Hilbert space (RKHS) $\mathcal{H}_k$ with kernel $k$, which allows the following closed-form formula (see Lemma A.3 in the Appendix for details):
\[
\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi)^2 = \mathbb{E}\Big[ k(\phi(s,a), \phi(\bar s,\bar a)) + (1-\gamma)^2 k(\phi(s_0,a_0), \phi(\bar s_0,\bar a_0)) + \gamma^2 k(\phi(s',a'), \phi(\bar s',\bar a')) - 2(1-\gamma)\, k(\phi(s_0,a_0), \phi(\bar s,\bar a)) - 2\gamma\, k(\phi(s',a'), \phi(\bar s,\bar a)) + 2\gamma(1-\gamma)\, k(\phi(s',a'), \phi(\bar s_0,\bar a_0)) \Big], \quad (5)
\]
where the expectation is over $s_0\sim d_0$, $a_0\sim\pi(s_0)$, $(s,a,s')\sim d^D$, $a'\sim\pi(s')$, and an independent copy $\bar s_0\sim d_0$, $\bar a_0\sim\pi(\bar s_0)$, $(\bar s,\bar a,\bar s')\sim d^D$, $\bar a'\sim\pi(\bar s')$.

This completes our derivation of the tractable upper bound on the policy evaluation error (Eq. (3)), whose direct dependence on $d^\pi$ is eliminated by Eq. (5). Finally, we can train a model by minimizing this upper bound, which encourages learning a balanced representation while improving data fitness, and the trained model can readily provide a model-based OPE.

The quantity $\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi)^2$ in Eq. (5) can be estimated via finite random samples, and we denote its sample-based estimator by $\widehat{\mathrm{IPM}}(d^\pi_\phi, d^D_\phi)^2$. We show in the following that a valid upper bound can be established from the sample-based estimators, instead of the exact terms in the RHS of Eq. (3), under certain conditions.

Theorem 4.3. Given an MDP $M$, its estimate $\hat M$ with a bijective representation function $\phi$, i.e. $(\hat T,\hat R) = \langle \hat T_z\circ\phi, \hat R_z\circ\phi\rangle$, and an RKHS $\mathcal{H}_k \subset (\mathcal{Z}\to\mathbb{R})$ induced by a universal kernel $k$ such that $\sup_{z\in\mathcal{Z}} k(z,z) = \bar k$, assume that $f_{\phi,\hat R_z,\hat T_z}(z) = \mathbb{E}_{T,\pi}\big[\sum_{t=0}^{\infty}\gamma^t \mathcal{E}_{\phi,\hat R_z,\hat T_z}(s_t,a_t) \,\big|\, (s_0,a_0) = \phi^{-1}(z)\big] \in \mathcal{H}_k$ with $B_\phi = \|f_{\phi,\hat R_z,\hat T_z}\|_{\mathcal{H}_k}$, and that the loss is bounded by $\bar{\mathcal{E}} = \sup_{s\in S, a\in A}\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a)$. Let $n$ be the number of data points in $D$. Then, with probability $1-2\delta$,
\[
\big|R^\pi - \hat R^\pi\big| \le \frac{1}{n}\sum_{(s,a)\in D}\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a) + B_\phi\,\widehat{\mathrm{IPM}}(d^\pi_\phi, d^D_\phi) + \sqrt{\frac{\bar{\mathcal{E}}^2}{2n}\log\frac{1}{\delta}} + B_\phi\sqrt{\frac{\bar k}{n}}\Big(4 + \sqrt{8\log\frac{3}{\delta}}\Big).
\]
This result can be proved by adapting the convergence results of the empirical estimate of the MMD (Gretton et al., 2012) and Hoeffding's inequality (Hoeffding, 1963). With the choice of an RKHS $\mathcal{H}_k$, we can now interpret $B_\phi$ as the RKHS norm $\|f_{\phi,\hat R_z,\hat T_z}\|_{\mathcal{H}_k}$, which captures the magnitude and the smoothness of the expected cumulative model loss $f_{\phi,\hat R_z,\hat T_z}$. In general, assuming smooth underlying dynamics, we can expect $B_\phi$ to be small when the model error is small. Furthermore, although $\bar k$ depends on the kernel function we use, we can always let $\bar k = 1$ and subsume it into $B_\phi$ as long as it is bounded, i.e. by using $\tilde B_\phi \triangleq B_\phi\sqrt{\bar k}$.
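For concreteness, below is a minimal PyTorch sketch of the sample-based estimator of Eq. (5). This is our illustration, not the paper's released implementation: the function and argument names are ours, and an RBF kernel is shown as one concrete universal choice (the OPE experiments in Appendix B instead use a dot-product kernel).

```python
import torch

def rbf_kernel(x, y, lengthscale=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 l^2)), computed for all pairs.
    d2 = torch.cdist(x, y).pow(2)
    return torch.exp(-d2 / (2.0 * lengthscale**2))

def ipm_hat_squared(z0, z, z_next, gamma, kernel=rbf_kernel):
    """Sample-based estimator of IPM(d^pi_phi, d^D_phi)^2, following Eq. (5).

    z0:     phi(s0, a0) with s0 ~ d0 and a0 ~ pi(s0)         [n, d_z]
    z:      phi(s, a) for dataset transitions (s, a, s')      [n, d_z]
    z_next: phi(s', a') with a' ~ pi(s')                      [n, d_z]
    """
    return (kernel(z, z).mean()
            + (1 - gamma) ** 2 * kernel(z0, z0).mean()
            + gamma ** 2 * kernel(z_next, z_next).mean()
            - 2 * (1 - gamma) * kernel(z0, z).mean()
            - 2 * gamma * kernel(z_next, z).mean()
            + 2 * gamma * (1 - gamma) * kernel(z_next, z0).mean())
```

Because the estimator is differentiable in the representations, gradients flow into $\phi$ when it is used as a regularizer; in finite samples the estimate can be slightly negative, so it should be clamped at zero before taking a square root. In the next section, we develop algorithms based on practical approximations of Eq. (3).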
Detailed comparison to RepBM As previously stated, RepBM (Liu et al., 2018b) is a model-based finite-horizon OPE algorithm that trains the model to have a balanced representation $\phi$, encouraged to be invariant under the target and behavior policies. Specifically, given the deterministic target policy $\pi$ and the behavior policy $\mu$, at each timestep $t$ it defines the factual distribution on $\mathcal{Z}$, given that the actions until timestep $t$ have been executed according to the policy $\pi$: $p^F_{\phi,t}(z) = \Pr(z_t \mid M, \mu, a_{0:t} = \pi(s_{0:t}))$, and the counterfactual distribution on $\mathcal{Z}$ under the same condition except for the action at timestep $t$: $p^{CF}_{\phi,t}(z) = \Pr(z_t \mid M, \mu, a_{0:t-1} = \pi(s_{0:t-1}), a_t \ne \pi(s_t))$. Then, RepBM bounds the OPE error as¹
\[
\big|R^\pi - \hat R^\pi\big| \le (1-\gamma)\sum_{t=0}^{\infty}\gamma^t\Big(\mathbb{E}_{p(s_t|M,\mu,a_{0:t}=\pi(s_{0:t}))}\big[\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s_t,\pi(s_t))\big] + B_{\phi,t}\,\mathrm{IPM}_{\mathcal{G}_t}(p^F_{\phi,t}, p^{CF}_{\phi,t})\Big).
\]
Although RepBM achieves performance improvements over other OPE algorithms, we found a number of practical challenges: by the definition of $\mathrm{IPM}_{\mathcal{G}_t}(p^F_{\phi,t}, p^{CF}_{\phi,t})$, it requires a discrete-action environment and a deterministic policy $\pi$, which cannot be met by many practical RL settings. In addition, since the sample-based estimation of $\mathrm{IPM}_{\mathcal{G}_t}(p^F_{\phi,t}, p^{CF}_{\phi,t})$ requires samples consistent with the policy $\pi$, i.e. $a_{0:t-1} = \pi(s_{0:t-1})$, the algorithm rejects exponentially many samples with respect to $t$, which is the curse of horizon (Liu et al., 2018a). When there is a large difference between the behavior and the target policies in long-term tasks, its implementation becomes close to using the maximum-likelihood objective, which can also be observed empirically in our experiments. In contrast, our work is free from the aforementioned practical limitations by performing balancing between the discounted stationary distribution $d^\pi$ and the data distribution $d^D$, leveraging the recent advances in stationary distribution correction estimation (i.e. the DualDICE trick) to overcome the difficulties pertinent to the expectation with respect to $d^\pi$ required to evaluate the IPM in the objective.

¹We adapted their formulation to the infinite-horizon discounted MDP setting." }, { "heading": "4.2 REPRESENTATION BALANCING WITH STATIONARY DISTRIBUTION ESTIMATION", "text": "In the following, we describe algorithms for OPE and offline RL based on practical approximations to Eq. (3), which we call the RepB-SDE framework.

Objective for off-policy evaluation As mentioned earlier, we aim to minimize the upper bound of the OPE error $|R^\pi - \hat R^\pi|$ specified in Theorem 4.2. To make the RHS of Eq. (3) tractable for optimization, we replace the intractable total variation distance with the KL-divergence, which can easily be minimized by maximizing the data log-likelihood. We also replace the IPM with its sample-based estimator to obtain the learning objective:
\[
\min_{\phi,\hat R_z,\hat T_z}\ \frac{1}{n}\sum_{(s,a,s',r)\in D}\Big[\underbrace{-\log \hat R_z(r|\phi(s,a))}_{\text{from } D_{KL}(R(r|s,a)\|\hat R_z(r|\phi(s,a)))}\ \underbrace{-\ \log \hat T_z(s'|\phi(s,a))}_{\text{from } D_{KL}(T(s'|s,a)\|\hat T_z(s'|\phi(s,a)))}\Big] + \alpha_M\,\widehat{\mathrm{IPM}}(d^\pi_\phi, d^D_\phi). \quad (6)
\]
The constant $B_\phi$ in Theorem 4.2 depends on the function classes and cannot be estimated; thus, we replace it with a tunable hyperparameter $\alpha_M$ that balances between data fitness and representation invariance. Note that $\alpha_M = 0$ recovers the simple maximum-likelihood objective.

By simulating the target policy $\pi$ under the environment model $(\hat T, \hat R)$ obtained by minimizing Eq. (6), it is possible to perform a model-based OPE that approximately minimizes the upper bound of the OPE error.
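Continuing the sketch above, the objective of Eq. (6) combines the negative log-likelihood of the model with the IPM regularizer. Again, this is a schematic shape and not the released code; `phi`, `reward_head`, `dynamics_head`, and `pi` are hypothetical stand-ins for the representation module, the two Gaussian prediction heads, and the target policy, and `ipm_hat_squared` is the helper sketched earlier.

```python
import torch

def model_loss(phi, dynamics_head, reward_head, batch, pi, gamma, alpha_m):
    """One evaluation of an Eq. (6)-style objective (our sketch).

    batch: dict with s0 (initial states from d0), and s, a, r, s_next
           sampled from the offline dataset D.
    pi:    target policy, mapping a batch of states to sampled actions.
    """
    z = phi(batch["s"], batch["a"])
    nll = -(reward_head(z).log_prob(batch["r"]).mean()
            + dynamics_head(z).log_prob(batch["s_next"]).sum(-1).mean())

    # Representations needed by the Eq. (5) estimator.
    z0 = phi(batch["s0"], pi(batch["s0"]))
    z_next = phi(batch["s_next"], pi(batch["s_next"]))
    ipm2 = ipm_hat_squared(z0, z, z_next, gamma)
    ipm = torch.sqrt(torch.clamp(ipm2, min=1e-12))  # guard against negative estimates
    return nll + alpha_m * ipm
```

With `alpha_m = 0` this reduces to plain maximum-likelihood model training, mirroring the remark above.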
Objectives for offline model-based RL By rearranging Eq. (3), we have
\[
R^\pi \ge \hat R^\pi - \mathbb{E}_{(s,a)\sim d^D}\big[\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a)\big] - B_\phi\,\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi). \quad (7)
\]
Then, we can maximize the RHS of Eq. (7) to obtain the model and the policy that maximize this lower bound of the true return $R^\pi$. Similar to the derivation of Eq. (6), we replace the total variation distance with the KL-divergence to obtain the following learning objectives:
\[
L_{\hat M}(\hat M, \pi, \alpha_M) = \mathbb{E}_{d^D}\big[-\log \hat R_z(r|\phi(s,a)) - \log\hat T_z(s'|\phi(s,a))\big] + \alpha_M\,\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi), \quad (8)
\]
\[
J_\pi(\pi, \hat M, \alpha_\pi) = \mathbb{E}_{\hat M,\pi}\Big[\sum_{t=0}^{\infty}\gamma^t r_t\Big] - \alpha_\pi\,\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi), \quad (9)
\]
where the expectation in Eq. (9) can be optimized using various model-based RL algorithms, e.g. with planning (Chua et al., 2018) or using a model-free learner (Janner et al., 2019). By alternating between the minimization of $L_{\hat M}$ with respect to $\hat M$ and the maximization of $J_\pi$ with respect to $\pi$ by a stochastic gradient method, it is possible to perform offline model-based RL that approximately maximizes the lower bound of the true return $R^\pi$.

Implementation details Following recent practice (Chua et al., 2018; Janner et al., 2019; Yu et al., 2020), we model the dynamics $(\hat T, \hat R)$ using a bootstrapped ensemble of neural networks. To optimize a policy based on the objective, we perform full rollouts (until reaching the terminal states or the maximum number of timesteps) using the learned dynamics $(\hat T, \hat R)$. While generating the rollouts, we pessimistically augment the estimated reward function with a penalty proportional to the bootstrapped uncertainty, which helps the algorithm perform robustly. We suspect the difficulty of computing an accurate IPM estimate is what makes the additional pessimism beneficial. We store the generated experiences in a separate dataset $\hat D$ and update the policy $\pi$ with IPM-regularized soft actor-critic (SAC) (Haarnoja et al., 2018) using samples from both datasets $D \cup \hat D$, similar to MBPO (Janner et al., 2019).

Since the presented model objective requires a policy $\pi$ to perform balancing, we initially train the model and the policy using $\alpha_M = \alpha_\pi = 0$: $\hat M_0 = \arg\min_{\hat M} L_{\hat M}(\hat M, \cdot, 0)$ and $\pi_0 = \arg\max_\pi J_\pi(\pi, \hat M_0, 0)$. Then, we retrain the model and the policy using $\pi_0$: $\hat M_1 = \arg\min_{\hat M} L_{\hat M}(\hat M, \pi_0, \alpha_M)$ and $\pi_1 = \arg\max_\pi J_\pi(\pi, \hat M_1, \alpha_\pi)$ for some non-negative $\alpha_M$ and $\alpha_\pi$. While it is desirable to repeat the optimization of the model and the policy until convergence, we did not observe significant improvement after the first iteration and report the performance of the policy after the first iteration, $\pi_1$.
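The uncertainty-penalized rollout step can be sketched as follows. This is our illustration under stated assumptions: `models` is a hypothetical list of learned ensemble members, each returning a torch `Normal` distribution over the concatenated next state and reward, and `lam` stands for the full penalty coefficient (the exact scaling used is given in Algorithm 1 of Appendix B.3).

```python
import torch

def penalized_rollout_step(models, s, a, lam):
    """One simulated step with a bootstrapped-uncertainty reward penalty."""
    dists = [m(s, a) for m in models]
    means = torch.stack([d.mean for d in dists])   # [K, batch, d_s + 1]
    # Pick one ensemble member at random to generate the transition.
    i = torch.randint(len(models), ()).item()
    sample = dists[i].sample()
    s_next, r = sample[..., :-1], sample[..., -1]
    # Penalty: norm of the standard deviation of the ensemble means,
    # i.e. disagreement between the bootstrapped models.
    penalty = means.std(dim=0).norm(dim=-1)
    return s_next, r - lam * penalty
```

The penalized transitions are what get stored in the rollout dataset $\hat D$ used by the SAC learner described above.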
" }, { "heading": "5 EXPERIMENTS", "text": "We demonstrate the effectiveness of the RepB-SDE framework by comparing the OPE performance of Eq. (6) to that of RepBM, and by evaluating the presented model-based offline RL algorithm on the benchmarks. The code used to produce the results is available online.² A detailed description of the experiments can be found in Appendix B." }, { "heading": "5.1 MODEL-BASED OFF-POLICY EVALUATION", "text": "For the sake of comparison with RepBM, we test our OPE algorithm on three continuous-state discrete-action tasks where the goal is to evaluate a deterministic target policy. We trained a suboptimal deterministic target policy $\pi$ and used an $\epsilon$-greedy version of it, with various values of $\epsilon$, as the data collection policy. In each experiment, we trained environment models for a fixed number of epochs using three different objectives: the simple maximum-likelihood baseline, the step-wise representation balancing objective used in RepBM (Liu et al., 2018b), and the presented OPE objective of RepB-SDE (Eq. (6)). We measured the individual mean squared error (Liu et al., 2018b).

The normalized error of each algorithm, relative to the error of the baseline valued at 1, is presented in Figure 1. We observe that the presented objective of RepB-SDE reduces the OPE error significantly compared to the baseline, outperforming RepBM in most of the cases. As the off-policyness between the policies ($\epsilon$) increases, representation balancing algorithms should, in principle, benefit more compared to the maximum-likelihood baseline. However, the results show that the performance of RepBM barely improves, due to the increased sample rejection rate under large $\epsilon$.

²https://github.com/dlqudwns/repb-sde" }, { "heading": "5.2 OFFLINE REINFORCEMENT LEARNING WITH REPRESENTATION BALANCED MODEL", "text": "We evaluate the offline model-based RL algorithm presented in Section 4.2 on a subset of datasets in the D4RL benchmark (Fu et al., 2020): four types of datasets (Random, Medium, Medium-Replay, and Medium-Expert) from three different MuJoCo environments (HalfCheetah-v2, Hopper-v2, and Walker2d-v2) (Todorov et al., 2012). The Random dataset contains $10^6$ experience tuples from a random policy. The Medium dataset contains $10^6$ experience tuples from a policy trained to approximately 1/3 of the performance of the expert, where the expert is an agent trained to completion with SAC. The Med-Replay dataset contains $10^5$ ($2\times10^5$ for Walker2d-v2) experience tuples taken from the replay buffer of a policy trained up to the performance of the medium agent. The Med-Expert dataset is a Medium dataset combined with $10^6$ samples from the expert. This experimental setting exactly follows that of Yu et al. (2020) and Argenson & Dulac-Arnold (2020).

The normalized score of each algorithm is presented in Table 1. MF denotes the best score among offline model-free algorithms (taken from Fu et al. (2020) and Kumar et al. (2020)), including SAC (Haarnoja et al., 2018), BCQ (Fujimoto et al., 2019), BEAR (Kumar et al., 2019), BRAC (Wu et al., 2019), AWR (Peng et al., 2019), cREM (Agarwal et al., 2020), AlgaeDICE (Nachum et al., 2019b), and CQL (Kumar et al., 2020). The actual algorithm achieving the reported score is shown next to the number. Base shows the performance of the most naive baseline, which attempts to maximize the estimated policy return under the maximum-likelihood model. RP denotes the performance of Base equipped with the appropriate reward penalty using the bootstrapped uncertainty of the model, which is equivalent to $\pi_0$ described in Section 4.2. RepB-SDE denotes the performance after a single iteration of our algorithm, corresponding to $\pi_1$. We also provide BC, the performance of direct behavior cloning from the data, and MOPO (Yu et al., 2020), an offline model-based RL algorithm that optimizes the policy based on truncated rollouts with a heuristic reward penalty.

The significant gap between RepB-SDE and RP in the results shows the advantage brought by our framework, which encourages balanced representation. While our approach was less successful on some of the datasets (mostly in the Hopper-v2 environment), we hypothesize that the various conservative training techniques (the behavior regularization approaches exploited in the model-free algorithms, the rollout truncation technique in MOPO, and the pessimistic training based on bootstrapped uncertainty estimates adopted in our algorithm) exhibit their strengths on different datasets. 
For example, it may be the case that the ensemble models are overconfident, especially in Hopper-v2, and should be regularized with more explicit methods. Nevertheless, we emphasize that the presented framework can be used jointly with any other conservative training technique to improve its performance." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "In this paper, we presented RepB-SDE, a framework for balancing the model representation with stationary distribution estimation, aiming to obtain a model robust to the distribution shift that arises in off-policy and offline RL. We started from the theoretical observation that the model-based policy evaluation error can be upper-bounded by the data fitness plus the distance between two distributions in the representation space. Motivated by the bound, we presented a model learning objective for off-policy evaluation and model-based offline policy optimization. RepB-SDE can be seen as an extension of RepBM that addresses the curse of horizon by leveraging the recent advances in stationary distribution correction estimation (i.e. the DualDICE trick). Using the stationary distribution also frees us from other limitations that prevent RepBM from being applied in more practical settings. To the best of our knowledge, this is the first attempt to introduce an augmented objective for learning a model robust to the specific distribution shift in offline RL.

In the experiments, we empirically demonstrated that we can significantly reduce the OPE error relative to the baseline, outperforming RepBM in most cases. We also showed that the robust model helps in offline model-based policy optimization, yielding state-of-the-art performance on a representative set of D4RL benchmarks. We emphasize that our approach can be directly adopted in many other model-based offline RL algorithms.

There are a number of promising directions for future work. Most importantly, we have not leveraged the learned representation when optimizing the policy, although it is very natural to do so. We can easily incorporate the representation into the policy by assuming energy-based models, but this would make the computation of entropy intractable in entropy-regularized policy optimization algorithms. It would also be interesting to see whether the proposed framework for learning balanced representations can benefit off-policy (and offline) model-free methods." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by the National Research Foundation (NRF) of Korea (NRF2019M3F2A1072238 and NRF-2019R1A2C1087634), and the Ministry of Science and Information communication Technology (MSIT) of Korea (IITP No. 2019-0-00075, IITP No. 2020-0-00940 and IITP No. 2017-0-01779 XAI)." }, { "heading": "A PROOFS", "text": "Lemma 4.1. Given an MDP $M$ and its estimate $\hat M$ with a bijective representation function $\phi$, i.e. $(\hat T, \hat R) = \langle \hat T_z\circ\phi, \hat R_z\circ\phi\rangle$, the policy evaluation error of a policy $\pi$ can be bounded by:
\[
\big|R^\pi - \hat R^\pi\big| \le \mathbb{E}_{(s,a)\sim d^\pi}\big[\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a)\big]. \quad (1)
\]
Proof. We define the value function $V^\pi(s) = \mathbb{E}_{M,\pi}\big[\sum_{t=0}^{\infty}\gamma^t r_t \mid s_0 = s\big]$, the expected discounted return starting from the state $s$, for the following proof. By its definition, we can write $R^\pi = (1-\gamma)\,\mathbb{E}_{s\sim d_0}[V^\pi(s)]$. The following recursive equation also holds:
\[
V^\pi(s) = \mathbb{E}_{a\sim\pi(s)}\big[r(s,a) + \gamma\,\mathbb{E}_{s'\sim T(s,a)}[V^\pi(s')]\big],
\]
and similarly for the approximate value function $\hat V^\pi(s) = \mathbb{E}_{\hat M,\pi}\big[\sum_{t=0}^{\infty}\gamma^t r_t \mid s_0 = s\big]$ given an MDP estimate $\hat M$. 
Then,
\begin{align*}
\frac{1}{1-\gamma}\big|R^\pi - \hat R^\pi\big| &= \Big|\mathbb{E}_{d_0}\big[V^\pi(s_0) - \hat V^\pi(s_0)\big]\Big| \\
&= \Big|\mathbb{E}_{d_0,\pi}[r_0 - \hat r_0] + \gamma\,\mathbb{E}_{d_0}\Big[\int \big(V^\pi(s_1)T(s_1|s_0,a_0) - \hat V^\pi(s_1)\hat T(s_1|s_0,a_0)\big)\pi(a_0|s_0)\,ds_1\,da_0\Big]\Big| \\
&= \Big|\mathbb{E}_{d_0,\pi}[r_0 - \hat r_0] + \gamma\,\mathbb{E}_{d_0}\Big[\int \big(V^\pi(s_1)T(s_1|s_0,a_0) - \hat V^\pi(s_1)T(s_1|s_0,a_0) \\
&\qquad\qquad + \hat V^\pi(s_1)T(s_1|s_0,a_0) - \hat V^\pi(s_1)\hat T(s_1|s_0,a_0)\big)\pi(a_0|s_0)\,ds_1\,da_0\Big]\Big| \\
&= \Big|\mathbb{E}_{d_0,\pi}[r_0 - \hat r_0] + \gamma\,\mathbb{E}_{d_0}\Big[\int \underbrace{\big(V^\pi(s_1) - \hat V^\pi(s_1)\big)}_{\text{this forms a recursive equation}} T(s_1|s_0,a_0)\,\pi(a_0|s_0)\,ds_1\,da_0\Big] \\
&\qquad\qquad + \gamma\,\mathbb{E}_{d_0}\Big[\int \hat V^\pi(s_1)\big(T(s_1|s_0,a_0) - \hat T(s_1|s_0,a_0)\big)\pi(a_0|s_0)\,ds_1\,da_0\Big]\Big| \\
&= \Big|\mathbb{E}_{(s,a)\sim d^\pi}\Big[\int \big(R(r|s,a) - \hat R(r|s,a)\big)\,dr + \gamma\int \hat V^\pi(s')\big(T(s'|s,a) - \hat T(s'|s,a)\big)\,ds'\Big]\Big| \\
&\le \mathbb{E}_{(s,a)\sim d^\pi}\Big[\int \big|R(r|s,a) - \hat R(r|s,a)\big|\,dr + \gamma\int \hat V^\pi(s')\big|T(s'|s,a) - \hat T(s'|s,a)\big|\,ds'\Big] \\
&\le \mathbb{E}_{(s,a)\sim d^\pi}\Big[\int \big|R(r|s,a) - \hat R(r|s,a)\big|\,dr + \gamma\int \frac{r_{\max}}{1-\gamma}\big|T(s'|s,a) - \hat T(s'|s,a)\big|\,ds'\Big] \\
&= 2\,\mathbb{E}_{(s,a)\sim d^\pi}\big[D_{TV}(R(r|s,a)\,\|\,\hat R_z(r|\phi(s,a)))\big] + \frac{2\gamma r_{\max}}{1-\gamma}\,\mathbb{E}_{(s,a)\sim d^\pi}\big[D_{TV}(T(s'|s,a)\,\|\,\hat T_z(s'|\phi(s,a)))\big].
\end{align*}

Theorem 4.2. Given an MDP $M$ and its estimate $\hat M$ with a bijective representation function $\phi$, i.e. $(\hat T, \hat R) = \langle \hat T_z\circ\phi, \hat R_z\circ\phi\rangle$, assume that there exists a constant $B_\phi > 0$ and a function class $\mathcal{G} \triangleq \{g : \mathcal{Z}\to\mathbb{R}\}$ such that $\frac{1}{B_\phi}\mathcal{E}_{\phi,\hat R_z,\hat T_z}(\phi^{-1}(\cdot)) \in \mathcal{G}$. Then, for any policy $\pi$,
\[
\big|R^\pi - \hat R^\pi\big| \le \mathbb{E}_{(s,a)\sim d^D}\big[\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a)\big] + B_\phi\,\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi). \quad (3)
\]
Proof. From Lemma 4.1, we directly have:
\[
\big|R^\pi - \hat R^\pi\big| \le \mathbb{E}_{(s,a)\sim d^D}\big[\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a)\big] + \int \mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a)\big(d^\pi(s,a) - d^D(s,a)\big)\,ds\,da.
\]
Then,
\begin{align*}
\int \mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a)\big(d^\pi(s,a) - d^D(s,a)\big)\,ds\,da &= \int \mathcal{E}_{\phi,\hat R_z,\hat T_z}(\phi^{-1}(z))\big(d^\pi(\phi^{-1}(z)) - d^D(\phi^{-1}(z))\big)\Big|\frac{d(s,a)}{dz}\Big|\,dz \\
&= \int \mathcal{E}_{\phi,\hat R_z,\hat T_z}(\phi^{-1}(z))\big(d^\pi_\phi(z) - d^D_\phi(z)\big)\,dz \\
&\le B_\phi\Big|\int \tfrac{1}{B_\phi}\mathcal{E}_{\phi,\hat R_z,\hat T_z}(\phi^{-1}(z))\big(d^\pi_\phi(z) - d^D_\phi(z)\big)\,dz\Big| \quad (B_\phi > 0) \\
&\le B_\phi\sup_{g\in\mathcal{G}}\Big|\int g(z)\big(d^\pi_\phi(z) - d^D_\phi(z)\big)\,dz\Big| \quad \big(\tfrac{1}{B_\phi}\mathcal{E}_{\phi,\hat R_z,\hat T_z}(\phi^{-1}(\cdot)) \in \mathcal{G}\big) \\
&= B_\phi\,\mathrm{IPM}_{\mathcal{G}}\big(d^\pi_\phi, d^D_\phi\big).
\end{align*}
We state some previous results, which are required for the further proofs.

Theorem A.1 (McDiarmid's inequality (McDiarmid, 1989)). Let $\{X_i\}_{i=1}^n$ be independent random variables taking values in a set $\mathcal{X}$, and assume that $f : \mathcal{X}^n \to \mathbb{R}$ satisfies, for each $i$,
\[
\sup_{\{x_i\}_{i=1}^n\in\mathcal{X}^n,\,\tilde x\in\mathcal{X}}\big|f(\{x_i\}_{i=1}^n) - f(x_1,\dots,x_{i-1},\tilde x,x_{i+1},\dots,x_n)\big| \le c_i. \quad (10)
\]
Then for every $\epsilon > 0$,
\[
\Pr\Big\{f(\{X_i\}_{i=1}^n) - \mathbb{E}_{\{X_i\}_{i=1}^n}\big[f(\{X_i\}_{i=1}^n)\big] \ge \epsilon\Big\} \le \exp\Big(-\frac{2\epsilon^2}{\sum_{i=1}^n c_i^2}\Big). \quad (11)
\]
Lemma A.2 (Rademacher complexity of RKHS (Bartlett & Mendelson, 2002)). Let $\mathcal{F}$ be the unit ball in a universal RKHS on the compact domain $\mathcal{X}$, with kernel bounded according to $0 \le k(x,x') \le \bar k$. Let $\{x_i\}_{i=1}^n$ be an i.i.d. sample of size $n$ drawn according to a probability measure $p$ on $\mathcal{X}$, and let $\sigma_i$ be i.i.d. taking values in $\{-1,1\}$ with equal probability. The Rademacher complexity, defined below, is upper-bounded as:
\[
R_n(\mathcal{F}) \triangleq \mathbb{E}_{\{x_i\}_{i=1}^n,\sigma}\sup_{f\in\mathcal{F}}\Big|\frac{1}{n}\sum_{i=1}^n \sigma_i f(x_i)\Big| \le \sqrt{\frac{\bar k}{n}}. \quad (12)
\]
The upper bound follows from Lemma 22 of Bartlett & Mendelson (2002).

Now we prove the following using the results above.

Lemma A.3. Let $\mathcal{H}_k$ be an RKHS associated with a universal kernel $k(\cdot,\cdot)$. Let $\langle\cdot,\cdot\rangle_{\mathcal{H}_k}$ be the inner product of $\mathcal{H}_k$, which satisfies the reproducing property $\nu(z) = \langle\nu, k(\cdot,z)\rangle_{\mathcal{H}_k}$. When $\mathcal{G}$ is chosen such that
\[
\mathcal{G} = \Big\{g\in(\mathcal{Z}\to\mathbb{R}) : g(z) = \nu(z) - \gamma\,\mathbb{E}_{s'\sim T(\phi^{-1}(z)),\,a'\sim\pi(s')}[\nu(\phi(s',a'))],\ \nu\in(\mathcal{Z}\to\mathbb{R}),\ \langle\nu,\nu\rangle_{\mathcal{H}_k}\le 1\Big\},
\]
the $\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi)$ has the following closed-form definition:
\[
\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi)^2 = \mathbb{E}\Big[k(\phi(s,a),\phi(\bar s,\bar a)) + (1-\gamma)^2 k(\phi(s_0,a_0),\phi(\bar s_0,\bar a_0)) + \gamma^2 k(\phi(s',a'),\phi(\bar s',\bar a')) - 2(1-\gamma)k(\phi(s_0,a_0),\phi(\bar s,\bar a)) - 2\gamma k(\phi(s',a'),\phi(\bar s,\bar a)) + 2\gamma(1-\gamma)k(\phi(s',a'),\phi(\bar s_0,\bar a_0))\Big],
\]
where the expectation is over $s_0\sim d_0$, $a_0\sim\pi(s_0)$, $(s,a,s')\sim d^D$, $a'\sim\pi(s')$ and an independent copy $\bar s_0,\bar a_0,(\bar s,\bar a,\bar s'),\bar a'$ drawn in the same way. Furthermore, suppose that $\bar k \triangleq \sup_{z\in\mathcal{Z}}k(z,z)$. 
The estimator $\widehat{\mathrm{IPM}}(d^\pi_\phi, d^D_\phi)^2$, the sample-based estimation of $\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi)^2$ from $n$ samples, then satisfies, with probability at least $1-\delta$,
\[
\Big|\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi) - \widehat{\mathrm{IPM}}(d^\pi_\phi, d^D_\phi)\Big| \le \sqrt{\frac{\bar k}{n}}\Big(4 + \sqrt{8\log\frac{3}{\delta}}\Big).
\]
Proof. Below, we write $\mathrm{IPM}_{\mathcal{G}}$ as shorthand for $\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi)$. As in Eq. (4), we can rewrite the IPM as:
\[
\mathrm{IPM}_{\mathcal{G}} = \sup_{\nu\in\mathcal{F}}\Big|(1-\gamma)\,\mathbb{E}_{s\sim d_0,\,a\sim\pi(s)}[\nu(\phi(s,a))] - \mathbb{E}_{(s,a,s')\sim d^D,\,a'\sim\pi(s')}[\nu(\phi(s,a)) - \gamma\nu(\phi(s',a'))]\Big|,
\]
and $\mathcal{F}$ here becomes $\mathcal{F} = \{\nu\in(\mathcal{Z}\to\mathbb{R}) : \langle\nu,\nu\rangle_{\mathcal{H}_k}\le 1\}$, the unit ball in the RKHS $\mathcal{H}_k$. Using the reproducing property of $\mathcal{H}_k$:
\begin{align*}
\mathrm{IPM}_{\mathcal{G}}^2 &= \Big\{\sup_{\nu\in\mathcal{F}}\Big|(1-\gamma)\,\mathbb{E}_{s\sim d_0,\,a\sim\pi(s)}[\nu(\phi(s,a))] - \mathbb{E}_{(s,a,s')\sim d^D,\,a'\sim\pi(s')}[\nu(\phi(s,a)) - \gamma\nu(\phi(s',a'))]\Big|\Big\}^2 \\
&= \sup_{\nu\in\mathcal{F}}\Big\{(1-\gamma)\,\mathbb{E}_{s\sim d_0,\,a\sim\pi(s)}[\langle\nu, k(\cdot,\phi(s,a))\rangle_{\mathcal{H}_k}] - \mathbb{E}_{(s,a,s')\sim d^D,\,a'\sim\pi(s')}[\langle\nu, k(\cdot,\phi(s,a))\rangle_{\mathcal{H}_k} - \gamma\langle\nu, k(\cdot,\phi(s',a'))\rangle_{\mathcal{H}_k}]\Big\}^2 \\
&= \sup_{\nu\in\mathcal{F}}\langle\nu,\nu^*\rangle_{\mathcal{H}_k}^2,
\end{align*}
where
\[
\nu^*(\cdot) = (1-\gamma)\,\mathbb{E}_{s\sim d_0,\,a\sim\pi(s)}[k(\cdot,\phi(s,a))] - \mathbb{E}_{(s,a,s')\sim d^D,\,a'\sim\pi(s')}[k(\cdot,\phi(s,a)) - \gamma k(\cdot,\phi(s',a'))].
\]
By the Cauchy-Schwarz inequality and $\langle\nu,\nu\rangle_{\mathcal{H}_k}\le 1$ for all $\nu\in\mathcal{F}$, we have $\langle\nu,\nu^*\rangle_{\mathcal{H}_k}^2 \le \langle\nu,\nu\rangle_{\mathcal{H}_k}\langle\nu^*,\nu^*\rangle_{\mathcal{H}_k} \le \langle\nu^*,\nu^*\rangle_{\mathcal{H}_k}$, and $\sup_{\nu\in\mathcal{F}}\langle\nu,\nu^*\rangle_{\mathcal{H}_k}^2 = \langle\nu^*,\nu^*\rangle_{\mathcal{H}_k}$. Using the property that $\langle k(\cdot,z), k(\cdot,\bar z)\rangle_{\mathcal{H}_k} = k(z,\bar z)$, we can derive the closed-form expression in the lemma from $\langle\nu^*,\nu^*\rangle_{\mathcal{H}_k}$.

Now we prove the error bound of the estimator. First, we divide $\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi)$ into three parts:
\[
\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi) = \sup_{\nu\in\mathcal{F}}|f_1(\nu) + f_2(\nu) + f_3(\nu)|,
\]
where $f_1(\nu) = (1-\gamma)\,\mathbb{E}_{s\sim d_0,\,a\sim\pi(s)}[\nu(\phi(s,a))]$, $f_2(\nu) = -\mathbb{E}_{(s,a)\sim d^D}[\nu(\phi(s,a))]$, and $f_3(\nu) = \mathbb{E}_{(s,a,s')\sim d^D,\,a'\sim\pi(s')}[\gamma\nu(\phi(s',a'))]$.

Given $n$ samples $\{s_0^{(i)}, a_0^{(i)}, s^{(i)}, a^{(i)}, s'^{(i)}, a'^{(i)}\}_{i=1}^n$ from the generative process $s_0^{(i)}\sim d_0$, $a_0^{(i)}\sim\pi(s_0^{(i)})$, $(s^{(i)}, a^{(i)}, s'^{(i)})\sim d^D$, $a'^{(i)}\sim\pi(s'^{(i)})$ for all $i$, we define the sample-based estimator $\widehat{\mathrm{IPM}}(d^\pi_\phi, d^D_\phi)$:
\[
\widehat{\mathrm{IPM}}(d^\pi_\phi, d^D_\phi)^2 = \frac{1}{n^2}\sum_{i,j}\Big[k(\phi(s^{(i)},a^{(i)}),\phi(s^{(j)},a^{(j)})) + (1-\gamma)^2 k(\phi(s_0^{(i)},a_0^{(i)}),\phi(s_0^{(j)},a_0^{(j)})) + \gamma^2 k(\phi(s'^{(i)},a'^{(i)}),\phi(s'^{(j)},a'^{(j)})) - 2(1-\gamma)k(\phi(s_0^{(i)},a_0^{(i)}),\phi(s^{(j)},a^{(j)})) - 2\gamma k(\phi(s'^{(i)},a'^{(i)}),\phi(s^{(j)},a^{(j)})) + 2\gamma(1-\gamma)k(\phi(s'^{(i)},a'^{(i)}),\phi(s_0^{(j)},a_0^{(j)}))\Big].
\]
By deriving in reverse order, we can recover its other definition as a supremum, which can be divided into three parts:
\[
\widehat{\mathrm{IPM}}(d^\pi_\phi, d^D_\phi) = \sup_{\nu\in\mathcal{F}}\big|\hat f_1(\nu) + \hat f_2(\nu) + \hat f_3(\nu)\big|,
\]
where $\hat f_1(\nu) = \frac{1-\gamma}{n}\sum_{i=1}^n \nu(\phi(s_0^{(i)}, a_0^{(i)}))$, $\hat f_2(\nu) = -\frac{1}{n}\sum_{i=1}^n \nu(\phi(s^{(i)}, a^{(i)}))$, and $\hat f_3(\nu) = \frac{\gamma}{n}\sum_{i=1}^n \nu(\phi(s'^{(i)}, a'^{(i)}))$.

We can bound the error of the sample-based estimator with the individual errors as:
\begin{align*}
\Big|\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi,d^D_\phi) - \widehat{\mathrm{IPM}}(d^\pi_\phi,d^D_\phi)\Big| &= \Big|\sup_{\nu\in\mathcal{F}}|f_1(\nu)+f_2(\nu)+f_3(\nu)| - \sup_{\nu\in\mathcal{F}}|\hat f_1(\nu)+\hat f_2(\nu)+\hat f_3(\nu)|\Big| \\
&\le \sup_{\nu\in\mathcal{F}}\Big||f_1(\nu)+f_2(\nu)+f_3(\nu)| - |\hat f_1(\nu)+\hat f_2(\nu)+\hat f_3(\nu)|\Big| \\
&\le \sup_{\nu\in\mathcal{F}}\big|f_1(\nu)+f_2(\nu)+f_3(\nu) - \hat f_1(\nu) - \hat f_2(\nu) - \hat f_3(\nu)\big| \\
&\le \sup_{\nu\in\mathcal{F}}\big|f_1(\nu)-\hat f_1(\nu)\big| + \sup_{\nu\in\mathcal{F}}\big|f_2(\nu)-\hat f_2(\nu)\big| + \sup_{\nu\in\mathcal{F}}\big|f_3(\nu)-\hat f_3(\nu)\big|.
\end{align*}
We then observe that
\[
\sup_{\nu\in\mathcal{F}}\big|f_1(\nu)-\hat f_1(\nu)\big| = (1-\gamma)\Big\{\mathbb{E}_{\substack{s\sim d_0,\,a\sim\pi(s)\\ \bar s\sim d_0,\,\bar a\sim\pi(\bar s)}}[k(\phi(s,a),\phi(\bar s,\bar a))] - \frac{2}{n}\sum_{i=1}^n \mathbb{E}_{\bar s\sim d_0,\,\bar a\sim\pi(\bar s)}\big[k(\phi(s^{(i)},a^{(i)}),\phi(\bar s,\bar a))\big] + \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n k(\phi(s^{(i)},a^{(i)}),\phi(s^{(j)},a^{(j)}))\Big\}^{1/2},
\]
which shows that changing $s^{(i)}, a^{(i)}$ changes $\sup_{\nu\in\mathcal{F}}|f_1(\nu)-\hat f_1(\nu)|$ in magnitude by at most $2(1-\gamma)\bar k^{1/2}/n$, where $\bar k = \sup_{z\in\mathcal{Z}}k(z,z)$. 
Therefore, by McDiarmid's inequality (Theorem A.1),
\[
\Pr\Big\{\sup_{\nu\in\mathcal{F}}\big|f_1(\nu)-\hat f_1(\nu)\big| - \mathbb{E}_{s^{(i)},a^{(i)}}\Big[\sup_{\nu\in\mathcal{F}}\big|f_1(\nu)-\hat f_1(\nu)\big|\Big] \ge \epsilon\Big\} \le \exp\Big(-\frac{n\epsilon^2}{2(1-\gamma)^2\bar k}\Big).
\]
Also,
\begin{align*}
\mathbb{E}_{s^{(i)},a^{(i)}}\Big[\sup_{\nu\in\mathcal{F}}\big|f_1(\nu)-\hat f_1(\nu)\big|\Big] &= \frac{1-\gamma}{n}\,\mathbb{E}_{s^{(i)},a^{(i)}}\Big[\sup_{\nu\in\mathcal{F}}\Big|\mathbb{E}_{\bar s^{(i)},\bar a^{(i)}}\Big[\sum_{i=1}^n \nu(\phi(\bar s_0^{(i)},\bar a_0^{(i)}))\Big] - \sum_{i=1}^n \nu(\phi(s_0^{(i)},a_0^{(i)}))\Big|\Big] \\
&\le \frac{1-\gamma}{n}\,\mathbb{E}_{s^{(i)},a^{(i)},\bar s^{(i)},\bar a^{(i)}}\Big[\sup_{\nu\in\mathcal{F}}\Big|\sum_{i=1}^n \nu(\phi(\bar s_0^{(i)},\bar a_0^{(i)})) - \sum_{i=1}^n \nu(\phi(s_0^{(i)},a_0^{(i)}))\Big|\Big] \\
&= \frac{1-\gamma}{n}\,\mathbb{E}_{s^{(i)},a^{(i)},\bar s^{(i)},\bar a^{(i)},\sigma^{(i)}}\Big[\sup_{\nu\in\mathcal{F}}\Big|\sum_{i=1}^n \sigma^{(i)}\big\{\nu(\phi(\bar s_0^{(i)},\bar a_0^{(i)})) - \nu(\phi(s_0^{(i)},a_0^{(i)}))\big\}\Big|\Big] \\
&\le 2(1-\gamma)\sqrt{\frac{\bar k}{n}},
\end{align*}
where the last inequality is from Lemma A.2. Combining the results, we get
\[
\Pr\Big\{\sup_{\nu\in\mathcal{F}}\big|f_1(\nu)-\hat f_1(\nu)\big| - 2(1-\gamma)\sqrt{\frac{\bar k}{n}} \ge \epsilon\Big\} \le \exp\Big(-\frac{n\epsilon^2}{2(1-\gamma)^2\bar k}\Big).
\]
Similarly, we derive bounds for $f_2$ and $f_3$, respectively:
\[
\Pr\Big\{\sup_{\nu\in\mathcal{F}}\big|f_2(\nu)-\hat f_2(\nu)\big| - 2\sqrt{\frac{\bar k}{n}} \ge \epsilon\Big\} \le \exp\Big(-\frac{n\epsilon^2}{2\bar k}\Big), \qquad \Pr\Big\{\sup_{\nu\in\mathcal{F}}\big|f_3(\nu)-\hat f_3(\nu)\big| - 2\gamma\sqrt{\frac{\bar k}{n}} \ge \epsilon\Big\} \le \exp\Big(-\frac{n\epsilon^2}{2\gamma^2\bar k}\Big).
\]
By setting the RHS of the above bounds to $\delta/3$ and using a union bound, we get, with probability $1-\delta$,
\[
\Big|\mathrm{IPM}_{\mathcal{G}}(d^\pi_\phi, d^D_\phi) - \widehat{\mathrm{IPM}}(d^\pi_\phi, d^D_\phi)\Big| \le \sqrt{\frac{\bar k}{n}}\Big(4 + \sqrt{8\log\frac{3}{\delta}}\Big). \quad (13)
\]
The relationship between $\mathcal{F}$ and $\mathcal{G}$,
\[
\mathcal{G} = \Big\{g\in(\mathcal{Z}\to\mathbb{R}) : g(z) = \nu(z) - \gamma\,\mathbb{E}_{s'\sim T(\phi^{-1}(z)),\,a'\sim\pi(s')}[\nu(\phi(s',a'))],\ \nu\in(\mathcal{Z}\to\mathbb{R}),\ \langle\nu,\nu\rangle_{\mathcal{H}_k}\le 1\Big\},
\]
shows that when the conditional expectation $\mathbb{E}_{s'\sim T(\phi^{-1}(\cdot)),\,a'\sim\pi(s')}[\nu(\phi(s',a'))] : \mathcal{Z}\to\mathbb{R}$ is a function in the RKHS $\mathcal{H}_k$, $\mathcal{G}$ also becomes a subset of $\mathcal{H}_k$. Then we can prove the following theorem.

Theorem 4.3. Given an MDP $M$, its estimate $\hat M$ with a bijective representation function $\phi$, i.e. $(\hat T,\hat R) = \langle\hat T_z\circ\phi,\hat R_z\circ\phi\rangle$, and an RKHS $\mathcal{H}_k\subset(\mathcal{Z}\to\mathbb{R})$ induced by a universal kernel $k$ such that $\sup_{z\in\mathcal{Z}}k(z,z) = \bar k$, assume that $f_{\phi,\hat R_z,\hat T_z}(z) = \mathbb{E}_{T,\pi}\big[\sum_{t=0}^{\infty}\gamma^t\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s_t,a_t)\,\big|\,(s_0,a_0)=\phi^{-1}(z)\big]\in\mathcal{H}_k$ with $B_\phi = \|f_{\phi,\hat R_z,\hat T_z}\|_{\mathcal{H}_k}$, and that the loss is bounded by $\bar{\mathcal{E}} = \sup_{s\in S,a\in A}\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a)$. Let $n$ be the number of data points in $D$. With probability $1-2\delta$,
\[
\big|R^\pi - \hat R^\pi\big| \le \frac{1}{n}\sum_{(s,a)\in D}\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a) + B_\phi\,\widehat{\mathrm{IPM}}(d^\pi_\phi, d^D_\phi) + \sqrt{\frac{\bar{\mathcal{E}}^2}{2n}\log\frac{1}{\delta}} + B_\phi\sqrt{\frac{\bar k}{n}}\Big(4+\sqrt{8\log\frac{3}{\delta}}\Big).
\]
Proof. Applying Hoeffding's inequality (Hoeffding, 1963), with probability $1-\delta$ we get:
\[
\mathbb{E}_{(s,a)\sim d^D}\big[\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a)\big] \le \frac{1}{n}\sum_{(s,a)\in D}\mathcal{E}_{\phi,\hat R_z,\hat T_z}(s,a) + \sqrt{\frac{\bar{\mathcal{E}}^2}{2n}\log\frac{1}{\delta}}.
\]
Using a union bound with Eq. (13) and substituting the terms in Eq. (3), we recover the result." }, { "heading": "B EXPERIMENT DETAILS", "text": "B.1 COMPUTING INFRASTRUCTURE

All experiments were conducted on the Google Cloud Platform. Specifically, we used compute-optimized machines (c2-standard-4) that provide 4 vCPUs and 16 GB memory for the evaluation experiment of Section 5.1, and high-memory machines (n1-highmem-4), which provide 4 vCPUs and 26 GB memory, equipped with an Nvidia Tesla K80 GPU, for the RL experiment of Section 5.2.

B.2 DETAILS OF THE OPE EXPERIMENT

Task details We did not modify the CartPole-v0 and Acrobot-v1 environments from the original implementation of OpenAI Gym (Brockman et al., 2016), except for the maximum trajectory length. We ran PPO (Schulman et al., 2017) to optimize policies to a certain performance level and set them as the target policies for CartPole-v0 and Acrobot-v1. For the HIV simulator, we used the code adapted by Liu et al. (2018b), which originally comes from the implementation of RLPy.³
We modified the environment to have more randomness in the initial state (up to 10% perturbation from the baseline initial state) and to use a reward function that gives the logarithm of the original reward values, as the original rewards scale up to $10^{10}$. We used a tree-based fitted Q-iteration algorithm implemented by Liu et al. (2018b) to optimize the target policy for the HIV simulator. All other details are shown in Table 2. We assume that the termination conditions of the tasks are known a priori.

Model and algorithm details The model we learn is composed of a representation module and a dynamics module. To be consistent with the experiment settings of Liu et al. (2018b), we use a representation module of a single-hidden-layer feed-forward network that takes the state as input and outputs the representation. We squash the representation into $(-1, 1)$ using the tanh activation function. The dynamics module is also a single-hidden-layer feed-forward network that takes the representation as input and outputs the state-difference and reward predictions for each action. We use the swish activation function (Ramachandran et al., 2017) for the hidden layers of the two modules. As a whole, the model can also be seen as a feed-forward network with three hidden layers of varying activation functions, where the output of the second hidden layer is the representation we regularize.

For the purpose of comparison, we minimize the L2 distance between the model prediction and the desired outcome from the data, which corresponds to using a model with a Gaussian predictive distribution of fixed variance. We standardized the inputs and outputs of the neural network and used Adam (Kingma & Ba, 2014) with a learning rate of $3\times10^{-4}$ for the optimization. Compared to the similar experiments conducted by Liu et al. (2018b), we used a larger and more expressive model with more optimization steps and a smaller learning rate for a more accurate comparison. While the derivation of the RepB-SDE objective was based on a state-action representation function, we use a state representation in this experiment for direct comparison with RepBM, which uses a state representation (this can also be understood as using an action-invariant kernel). We follow the choice of Liu et al. (2018b) and use the dot-product kernel $k(\phi(s), \phi(\bar s)) = \phi(s)^\top\phi(\bar s)$ for the OPE experiment, which is not universal but allows us to avoid searching over kernel hyperparameters such as length-scales.

³https://github.com/rlpy/rlpy

After training, we generate another 200 trajectories (50 in the case of the HIV simulator) and roll them out in both the true environment and the simulated environment (based on the learned model) to evaluate the models. We measure the individual MSE,
\[
\text{Individual MSE} = \mathbb{E}_{s_0\sim d_0}\Big[\Big(\mathbb{E}_{\hat M,\pi}\Big[\sum_{t=0}^{\infty}\gamma^t r_t \,\Big|\, s_0\Big] - \mathbb{E}_{M,\pi}\Big[\sum_{t=0}^{\infty}\gamma^t r_t \,\Big|\, s_0\Big]\Big)^2\Big],
\]
to assess the performance of each model. The whole experiment, from sampling data to learning and evaluating the model, is repeated 200 times with different random seeds.
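The evaluation protocol can be schematized as below. This is our illustration only: `env`, `model_env`, and `policy` are hypothetical stand-ins for the true environment, the learned model wrapped as an environment, and the target policy, and the `reset(state=...)` interface for starting a rollout from a fixed initial state is assumed.

```python
import numpy as np

def discounted_return(env, policy, s0, gamma=0.99, horizon=200):
    # Roll out `policy` from a fixed initial state and accumulate the
    # discounted return; `reset(state=...)` is a hypothetical interface.
    s, ret = env.reset(state=s0), 0.0
    for t in range(horizon):
        s, r, done = env.step(policy(s))
        ret += gamma**t * r
        if done:
            break
    return ret

def individual_mse(env, model_env, policy, init_states, n_rollouts=10):
    # For each initial state, compare the mean return under the learned
    # model against the mean return in the true environment.
    errs = []
    for s0 in init_states:
        v_true = np.mean([discounted_return(env, policy, s0) for _ in range(n_rollouts)])
        v_model = np.mean([discounted_return(model_env, policy, s0) for _ in range(n_rollouts)])
        errs.append((v_model - v_true) ** 2)
    return float(np.mean(errs))
```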
On the right we show an example of the effect of varying α in CartPole-v0. [Figure: Relative Individual MSE as a function of α (0.0001 to 0.1) on CartPole-v0 with off-policyness 0.5, comparing Baseline, RepBM, and MRB (ours).]
Comparison to other baselines In Figure 2, the OPE results with other model-free baselines are presented. FQE: fitted Q-evaluation; IS: step-wise importance sampling; DR: doubly robust estimator based on step-wise importance sampling using the value function learned with fitted Q-evaluation (Jiang & Li, 2016); DualDICE: stationary distribution correction algorithm (Nachum et al., 2019a). We used the implementation provided by the authors in the case of DualDICE. All results are normalized so that the MSE of the model-based baseline is 1. Here, we used average MSEs instead of individual MSEs, since importance sampling estimators are not suitable for computing individual MSEs. The results show that model-based methods are more robust to increasing off-policyness when compared to FQE. The results of model-based methods on Acrobot are relatively worse due to the difficult dynamics of the Acrobot environment.
B.3 DETAILS OF THE OFFLINE RL EXPERIMENTS
Task details We used 12 datasets in D4RL (Fu et al., 2020) over four dataset types and three environments, as specified in the main text. In Table 1, the normalized scores suggested by Fu et al. (2020) are used to report the results, where a score of 0 corresponds to a random policy and 100 corresponds to a converged SAC policy. In HalfCheetah-v2, a score of 0 means an undiscounted return of −280, and a score of 100 means an undiscounted return of 12135. In Hopper-v2, a score of 0 means an undiscounted return of −20, and a score of 100 means an undiscounted return of 3234. In Walker2d-v2, a score of 0 means an undiscounted return of 2, and a score of 100 means an undiscounted return of 4592. We assume that the termination conditions of the tasks are known a priori.
Representation balancing maximizing undiscounted return As we report the undiscounted sum of rewards in the experiments, maximizing the lower bound of $R^\pi$ may result in an under-utilization of experiences from later timesteps. One way to mitigate the mismatch is to optimize the policy by maximizing the returns starting from the states in the dataset. This corresponds to maximizing $\tilde{R}^\pi = (1-\gamma)\,\mathbb{E}_{s_0\sim d^D, M, \pi}\big[\sum_{t=0}^{\infty}\gamma^t r_t\big]$ instead of $R^\pi$, where the expectation with respect to the initial state distribution $d_0$ is replaced with the data distribution $d^D$.
Consequently, to bound the error of $\tilde{R}^\pi$, the representation should be balanced with another discounted stationary distribution $\tilde{d}^\pi(s,a) \triangleq (1-\gamma)\sum_{t=0}^{\infty}\gamma^t \Pr(s_t=s, a_t=a \,|\, s_0\sim d^D, T, \pi)$, the distribution induced by the policy $\pi$ where the initial state is sampled from the data distribution. The derivations can be easily adapted by noting that
$$\mathrm{IPM}_{\mathcal{G}}(\tilde{d}_\phi^\pi, d_\phi^D) = \sup_{\nu\in\mathcal{F}}\left|(1-\gamma)\,\mathbb{E}_{\substack{s\sim d^D\\ a\sim\pi(s)}}\left[\nu\big(\phi(s,a)\big)\right] - \mathbb{E}_{\substack{(s,a,s')\sim d^D\\ a'\sim\pi(s')}}\left[\nu\big(\phi(s,a)\big) - \gamma\nu\big(\phi(s',a')\big)\right]\right|,$$
and changing the initial state sampling distributions to $d^D$ during the estimation of the IPM.
Model and algorithm details Similar to the model used in the OPE experiment, the model we learn is composed of a representation module and a dynamics module. The representation module is a feed-forward network with two hidden layers that takes the state-action pair as input and outputs the representation through the tanh activation function. The dynamics module is a single hidden layer network that takes the representation as input and outputs the parameters of a diagonal Gaussian distribution predicting the state difference and the reward.
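The two-part model just described might be sketched in PyTorch as follows. This is an illustrative sketch under stated assumptions, not the released implementation: the layer widths follow the 200-unit configuration described next, and the SiLU (swish) activation is an assumption carried over from the OPE model description.

```python
import torch
import torch.nn as nn

class RepBModel(nn.Module):
    """Representation module (tanh output) plus a diagonal Gaussian
    dynamics head over (state difference, reward). Widths and activations
    are assumptions, not the paper's exact configuration."""
    def __init__(self, state_dim, action_dim, rep_dim=200, hidden=200):
        super().__init__()
        self.rep = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, rep_dim), nn.Tanh())          # phi(s, a)
        out_dim = state_dim + 1                             # delta_s and r
        self.dyn = nn.Sequential(nn.Linear(rep_dim, hidden), nn.SiLU())
        self.mean = nn.Linear(hidden, out_dim)
        self.log_std = nn.Linear(hidden, out_dim)

    def forward(self, s, a):
        z = self.rep(torch.cat([s, a], dim=-1))   # balanced representation
        h = self.dyn(z)
        return z, self.mean(h), self.log_std(h).clamp(-10.0, 2.0)
```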
We use 200 hidden units for all intermediate layers, including the representation layer. Across all domains, we train an ensemble of 7 models and pick the best 5 models based on their validation error on a hold-out set of 1000 transitions from the dataset. The inputs and outputs of the neural network are normalized. We present the pseudo-code of the presented Representation Balancing Offline Model-based RL algorithm below.
Algorithm 1 Representation Balancing Offline Model-based RL
Input: Offline dataset D, previous policy π
Output: Optimized policy π
1: Sample K independent datasets with replacement from D.
2: Train a bootstrapped ensemble of K models $\{\hat{T}_i, \hat{R}_i\}_{i=0}^{K}$ minimizing Eq. (8) (adapted with $\tilde{d}_\phi^\pi$).
3: for repeat = 0, 1, . . . do
4:   for rollout = 0, 1, . . . , B do
5:     Sample an initial rollout state $s_0$ from D.
6:     for t = 0, 1, . . . do
7:       Sample an action $a_t \sim \pi(s_t)$.
8:       Randomly pick $(\hat{T}_i, \hat{R}_i)$ and sample $(s_{t+1}, r_t) \sim (\hat{T}_i, \hat{R}_i)(s_t, a_t)$.
9:       Compute $\tilde{r}_t = r_t - \gamma\lambda \left\|\sqrt{\mathbb{V}_K[\mu(s_t, a_t)]}\right\|_2$ and store $(s_t, a_t, \tilde{r}_t, s_{t+1})$ in $\hat{D}$.
10:      end for
11:    end for
12:   Draw samples from D to compute $\widehat{\mathrm{IPM}}(\tilde{d}_\phi^\pi, d_\phi^D)$.
13:   Draw samples from D and $\hat{D}$ to update the critic Q.
14:   Maximize $\mathbb{E}_{s\sim D\cup\hat{D},\, a\sim\pi(s)}[Q(s,a) - \tau\log\pi(a|s)] - \alpha_\pi\,\widehat{\mathrm{IPM}}(\tilde{d}_\phi^\pi, d_\phi^D)$ to update π.
15: end for
For the results of MOPO (Yu et al., 2020), we ran the code kindly provided by the authors4 without any modification to the algorithm or the hyperparameters. All the algorithms we experimented with (Base, RP, RepB-SDE) share all the hyperparameters in common, except the ones associated with the changed objectives. We run SAC on the full rollouts from the trained ensemble models, as shown in Algorithm 1. The common hyperparameters shared among the algorithms are shown in Table 3. We simply tried the listed hyperparameters and did not tune them further. For RP and RepB-SDE, we penalized the reward from the simulated environments with the standard deviation of the prediction means of the neural network ensembles. We used the standardized outputs of all 7 neural networks to compute the reward penalty. We first searched over the reward penalty coefficient λ ∈ {0, 2, 5, 7, 10, 15} for the value that grants RP the best performance. We shared the same λ for RepB-SDE and searched over αM ∈ {0.01, 0.1, 1} and απ ∈ {0, 0.01, 0.1, 1, 10}. We ran the algorithms for $1.5\times 10^6$ gradient updates, except for HalfCheetah-Medium-Expert, where we ran for $5.0\times 10^6$. In Table 4, we present the standard errors of the results, which were omitted in the main text.
4https://github.com/tianheyu927/mopo
In addition to the performance after the final iteration presented in Table 1, we also present the learning curves of the experiment in Figure 3 and the effect of the choice of αM and απ in Figure 4. Figure 4 shows that the presented regularization is robust to the choice of αM and απ, except for too large αM, consistently improving over RP, which corresponds to αM = 0, απ = 0.
[Figure: Effect of varying αM and απ (Medium-Walker2d-v2).]" } ]
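To make the inner loop of Algorithm 1 above concrete, lines 6–10 could be sketched as follows. This is a minimal sketch, not the authors' code: `models` is an assumed list of `(mean_fn, sample_fn)` pairs exposing the ensemble's predicted means and samples, and the γλ-scaled penalty follows line 9 as written above.

```python
import numpy as np

def penalized_rollout(models, policy, s0, horizon, gamma=0.99, lam=5.0):
    """Roll out one randomly chosen ensemble member per step and penalize
    the reward by the L2 norm of the per-dimension standard deviation of
    the ensemble's predicted means (Algorithm 1, lines 6-10)."""
    data, s = [], s0
    for _ in range(horizon):
        a = policy(s)
        mus = np.stack([mean_fn(s, a) for mean_fn, _ in models])
        penalty = np.linalg.norm(mus.std(axis=0))    # ||sqrt(V_K[mu])||_2
        _, sample_fn = models[np.random.randint(len(models))]
        s_next, r = sample_fn(s, a)
        r_tilde = r - gamma * lam * penalty          # line 9 of Algorithm 1
        data.append((s, a, r_tilde, s_next))
        s = s_next
    return data
```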
2021
REPRESENTATION BALANCING OFFLINE MODEL-BASED REINFORCEMENT LEARNING
SP:7689dac5ea0db9c2a021e33f03d8cdeb9b5e4290
[ "This paper explores the problem of designing a distance measure in the space of the functions parameterized by neural networks. Ideally, such a measure should be independent of the parameterization of the networks. Also, the measure should support quantifying the distance between the networks with different structures and/or different underlying training tasks. The information distance meets this natural requirement. However, information distance is computational infeasible as it is defined by Kolmogorov complexity." ]
We provide a practical distance measure in the space of functions parameterized by neural networks. It is based on the classical information distance, and we propose to replace the uncomputable Kolmogorov complexity with the information measured by the codelength of prequential coding. We also provide a method for directly estimating the expectation of such codelength with limited examples. Empirically, we show that information distance is invariant with respect to different parameterizations of the neural networks. We also verify that information distance can faithfully reflect similarities of neural network functions. Finally, we apply information distance to investigate the relationship between neural network models, and demonstrate the connection between information distance and multiple characteristics and behaviors of neural networks.
[]
[ { "authors": [ "Raman Arora", "Amitabh Basu", "Poorya Mianjy", "Anirbit Mukherjee" ], "title": "Understanding deep neural networks with rectified linear units", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Léonard Blier", "Yann Ollivier" ], "title": "The description length of deep learning models", "venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Gong Cheng", "Junwei Han", "Xiaoqiang Lu" ], "title": "Remote sensing image scene classification: Benchmark and state of the art", "venue": "Proceedings of the IEEE,", "year": 2017 }, { "authors": [ "Rudi Cilibrasi", "Paul M.B. Vitányi" ], "title": "The google similarity distance", "venue": "IEEE Trans. Knowl. Data Eng.,", "year": 2007 }, { "authors": [ "Mircea Cimpoi", "Subhransu Maji", "Iasonas Kokkinos", "Sammy Mohamed", "Andrea Vedaldi" ], "title": "Describing textures in the wild", "venue": "In 2014 IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Laurent Dinh", "Razvan Pascanu", "Samy Bengio", "Yoshua Bengio" ], "title": "Sharp minima can generalize for deep nets", "venue": "Proceedings of the 34th International Conference on Machine Learning, ICML 2017,", "year": 2017 }, { "authors": [ "Tommaso Furlanello", "Zachary Chase Lipton", "Michael Tschannen", "Laurent Itti", "Anima Anandkumar" ], "title": "Born-again neural networks", "venue": "Proceedings of the 35th International Conference on Machine Learning, ICML 2018,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Patrick Helber", "Benjamin Bischke", "Andreas Dengel", "Damian Borth" ], "title": "Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification", "venue": "IEEE J. Sel. Top. Appl. Earth Obs. Remote", "year": 2019 }, { "authors": [ "Geoffrey E. Hinton", "Drew van Camp" ], "title": "Keeping the neural networks simple by minimizing the description length of the weights", "venue": "In Lenny Pitt (ed.), Proceedings of the Sixth Annual ACM Conference on Computational Learning Theory,", "year": 1993 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Forrest N Iandola", "Song Han", "Matthew W Moskewicz", "Khalid Ashraf", "William J Dally", "Kurt Keutzer" ], "title": "Squeezenet: Alexnet-level accuracy with 50x fewer parameters and < 0.5 mb model size", "venue": "arXiv preprint arXiv:1602.07360,", "year": 2016 }, { "authors": [ "Yiding Jiang", "Dilip Krishnan", "Hossein Mobahi", "Samy Bengio" ], "title": "Predicting the generalization gap in deep networks with margin distributions", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yiding Jiang", "Behnam Neyshabur", "Hossein Mobahi", "Dilip Krishnan", "Samy Bengio" ], "title": "Fantastic generalization measures and where to find them", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Simon Kornblith", "Mohammad Norouzi", "Honglak Lee", "Geoffrey E. 
Hinton" ], "title": "Similarity of neural network representations revisited", "venue": "Proceedings of the 36th International Conference on Machine Learning, ICML 2019,", "year": 2019 }, { "authors": [ "Alex Krizhevsky" ], "title": "One weird trick for parallelizing convolutional neural networks", "venue": "arXiv preprint arXiv:1404.5997,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Yann LeCun", "Fu Jie Huang", "Léon Bottou" ], "title": "Learning methods for generic object recognition with invariance to pose and lighting", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition", "year": 2004 }, { "authors": [ "Fei-Fei Li", "Robert Fergus", "Pietro Perona" ], "title": "One-shot learning of object categories", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2006 }, { "authors": [ "Xuhong Li", "Yves Grandvalet", "Rémi Flamary", "Nicolas Courty", "Dejing Dou" ], "title": "Representation transfer by optimal transport", "venue": "arXiv preprint arXiv:2007.06737,", "year": 2020 }, { "authors": [ "Loic Matthey", "Irina Higgins", "Demis Hassabis", "Alexander Lerchner" ], "title": "dsprites: Disentanglement testing sprites dataset", "venue": null, "year": 2017 }, { "authors": [ "Hossein Mobahi", "Mehrdad Farajtabar", "Peter L Bartlett" ], "title": "Self-distillation amplifies regularization in hilbert space", "venue": "arXiv preprint arXiv:2002.05715,", "year": 2020 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "NIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Maria-Elena Nilsback", "Andrew Zisserman" ], "title": "Automated flower classification over a large number of classes", "venue": "In Sixth Indian Conference on Computer Vision, Graphics & Image Processing,", "year": 2008 }, { "authors": [ "Omkar M. Parkhi", "Andrea Vedaldi", "Andrew Zisserman", "C.V. Jawahar" ], "title": "Cats and dogs", "venue": "In 2012 IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Marco Túlio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "why should I trust you?”: Explaining the predictions of any classifier", "venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Yossi Rubner", "Carlo Tomasi", "Leonidas J. Guibas" ], "title": "A metric for distributions with applications to image databases", "venue": "In Proceedings of the Sixth International Conference on Computer Vision", "year": 1998 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. Berg", "Li Fei-Fei" ], "title": "ImageNet Large Scale Visual Recognition Challenge", "venue": "International Journal of Computer Vision (IJCV),", "year": 2015 }, { "authors": [ "Mark Sandler", "Andrew G. 
Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Joshua B Tenenbaum", "Vin De Silva", "John C Langford" ], "title": "A global geometric framework for nonlinear dimensionality reduction", "venue": null, "year": 2000 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "Proceedings of the British Machine Vision Conference 2016,", "year": 2016 }, { "authors": [ "Xiaohua Zhai", "Joan Puigcerver", "Alexander Kolesnikov", "Pierre Ruyssen", "Carlos Riquelme", "Mario Lucic", "Josip Djolonga", "Andre Susano Pinto", "Maxim Neumann", "Alexey Dosovitskiy" ], "title": "A large-scale study of representation learning with the visual task adaptation benchmark", "venue": "arXiv preprint arXiv:1910.04867,", "year": 2019 }, { "authors": [ "Xian Zhang", "Yu Hao", "Xiaoyan Zhu", "Ming Li", "David R. Cheriton" ], "title": "Information distance from a question to an answer", "venue": "Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2007 }, { "authors": [ "Xiao Zhang", "Xingjian Li", "Dejing Dou", "Ji Wu" ], "title": "Measuring information transfer in neural networks", "venue": "arXiv preprint arXiv:2009.07624,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks can be trained to represent complex functions that describe sophisticated input-output relationships, such as image classification and machine translation. Because the functions are highly non-linear and are parameterized in high-dimensional spaces, there is relatively little understanding of the functions represented by deep neural networks. One could interpret deep models by linear approximations (Ribeiro et al., 2016), or from the perspective of piece-wise linear functions, such as in (Arora et al., 2018).\nIf the space of functions representable by neural networks admits a distance measure, then it would be a useful tool to help analyze and gain insight about neural networks. A major difficulty is the vast number of possibilities of parameterizing a function, which makes it difficult to characterize the similarity given two networks. Measuring similarity in the parameter space is straightforward but is restricted to networks with the same structure. Measuring similarity at the output is also restricted to networks trained on the same task. Similarity of representations produced by intermediate layers of networks is proved to be more reliable and consistent (Kornblith et al., 2019), but is not invariant to linear transformations and can fail in some situations, as shown in our experiments.\nIn this paper, we provide a distance measure of functions based on information distance (Bennett et al., 1998), which is independent of the parameterization of the neural network. This also removes the arbitrariness of choosing “where” to measure the similarity in a neural network. Information distance has mostly been used in data mining (Cilibrasi & Vitányi, 2007; Zhang et al., 2007). Intuitively, information distance measures how much information is needed to transform one function to the other. We rely on prequential coding to estimate this quantity. Prequential coding can efficiently encode neural networks and datasets (Blier & Ollivier, 2018). If we regard prequential coding as a compression algorithm for neural networks, then the codelength can give an upper bound of the information quantity in a model.\nWe propose a method for calculating an approximated version of information distance with prequential coding for arbitrary networks. In this method, we use KL-divergence in prequential training and coding, which allow us to directly estimate the expected codelength without any sampling process. Then we perform experiments to demonstrate that this information distance is invariant to the parameterization of the network while also being faithful to the intrinsic similarity of models. Using information distance, we are able to sketch a rough view into the space of deep neural networks and uncover the relationship between datasets and models. We also found that information distance can\nhelp us understand regularization techniques, measure the diversity of models, and predict a model’s ability to generalize." }, { "heading": "2 METHODOLOGY", "text": "Information distance measures the difference between two objects by information quantity. The information distance between two functions fA and fB can be defined as (Bennett et al., 1998):\nd(fA, fB) = max{K(fA|fB),K(fB |fA)} (1)\nThis definition makes use of Kolmogorov complexity: K(fB |fA) is the length of the shortest program that transforms fA into fB , and information distance d is the larger length of either direction. 
(Note that this is not the only way to define information distance with Kolmogorov complexity; however, we settle on this definition for its simplicity.) Intuitively, this is the minimum number of bits we need to encode $f_B$ with the help of $f_A$, or how much information is needed to know $f_B$ if $f_A$ is already known.
Given two functions $f_A : X \to Y$ and $f_B : X \to Y'$ defined on the same input space $X$, each parameterized by a neural network with weights $\theta_A$ and $\theta_B$, we want to estimate the information distance between $f_A$ and $f_B$. The estimation of the Kolmogorov complexity term is done by calculating the codelength of prequential coding, so what we get is an upper bound of $d$, which we denote by $d_p$ ($p$ for prequential coding).
2.1 ESTIMATING $K(f_B|f_A)$ WITH PREQUENTIAL CODING
To send $f_B$ to someone who already knows $f_A$, we generate predictions $y_i$ from $f_B$ using inputs $x_i$ sampled from $X$. Assuming that $\{x_i\}$ is known, we can use prequential coding to send the labels $\{y_i\}$. If we send enough labels, the receiver can use $\{x_i, y_i\}$ to train a model to recover $f_B$. If $f_A$ and $f_B$ have something in common, i.e. $K(f_B|f_A) < K(f_B)$, then with the help of $f_A$ we can reduce the codelength used to transmit $f_B$. A convenient way of doing so is to use $\theta_A$ as the initial model in prequential coding. The codelength of $k$ samples is:
$$L^{\mathrm{preq}}(y_{1:k}|x_{1:k}) := -\sum_{i=1}^{k}\log p_{\hat\theta_i}(y_i|x_{1:i}, y_{1:i-1}) \qquad (2)$$
where $\hat\theta_i$ is the parameter of the model trained on $\{x_{1:i-1}, y_{1:i-1}\}$, and $\hat\theta_1 = \theta_A$. With sufficiently large $k$, the function parameterized by $\hat\theta_k$ would converge to $f_B$.
If both $f_A$ and $f_B$ are classification models, we can sample $y$ from the output distribution of $f_B$. In this case, the codelength (2) not only transmits $f_B$, but also the $k$ specific samples we draw from $f_B$. The information contained in these specific samples is $\sum_{i=1}^{k}\log p_{\theta_B}(y_i|x_i)$. Because we only care about estimating $K(f_B|f_A)$, using the “bits-back protocol” (Hinton & van Camp, 1993) the information of the samples can be subtracted from the codelength, resulting in an estimation of $K(f_B|f_A)$ as $L_k(f_B|f_A)$:
$$L_k(f_B|f_A) = -\sum_{i=1}^{k}\log p_{\theta}(y_i|x_{1:i}, y_{1:i-1}) + \sum_{i=1}^{k}\log p_{\theta_B}(y_i|x_i) \qquad (3)$$
In practice, we want to use $k$ sufficiently large such that $f_{\hat\theta_k}$ can converge to $f_B$, for example by the criterion
$$\mathbb{E}_x[D_{\mathrm{KL}}(f_B(x)\,\|\,f_{\hat\theta_k}(x))] \leq \epsilon \qquad (4)$$
However, empirically we found that this often means a large $k$ is needed, which can make the estimation using (3) unfeasible when the number of available $x$ is small. Also, the exact value of (3) depends on the specific samples used, introducing variance into the estimation.
2.2 THE PRACTICAL INFORMATION DISTANCE $d_p$
We propose to directly estimate the expectation of $L_k(f_B|f_A)$, which turns out to be much more efficient in the number of examples $x$, by leveraging infinite $y$ samples. The expectation of the codelength $\mathbb{E}_{y_{1:k}}[L_k]$ over all possible samples $y_{1:k}$ from $f_B$ is:
$$\begin{aligned}
\mathbb{E}_{y_{1:k}\sim f_B(x_{1:k})}[L_k(f_B|f_A)] &= -\sum_{i=1}^{k}\mathbb{E}_{y_{1:i}}\log p_{\theta}(y_i|x_{1:i}, y_{1:i-1}) + \sum_{i=1}^{k}\mathbb{E}_{y_i}\log p_{\theta_B}(y_i|x_i)\\
&\geq -\sum_{i=1}^{k}\mathbb{E}_{y_i}\log \mathbb{E}_{y_{1:i-1}} p_{\theta}(y_i|x_{1:i}, y_{1:i-1}) + \sum_{i=1}^{k}\mathbb{E}_{y_i}\log p_{\theta_B}(y_i|x_i)\\
&= \sum_{i=1}^{k} D_{\mathrm{KL}}\big(f_B(x_i)\,\big\|\,\mathbb{E}_{y_{1:i-1}}f_{\hat\theta_i}(x_i)\big) \qquad (5)\\
&\approx \sum_{i=1}^{k} D_{\mathrm{KL}}\big(f_B(x_i)\,\big\|\,f_{\bar\theta_i}(x_i)\big) =: L'(f_B|f_A) \qquad (6)
\end{aligned}$$
In (5), $\mathbb{E}_{y_{1:i-1}}f_{\hat\theta_i}(x_i)$ represents an infinite ensemble of models $\hat\theta_i$ estimated from all possible samples $y_{1:i-1}$. We replace this ensemble with a single model $\bar\theta_i$ that is directly trained on all the samples.
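A minimal PyTorch sketch of this L′ estimator might look as follows. This is our illustrative reading, not the authors' code: `model_A` and `model_B` are assumed classifiers returning logits, `xs` is a tensor of inputs $x_{1:k}$, and the number of finetuning steps per example is an arbitrary default.

```python
import copy
import torch
import torch.nn.functional as F

def l_prime(model_A, model_B, xs, steps_per_example=20, lr=3e-4):
    """Estimate L'(f_B|f_A) as in Eq. (6): start from model_A, and for
    i = 1..k accumulate KL(f_B(x_i) || f_theta(x_i)) before finetuning
    theta on model_B's soft outputs over the prefix x_{1:i}."""
    theta = copy.deepcopy(model_A)
    opt = torch.optim.Adam(theta.parameters(), lr=lr)
    total = 0.0
    for i in range(1, len(xs) + 1):
        x_i = xs[i - 1 : i]
        with torch.no_grad():
            p_b = F.softmax(model_B(x_i), dim=-1)
        # KL between f_B's output distribution and the current model's
        total += F.kl_div(F.log_softmax(theta(x_i), dim=-1), p_b,
                          reduction="batchmean").item()
        # train on the prefix with a KL objective, which stands in for
        # "infinite samples" drawn from f_B
        prefix = xs[:i]
        with torch.no_grad():
            target = F.softmax(model_B(prefix), dim=-1)
        for _ in range(steps_per_example):
            loss = F.kl_div(F.log_softmax(theta(prefix), dim=-1), target,
                            reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return total
```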
$\bar\theta_i$ is trained using the KL-divergence as the objective, which is equivalent to training with infinite samples (see Appendix A for the details from (5) to (6)).
The expected codelength $\mathbb{E}[L_k]$ is related, via (6), to the KL-divergence between the output distributions of $f_B$ and $f_{\bar\theta_i}$. Another interpretation of the above estimation is that we finetune model $\theta_A$ with an increasing number of outputs generated by $\theta_B$, and aggregate the KL-divergence between the two models along the way. The more information $f_A$ shares with $f_B$, the faster the KL-divergence decreases, resulting in a lower estimation of $K(f_B|f_A)$. Now $d_p$ is the approximation of (1) we propose in this paper:
$$d_p(f_A, f_B) \triangleq \max\{L'(f_A|f_B), L'(f_B|f_A)\} \qquad (7)$$
2.3 PROPERTIES OF $d_p$
The information distance $d$ in (1) applied to functions defines a metric on the space of functions. We now check whether $d_p$ satisfies the axioms of a metric:
1. $d_p(f_A, f_B) = 0 \Leftrightarrow f_A = f_B$: $f_A = f_B$ if and only if they always produce the same predictions, which is equivalent to $d_p(f_A, f_B) = 0$.
2. $d_p(f_A, f_B) = d_p(f_B, f_A)$: by definition.
3. $d_p(f_A, f_B) \leq d_p(f_A, f_C) + d_p(f_C, f_B)$: whether $d_p$ keeps this property of $d$ depends on the efficiency of prequential coding, which in turn depends on model optimization.
Another important property of the information distance $d$ is its invariance with respect to the parameterization of the function $f$. We found that $d_p$ is also largely invariant to the parameterization of the functions. $d_p$ can be used to compare models that are trained differently, have different structures, or are even trained on different tasks. The only condition is that both models should have sufficient expressibility to allow approximation of each other.
There is also a connection between $L_k(f_B|f_A)$ and the information transfer measure $L_{IT}$ (Zhang et al., 2020):
$$L_{IT}^k(\theta_n) = L^{\mathrm{preq}}_{\theta_0}(y_{1:k}|x_{1:k}) - L^{\mathrm{preq}}_{\theta_n}(y_{1:k}|x_{1:k})$$
As $n\to\infty$, $\theta_n \to \theta_B$, and when $y_i \sim f_B(x_i)$, we have
$$\mathbb{E}[L_{IT}^k(\theta_n)] = \mathbb{E}[L^{\mathrm{preq}}_{\theta_A}(y_{1:k}|x_{1:k})] - \mathbb{E}[L^{\mathrm{preq}}_{\theta_B}(y_{1:k}|x_{1:k})] = \mathbb{E}[L_k(f_B|f_A)] \qquad (8)$$" }, { "heading": "2.4 DATA-DEPENDENCY AND EQUIVALENT INTERPRETATIONS OF DATA", "text": "In machine learning, we often only care about the output of a model on the data distribution of the task. Neural network models are trained on input data from a specific domain; for example, image classification models take natural images in RGB format as valid input. It would be meaningless to discuss the behavior of such a model on non-RGB images. This is an important distinction between a neural network function and a function in the mathematical sense.
This motivates us to take a data-dependent formulation of the distance measure. In this paper, we limit our discussion to the distribution-dependent information distance:
$$d_p(f_A, f_B) = \max\{K(f'_A|f_B), K(f'_B|f_A)\} \qquad (9)$$
where
$$f'_A = \arg\min_{f\in\mathcal{F}^{\epsilon}_A} K(f|f_B), \qquad f'_B = \arg\min_{f\in\mathcal{F}^{\epsilon}_B} K(f|f_A) \qquad (10)$$
are equivalencies of $f_A$ and $f_B$ in the function family below ($*$ can be $A$ or $B$):
$$\mathcal{F}^{\epsilon}_{*} = \{f \,|\, \mathbb{E}_{x\sim D}[D_{\mathrm{KL}}(f(x)\,\|\,f_{*}(x))] \leq \epsilon\} \qquad (11)$$
$\mathcal{F}^{\epsilon}_{*}$ is a set containing all the functions producing outputs almost indistinguishable from $f_{*}$, in the expected sense over $x$ drawn from the data distribution $D$. Because they produce almost identical outputs for $x \sim D$, we call them equivalent interpretations of the data in $D$.
Intuitively, this means that instead of transmitting $f_B$, we can transmit $f'_B$, which is equivalent to $f_B$ on $D$, if $f'_B$ can be transmitted in fewer bits.
A quick note on why data-dependency here, in the context of neural network models, does not break the definition of information distance: if $f$ is a neural network trained on dataset $D$, then $f$ is fully determined by the information in $D$ plus a random seed (which is of negligible information).
Introducing data-dependency enables us to approximate Kolmogorov complexity by coding samples drawn from the data distribution $D$; in other words, we can use the training set for coding." }, { "heading": "3 EMPIRICAL STUDY", "text": "The proposed information distance $d_p$ relies on an estimation of the Kolmogorov complexity of neural networks with the prequential codelength, which unfortunately does not have known theoretical guarantees. Therefore, we validate the performance of $d_p$ mainly with empirical results. We use experiments to show the advantages of $d_p$ and in what situations $d_p$ is useful." }, { "heading": "3.1 EXPERIMENT SETUP", "text": "All experiments in this section are performed using ResNet-56 models (He et al., 2016) on Tiny-ImageNet1, a 200-class image classification dataset. To make the codelength estimation more reliable, we try to achieve a lower codelength in prequential coding by optimizing the model estimation process in sequential coding. We performed a hyper-parameter search to select the optimal hyperparameters that result in the lowest codelength. Unless otherwise stated, we use k = 10000 in experiments throughout this paper, which we found in most cases allows the difference between the coding model and the reference model, $\mathbb{E}_x[D_{\mathrm{KL}}(f_B(x)\,\|\,f_{\hat\theta_k}(x))]$, to converge.
1https://tiny-imagenet.herokuapp.com" }, { "heading": "3.2 INVARIANCES OF INFORMATION DISTANCE", "text": "A prominent advantage of information distance is its independence of parameterization. Neural networks like multi-layer perceptrons can have a very large number of different configurations (units, weights) that correspond to the same function. Because there is no “canonical” way of parameterizing neural networks, comparing the input-output functions represented by different neural networks can be difficult by merely looking at the network parameters.
There exist many metrics for measuring the similarity or distance between neural networks. But because neural networks are so versatile, two neural networks can look similar under one metric while very dissimilar under others. There is no universal definition of similarity, which is precisely the problem we try to solve with $d_p$.
To empirically examine the invariance of $d_p$, we evaluated $d_p$ under different re-parameterizations of a neural network. We also include a number of distance metrics as baselines for comparison. Descriptions of the test scenarios, the baselines, and the results are shown in Figure 1. Table 1 summarizes the observed invariance of the distance measures with a quantitative measure.
The results indicate that $d_p$ is relatively stable under different kinds of re-parameterization of the network and is the most invariant overall. The other distance measures all exhibit a strong dependency on certain kinds of parameterization or are inapplicable for some parameterizations. For re-parameterizations that do not change or only minimally change the function $f$ (scaling, neuron swapping, initialization, architecture), $d_p$ also exhibits minimal change.
For perturbations added to the network, information distance only starts to increase when the perturbation is large enough. This is because only large noise will start to “wipe out” information in the network. $d_p$ is also robust to small adversarial perturbations, while showing that adversarial perturbations destroy information in the network faster than random noise.
In the Initialization experiments, we initialize $\theta_A$ with a network pre-trained on CIFAR-10 (Krizhevsky & Hinton, 2009). $d_p$ increases by 7% compared to random initialization, because $\theta_A$ carries over some information from CIFAR-10, making $\theta_A$ and $\theta_B$ slightly less similar. In the Architecture experiments, if $\theta_A$ uses a different architecture than $\theta_B$ (which is ResNet-56), we also observe an increase in $d_p$. The more different the models are (in terms of the number of layers) from ResNet-56, the higher the distance $d_p$. This indicates that while $d_p$ is largely invariant to model parameterizations, it is also consistent with intuitive similarities of models. This is not observed with the EMD and CKA distances." }, { "heading": "3.3 IS INFORMATION DISTANCE FAITHFUL?", "text": "Having seen the invariance properties of $d_p$, we next check whether $d_p$ faithfully reflects the true model distance. Empirically, we verify whether $d_p$ is aligned with intuitive indicators of model distance. We experiment with two settings: examining model interpolation and the model training progress. These can serve as a sanity check that $d_p$ does reflect differences in the information of models.
In model interpolation we use a straightforward way to manipulate the distance between models: we take two ResNet-56 models $\theta_A$ and $\theta_B$ trained on Tiny-ImageNet, and perform linear interpolation in the parameter space to get the model $\theta = (1-c)\,\theta_A + c\,\theta_B$. As $c$ changes from 0 to 1, the model $\theta$ goes smoothly from $\theta_A$ to $\theta_B$. We apply $d_p$ to measure the distance from the interpolated model $\theta$ to $\theta_B$ for different $c$. The results are shown in Figure 2.
As we interpolate two functions $f_A$ and $f_B$ in the parameter space, if $\theta_A$ and $\theta_B$ are parameterized similarly, we observe $d_p(f, f_B)$ to monotonically decrease as $f$ gets closer to $f_B$ (Figure 2, left). At the beginning, when $c$ is small, an increase in $c$ introduces more “fresh information” about $f_B$, thus $d_p$ decreases faster than later in the interpolation. On the other hand, if we interpolate two functions that are parameterized differently, because linear mixing of $\theta_A$ and $\theta_B$ in the parameter space leads to a degraded network, the distance would first increase and then decrease, indicating a loss of information in the middle of the interpolation. Overall, the general trend of $d_p$ agrees with advanced similarity measures like representation EMD and CKA in this scenario.
In the training progress experiment, we use $d_p$ to examine the training progress of a model. Starting from $\theta_0$, we train the model for $n$ epochs until convergence and measure the distance from the $i$-th epoch model $\theta_i$ to the initial model $\theta_0$ as well as to the final model $\theta_n$. Results are shown in Figure 3.
As training progresses after each epoch, $d_p(f_i, f_0)$ steadily decreases and $d_p(f_i, f_n)$ increases. The change in distance is faster in earlier epochs because most of the learning happens in the first few epochs. In this scenario, $d_p$ is monotonic with training and correctly depicts the training progress.
Both representation EMD and CKA fail to show the dynamics of training beyond the first epoch.
To summarize, parameter space distances fail when function similarity does not correspond to similarity of parameter values, and representation space distances can be too noisy to be reliable when similarity is high. Only the information distance $d_p$ remains faithful in both scenarios.
2To avoid clutter in the graphs, we did not include the L2 and cosine measures in Figures 2 and 3, as they fail basic invariance tests in Section 3.2. Full results are given in Appendix B.2." }, { "heading": "4 APPLICATION", "text": "To illustrate the utility of a universal function distance, we provide a few scenarios where we use $d_p$ for understanding and making predictions." }, { "heading": "4.1 SKETCHING THE GEOMETRY OF DATA AND MODEL SPACE", "text": "A distance measure can help us understand the relationship between datasets and between models. Datasets and models usually live in very high-dimensional spaces, which makes it hard to directly perform a comparison. Instead, we can use $d_p$ to get the information distance between datasets and models. In computer vision there is a myriad of datasets and model structures, and we use the Visual Task Adaptation Benchmark (VTAB) (Zhai et al., 2019) as a collection of vision datasets. On each dataset, a model is trained to represent the input-output function of the task. Then we use $d_p$ to measure the pairwise distances between these functions. To help visualize the relationships, we use Isometric Mapping (Tenenbaum et al., 2000), a manifold learning algorithm, to generate three-dimensional embeddings for each function. The distances between points in the three-dimensional space are optimized to preserve the original structure.
Distances can tell a lot about the relationship between models. In the nine large datasets of VTAB, the datasets cluster largely according to the three categories proposed in VTAB (natural, specialized, and structured). CIFAR-100 is very different from any other dataset, but is relatively closer to the satellite image datasets than to the artificial shape datasets. SVHN (Netzer et al., 2011) is close to the 2D shape datasets. The four small datasets are evenly distributed in the space: no pair of them is very similar. In terms of model architecture, the ResNet variants are relatively similar, while AlexNet (Krizhevsky, 2014) and VGG (Simonyan & Zisserman, 2014) are farther away. VGG with batch normalization is closer to ResNet than without. ResNet-50, ResNeXt-50 (Xie et al., 2017) and WideResNet-50 (Zagoruyko & Komodakis, 2016) are closest, as they are indeed very similar." }, { "heading": "4.2 UNDERSTANDING REGULARIZATIONS", "text": "Regularization techniques like L2 regularization can bias the learned neural networks toward less complex functions, while for techniques like dropout (Srivastava et al., 2014) and self-distillation (Furlanello et al., 2018), the regularization effect may be less straightforward to explain. We can use $d_p$ to examine the (information) complexity of a network $f$ by measuring its distance $d_p(f, 0)$ to an empty function.
From Figure 5, we observe that all the listed techniques result in a reduction of $d_p(f, 0)$, which means that the information complexity of the model function $f$ is reduced. For weight decay, the information complexity only starts to decrease after the regularization coefficient is larger than a threshold. Self-distillation has a similar effect to regularization, with the number of distillation iterations controlling the regularization strength. This agrees with the theoretical analysis in (Mobahi et al., 2020). Label smoothing and dropout also result in simpler models, highlighting their regularization effect."
}, { "heading": "4.3 ENSEMBLES AND MODEL DIVERSITY", "text": "Distance can be used as an indicator of model diversity: the larger dp between models, the more diverse the models are. Ensembling is a common technique to use the consensus of multiple models to deliver superior performance than a single model. We speculate that a larger model diversity will result in more performance gain from ensembles.\nTo verify this connection, we train a number of models on Tiny-Imagenet, all to the same performance on the validation set, but with different initializations and different subsets of the training set. Then we choose models in pairs to measure their ensemble performance as well as the distance dp(f1, f2) between them. The result is given in Figure 6, and we found a clear correlation between dp and ensemble performance, and the relationship is about linear. This also indicates that dp captures model diversity." }, { "heading": "4.4 PREDICTING GENERALIZATION", "text": "Finally, dp is also linked with model generalization. Generalization of neural networks is heavily affected by hyper-parameters and optimization. There have been several works aiming to find the relationship between generalization performance and properties of the network, but it turns out that predicting generalization gap can be a challenging task (Jiang et al., 2019; 2020).\nWe perform a small-scale experiment to illustrate the connection between information distance and generalization gap. We train a number of models with different hyper-parameters (batch size, learning rate, optimizer, etc.) all to the same loss on the training set, and then measure the distance to a random model by dp(f,0). In Figure 7, we observe that the information complexity of the model is also linked with generalization gap, which also turns out to be a roughly linear relationship. Models that generalize better are farther away from a random model than less performing models." }, { "heading": "5 DISCUSSION AND CONCLUSION", "text": "The proposed distance dp is based on information distance defined with Komolgorov complexityK. We do not attempt to give a good estimation ofK, but instead relying on the efficiency of prequential coding, we empirically illustrate that dp share the invariance properties of information distance, and reflects the similarity relationships of functions parameterized by neural networks. We also found that dp is linked with behaviors of models, making it a potential tool for analyzing neural networks.\nThe most notable difference between dp and other similarity metrics is universality. Theoretically rooted in information distance, dp is independent of parameterization and widely applicable in situations involving different tasks and models. However, dp’s utilization of prequential coding also introduces limitations that it might not work in situations where prequential coding fails, for example, when f cannot be efficiently approximated by neural networks.\ndp could introduce a potential scale-free, or even parameterization-free geometry of space spanned by neural models. Optimization with manifold descent by dp could also remove the dependency on parameterization, thus avoiding ill-posed conditions in some parameterizations (Dinh et al., 2017)." }, { "heading": "B EXPERIMENT DETAILS", "text": "B.1 INVARIANCE EXPERIMENTS\nWe provide details for re-parametreizations used in invariance experiments:\n• Scaling: the weights in layer i is multiplied by c, and the weights in layer i− 1 multiplied by 1/c. 
For ReLU networks, this keeps the output of the network unchanged.
• Neuron swapping: we randomly permute $c\cdot(\text{total number of neurons})$ neurons in layer $i$. We also correspondingly permute the input of layer $i+1$ so that the network output is unchanged.
• Perturbation: we add Gaussian noise with zero mean and standard deviation $c$ to each individual weight of the network.
• Adversarial perturbation: we add a vector $v$ of standard deviation $c$ to the weight vector of the network, and optimize the vector to maximize the deviation of the second-to-last layer representations, i.e. $\max_{v:\,\mathrm{std}(v)=c} \mathbb{E}_x[(f^r_\theta(x) - f^r_{\theta+v}(x))^2]$.
• Initialization: we experiment with random initialization and with initializing from a network pre-trained on another dataset (CIFAR-10).
• Architecture: we use ResNet architectures with different widths and depths. ResNet-56-s refers to ResNet-56 with half the width in each layer.
Next we list the baseline distance (or similarity) measures and describe how to calculate them for networks $f_A$ and $f_B$:
Parameter space distances: we denote the $i$-th layer weight matrix of network $f_A$ as $w^i_A$. For the L2 and cosine measures, we first flatten and concatenate all weight matrices $w^i_A$ and biases of the network into a long vector $w^{all}_A$ (excluding parameters in batch normalization layers, because some statistic variables in them can be large and dominate the norm of the vector $w$).
• L2: $d_{l2} = \|w^{all}_A - w^{all}_B\|_2$.
• Cosine: $d_{cosine} = w^{all}_A \cdot w^{all}_B / (\|w^{all}_A\|_2 \|w^{all}_B\|_2)$.
• EMD: we use “Optimal Transport of Neurons” in (Li et al., 2020). The distance matrix $M$ is taken to be the pairwise L2 distance between the weights of each neuron,
$$M^i_{mn} = \|w^i_{A,m\cdot} - w^i_{B,n\cdot}\|_2.$$
The EMD distance is the optimal transport cost of matching neurons from one network to neurons in the other network,
$$d_{emd} = \min_{P\in\Pi(\mu,\nu)} \langle P, M\rangle_F,$$
where $P$ is the optimal transport plan.
Representation space distances: we sample $x_i$ from the data distribution $D$ and denote the output representation vector of the second-to-last layer of model $f_A$ by $f_{Ai}$.
• L2: $d_{l2} = \frac{1}{k}\sum_{i=1}^{k}\|f_{Ai} - f_{Bi}\|_2$.
• Cosine: $d_{cosine} = \frac{1}{k}\sum_{i=1}^{k}\left[f_{Ai}\cdot f_{Bi} / (\|f_{Ai}\|_2 \|f_{Bi}\|_2)\right]$.
• EMD: same as in “parameter space distances”, except that the distance matrix $M$ is taken to be the pairwise L2 distance between the activation vectors of each neuron,
$$M_{mn} = \Big(\sum_{i=1}^{k}(f_{Aim} - f_{Bin})^2\Big)^{1/2}.$$
• Linear CKA: we use the implementation provided by (Kornblith et al., 2019):
$$d_{cka} = \frac{\|f_{B\cdot}^{\top} f_{A\cdot}\|^2_F}{\|f_{A\cdot}^{\top} f_{A\cdot}\|_F\,\|f_{B\cdot}^{\top} f_{B\cdot}\|_F},$$
where $f_{A\cdot}$ is a matrix whose $k$ rows are the vectors $f_{A1}, \ldots, f_{Ak}$.
Output space distances: we use the output distributions of $f_A$ and $f_B$.
• KL-divergence: $\mathbb{E}_x[D_{\mathrm{KL}}(f_A(x)\,\|\,f_B(x))]$.
B.2 MORE RESULTS OF SECTION 3.3
Figures 8 and 9 show the results of the model interpolation experiments and the training progress experiments, for all distance measures studied in this paper. We also show whether each method gives the correct trend, and whether it can be used to identify different models.
B.3 GEOMETRY EXPERIMENTS
From the 19 datasets included in VTAB (Zhai et al., 2019), we were able to download 13 datasets for use in this work.
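As a reference point for the Linear CKA measure defined in B.1 above, a direct NumPy transcription of the formula might look as follows. This is a sketch, not the cited implementation; following Kornblith et al. (2019), it assumes the representation matrices have been column-centered beforehand.

```python
import numpy as np

def linear_cka(f_a: np.ndarray, f_b: np.ndarray) -> float:
    """Linear CKA between two representation matrices with k rows
    (examples); columns are assumed centered, as in Kornblith et al."""
    num = np.linalg.norm(f_b.T @ f_a, ord="fro") ** 2
    den = (np.linalg.norm(f_a.T @ f_a, ord="fro")
           * np.linalg.norm(f_b.T @ f_b, ord="fro"))
    return float(num / den)
```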
Because the dataset sizes vary greatly among the 13 datasets, we divide them into two groups: larger datasets (size > 10000), which include:
• cifar100: CIFAR-100 (Krizhevsky & Hinton, 2009)
• svhn: SVHN (Netzer et al., 2011)
• eurosat: EuroSAT (Helber et al., 2019)
• resisc45: Resisc45 (Cheng et al., 2017)
• dsprites position: dSprites/location (Matthey et al., 2017)
• dsprites orientation: dSprites/orientation
• smallnorb azimuth: SmallNORB/azimuth (LeCun et al., 2004)
• smallnorb elevation: SmallNORB/elevation
• dmlab: DMLab (Beattie et al., 2016)
and smaller datasets (size < 10000), which include:
• caltech101: Caltech101 (Li et al., 2006)
• dtd: DTD (Cimpoi et al., 2014)
• oxford flowers102: Flowers102 (Nilsback & Zisserman, 2008)
• oxford iiit pet: Pets (Parkhi et al., 2012)
For the larger datasets, we use k = 10000 as in the other experiments. For the smaller datasets, we use k = 2000. The model is ResNet-56 trained from scratch.
In the model geometry experiments, we use the following models trained on ImageNet, as provided by torchvision3:
• resnet18: ResNet-18 (He et al., 2016)
• resnet34: ResNet-34
• resnet50: ResNet-50
• vgg11: VGG-11 without batch normalization (Simonyan & Zisserman, 2014)
• vgg11 bn: VGG-11 with batch normalization
• alexnet: AlexNet (Krizhevsky, 2014)
• resnext50: ResNeXt-50-32x4d (Xie et al., 2017)
• wide resnet50: WideResNet-50-2 (Zagoruyko & Komodakis, 2016)
• densenet121: Densenet-121 (Huang et al., 2017)
• squeezenet: SqueezeNet 1.1 (Iandola et al., 2016)
• mobilenet: MobileNet V2 (Sandler et al., 2018)
The codelength is calculated on the training set of ILSVRC 2012 (Russakovsky et al., 2015).
B.4 ENSEMBLE EXPERIMENTS
We train multiple ResNet-56 models with 2 different random initializations and half of the examples sampled from the Tiny-ImageNet training set. This means the training examples seen by each model can overlap from 0% to 100%. We then select two models out of this collection and ensemble them. The ensemble performance and the distance between the two models measured by $d_p(f_A, f_B)$ are given in Table 2. We list pairs with the same or different initialization, and with different overlap in training examples. Generally speaking, the less the training examples overlap, the larger the distance between the models. Different initializations can also make the models more dissimilar. Note that from Figure 6, we see that the distance $d_p(f_A, f_B)$ is correlated with ensemble performance regardless of whether the diversity comes from differences in training examples or differences in initializations.
B.5 GENERALIZATION EXPERIMENTS
We run experiments on CIFAR-10 with different hyperparameters and model configurations, and in all configurations we train the model to a cross-entropy loss of 0.1 on the training set. We then measure the generalization gap as the loss on the test set minus the loss on the training set. Starting from a default configuration (which uses the same hyperparameters as the other experiments in this paper), each time we modify one of the hyperparameters. Results are listed in Table 3. In terms of studying the generalization gap, our experiments are far less thorough than those in (Jiang et al., 2020), but here we would like to illustrate the connection between $d_p$ and the generalization gap under different experiment settings, without spending too many machine hours." } ]
2020
null
SP:3f8deffff011d2fb7cc8d38f8e7e28ede4e632ca
[ "In the recent literature there has been a rise in the number of papers which attempt to verify neural networks. The specification of the verification problems often gets adapted according to the application in mind. More specifically, for image classification networks, the problem is to prove that the output of the neural network does not flip for small perturbations to the pixel values. For a robotic setting, the problem is often safety and convergence to some goal state. Where the neural network operates in closed loop with the system dynamics. " ]
Motivated by the need to reliably characterize the robustness of deep neural networks, researchers have developed verification algorithms for deep neural networks. Given a neural network, the verifiers aim to answer whether certain properties are guaranteed with respect to all inputs in a space. However, little attention has been paid to floating point numerical error in neural network verification. We exploit floating point errors in the inference and verification implementations to construct adversarial examples for neural networks that a verifier claims to be robust with respect to certain inputs. We argue that, to produce sound verification results, any verification system must accurately (or conservatively) model the effects of any floating point computations in the network inference or verification system.
[]
[ { "authors": [ "Tahmid Abtahi", "Colin Shea", "Amey Kulkarni", "Tinoosh Mohsenin" ], "title": "Accelerating convolutional neural network with FFT on embedded hardware", "venue": "IEEE Transactions on Very Large Scale Integration (VLSI) Systems,", "year": 2018 }, { "authors": [ "Sylvie Boldo", "Guillaume Melquiond" ], "title": "Computer Arithmetic and Formal Proofs: Verifying Floating-point Algorithms with the Coq System", "venue": null, "year": 2017 }, { "authors": [ "Rudy Bunel", "Jingyue Lu", "Ilker Turkaslan", "P Kohli", "P Torr", "P Mudigonda" ], "title": "Branch and bound for piecewise linear neural network verification", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In 2017 ieee symposium on security and privacy (sp),", "year": 2017 }, { "authors": [ "Chih-Hong Cheng", "Georg Nührenberg", "Harald Ruess" ], "title": "Maximum resilience of artificial neural networks", "venue": "In International Symposium on Automated Technology for Verification and Analysis,", "year": 2017 }, { "authors": [ "Sharan Chetlur", "Cliff Woolley", "Philippe Vandermersch", "Jonathan Cohen", "John Tran", "Bryan Catanzaro", "Evan Shelhamer" ], "title": "cudnn: Efficient primitives for deep learning", "venue": "arXiv preprint arXiv:1410.0759,", "year": 2014 }, { "authors": [ "Florian Corzilius", "Ulrich Loup", "Sebastian Junges", "Erika Ábrahám" ], "title": "Smt-rat: an smt-compliant nonlinear real arithmetic toolbox", "venue": "In International Conference on Theory and Applications of Satisfiability Testing,", "year": 2012 }, { "authors": [ "Souradeep Dutta", "Susmit Jha", "Sriram Sankaranarayanan", "Ashish Tiwari" ], "title": "Output range analysis for deep feedforward neural networks", "venue": "In NASA Formal Methods Symposium,", "year": 2018 }, { "authors": [ "Krishnamurthy Dvijotham", "Sven Gowal", "Robert Stanforth", "Relja Arandjelovic", "Brendan O’Donoghue", "Jonathan Uesato", "Pushmeet Kohli" ], "title": "Training verified learners with learned verifiers", "venue": "arXiv preprint arXiv:1805.10265,", "year": 2018 }, { "authors": [ "Ruediger Ehlers" ], "title": "Formal verification of piece-wise linear feed-forward neural networks", "venue": "In International Symposium on Automated Technology for Verification and Analysis,", "year": 2017 }, { "authors": [ "Matteo Fischetti", "Jason Jo" ], "title": "Deep neural networks and mixed integer linear optimization. 
Constraints", "venue": null, "year": 2018 }, { "authors": [ "Timon Gehr", "Matthew Mirman", "Dana Drachsler-Cohen", "Petar Tsankov", "Swarat Chaudhuri", "Martin Vechev" ], "title": "Ai2: Safety and robustness certification of neural networks with abstract interpretation", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2018 }, { "authors": [ "Dario Guidotti", "Francesco Leofante", "Luca Pulina", "Armando Tacchella" ], "title": "Verification of neural networks: Enhancing scalability through pruning", "venue": "arXiv preprint arXiv:2003.07636,", "year": 2020 }, { "authors": [ "Xiaowei Huang", "Marta Kwiatkowska", "Sen Wang", "Min Wu" ], "title": "Safety verification of deep neural networks", "venue": "In International Conference on Computer Aided Verification,", "year": 2017 }, { "authors": [ "Itay Hubara", "Matthieu Courbariaux", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Binarized neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Kai Jia", "Martin Rinard" ], "title": "Efficient exact verification of binarized neural networks", "venue": "arXiv preprint arXiv:2005.03597,", "year": 2020 }, { "authors": [ "Guy Katz", "Clark Barrett", "David L Dill", "Kyle Julian", "Mykel J Kochenderfer" ], "title": "Reluplex: An efficient smt solver for verifying deep neural networks", "venue": "In International Conference on Computer Aided Verification,", "year": 2017 }, { "authors": [ "Andrew Lavin", "Scott Gray" ], "title": "Fast algorithms for convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Alessio Lomuscio", "Lalit Maganti" ], "title": "An approach to reachability analysis for feed-forward relu neural networks", "venue": "arXiv preprint arXiv:1706.07351,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Antoine Miné" ], "title": "Relational abstract domains for the detection of floating-point run-time errors", "venue": "In European Symposium on Programming,", "year": 2004 }, { "authors": [ "Matthew Mirman", "Timon Gehr", "Martin Vechev" ], "title": "Differentiable abstract interpretation for provably robust neural networks", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Nina Narodytska", "Shiva Kasiviswanathan", "Leonid Ryzhyk", "Mooly Sagiv", "Toby Walsh" ], "title": "Verifying properties of binarized deep neural networks", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy S Liang" ], "title": "Semidefinite relaxations for certifying robustness to adversarial examples", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Philipp Rümmer", "Thomas Wahl" ], "title": "An smt-lib theory of binary floating-point arithmetic", "venue": "In International Workshop on Satisfiability Modulo Theories (SMT),", "year": 2010 }, { "authors": [ "Hadi Salman", "Greg Yang", "Huan Zhang", "Cho-Jui Hsieh", "Pengchuan Zhang" ], "title": "A convex relaxation barrier to tight robustness verification of neural networks", "venue": "In Advances in Neural Information 
Processing Systems,", "year": 2019 }, { "authors": [ "Karsten Scheibler", "Leonore Winterer", "Ralf Wimmer", "Bernd Becker" ], "title": "Towards verification of artificial neural networks", "venue": "In MBMV, pp", "year": 2015 }, { "authors": [ "Andy Shih", "Adnan Darwiche", "Arthur Choi" ], "title": "Verifying binarized neural networks by angluinstyle learning", "venue": "In International Conference on Theory and Applications of Satisfiability Testing,", "year": 2019 }, { "authors": [ "Gagandeep Singh", "Timon Gehr", "Matthew Mirman", "Markus Püschel", "Martin Vechev" ], "title": "Fast and effective robustness certification", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Gagandeep Singh", "Timon Gehr", "Markus Püschel", "Martin Vechev" ], "title": "An abstract domain for certifying neural networks", "venue": "Proc. ACM Program. Lang.,", "year": 2019 }, { "authors": [ "Robert Skeel" ], "title": "Roundoff error and the patriot missile", "venue": "SIAM News,", "year": 1992 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna Estrach", "Dumitru Erhan", "Ian Goodfellow", "Robert Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Vincent Tjeng", "Kai Y. Xiao", "Russ Tedrake" ], "title": "Evaluating robustness of neural networks with mixed integer programming", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Florian Tramer", "Dan Boneh" ], "title": "Adversarial training and robustness for multiple perturbations", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Shiqi Wang", "Kexin Pei", "Justin Whitehouse", "Junfeng Yang", "Suman Jana" ], "title": "Formal security analysis of neural networks using symbolic intervals", "venue": "In 27th {USENIX} Security Symposium ({USENIX} Security", "year": 2018 }, { "authors": [ "Lily Weng", "Huan Zhang", "Hongge Chen", "Zhao Song", "Cho-Jui Hsieh", "Luca Daniel", "Duane Boning", "Inderjit Dhillon" ], "title": "Towards fast computation of certified robustness for ReLU networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Eric Wong", "J Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "arXiv preprint arXiv:1711.00851,", "year": 2017 }, { "authors": [ "Eric Wong", "Leslie Rice", "J. Zico Kolter" ], "title": "Fast is better than free: Revisiting adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Kai Y. Xiao", "Vincent Tjeng", "Nur Muhammad (Mahi) Shafiullah", "Aleksander Madry" ], "title": "Training for faster adversarial robustness verification via inducing reLU stability", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Huan Zhang", "Tsui-Wei Weng", "Pin-Yu Chen", "Cho-Jui Hsieh", "Luca Daniel" ], "title": "Efficient neural network robustness certification with general activation functions", "venue": "Advances in Neural Information Processing Systems", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) are known to be vulnerable to adversarial inputs (Szegedy et al., 2014), which are images, audio, or texts indistinguishable to human perception that cause a DNN to give substantially different results. This situation has motivated the development of network verification algorithms that claim to prove the robustness of a network (Bunel et al., 2020; Tjeng et al., 2019; Salman et al., 2019), specifically that the network produces identical classifications for all inputs in a perturbation space around a given input.\nVerification algorithms typically reason about the behavior of the network assuming real-valued arithmetic. In practice, however, the computation of both the verifier and the neural network is performed on physical computers that use floating point numbers and floating point arithmetic to approximate the underlying real-valued computations. This use of floating point introduces numerical error that can potentially invalidate the guarantees that the verifiers claim to provide. Moreover, the existence of multiple software and hardware systems for DNN inference further complicates the situation, because different implementations exhibit different numerical error characteristics.\nWe present concrete instances where numerical error leads to unsound verification of real-valued networks. Specifically, we train robust networks on the MNIST and CIFAR10 datasets. We work with the MIPVerify complete verifier (Tjeng et al., 2019) and several inference implementations included in the PyTorch (Paszke et al., 2019) framework. For each implementation, we construct image pairs (x0,xadv) where x0 is a brightness modified natural image, such that the implementation classifies xadv differently from x0, xadv falls in a `∞-bounded perturbation space around x0, and the verifier incorrectly claims that no such adversarial image xadv exists for x0 within the perturbation space. Moreover, we show that the incomplete verifier CROWN is also vulnerable to floating point error. Our method of constructing adversarial images is not limited to our setting, and it is applicable to other verifiers that do not soundly model floating point arithmetic." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Training robust networks: Researchers have developed various techniques to train robust networks (Madry et al., 2018; Mirman et al., 2018; Tramer & Boneh, 2019; Wong et al., 2020). Madry et al. formulate the robust training problem as minimizing the worst loss within the input perturbation and propose to train robust networks on the data generated by the Projected Gradient Descent\n(PGD) adversary (Madry et al., 2018). In this work we consider robust networks trained with the PGD adversary.\nComplete verification: The goal of complete verification (a.k.a. exact verification) methods is to either prove the property being verified or provide a counterexample to disprove it. Complete verification approaches have formulated the verification problem as a Satisfiability Modulo Theories (SMT) problem (Scheibler et al., 2015; Huang et al., 2017; Katz et al., 2017; Ehlers, 2017; Bunel et al., 2020) or as a Mixed Integer Linear Programming (MILP) problem (Lomuscio & Maganti, 2017; Cheng et al., 2017; Fischetti & Jo, 2018; Dutta et al., 2018; Tjeng et al., 2019). 
While SMT solvers are able to model exact floating point arithmetic (Rümmer & Wahl, 2010) or exact real arithmetic (Corzilius et al., 2012), deployed SMT solvers for verifying neural networks all use inexact floating point arithmetic to reason about the neural network inference, for efficiency reasons. MILP solvers work directly with floating point, do not attempt to exactly model real arithmetic, and therefore exhibit numerical error. Since floating point arithmetic is not associative, different neural network implementations may produce different results for the same neural network, implying that any sound verifier for this class of networks must reason about the specific floating point error characteristics of the neural network implementation at hand. To the best of our knowledge, no prior work formally recognizes the problem of floating point error in neural network complete verification or exploits floating point error to invalidate verification results.
Incomplete verification: On the spectrum of the tradeoff between completeness and scalability, incomplete methods (a.k.a. certification methods) aspire to deliver more scalable verification by adopting over-approximation, while accepting that in certain cases they can neither prove nor disprove the property. There is a large body of related research (Wong & Kolter, 2017; Weng et al., 2018; Gehr et al., 2018; Zhang et al., 2018; Raghunathan et al., 2018; Dvijotham et al., 2018; Mirman et al., 2018; Singh et al., 2019). Salman et al. (2019) have unified most of the relaxation methods under a common convex relaxation framework. Their results suggest that there is an inherent barrier to tight verification via the layer-wise convex relaxations captured by their framework. We highlight that floating point error of implementations that use a direct dot product formulation has been accounted for in some certification frameworks (Singh et al., 2018; 2019) by maintaining upper and lower rounding bounds for sound floating point arithmetic (Miné, 2004). Such frameworks should be extensible to model numerical error in more sophisticated implementations like the Winograd convolution (Lavin & Gray, 2016), but the effectiveness of this extension remains to be studied. Most of the certification algorithms, however, have not considered floating point error and may be vulnerable to attacks that exploit this deficiency.
Floating point arithmetic: Floating point is widely adopted as an approximate representation of real numbers in digital computers. After each calculation, the result is rounded to the nearest representable value, which induces roundoff error. In the field of neural networks, the SMT-based verifier Reluplex (Katz et al., 2017) has been observed to produce false adversarial examples due to floating point error (Wang et al., 2018). The MILP-based verifier MIPVerify (Tjeng et al., 2019) has been observed to give NaN results when verifying pruned neural networks (Guidotti et al., 2020). Such floating point unsoundness has only been observed incidentally, as an unexpected side effect of running large-scale benchmarks. No prior work, however, tries to systematically invalidate neural network verification results by exploiting floating point error.
The IEEE-754 (IEEE, 2008) standard defines the semantics of operations and correct rounding behavior.
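To make the non-associativity concrete, here is a minimal NumPy sketch (an illustration we add here, not an experiment from the paper) evaluating the same three single-precision numbers in two orders; because each intermediate result is rounded, the two orders disagree:
```python
import numpy as np

a, b, c = np.float32(1e8), np.float32(-1e8), np.float32(0.1)

left = (a + b) + c   # (1e8 - 1e8) + 0.1  ->  0.1
right = a + (b + c)  # -1e8 + 0.1 rounds back to -1e8, so the 0.1 is lost
print(left, right, left == right)  # 0.1 0.0 False
```
An inference implementation that reorders an accumulation (e.g. for vectorization) is effectively choosing between such evaluation orders, which is why different implementations of the same network can disagree.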
On an IEEE-754 compliant implementation, computing multi-step floating point expressions that are equivalent in the real domain may result in different final roundoff errors, because rounding is performed after each step; this complicates the error analysis. Research on estimating floating point roundoff error and verifying floating point programs has a long history and is actively growing (Boldo & Melquiond, 2017), but we are unaware of any attempt to apply these tools to obtain a sound verifier for any neural network inference implementation. Any such verifier must reason soundly about floating point errors in both the verifier and the neural network inference algorithm. The failure to incorporate floating point error in software systems has caused real-world disasters. For example, in 1992, a Patriot missile missed its target and led to casualties due to floating point roundoff error related to time calculation (Skeel, 1992)." }, { "heading": "3 PROBLEM DEFINITION", "text": "" }, { "heading": "3.1 ADVERSARIAL ROBUSTNESS OF NEURAL NETWORKS", "text": "We consider 2D image classification problems. Let y = NN (x; W) denote the classification confidence given by a neural network with weight parameters W for an input x, where x ∈ [0, 1]^(m×n×c) is an image with m rows and n columns of pixels each containing c color channels represented by floating point values in the range [0, 1], and y ∈ R^k is a logits vector containing the classification scores for each of the k classes. The class with the highest score is the classification result of the neural network.
For a logits vector y and a target class number t, we define the Carlini-Wagner (CW) loss (Carlini & Wagner, 2017) as the score of the target class minus the maximal score of the other classes:
LCW (y, t) = y_t − max_{i≠t} y_i (1)
Note that x is classified as an instance of class t if and only if LCW (NN (x; W), t) > 0, assuming no two classes have equal scores.
Adversarial robustness of a neural network is defined for an input x0 and a perturbation bound ε, such that the classification result is stable within allowed perturbations:
∀x ∈ Adv (x0) : LCW (NN (x; W), t0) > 0, where t0 = argmax NN (x0; W) (2)
In this work we focus on ℓ∞-norm bounded perturbations:
Adv (x0) = {x | ‖x − x0‖∞ ≤ ε ∧ min x ≥ 0 ∧ max x ≤ 1} (3)" }, { "heading": "3.2 FINDING ADVERSARIAL EXAMPLES FOR VERIFIED NETWORKS VIA EXPLOITING NUMERICAL ERROR", "text": "Due to the inevitable presence of numerical error in both the network inference system and the verifier, the exact specification of NN (·; W) (i.e., a bit-level accurate description of the underlying computation) is not clearly defined in (2). We consider the following implementations of convolutional layers included in the PyTorch framework to serve as our candidate definitions of the convolutional layers in NN (·; W); all other layers use the default PyTorch implementation:
• NNC,M (·; W): A matrix-multiplication-based implementation on x86/64 CPUs. The convolution kernel is copied into a matrix that describes the dot product to be applied on the flattened input for each output value.
• NNC,C (·; W): The default convolution implementation on x86/64 CPUs.
• NNG,M (·; W): A matrix-multiplication-based implementation on NVIDIA GPUs.
• NNG,C (·; W): A convolution implementation using the IMPLICIT_GEMM algorithm from the cuDNN library (Chetlur et al., 2014) on NVIDIA GPUs.
• NNG,CWG (·; W): A convolution implementation using the WINOGRAD_NONFUSED algorithm from the cuDNN library (Chetlur et al., 2014) on NVIDIA GPUs. It is based on the Winograd fast convolution algorithm (Lavin & Gray, 2016), which has much higher numerical error compared to the others.
For a given implementation NNimpl (·; W), our method finds pairs of (x0, xadv) represented as single precision floating point numbers such that
1. x0 and xadv are in the dynamic range of images: min x0 ≥ 0, min xadv ≥ 0, max x0 ≤ 1, and max xadv ≤ 1.
2. xadv falls in the perturbation space of x0: ‖xadv − x0‖∞ ≤ ε.
3. The verifier claims that (2) holds for x0.
4. xadv is an adversarial image for the implementation: LCW (NNimpl (xadv; W), t0) < 0.
Note that the first two conditions are precisely defined for any implementation compliant with the IEEE-754 standard, because the computation only involves element-wise subtraction and max-reduction, which incur no accumulated error. The Gurobi (Gurobi Optimization, 2020) solver used by MIPVerify operates with double precision internally. Therefore, to ensure that our adversarial examples satisfy the constraints considered by the solver, we also require that the first two conditions hold for x′adv = float64 (xadv) and x′0 = float64 (x0), the double precision representations of xadv and x0." }, { "heading": "3.3 MILP FORMULATION FOR COMPLETE VERIFICATION", "text": "We adopt the small CNN architecture from Xiao et al. (2019) and the MIPVerify complete verifier of Tjeng et al. (2019) to demonstrate our attack method. We can also deploy our method against other complete verifiers as long as the property being verified involves thresholding continuous variables whose floating point arithmetic is not exactly modeled in the verification process.
The MIPVerify verifier formulates the verification problem as an MILP problem for networks composed of linear transformations and piecewise-linear functions (Tjeng et al., 2019). An MILP problem optimizes a linear objective function subject to linear equality and linear inequality constraints over a set of variables, where some variables take real values while others are restricted to be integers. The MILP formulation of the robustness of a neural network involves three parts: introducing a free variable x for the adversarial input subject to the constraint x ∈ Adv (x0), formulating the computation y = NN (x; W), and formulating the attack goal LCW (NN (x; W), t0) ≤ 0. The network is robust with respect to x0 if the MILP problem is infeasible, and x serves as an adversarial image otherwise. The MILP problem typically optimizes one of two objective functions: (i) min ‖x − x0‖∞ to find an adversarial image closest to x0, or (ii) min LCW (NN (x; W), t0) to find an adversarial image that causes the network to produce a different prediction with the highest confidence. Note that although the above constraints and objective functions are nonlinear, most modern MILP solvers can handle them by automatically introducing the necessary auxiliary decision variables to convert them into linear forms."
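The four conditions of Section 3.2 above can be checked mechanically; only the robustness claim (condition 3) requires running the verifier itself. The following is a minimal NumPy sketch of such a check, where `nn_impl` stands in for any of the implementations listed in Section 3.2 and the function names are ours:
```python
import numpy as np

def cw_loss(logits, t):
    # Carlini-Wagner loss (Eq. 1): target score minus the best other score.
    logits = np.asarray(logits)
    return float(logits[t] - np.delete(logits, t).max())

def check_pair(x0, x_adv, eps, nn_impl, t0):
    x0 = np.asarray(x0, dtype=np.float32)
    x_adv = np.asarray(x_adv, dtype=np.float32)
    # Condition 1: both images lie in the valid dynamic range [0, 1].
    in_range = (x0.min() >= 0 and x0.max() <= 1 and
                x_adv.min() >= 0 and x_adv.max() <= 1)
    # Condition 2: L-inf bound, checked in float32 and, since the MILP
    # solver operates in double precision, also after casting to float64.
    ok32 = np.abs(x_adv - x0).max() <= eps
    ok64 = np.abs(x_adv.astype(np.float64) - x0.astype(np.float64)).max() <= eps
    # Condition 4: the implementation misclassifies x_adv (CW loss < 0).
    flipped = cw_loss(nn_impl(x_adv), t0) < 0
    return bool(in_range and ok32 and ok64 and flipped)
```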
}, { "heading": "4 EXPLOITING A COMPLETE VERIFIER", "text": "" }, { "heading": "4.1 EMPIRICAL CHARACTERIZATION OF IMPLEMENTATION NUMERICAL ERROR", "text": "To guide the design of our attack algorithm we present statistics about numerical error of different implementations.\nTo investigate end-to-end error behavior, we select an image x and present in Figure 1a a plot of ‖NN (x + δ; W )−NN (x; W )‖∞ against −10−6 ≤ δ ≤ 10−6, where the addition of x + δ is only applied on the single input element that has the largest gradient magnitude. To minimize the effect of numerical instability due to nonlinearity in the network and focus on fluctuations caused by numerical error, the image x is chosen to be the first MNIST test image on which the network produces a verified robust prediction. We have also checked that the pre-activation values of all the ReLU units do not switch sign. We observe that the change of the logits vector is highly nonlinear with respect to the change of the input, and a small perturbation could result in a large fluctuation. The WINOGRAD_NONFUSED algorithm on NVIDIA GPU is much more unstable and its variation is two orders of magnitude larger than the others.\nWe also evaluate all of the implementations on the whole MNIST test set and compare the outputs of the first layer (i.e., with only one linear transformation applied to the input) against that of NNC,M, and present the histogram in Figure 1b. It is clear that different implementations usually manifest different error behavior, and again NNG,CWG induces much higher numerical error than others.\nThese observations inspire us to construct adversarial images for each implementation independently by applying small random perturbations on an image close to the robustness decision boundary. We present the details of our method in Section 4.2." }, { "heading": "4.2 CONSTRUCTING ADVERSARIAL EXAMPLES", "text": "Given a network and weights NN (·; W ), there exist image pairs (x0,x1) such that the network is verifiably robust with respect to x0, while x1 ∈ Adv (x0) and LCW (NN (x1; W ) , t0) is less than the numerical fluctuation introduced by tiny input perturbations. We call x0 a quasi-safe image and x1 the corresponding quasi-adversarial image. We then apply small random perturbations on the quasi-adversarial image to obtain an adversarial image. The process is illustrated in Figure 2.\nWe propose the following proposition for a more formal and detailed description:\nProposition 1. Let E > 0 be an arbitrarily small positive number. If a continuous neural network NN (·; W ) can produce a verifiably robust classification for class t, and it does not constantly classify all inputs as class t, then there exists an input x0 such that\n0 < min x∈Adv (x0) LCW (NN (x; W ) , t) < E\nLet x1 = argminx∈Adv (x0) LCW (NN (x; W ) , t) be the minimizer of the above function. We call x0 a quasi-safe image and x1 a quasi-adversarial image.\nProof. Let f(x) := minx′∈Adv (x) LCW (NN (x ′; W ) , t). Since f(·) is composed of continuous functions, f(·) is continuous. Suppose NN (·; W ) is verifiably robust with respect to x+ that belongs to class t. Let x− be be any input such that LCW (NN (x−; W ) , t) < 0, which exists because NN (·; W ) does not constantly classify all inputs as class t. 
We have f(x+) > 0 and f(x−) < 0; since f(·) is continuous, the intermediate value theorem guarantees the existence of x0 (e.g., along a path from x− to x+) such that 0 < f(x0) < E.
Our method works by choosing E to be a number smaller than the average fluctuation of the logits vector introduced by tiny input perturbations, as indicated in Figure 1a, and finding a quasi-safe image by adjusting the brightness of a natural image. An adversarial image is then likely to be obtained by applying random perturbations on the corresponding quasi-adversarial image.
Given a particular implementation NNimpl (·; W) and a natural image xseed which the network robustly classifies as class t0 according to the verifier, we construct an adversarial input pair (x0, xadv) that meets the constraints described in Section 3.2 in three steps:
1. We search for a coefficient α ∈ [0, 1] such that x0 = αxseed serves as the quasi-safe image. Specifically, we require the verifier to claim that the network is robust for αxseed but not so for (α − δ)xseed, with δ being a small positive value. Although robustness is not guaranteed to be monotone in α, we can still use a binary search to find α while minimizing δ, because we only need one such value. However, we observe that in many cases the MILP solver becomes extremely slow for small δ values, so we start with a binary search and switch to grid search if the solver exceeds a time limit. We set the target of δ to be 1e−7 in our experiments and divide the best known δ into 16 intervals if grid search is needed.
2. We search for the quasi-adversarial image x1 corresponding to x0. We define a loss function with a tolerance of τ as L(x, τ; W, t0) := LCW (NN (x; W), t0) − τ, which can be incorporated into any verifier by modifying the bias of the Softmax layer. We aim to find τ0, which is the minimal confidence over all images in the perturbation space of x0, and τ1, which is slightly larger than τ0 with x1 being the corresponding adversarial image:
∀x ∈ Adv (x0) : L(x, τ0; W, t0) > 0
x1 ∈ Adv (x0)
L(x1, τ1; W, t0) < 0
τ1 − τ0 < 1e−7
Note that x1 is produced by the complete verifier as a proof of nonrobustness given the tolerance τ1. The above values are found via a binary search with initialization τ0 ← 0 and τ1 ← τmax, where τmax := LCW (NN (x0; W), t0). If the verifier is able to compute the worst objective τw = min_{x∈Adv (x0)} LCW (NN (x; W), t0), the binary search can be accelerated by initializing τ0 ← τw − δs and τ1 ← τw + δs. We empirically set δs = 3e−6 to incorporate the numerical error in the verifier, so that L(x0, τw − δs; W, t0) > 0 and L(x0, τw + δs; W, t0) < 0. The binary search is aborted if the solver times out.
3. We minimize LCW (NN (x1; W), t0) with hill climbing, by applying small random perturbations on the quasi-adversarial image x1 while projecting back to Adv (x0), to find an adversarial example. The perturbations are applied on patches of x1, as described in Appendix A. The random perturbations are on the scale of 2e−7, corresponding to the input perturbations that cause output fluctuations in Figure 1a." }, { "heading": "4.3 EXPERIMENTS", "text": "We conduct our experiments on a workstation equipped with two GPUs (NVIDIA Titan RTX and NVIDIA GeForce RTX 2070 SUPER), 128 GiB of RAM and an AMD Ryzen Threadripper 2970WX 24-core processor. We train the small architecture from Xiao et al. (2019) with the PGD adversary and the RS Loss on the MNIST and CIFAR10 datasets.
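For concreteness, Step 1 of Section 4.2 can be sketched as follows. This is a simplification: `verify_robust` is a hypothetical oracle wrapping the MILP verifier, robustness is assumed to fail for a sufficiently dark image (α near 0), and the grid-search fallback for solver timeouts is omitted:
```python
def find_quasi_safe(x_seed, verify_robust, delta_target=1e-7):
    # Invariant: hi * x_seed is verified robust, lo * x_seed is not.
    lo, hi = 0.0, 1.0  # x_seed itself (alpha = 1) is verified robust
    while hi - lo > delta_target:
        mid = 0.5 * (lo + hi)
        if verify_robust(mid * x_seed):
            hi = mid  # still robust: the boundary lies at or below mid
        else:
            lo = mid  # not robust: the boundary lies above mid
    return hi         # alpha such that x0 = alpha * x_seed is quasi-safe
```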
The trained networks achieve 94.63% and 44.73% provable robustness with perturbations of ℓ∞ norm bounded by 0.1 and 2/255 on the two datasets respectively, similar to the results reported in Xiao et al. (2019). Our code will be made publicly available after the review process.
Although our method only needs O(−log ε) invocations of the verifier, where ε is the gap of the binary search, the verifier is too slow to run a large benchmark in a reasonable time. Therefore, for each dataset we only test our method on 32 images randomly sampled from the verifiably robustly classified test images. The time limit of MILP solving is 360 seconds. Out of these 32 images, we have successfully found quasi-adversarial images (x1 from Section 4.2 Step 2, where the failed cases are solver timeouts) for 18 images on MNIST and 26 images on CIFAR10. We apply random perturbations to these quasi-adversarial images to obtain adversarial images within the perturbation range of the quasi-safe image (x0 = αxseed from Section 4.2 Step 1). All the implementations that we have considered are successfully attacked. We present the detailed numbers in Table 1. We also present in Figure 3 the quasi-safe images on which our attack method succeeds for all implementations, together with the corresponding adversarial images." }, { "heading": "5 EXPLOITING AN INCOMPLETE VERIFIER", "text": "The relaxation adopted in certification methods renders them incomplete but also makes their verification claims more robust to floating point error compared to complete verifiers. In particular, we evaluate the CROWN framework (Zhang et al., 2018) on our randomly selected test images and corresponding quasi-safe images from Section 4.3. CROWN is able to verify the robustness of the network on 29 out of the 32 original test images, but it is unable to prove the robustness for any of the quasi-safe images. Note that MIPVerify claims that the network is robust with respect to all the original test images and corresponding quasi-safe images.
Given the above situation, we demonstrate that incomplete verifiers are still prone to floating point error. We build a neural network that takes a 13 × 13 single-channel input image, followed by a 5 × 5 convolutional layer with a single output channel, two fully connected layers with 16 output neurons each, a fully connected layer with one output neuron denoted as u = max(W_u h_u + b_u, 0), and a final linear layer that computes y = [u, 1e−7] as the logits vector. All the hidden layers have ReLU activation. The input x0 is taken from a Gaussian distribution. The hidden layers have random Gaussian coefficients, and the biases are chosen so that (i) the ReLU neurons before u are always activated for inputs in the perturbation space of x0, (ii) u = 0 always holds for these inputs, and (iii) b_u is maximized with all other parameters fixed. CROWN is able to prove that all ReLU neurons before u are always activated but u is never activated, and therefore it claims that the network is robust with respect to perturbations around x0. However, by initializing the quasi-adversarial input x1 ← x0 + ε · sign(W_equiv), where W_equiv is the product of all the coefficient matrices of the layers up to u, we successfully find adversarial inputs for all five implementations considered in this work by randomly perturbing x1 in a way similar to Step 3 of Section 4.2." }, { "heading": "6 DISCUSSION", "text": "We agree with the security expert Window Snyder: “One single vulnerability is all an attacker needs”.
Unfortunately, most previous work on neural network verification abstains from discussing possible vulnerabilities of the proposed methods. We have demonstrated that neural network verifiers, although meant to provide security guarantees, are systematically exploitable. The underlying tradeoff between soundness and scalability in the verification of floating point programs is fundamental but has not received enough attention in the neural network verification literature.
One appealing remedy is to introduce floating point error relaxations into complete verifiers, such as by verifying for a larger perturbation bound ε or by setting a threshold on the accepted confidence score. However, a tight and sound relaxation is extremely challenging to find. We are unaware of any prior attempt to formally prove error bounds for practical and accelerated neural network implementations or verifiers.
Some incomplete verifiers have incorporated floating point error by maintaining upper and lower rounding bounds of internal computations (Singh et al., 2018; 2019), which is also potentially applicable to complete verifiers. However, this approach relies on the specific implementation details of the inference algorithm — optimizations such as Winograd (Lavin & Gray, 2016) or FFT (Abtahi et al., 2018) would either invalidate the robustness guarantees or require changes to the analysis algorithm.
Another approach is to quantize the computation to align the inference implementation with the verifier. For example, if we require all activations to be multiples of s0 and all weights to be multiples of s1, where s0s1 > 2E and E is a very loose bound on the possible implementation error, then the output can be rounded to multiples of s0s1 to completely eliminate numerical error. Binarized neural networks (Hubara et al., 2016) are a family of extremely quantized networks, and their verification (Narodytska et al., 2018; Shih et al., 2019) is sound and complete. However, the problem of robust training and verification of quantized neural networks (Jia & Rinard, 2020) is relatively under-examined compared to that of real-valued neural networks (Madry et al., 2018; Mirman et al., 2018; Tjeng et al., 2019; Xiao et al., 2019)." }, { "heading": "7 CONCLUSION", "text": "Floating point error should not be overlooked in the verification of real-valued neural networks, as we have presented techniques that construct adversarial examples for neural networks claimed to be robust by a verifier. We hope our results will help to guide future neural network verification research by providing another perspective on the tradeoff between soundness, completeness, and scalability." }, { "heading": "A RANDOM PERTURBATION ALGORITHM", "text": "We present the details of our random perturbation algorithm below. Note that the Winograd convolution computes a whole output patch in one iteration, and therefore we handle it separately in the algorithm.
Input: quasi-safe image x0
Input: target class number t
Input: quasi-adversarial image x1
Input: input perturbation bound ε
Input: a neural network inference implementation NNimpl (·; W)
Input: number of iterations N (default value 1000)
Input: perturbation scale u (default value 2e−7)
Output: an adversarial image xadv or FAILED
for index i of x0 do ▷
Find the weakest bounds xl and xu for the allowed perturbations
    xl[i] ← max(nextafter(x0[i] − ε, 0), 0)
    xu[i] ← min(nextafter(x0[i] + ε, 1), 1)
    while x0[i] − xl[i] > ε or float64(x0[i]) − float64(xl[i]) > ε do
        xl[i] ← nextafter(xl[i], 1)
    end while
    while xu[i] − x0[i] > ε or float64(xu[i]) − float64(x0[i]) > ε do
        xu[i] ← nextafter(xu[i], 0)
    end while
end for

if NNimpl (·; W) is NNG,CWG (·; W) then
    (offset, stride) ← (4, 9) ▷ The Winograd algorithm in cuDNN produces 9 × 9 output tiles for 13 × 13 input tiles and 5 × 5 kernels. The offset and stride here ensure that perturbed tiles contribute independently to the output.
else
    (offset, stride) ← (0, 4) ▷ Work on small tiles to avoid random errors getting cancelled
end if

for i ← 1 to N do
    for (h, w) ← (0, 0) to (height(x1), width(x1)) step (stride, stride) do
        δ ← uniform(−u, u, (stride − offset, stride − offset))
        x′1 ← x1[:]
        x′1[h + offset : h + stride, w + offset : w + stride] += δ
        x′1 ← max(min(x′1, xu), xl)
        if LCW (NNimpl (x′1; W), t) < LCW (NNimpl (x1; W), t) then
            x1 ← x′1
        end if
    end for
end for
if LCW (NNimpl (x1; W), t) < 0 then
    return xadv ← x1
else
    return FAILED
end if" } ]
2020
null
SP:efea29871d33fd89de348bc243a5ee0265b2e051
[ "The main contribution of the paper is to highlight the similarity between two active areas in ML namely \"domain generalization\" and \"fairness\". Further, the paper proposes an approach inspired by recent developments in the fairness literature for domain generalization. The high-level idea is that similarly to the way that fair algorithm are able to improve the worst-case accuracy of predictors across different groups without knowing the sensitive attributes, perhaps we can use these ideas to domain generalization when environment partitions are not known to the algorithm. In some sense, in both of these research areas the goal is to design robust algorithms. Similarly, the paper uses the idea from domain generalization to design fair algorithms w.r.t. a notion called \"group sufficiency\". The idea is to somehow infer the \"worst-case\" subgroup (i.e., the one that our algorithm has the worst accuracy on it) and then using a round of auditing improve the performance of the algorithm across all subgroups." ]
Standard learning approaches are designed to perform well on average for the data distribution available at training time. Developing learning approaches that are not overly sensitive to the training distribution is central to research on domain- or out-of-distribution generalization, robust optimization and fairness. In this work we focus on links between research on domain generalization and algorithmic fairness—where performance under distinct but related test distributions is studied—and show how the two fields can be mutually beneficial. While domain generalization methods typically rely on knowledge of disjoint “domains” or “environments”, “sensitive” label information indicating which demographic groups are at risk of discrimination is often used in the fairness literature. Drawing inspiration from recent fairness approaches that improve worst-case performance without knowledge of sensitive groups, we propose a novel domain generalization method that handles the more realistic scenario where environment partitions are not provided. We then show theoretically and empirically how different partitioning schemes can lead to increased or decreased generalization performance, enabling us to outperform Invariant Risk Minimization with handcrafted environments in multiple cases. We also show how a re-interpretation of IRMv1 allows us for the first time to directly optimize a common fairness criterion, group sufficiency, and thereby improve performance on a fair prediction task.
[]
[ { "authors": [ "Martin Arjovsky", "Léon Bottou", "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "Invariant risk minimization", "venue": "arXiv preprint arXiv:1907.02893,", "year": 2019 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different domains", "venue": "Machine learning,", "year": 2010 }, { "authors": [ "Joy Buolamwini", "Timnit Gebru" ], "title": "Gender shades: Intersectional accuracy disparities in commercial gender classification", "venue": "In Conference on fairness, accountability and transparency,", "year": 2018 }, { "authors": [ "Alexandra Chouldechova" ], "title": "Fair prediction with disparate impact: A study of bias in recidivism prediction instruments", "venue": "Big data,", "year": 2017 }, { "authors": [ "Alexandra Chouldechova", "Aaron Roth" ], "title": "The frontiers of fairness in machine learning", "venue": "arXiv preprint arXiv:1810.08810,", "year": 2018 }, { "authors": [ "Sam Corbett-Davies", "Sharad Goel" ], "title": "The measure and mismeasure of fairness: A critical review of fair machine learning", "venue": "arXiv preprint arXiv:1808.00023,", "year": 2018 }, { "authors": [ "Alex J DeGrave", "Joseph D Janizek", "Su-In Lee" ], "title": "Ai for radiographic covid-19 detection selects shortcuts over signal", "venue": "medRxiv,", "year": 2020 }, { "authors": [ "John Duchi", "Peter Glynn", "Hongseok Namkoong" ], "title": "Statistics of robust optimization: A generalized empirical likelihood approach", "venue": "arXiv preprint arXiv:1610.03425,", "year": 2016 }, { "authors": [ "Cynthia Dwork", "Moritz Hardt", "Toniann Pitassi", "Omer Reingold", "Richard Zemel" ], "title": "Fairness through awareness", "venue": "In Proceedings of the 3rd innovations in theoretical computer science conference,", "year": 2012 }, { "authors": [ "Harrison Edwards", "Amos Storkey" ], "title": "Censoring representations with an adversary", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Jacob Steinhardt", "Aleksander Madry" ], "title": "Identifying statistical bias in dataset replication", "venue": "arXiv preprint arXiv:2005.09619,", "year": 2020 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Robert Geirhos", "Jörn-Henrik Jacobsen", "Claudio Michaelis", "Richard Zemel", "Wieland Brendel", "Matthias Bethge", "Felix A Wichmann" ], "title": "Shortcut learning in deep neural networks", "venue": "Nature Machine Intelligence,", "year": 2020 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "In search of lost domain generalization", "venue": "arXiv preprint arXiv:2007.01434,", "year": 2020 }, { "authors": [ "Moritz Hardt", "Eric Price", "Nati Srebro" ], "title": "Equality of opportunity in supervised learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Tatsunori B Hashimoto", "Megha 
Srivastava", "Hongseok Namkoong", "Percy Liang" ], "title": "Fairness without demographics in repeated loss minimization", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ursula Hébert-Johnson", "Michael P Kim", "Omer Reingold", "Guy N Rothblum" ], "title": "Calibration for the (computationally-identifiable) masses", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Judy Hoffman", "Eric Tzeng", "Taesung Park", "Jun-Yan Zhu", "Phillip Isola", "Kate Saenko", "Alexei Efros", "Trevor Darrell" ], "title": "Cycada: Cycle-consistent adversarial domain adaptation", "venue": "In International conference on machine learning,", "year": 2018 }, { "authors": [ "Michael Kearns", "Seth Neel", "Aaron Roth", "Zhiwei Steven Wu" ], "title": "Preventing fairness gerrymandering: Auditing and learning for subgroup fairness", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Michael P Kim", "Amirata Ghorbani", "James Zou" ], "title": "Multiaccuracy: Black-box post-processing for fairness in classification", "venue": "In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society,", "year": 2019 }, { "authors": [ "David Krueger", "Ethan Caballero", "Joern-Henrik Jacobsen", "Amy Zhang", "Jonathan Binas", "Remi Le Priol", "Aaron Courville" ], "title": "Out-of-distribution generalization via risk extrapolation (rex)", "venue": "arXiv preprint arXiv:2003.00688,", "year": 2020 }, { "authors": [ "Preethi Lahoti", "Alex Beutel", "Jilin Chen", "Kang Lee", "Flavien Prost", "Nithum Thain", "Xuezhi Wang", "Ed H Chi" ], "title": "Fairness without demographics through adversarially reweighted learning", "venue": "In Neural Informational Processing Systems,", "year": 2020 }, { "authors": [ "Ya Li", "Xinmei Tian", "Mingming Gong", "Yajing Liu", "Tongliang Liu", "Kun Zhang", "Dacheng Tao" ], "title": "Deep domain generalization via conditional invariant adversarial networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Lydia T Liu", "Max Simchowitz", "Moritz Hardt" ], "title": "The implicit fairness criterion of unconstrained learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Christos Louizos", "Kevin Swersky", "Yujia Li", "Max Welling", "Richard Zemel" ], "title": "The variational fair autoencoder", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "David Madras", "Elliot Creager", "Toniann Pitassi", "Richard Zemel" ], "title": "Learning adversarially fair and transferable representations", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": null, "year": 2018 }, { "authors": [ "Ziad Obermeyer", "Brian Powers", "Christine Vogeli", "Sendhil Mullainathan" ], "title": "Dissecting racial bias in an algorithm used to manage the health of populations", "venue": null, "year": 2019 }, { "authors": [ "Jonas Peters", "Peter Bühlmann", "Nicolai Meinshausen" ], "title": "Causal inference 
by using invariant prediction: identification and confidence intervals", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2016 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do ImageNet classifiers generalize to ImageNet?", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "R Tyrrell Rockafellar", "Stanislav Uryasev" ], "title": "Conditional value-at-risk for general loss distributions", "venue": "Journal of banking & finance,", "year": 2002 }, { "authors": [ "Shiori Sagawa", "Pang Wei Koh", "Tatsunori B Hashimoto", "Percy Liang" ], "title": "Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Andrew D Selbst", "Danah Boyd", "Sorelle A Friedler", "Suresh Venkatasubramanian", "Janet Vertesi" ], "title": "Fairness and abstraction in sociotechnical systems", "venue": "In Proceedings of the Conference on Fairness, Accountability, and Transparency,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Don Bambino Geno Tai", "Aditya Shah", "Chyke A Doubeni", "Irene G Sia", "Mark L Wieland" ], "title": "The disproportionate impact of COVID-19 on racial and ethnic minorities in the United States", "venue": "Clinical Infectious Diseases,", "year": 2020 }, { "authors": [ "Serena Wang", "Wenshuo Guo", "Harikrishna Narasimhan", "Andrew Cotter", "Maya Gupta", "Michael I Jordan" ], "title": "Robust optimization for fairness with noisy protected groups", "venue": "In Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Robert C Williamson", "Aditya Krishna Menon" ], "title": "Fairness risk measures", "venue": null, "year": 2019 }, { "authors": [ "Rich Zemel", "Yu Wu", "Kevin Swersky", "Toni Pitassi", "Cynthia Dwork" ], "title": "Learning fair representations", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Brian Hu Zhang", "Blake Lemoine", "Margaret Mitchell" ], "title": "Mitigating unwanted biases with adversarial learning", "venue": "In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society,", "year": 2018 }, { "authors": [ "Yuan Zhang", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Aspect-augmented adversarial networks for domain adaptation", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Han Zhao", "Remi Tachet des Combes", "Kun Zhang", "Geoffrey J Gordon" ], "title": "On learning invariant representation for domain adaptation", "venue": "In International Conference on Machine Learning,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning achieves super-human performance on many tasks when the test data is drawn from the same distribution as the training data. However, when the two distributions differ, model performance can severely degrade to even below chance predictions (Geirhos et al., 2020). Tiny perturbations can derail classifiers, as shown by adversarial examples (Szegedy et al., 2014) and common-corruptions in image classification (Hendrycks & Dietterich, 2019). Even new test sets collected from the same data acquisition pipeline induce distribution shifts that significantly harm performance (Recht et al., 2019; Engstrom et al., 2020). Many approaches have been proposed to overcome model brittleness in the face of input distribution changes. Robust optimization aims to achieve good performance on any distribution close to the training distribution (Goodfellow et al., 2015; Duchi et al., 2016; Madry et al., 2018). Domain generalization on the other hand tries to go one step further, to generalize to distributions potentially far away from the training distribution.\nThe field of algorithmic fairness meanwhile primarily focuses on developing metrics to track and mitigate performance differences between different sub-populations or across similar individuals (Dwork et al., 2012; Corbett-Davies & Goel, 2018; Chouldechova & Roth, 2018). Like domain generalization, evaluation using data related to but distinct from the training set is needed to characterize model failure. These evaluations are curated through the design of audits, which play a central role in revealing unfair algorithmic decision making (Buolamwini & Gebru, 2018; Obermeyer et al., 2019).\nWhile the ultimate goals of domain generalization and algorithmic fairness are closely aligned, little research has focused on their similarities and how they can inform each other constructively. One of their main common goals can be characterized as:\nLearning algorithms robust to changes across domains or population groups.\nAchieving this not only allows models to generalize to different and unobserved but related distributions, it also mitigates unequal treatment of individuals solely based on group membership.\nIn this work we explore independently developed concepts from the domain generalization and fairness literatures and exchange lessons between them to motivate new methodology for both fields. Inspired by fairness approaches for unknown group memberships (Kim et al., 2019; Hébert-Johnson et al., 2018; Lahoti et al., 2020), we develop a new domain generalization method that does not require domain identifiers and yet can outperform manual specification of domains (Table 1). 
Leveraging domain generalization insights in a fairness context, we show that the regularizer from IRMv1 (Arjovsky et al., 2019) optimizes a fairness criterion termed “group sufficiency”, which for the first time enables us to explicitly optimize this criterion for non-convex losses in fair classification.
The following contributions show how lessons can be exchanged between the two fields:
• We draw several connections between the goals of domain generalization and those of algorithmic fairness, suggesting fruitful research directions in both fields (Section 2).
• Drawing inspiration from recent methods on inferring worst-case sensitive groups from data, we propose a novel domain generalization algorithm—Environment Inference for Invariant Learning (EIIL)—for cases where training data does not include environment partition labels (Section 3). Our method outperforms IRM on the domain generalization benchmark ColorMNIST without access to environment labels (Section 4).
• We also show that IRM, originally developed for domain generalization tasks, affords a differentiable regularizer for the fairness notion of group sufficiency, which was previously hard to optimize for non-convex losses. On a variant of the UCI Adult dataset where confounding bias is introduced, we leverage this insight with our method EIIL to improve group sufficiency without knowledge of sensitive groups, ultimately improving generalization performance for large distribution shifts compared with a baseline robust optimization method (Section 4).
• We characterize both theoretically and empirically the limitations of our proposed method, concluding that while EIIL can correct a baseline ERM solution that uses a spurious feature or “shortcut” for prediction, it is not suitable for all settings (Sections 3 and 4)." }, { "heading": "2 DOMAIN GENERALIZATION AND ALGORITHMIC FAIRNESS", "text": "Here we lay out some connections between the two fields. Table 2 provides a high-level comparison of the objectives and assumptions of several relevant methods. Loosely speaking, recent approaches from both areas share the goal of matching some chosen statistic across a conditioning variable e, representing sensitive group membership in algorithmic fairness or an environment/domain indicator in domain generalization. The statistic in question informs the learning objective for the resulting model, and is motivated differently in each case. In domain generalization, learning is informed by the properties of the test distribution where good generalization should be achieved. In algorithmic fairness, the choice of statistic is motivated by a context-specific fairness notion, which likewise encourages a particular solution that achieves “fair” outcomes (Chouldechova & Roth, 2018). Empty spaces in Table 2 suggest areas for future work, and bold-faced entries suggest connections we show in this paper.
Notation Let X be the input space, E the set of environments (a.k.a. “domains”), and Y the target space. Let x, y, e ∼ pobs(x, y, e) be observational data, with x ∈ X, y ∈ Y, and e ∈ E. H denotes a representation space, on which a classifier w ◦ Φ (mapping to the pre-image of ∆(Y) via a linear map w) can be applied. Φ : X → H denotes the parameterized mapping or “model” that we optimize. We refer to Φ(x) ∈ H as the “representation” of example x. ŷ ∈ Y denotes a hard prediction derived from the classifier by stochastic sampling or probability thresholding. ℓ : H × Y → R denotes the scalar loss, which guides the learning.
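To fix ideas, a minimal PyTorch sketch of these objects follows; the architecture and sizes are illustrative assumptions of ours, not the models used in the experiments:
```python
import torch.nn as nn
import torch.nn.functional as F

input_dim, rep_dim, num_classes = 784, 64, 2  # illustrative sizes

# Phi : X -> H, the parameterized representation ("model") we optimize.
Phi = nn.Sequential(nn.Linear(input_dim, rep_dim), nn.ReLU(),
                    nn.Linear(rep_dim, rep_dim))
# w : linear map on top of the representation; w(Phi(x)) is the classifier.
w = nn.Linear(rep_dim, num_classes)

def ell(h, y):
    # Scalar loss on a representation h = Phi(x) and integer label y.
    return F.cross_entropy(w(h), y)
```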
The empirical risk minimization (ERM) solution is found by minimizing the global risk, expressed as the expected loss over the observational distribution:
CERM(Φ) = E_{pobs(x,y,e)}[ℓ(Φ(x), y)]. (1)
Domain Generalization Domain generalization is concerned with achieving low error rates on unseen test distributions. One way to achieve domain generalization is by casting it as a robust optimization problem (Ben-Tal et al., 2009), where one aims to minimize the worst-case loss for every subset of the training set, or for other well-defined perturbation sets around the data (Duchi et al., 2016; Madry et al., 2018). Other approaches tackle domain generalization by adversarially learning representations invariant (Zhang et al., 2017; Hoffman et al., 2018; Ganin et al., 2016) or conditionally invariant (Li et al., 2018) to the environment.
Distributionally Robust Optimization (DRO) (Duchi et al., 2016) seeks good performance for all nearby distributions by minimizing the worst-case loss sup_q E_q[ℓ] s.t. D(q‖p) < ε, where D denotes a similarity measure between two distributions (e.g. the χ2 divergence) and ε is a hyperparameter. The objective can be computed as an expectation over p via per-example importance weights γi = q(xi, yi)/p(xi, yi).
Recently, Invariant Learning approaches such as Invariant Risk Minimization (IRM) (Arjovsky et al., 2019) and Risk Extrapolation (REx) (Krueger et al., 2020) were proposed to overcome the limitations of domain-invariant representation learning (Zhao et al., 2019) by discovering invariant relationships between inputs and targets across domains. Invariance serves as a proxy for causality, as features representing “causes” of target labels rather than effects will generalize well under intervention. In IRM, a representation Φ(x) is learned that performs optimally within each environment—and is thus invariant to the choice of environment e ∈ E—with the ultimate goal of generalizing to an unknown test dataset p(x, y|etest). Because optimal classifiers under standard loss functions can be realized via a conditional label distribution (f∗(x) = E[y|x]), an invariant representation Φ(x) must satisfy the following Environment Invariance Constraint:
E[y|Φ(x) = h, e1] = E[y|Φ(x) = h, e2]  ∀h ∈ H, ∀e1, e2 ∈ E. (EI-CONSTR)
Intuitively, the representation Φ(x) encodes features of the input x that induce the same conditional distribution over labels across each environment.
Because trivial representations such as mapping all x onto the same value may satisfy environment invariance, other objectives must be introduced to encourage the predictive utility of Φ. Arjovsky et al. (2019) propose IRM as a way to satisfy (EI-CONSTR) while achieving a good overall risk. As a practical instantiation, the authors introduce IRMv1, a gradient-penalty regularized objective enforcing simultaneous optimality of the same classifier w ◦ Φ in all environments.1 Denoting by Re = E_{pobs(x,y|e)}[ℓ] the per-environment risk, the objective to be minimized is
CIRM(Φ) = ∑_{e∈E} [ Re(Φ) + λ ‖∇_{w|w=1.0} Re(w ◦ Φ)‖^2 ]. (2)
Krueger et al. (2020) propose the related Risk Extrapolation (REx) principle, which dictates a stronger preference to exactly equalize Re ∀e (e.g.
by penalizing variance across e), which is shown to improve generalization in several settings.2
Fairness Early approaches to learning fair representations (Zemel et al., 2013; Edwards & Storkey, 2015; Louizos et al., 2015; Zhang et al., 2018; Madras et al., 2018) leveraged statistical independence regularizers from domain adaptation (Ben-David et al., 2010; Ganin et al., 2016), noting that marginal or conditional independence from domain to prediction relates to the fairness notions of demographic parity ŷ ⊥ e (Dwork et al., 2012) and equal opportunity ŷ ⊥ e|y (Hardt et al., 2016). Recall that (EI-CONSTR) involves an environment-specific conditional label expectation given a data representation, E[y|Φ(x) = h, e]. Objects of this type have been closely studied in the fair machine learning literature, where e now denotes a “sensitive” attribute indicating membership in a protected demographic group (age, race, gender, etc.), and the vector representation Φ(x) is typically replaced by a scalar score S(x) ∈ R. E[y|S(x), e] can now be interpreted as a calibration curve that must be regulated according to some fairness constraint. Chouldechova (2017) showed that equalizing this calibration curve across groups is often incompatible with a common fairness constraint, demographic parity, while Liu et al. (2019) studied “group sufficiency”—satisfied when E[y|S(x), e] = E[y|S(x), e′] ∀e, e′—of classifiers with strongly convex losses, concluding that ERM naturally finds group-sufficient solutions without fairness constraints.
Because Liu et al. (2019) consider convex losses, their theoretical results do not hold for neural network representations. However, by noting the link between group sufficiency and the constraint from (EI-CONSTR), we observe that the IRMv1 regularizer (applicable to neural nets) in fact minimizes the group sufficiency gap in the case of a scalar representation Φ(x) ∈ R and when e indicates sensitive group membership. It is worth noting that Arjovsky et al. (2019) briefly discuss using groups as environments, but without specifying a particular fairness criterion.
Reliance on sensitive demographic information is cumbersome since it often cannot be collected without legal or ethical repercussions. Hébert-Johnson et al. (2018) discussed the problem of mitigating subgroup unfairness when group labels are unknown, and proposed Multicalibration as a way of ensuring a classifier’s calibration curve is invariant to efficiently computable environment splits. Since the proposed algorithm requires brute-force enumeration over all possible environments/groups, Kim et al. (2019) suggested a more practical algorithm by relaxing the calibration constraint to an accuracy constraint, yielding a Multiaccurate classifier.3 The goal here is to boost the predictions of a pre-trained classifier through multiple rounds of auditing (searching for worst-case subgroups using an auxiliary model) rather than learning an invariant representation.
A related line of work also leverages inferred subgroup information to improve worst-case model performance using the framework of DRO. Hashimoto et al. (2018) applied DRO to encourage long-term fairness in a dynamical setting where the average loss for a subpopulation influences its propensity to continue engaging with the model. Lahoti et al. (2020) proposed Adversarially Reweighted Learning, which extends DRO using an auxiliary model to compute the importance weights γi mentioned above.
Amortizing this computation mitigates the tendency of DRO to overfit its reweighting strategy to noisy outliers. Wang et al. (2020) proposed a group DRO method for adaptively estimating soft assignments to groups, suitable for the setting where group labels are noisy.
1w ◦ Φ yields a classification decision via linear weighting on the representation features. 2Analogous to REx, Williamson & Menon (2019) adapt Conditional Value at Risk (CVaR) (Rockafellar & Uryasev, 2002) to equalize risk across demographic groups. 3Kearns et al. (2018) separately utilize boosting to equalize subgroup errors without sensitive attributes.
Limitations of generalization-first fairness One exciting direction for future work is to apply methods developed in the domain generalization literature to tasks where distribution shift is related to some societal harm that should be mitigated. However, researchers should be wary of blind “solutionism”, which can be ineffectual or harmful when the societal context surrounding the machine learning system is ignored (Selbst et al., 2019). Moreover, many aspects of algorithmic discrimination are not simply a matter of achieving few errors on unseen distributions. Unfairness due to task definition or dataset collection, as discussed in the study of target variable selection by Obermeyer et al. (2019), may not be reversible by novel algorithmic developments." }, { "heading": "3 INVARIANCE WITHOUT DEMOGRAPHICS OR ENVIRONMENTS", "text": "In this section we draw inspiration from recent work on fair prediction without sensitive labels (Kearns et al., 2018; Hébert-Johnson et al., 2018; Hashimoto et al., 2018; Lahoti et al., 2020) to propose a novel domain generalization algorithm that does not require a priori domain/environment knowledge. To motivate the study of this setting and show the fairness and invariance considerations at play, consider the task of using a high-dimensional medical image x to predict a target label y ∈ {0, 1} indicating the presence of COVID-19 in the imaged patient. DeGrave et al. (2020) describe the common use of a composite dataset for this task, where the process of aggregating data across two source hospitals e ∈ {H1, H2} leads to a brittle neural net classifier w ◦ Φ(x) that fixates on spurious low-level artifacts in x as predictive features.
Now we will consider a slightly different scenario. Consider a single hospital serving two different demographic populations e ∈ {P1, P2}. While P1 has mostly sick patients at time t = 0 due to the prevalence of COVID-19 in this subpopulation, P2 currently has mostly well patients. Then p(x, y|e = P1, t = 0) and p(x, y|e = P2, t = 0) will differ considerably, and moreover a classifier using a spurious feature indicative of subpopulation membership—either a low-level image artifact or an attribute of the medical record—may achieve low average error on the available data. Of course such a classifier may generalize poorly. Consider temporal distribution shifts: suppose at time t = 1, due to the geographic density of the virus changing over time, P1 has mostly well patients while patients from P2 are now mostly sick. Now the spurious classifier may suffer worse-than-chance error rates and imply unfair outcomes for disadvantaged groups. In reality the early onset and frequency of exposure to COVID-19 have been unequally distributed along many social dimensions (class, race, occupation, etc.)
that could constitute protected groups (Tai et al., 2020), raising concerns of additional algorithmic discrimination.\nLearning to be invariant to spurious features encoding demographics would prevent errors due to such temporal shifts. While loss reweighting as in DRO/ARL can upweight error cases, without an explicit invariance regularizer the model may still do best on average by making use of the spurious feature. IRM can remove the spurious feature in this particular case, but a method for discovering environment partitions directly may occasionally be needed.4 This need is clear when demographic makeup is not directly observed and a method to sort each example into the maximally separating the spurious feature, i.e. inferring populations {P1, P2}, is needed for effective invariant learning." }, { "heading": "3.1 ENVIRONMENT INFERENCE FOR INVARIANT LEARNING", "text": "We now derive a principle for inferring environments from observational data. Our exposition extends IRMv1 (Equation 2), but we emphasize that our method EIIL is applicable more broadly to any environment-based learning objective. We begin by introducing ui(e\n′) = pobs(e′|xi, yi) = 1(ei = e′) as an indicator of the hand-crafted environment assignment per-example. Noting that Ne := ∑ i ui(e) represents the number of examples in environment e, we can re-express this objective to make its dependence on environment labels explicit\nCIRM (Φ,u) = ∑ e∈E 1 Ne ∑ i ui(e)`(Φ(xi), yi) + ∑ e∈E λ ∣∣∣∣∣∣∇w|w=1.0 1 Ne ∑ i ui(e)`(w ◦ Φ(xi), yi) ∣∣∣∣∣∣ 2 .\n(3)\n4In a variant of the first example where hospitals e ∈ {H1, H2} are known, the given environments could be improved by a method that sorts whether spurious artifacts are present, i.e. inferring equipment type.\nOur general strategy is to replace the binary indicator ui(e), with a probability distribution q(e|xi, yi), representing a soft assignment of the i-th example to the e-th environment. q(e|xi, yi) should capture worst-case environments w.r.t the invariant learning objective; rewriting q(e|xi, yi) as qi(e) for consistency with the above expression, we arrive at the following bi-level optimization:\nmin Φ max q\nCIRM (Φ,q). (EIIL)\nWe leave the full exploration of this bi-level optimization to future work, but for now propose the following practical sequential approach, which we call EIILv1 (See Appendix A for pseudocode):\n1. Input reference model Φ̃;\n2. Fix Φ ← Φ̃ and fully optimize the inner loop of (EIIL) to infer environments q̃i(e) = q̃(e|xi, yi);\n3. Fix q← q̃ and fully optimize the outer loop to yield the new model Φ.\nInstead of requiring hand-crafted environments, we instead require a trained reference model Φ̃, which is arguably easier to produce and could be found using ERM on pobs(x, y), for example. In our experiments we consider binary environments5 and explicitly parameterize the q(e|x, y) as a vector of probabilities for each example in the training data.6" }, { "heading": "3.2 ANALYZING THE EIIL SOLUTION", "text": "To characterize the ability of EIILv1 to generalize to unseen test data, we now examine the inductive bias for generalization provided by the reference model Φ̃. We state the main result here and defer the proofs to Appendix B. Consider a dataset with some feature(s) z which are spurious, and other(s) v which are valuable/causal w.r.t. the label y. Our proof considers binary features/labels and two environments, but the same argument extends to other cases. Our goal is to find a model Φ whose representation Φ(v, z) is invariant w.r.t. 
z and focuses solely on v.\nTheorem 1 Consider environments that differ in the degree to which the label y agrees with the spurious features z: P(1(y = z)|e1) 6= P(1(y = z)|e2): then a reference model Φ̃Spurious that is invariant to valuable features v and solely focusing on spurious features z maximally violates the Invariance Principle (EI-CONSTR). Likewise, consider the case with fixed representation Φ that focuses on the spurious features: then a choice of environments that maximally violates (EICONSTR) is e1 = {v, z, y|1(y = z)} and e2 = {v, z, y|1(y 6= z)}.\nIf environments are split according to agreement of y and z, then the constraint from (EI-CONSTR) is satisfied by a representation that ignores z: Φ(x) ⊥ z. Unfortunately this requires a priori knowledge of either the spurious feature z or a reference model Φ̃Spurious that extracts it. When the wrong solution Φ̃Spurious is not a priori known, it can sometimes be recovered directly from the training data; for example in CMNIST we find that Φ̃ERM approximates Φ̃Color. This allows EIIL to find environment partitions providing the starkest possible contrast for invariant learning.\nEven if environment partitions are available, it may be possible to improve performance by inferring new partitions from scratch. It can be shown (see Appendix B.2) that the environments provided in the CMNIST dataset (Arjovsky et al., 2019) do not maximally violate (EI-CONSTR) for a reference model Φ̃Color, and are thus not maximally informative for learning to ignore color. Accordingly, EIIL improves test accuracy for IRM compared with the hand-crafted environments (Table 1).\n5The theoretical analysis of IRM suggests that the more (statistically independent) environments the better in term of generalization guarantees. This suggests in the setting where these analyses apply, extending EIIL to find more than two environments (with a term to promote diversity amongst inferred environments) may further help out-of-domain generalization, which we leave for future investigation.\n6Note that under this parameterization, when optimizing the inner loop with fixed Φ the number of parameters equals the number of data points (which is small relative to standard neural net training). We leave amortization of q to future work." }, { "heading": "4 EXPERIMENTS", "text": "We defer a proof-of-concept synthetic regression experiment to Appendix E for lack of space. We proceed with the established domain generalization benchmark ColorMNIST, and then discuss a variant of the algorithmic fairness dataset UCI Adult. We note that benchmarking model performance on a shifted test distribution without access to validation samples—especially during model selection—is a difficult open problem, a solution to which is beyond the scope of this paper. Accordingly we use the default IRM hyperparameters wherever appropriate, and otherwise follow a recently proposed model selection strategy (Gulrajani & Lopez-Paz, 2020) (see Appendix D).7" }, { "heading": "4.1 COLORMNIST", "text": "ColorMNIST (CMNIST) is a noisy digit recognition task8 where color is a spurious feature that correlates with the label at train time but anti-correlates at test time, with the correlation strength at train time varying across two pre-specified environments (Arjovsky et al., 2019). 
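The CMNIST construction just introduced can be written in a few lines. The following sketch is our own rendering (array names are placeholders, and a tiny random stand-in replaces MNIST); the label-noise rate theta_y is detailed in the next paragraph, and the flip rates 0.1/0.2 (train) and 0.9 (test) follow the description in Appendix D.

import numpy as np

def color_mnist_env(images, digits, p_color_flip, theta_y=0.25, rng=None):
    """One CMNIST environment (sketch): binary label from digit groups,
    label noise theta_y, and a color channel flipped from the noisy label
    with environment-specific probability p_color_flip."""
    rng = rng or np.random.default_rng(0)
    y = (digits >= 5).astype(int)                                  # {0..4} vs {5..9}
    y = np.where(rng.random(len(y)) < theta_y, 1 - y, y)           # label noise
    color = np.where(rng.random(len(y)) < p_color_flip, 1 - y, y)  # spurious color
    x = np.zeros((len(y), 2) + images.shape[1:], dtype=images.dtype)
    x[np.arange(len(y)), color] = images                           # paint digit red or green
    return x, y

imgs = np.random.rand(8, 28, 28); labels = np.arange(8) % 10       # stand-ins for MNIST
train1 = color_mnist_env(imgs, labels, 0.1)
train2 = color_mnist_env(imgs, labels, 0.2)
test = color_mnist_env(imgs, labels, 0.9)                          # anti-correlated at test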
Crucially, label noise is applied by flipping y with probability θy; the default setting (θy = 0.25) implies that shape (the correct feature) is marginally less reliable than color in the train set, so naive ERM ignores shape to focus on color and suffers from below-chance performance at test time.\nWe evaluated the performance of the following methods: ERM: A naive MLP that does not make use of environment labels e, but instead optimizes the average loss on the aggregated environments; IRM(eHC): the method of Arjovsky et al. (2019) using hand-crafted environment labels; IRM(eEIIL): our proposed method (a.k.a. EIILv1) that infers useful environments (not using handcrafted environment labels) based on the naive ERM, then applies IRM to the inferred environments.\nAfter noting that EIILv1—denoted IRM(eEIIL) above—outperforms IRM without access to environment labels in the default setting (See Tables 1 and 6), we examine how the various methods perform as a function of θy . This parameter influences the ERM solution since low θy implies shape is more reliable than color in the aggregated training data (thus ERM generalizes well), while the opposite trend holds for high θy . Because EIILv1 relies on a reference model Φ̃, its performance is also affected when Φ̃ = ERM (Figure 1). We find that IRM(eEIIL) generalizes better than IRM(eHC) with sufficiently high label noise θy > .2, but generalizes poorly under low label noise. This is precisely due to the success of ERM in this setting, where shape is a more reliable feature in the training data than color. We verify this conclusion by evaluating IRM(eEIIL) when Φ̃ = ΦColor, i.e. a hand-coded color-based predictor as reference. This does relatively well across all settings of θy , approaching the performance of the (oracle) baseline that classifies using grayscale inputs.\n7Following the suggestion of Gulrajani & Lopez-Paz (2020), we note that Section 4.2 contains “oracle” results that are overly optimistic for each method (see Appendix D for model selection details).\n8MNIST digits are grouped into {0, 1, 2, 3, 4} and {5, 6, 7, 8, 9} so the CMNIST target label y is binary." }, { "heading": "4.2 CENSUS DATA", "text": "We now study a fair prediction problem using a variant of the UCI Adult dataset,9 which comprises 48, 842 individual census records collected from the United States in 1994. The task commonly used as an algorithmic fairness benchmark is to predict a binarized income indicator (thresholded at $50, 000) as the target label, possibly considering sensitive attributes such as age, sex, and race. Because the original task measures in-distribution test performance, we instead construct a variant of this dataset suitable for measuring out-of-distribution test performance, which we call ConfoundedAdult.\nLahoti et al. (2020) demonstrate the benefit of per-example loss reweighting on UCI Adult using their method ARL to improve predictive performance for undersampled subgroups. Following Lahoti et al. (2020), we consider the effect of four sensitive subgroups—defined by composing binarized race and sex labels—on model performance, assuming the model does not know a priori which features are sensitive. However, we focus on a distinct generalization problem where a pernicious dataset bias confounds the training data, making subgroup membership predictive of the label on the training data. 
At test time these correlations are reversed, so a predictor that infers subgroup membership to make predictions will perform poorly at test time (see Appendix C for details). Dwork et al. (2012) described a similar motivating scenario where the conditional distribution mapping features to target labels varies across demographic groups due to cultural differences, so the most predictive predictor for one group may not generalize to the others. The large distribution shift of our test set can be understood as a worst-case audit to determine whether the classifier uses subgroup information in its predictions.\nUsing EIILv1—to first infer worst-case environments then ensure invariance across them—performs favorably on the audit test set, compared with ARL and a baseline MLP (Table 3). We also find that, without access to sensitive group information, using the IRMv1 penalty on the EIIL environments improves subgroup sufficiency (Figure 2). Appendix E.3 provides an ablation showing that all components of the EIILv1 approach are needed to achieve the best performance.\n9https://archive.ics.uci.edu/ml/datasets/adult" }, { "heading": "5 CONCLUSION", "text": "We discussed the common goals of algorithmic fairness and domain generalization, compared related methods from each literature, and suggested how lessons can be exchanged between the two fields to inform future research. The most concrete outcome of this discussion was our novel domain generalization method, Environment Inference for Invariant Learning (EIIL). Drawing inspiration from fairness methods that optimize worst-case performance without access to demographic information, EIIL improves the performance of IRM on CMNIST without requiring a priori knowledge of the environments. On a variant of the UCI Adult dataset, EIIL makes use of the IRMv1 regularizer to improve group sufficiency—a fairness criterion previously difficult to optimize for non-convex losses—without requiring knowledge of the sensitive groups." }, { "heading": "A EIILV1 PSUEDOCODE", "text": "Algorithm 1 The first stage of EIILv1 infers two environments that maximally violate the IRM objective. The inferred environments are then used to train an IRM solution from scratch.\nInput: Reference model Φ̃, dataset D = {xi, yi} with xi, yi ∼ pobs iid, loss function `, duration Nsteps Output: Worst case data splits D1, D2 for use with IRM. Randomly initialize e ∈ R|D| as vectorized logit of posterior with σ(ei) := q(e|xi, yi). for n ∈ 1 . . . Nsteps do R1 = 1∑\ni′ σ(ei′ )\n∑ i σ(ei)`(Φ̃(xi), yi) ; // D1 risk\nG1 = ∇w|w=1|| 1∑ i′ σ(ei′ ) ∑ i σ(ei)`(w ◦ Φ̃(xi), yi)||2 ; // D1 invariance\nregularizer\nR2 = 1∑ i′ 1−σ(ei′ ) ∑ i(1− σ(ei))`(Φ̃(xi), yi) ; // D2 risk\nG2 = ∇w|w=1|| 1∑ i′ 1−σ(ei) ∑ i(1− σ(ei))`(w ◦ Φ̃(xi), yi)||2 ; // D2 invariance\nregularizer L = 12 ∑ e∈{1,2}R e + λGe\ne← OptimUpdate(e,∇eL) end ê ∼ Bernoulli(σ(e)) ; // sample splits D1 ← {xi, yi|êi = 1}, D2 ← {xi, yi|êi = 0} ; // split data" }, { "heading": "B PROOFS", "text": "B.1 PROOF OF THEOREM 1\nConsider a dataset with some feature(s) z which are spurious, and other(s) v which are valuable/causal w.r.t. the label y. This includes data generated by models where v → y → z, such that P (y|v, z) = P (y|v). Assume further that the observations x are functions of both spurious and valuable features: x := f(v, z). 
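To complement Algorithm 1 above, the following is our own PyTorch rendering of its inner loop for a binary classifier whose reference logits are precomputed; the gradient ascent direction reflects the max over q in (EIIL), and all names are ours.

import torch
import torch.nn.functional as F

def eiil_infer_envs(ref_logits, y, n_steps=10000, lr=0.001, lam=1.0):
    """Inner loop of EIILv1 (sketch): find soft assignments q(e=1 | x_i, y_i)
    that maximize the IRMv1 penalty of a fixed reference model."""
    e_logits = torch.randn(len(y), requires_grad=True)
    opt = torch.optim.Adam([e_logits], lr=lr)
    scale = torch.tensor(1.0, requires_grad=True)          # the dummy classifier w = 1.0
    loss_i = F.binary_cross_entropy_with_logits(ref_logits * scale, y, reduction="none")
    for _ in range(n_steps):
        q = torch.sigmoid(e_logits)
        penalty = 0.0
        for w in (q, 1 - q):                               # the two soft environments
            risk = (w * loss_i).sum() / w.sum()            # weighted per-environment risk R^e
            g = torch.autograd.grad(risk, [scale], create_graph=True)[0]
            penalty = penalty + (g ** 2).sum()             # invariance penalty G^e
        opt.zero_grad()
        (-lam * penalty).backward(retain_graph=True)       # gradient *ascent* on the violation
        opt.step()
    return torch.sigmoid(e_logits).detach()                # threshold or sample to split D1, D2

# e.g.: q = eiil_infer_envs(reference_model(x).squeeze().detach(), y.float())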
The aim of invariant learning is to form a classifier that predicts y from x that focuses solely on the causal features, i.e., is invariant to z and focuses solely on v.\nConsider a classifier that produces a score S(x) for example x. In the binary classification setting S is analogous to the model Φ, while the score S(x) is analogous to the representation Φ(x). To quantify the degree to which the constraint in the Invariant Principle (EI-CONSTR) holds, we introduce a measure called the group sufficiency gap10:\n∆(S, e) = E[[E(y|S(x), e1)− E(y|S(x), e2)]]\nNow consider the notion of an environment: some setting in which the x → y relationship varies (based on spurious features). Assume a single binary spurious feature z. We restate Theorem 1 as follows:\nClaim: If environments are defined based on the agreement of the spurious feature z and the label y, then a classifier that predicts based on z alone maximizes the group-sufficiency gap (and vice versa – if a classifier predicts y directly by predicting z, then defining two environments based on agreement of label and spurious feature—e1 = {v, z, y|1(y = z)} and e2 = {v, z, y|1(y 6= z)}—maximizes the gap).\nWe can show this by first noting that if the environment is based on spurious feature-label agreement, then with e ∈ {0, 1} we have e = 1(y = z). If the classifier predicts z, i.e. S(x) = z, then we have\n∆(S, e) = E[E[y|z(x),1(y = z)]− E[y|z(x),1(y 6= z)]]\nFor each instance of x either z = 0 or z = 1. Now we note that when z = 1 we have E(y|z,1(y = z)) = 1 and E(y|z,1(y 6= z)) = 0, while when z = 0 E(y|z, I[y == z]) = 0 and E[y|z,1(y 6= z)] = 1. Therefore for each example |E(y|z(x),1(y = z))− E(y|z(x),1(y 6= z)| = 1, contributing to an overall ∆(S, e) = 1, which is the maximum value for the sufficiency gap.\nB.2 GIVEN CMNIST ENVIRONMENTS ARE SUBOPTIMAL W.R.T. SUFFICIENCY GAP\nThe regularizer from IRMv1 encourages a representation for which sufficiency gap is minimized between the available environments. Therefore when faced with a new task it is natural to measure the natural sufficiency gap between these environments, mediated through a naive or baseline method. Here we show that for CMNIST, when considering a naive color-based classifier as the reference model, the given environment splits are actually suboptimal w.r.t. sufficiency gap, which motivates the proposed EIIL approach for inferring environments that have a more sever sufficiency gap for the reference model.\nWe begin by computing ∆(S, e), the sufficiency gap for color-based classifier g over the given train environments {e1, e2}. We introduce an auxiliary color variable z, which is not observed but can be sampled from via the color based classifier g:\np(y|g(x) = x′, e) = Ep(z|x′) [p(y|z, e, x′).]\n10This was previously used in a fairness setting by Liu et al. (2019) to measure differing calibration curves across groups.\nDenote by GREEN and RED the set of green and red images, respectively. I.e. we have z ∈ G iff z = 1 and x ∈ GREEN iff z(x) = 1. The the sufficiency gap is expressed as\n∆(S, e) = Ep(x,e) [∣∣∣Ep(y|x,e1)[y|g(x), e1]− Ep(y|x,e2)[y|g(x), e2]∣∣∣]\n= Ep(z,e) [∣∣∣Ep(y|z,e1)[y|z, e1]− Ep(y|z,e2)[y|z, e2]∣∣∣]\n= 1\n2 ∑ z∈{GREEN,RED} [∣∣∣Ep(y|z,e1)[y|z, e1]− Ep(y|z,e2)[y|z, e2]∣∣∣] = 1\n2 (|E[y|z = GREEN, e1]− E[y|z = GREEN, e2]|+ |E[y|z = RED, e1]− E[y|z = RED, e2]|)\n= 1\n2 (|0.1− 0.2| − |0.9− 0.8|)\n= 1\n10 .\nThe regularizer in IRMv1 is trying to reduce the sufficiency gap, so in some sense we can think about this gap as a learning signal for the IRM learner. 
A natural question would be whether a different set of environment partition {e} can be found such that this learning signal is stronger, i.e. the sufficiency gap is increased. We find the answer is yes. Consider an environment distribution q(e|x, y, z) that assigns each data point to one of two environments. Any assignment suffices so far as its marginal matches the observed data: ∫ z ∫ e q(x, y, z, e) = pobs(x, y).\nWe can now express the sufficiency gap (given a color-based classifier g) as a function of the environment assignment q:\n∆(S, e ∼ q) = Eq(x,e)[|Eq(y|x,e,x)[y|g(x), e1]− Eq(y|x,e,x)[y|g(x), e2]|] = Eq(x,e)[|Eq(y|z,e,x)p(z|x)[y|z, e1]− Eq(y|z,e,x)p(z|x)[y|z, e2]|]\nWhere we use the same change of variables trick as above to replace g(x) with samples from p(z|x) (note that this is the color factor from the generative process p according with our assumption that g matches this distribution).\nWe want to show that there exists a q yielding a higher sufficiency gap than the given environments. Consider q that yields the conditional label distribution\nq(y|x, e, z) := q(y|e, z) = { 1(y = z) if e = e1, 1(y 6= z) if e = e2.\nThis can be realized by an encoder/auditor q(e|x, y, z) that ignores image features in x and partitions the example based on whether or not the label y and color z agree. We also note that z is deterministically the color of the image in the generative process: p(z|x) = 1(x = RED)\nNow we can compute the sufficiency gap:\n∆(S, e ∼ q) = Eq(x,e)[|Eq(y|z,e,x)p(z|x)[y|z, e1]− Eq(y|z,e,x)p(z|x)[y|z, e2]|]\n= 1\n2 Ex∈RED|Eq(y|z,e,x)p(z|x)[y|z, e1]− Eq(y|z,e,x)p(z|x)[y|z, e2]|\n+ 1\n2 Ex∈GREEN|Eq(y|z,e,x)p(z|x)[y|z, e1]− Eq(y|z,e,x)p(z|x)[y|z, e2]|\n= 1\n2 Ex∈RED(| ∑ y ∑ z (y ∗ 1(y = z) ∗ 1(g(x) = z))− ∑ y ∑ z (y ∗ 1(y 6= z) ∗ 1(g(x) = z))|)\n+ Ex∈GREEN 1 2 (| ∑ y ∑ z (y ∗ 1(y = z) ∗ 1(g(x) = z))− ∑ y ∑ z (y ∗ 1(y 6= z) ∗ 1(g(x) = z))|)\n= 1\n2 Ex∈RED(| ∑ y (y ∗ 1(y = 1) ∗ 1(x ∈ RED))− ∑ y (y ∗ 1(y 6= 1) ∗ 1(x ∈ RED))|)\n+ Ex∈GREEN 1 2 (| ∑ y ∑ z (y ∗ 1(y = 0) ∗ 1(x ∈ GREEN))− ∑ y ∑ z (y ∗ 1(y 6= 0) ∗ 1(x ∈ GREEN))|)\n= 1\n2 Ex∈RED[|1− 0|] + Ex∈GREEN[\n1 2 |0− 1|] = 1 2 + 1 2 = 1.\nNote that 1 is the maximal sufficiency gap, meaning that the described environment partition maximizes the sufficiency gap w.r.t. the color-based classifier g." }, { "heading": "C DATASET DETAILS", "text": "Constructing the ConfoundedAdult dataset To create our semi-synthetic dataset, called ConfoundedAdult, we start by observing that the conditional distribution over labels varies across the subgroups, and in some cases subgroup membership is very predictive of the target label. We construct a test set (a.k.a. the audit set) where this relationship between subgroups and target label is reversed.\nThe four sensitive subgroups are defined following the procedure of Lahoti et al. (2020), with sex (recorded as binary: Male/Female) and binarized race (Black/non-Black) attributes compose to make four possible subgroups: Non-Black Males (SG1), Non-Black Females (S2), Black Males (SG3), and Black Females (SG4).\nWe start with the observation that each subgroup has a different correlation strength with the target label, and in some cases subgroup membership alone can be used to achieve relatively low error rates in prediction. As these correlations should be considered “spurious” to mitigate unequal treatment across groups, we create a semi-synthetic variant of the UCI Adult dataset, which we call ConfoundedAdult, where these spurious correlations are exaggerated. 
Table 4 shows various conditional label distributions for the original dataset and our proposed variant. The test set for ConfoundedAdult revereses the correlation strengths, which can be thought of as a worst-case audit to ensure the model is not relying on subgroup membership alone in its predictions. We generate samples for ConfoundedAdult using importance sampling, keeping the original train/test splits from UCI Adult as well as the subgroup sizes, but sampling individual examples under/over-sampled according to importance weights p ConfoundedAdult\npUCIAdult ." }, { "heading": "D EXPERIMENTAL DETAILS", "text": "Model selection Krueger et al. (2020) discussed the pitfalls of achieving good test performance on CMNIST by using test data to tune hyperparameters. Because our primary interest is in the properties of the inferred environment rather than the final test performance, we sidestep this issue in the Synthetic Regression and CMNIST experiments by using the default parameters of IRM without further tuning. However for the ConfoundedAdult dataset a specific strategy for model selection is needed.\nWe refer the interested reader to Gulrajani & Lopez-Paz (2020) for an extensive discussion of possible model selection strategies. They also provide a large empirical study showing that ERM is difficult baseline to beat when all methods are put on equal footing w.r.t. model selection.\nIn our case, we use the most relaxed model selection method proposed by Gulrajani & Lopez-Paz (2020), which amounts to allowing each method a 20 test evaluations using hyperparameter chosen at random from a reasonable range, with the best hyperparameter setting selected for each method. While none of the methods is given an unfair advantage in the search over hyperparameters, the basic model selection premise does not translate to real-world applications, since information about the test-time distribution is required to select hyperparameters. Thus these results can be understood as being overly optimistic for each method, although the relative ordering between the methods can still be compared.\nCMNIST IRM is trained on these two environments and tested on a holdout environment constructed from 10, 000 test images in the same way as the training environments, where colour is predictive of the noisy label 10% of the time. So using color as a feature to predict the label will lead to an accuracy of roughly 10% on the test environment, while it yields 80% and 90% accuracy respectively on the training environments.\nTo evaluate IRM(eEIIL) we remove the environment identifier from the training set and thus have one training set comprised of 50, 000 images from both original training environments. We then train an MLP with binary cross-entropy loss on the training environments, freeze its weights and use the obtained model to learn environment splits that maximally violate the IRM penalty. When optimizing the inner loop of EIIL, we use Adam with learning rate 0.001 for 10, 000 steps with full data batches used to computed gradients.\nThe obtained environment partitions are then used to train a new model from scratch with IRM. Following Arjovsky et al. (2019), we allow the representation to train for several hundred annealing steps before applying the IRMv1 penalty.\nCensus data Following Lahoti et al. (2020), we use a two-hidden-layer MLP architecture for all methods, with 64 and 32 hidden units respectively, and a linear adversary for ARL. 
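Stepping back to the Appendix C construction for a moment, the importance resampling it describes amounts to only a few lines. The sketch below is ours; target_p_y1 stands in for the Table 4 conditionals P(y=1 | subgroup), which we do not reproduce here.

import numpy as np

def confound_resample(subgroup, y, target_p_y1, rng=None):
    """Resample indices so that P(y=1 | subgroup g) matches target_p_y1[g],
    using weights p_target / p_source as in Appendix C (a sketch)."""
    rng = rng or np.random.default_rng(0)
    w = np.empty(len(y), dtype=float)
    for g, tgt in enumerate(target_p_y1):
        m = subgroup == g
        p_src = y[m].mean()                              # empirical P(y=1 | g)
        w[m & (y == 1)] = tgt / max(p_src, 1e-8)
        w[m & (y == 0)] = (1 - tgt) / max(1 - p_src, 1e-8)
    return rng.choice(len(y), size=len(y), replace=True, p=w / w.sum())

Within each subgroup the weights average to one, so subgroup sizes are preserved in expectation while the label conditionals move to their targets; reversing the targets between the train and test splits yields the worst-case audit described above.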
We optimize all methods using Adagrad; learning rates, numbers of steps, and batch sizes are chosen by the model selection strategy described above (with 20 test evaluations per method), as are the penalty weights for the IRMv1 regularizer and standard weight decay. For the inner loop of EIIL (inferring the environments), we use the same settings as in CMNIST. We find that the performance of EIIL is somewhat sensitive to the number of steps taken with the IRMv1 penalty applied. To limit the number of test queries needed during model selection, we use an early stopping heuristic by enforcing the IRMv1 penalty only during the final 500 steps of training, with the previous steps serving as an annealing period to learn a baseline representation to be regularized. Unlike the previous datasets, here we use minibatches to compute gradients during IRM training (for consistency with the ARL method, which uses minibatches). However, full-batch gradients are still used for inferring environments in EIIL." }, { "heading": "E ADDITIONAL EMPIRICAL RESULTS", "text": "E.1 SYNTHETIC DATA\nWe begin with a regression setting originally used as a toy dataset for evaluating IRM (Arjovsky et al., 2019). The features x ∈ RN comprise a “causal” feature v ∈ RN/2 concatenated with a “non-causal” feature z ∈ RN/2: x = [v, z]. Noise varies across hand-crafted environments e:\nv = εv, εv ∼ N (0, 25); y = v + εy, εy ∼ N (0, e2); z = y + εz, εz ∼ N (0, 1).\nWe evaluated the performance of the following methods:\n• ERM: A naive regressor that does not make use of environment labels e, but instead optimizes the average loss on the aggregated environments;\n• IRM(eHC): the method of Arjovsky et al. (2019) using hand-crafted environment labels;\n• ICP(eHC): the method of Peters et al. (2016) using hand-crafted environment labels;\n• IRM(eEIIL): our proposed method (which does not use hand-crafted environment labels) that infers useful environments based on the naive ERM, then applies IRM to the inferred environments.\nThe regression methods fit a scalar target y = 1Ty via a regression model ŷ ≈ wTx to minimize ||y − ŷ|| w.r.t. w, plus an invariance penalty as needed. The optimal (causally correct) solution is w∗ = [1,0]. Given a solution [ŵv, ŵz] from one of the methods, we report the mean squared error for the causal and non-causal dimensions as ||ŵv − 1||22 and ||ŵz − 0||22 (Table 5). Because v is marginally noisier than z, ERM focuses on the spurious z. IRM using hand-crafted environments, denoted IRM(eHC), exploits variability in the noise level of the non-causal feature (which depends on the variability of σy) to achieve lower error. Using EIILv1 instead of hand-crafted environments yields an improvement on the resulting IRM solution (denoted IRM(eEIIL)) by learning worst-case environments for invariant training.\nWe show in a follow-up experiment that the EIILv1 solution is indeed sensitive to the choice of reference representation, and in fact can only discover useful environments (environments that allow IRM(eEIIL) to learn the correct causal representation) when the reference representation encodes the incorrect inductive bias by focusing on the spurious feature. We can explore this dependence of EIILv1 on the mix of spurious and non-spurious features in the reference model by constructing a Φ̃ that varies in the degree to which it focuses on the spurious feature, according to a convex mixing parameter α ∈ [0, 1]: α = 0 indicates focusing entirely on the correct causal feature, while α = 1 indicates focusing on the spurious feature; a data-generation sketch for this setting follows.
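As referenced above, here is a short sketch (ours) that generates the toy data from the structural equations and forms the mixed reference model Φα; the particular per-environment noise scales are illustrative assumptions.

import numpy as np

def sample_env(n, d, sigma_e, rng):
    """One environment of the toy SEM v -> y -> z, with features x = [v, z]."""
    v = rng.normal(0.0, 5.0, size=(n, d))                  # eps_v ~ N(0, 25)
    y = v + rng.normal(0.0, sigma_e, size=(n, d))          # eps_y ~ N(0, sigma_e^2)
    z = y + rng.normal(0.0, 1.0, size=(n, d))              # eps_z ~ N(0, 1)
    return np.concatenate([v, z], axis=1), y.sum(axis=1)   # x and scalar target 1^T y

rng = np.random.default_rng(0)
(x1, y1), (x2, y2) = (sample_env(1000, 5, s, rng) for s in (0.1, 1.5))  # two environments

def phi_alpha(alpha, d=5):
    """Reference model focusing on causal dims (alpha=0) or spurious dims (alpha=1)."""
    w = np.concatenate([(1 - alpha) * np.ones(d), alpha * np.ones(d)])
    return lambda x: x @ w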
We refer to this variant as IRM(eEIIL|Φ̃ = Φα−SPURIOUS), and measure its performance as a function of α (Figure 3). Environment inference only yields good testtime performance for high values of α, where the reference model captures the incorrect inductive bias.\nE.2 COLORMNIST\nTable 6 expands on the results from Table 1 by adding additional methods discussed in Section 3.\nIn Table 7 we measure the performance of some alternative strategies for optimizing the bi-level problem from Equation (EIIL). In particular, we consider alternating updates to the representation Φ and environment assignments q, as well as solving the inner/outer loop of EIIL multiple times. On the CMNIST dataset, none of these variants offers a performance benefit above the method used in Section 4.\nEIILloops=k indicates that the inner and outer loops of the EIIL objective in Equation (EIIL) are successively optimized k times, with k = 1 corresponding to IRM(eEIIL), the method studied in the main experiments section. In other words, Φloops=1 is solved using IRM(eEIIL), then this representation is used as a reference classifier to find Φloops=k+1 = IRM(eEIIL|Φ̃ = Φloops=k) in the\nnext “loop” of computation. This also means that the training time needed is k times the training time of IRM(eEIIL). As we expect from our theoretical analysis, using the IRM(eEIIL) solution as a reference classifier for another round of EIIL is detrimental: since the reference classifier already relies on the correct shape feature, environments that encourage invariance to this feature are found in the second round, so the EIILloops=2 classifier uses color rather than shape.\nEIILAltUpdates consists of optimizing Equation 1 using alternating steps to Φ and q. Unforatuntely, whereas this strategy works well for other bi-level optimization problems such as GANs, it seems to do poorly in this setting. This method outperforms ERM but does not exceed chance-level predictions on the test set.\nE.3 CENSUS DATA\nAblation Here we provide an ablation study extending the results from Section 4.2 to demonstrate that both ingredients in the EIILv1 solution—finding worst-case environment splits and regularizing using the IRMv1 penalty—are necessary to achieve good test-time performance on the ConfoundedAdult dataset.\nFrom Lahoti et al. (2020) we see that ARL can perform favorably compared with DRO (Hashimoto et al., 2018) in adaptively computing how much each example should contribute to the overall loss, i.e. computing the per-example γi in C = Exi,yi∼p[γi`(Φ(xi), yi)]. Because all per-environment risks in IRM are weighted equally (see Equation 2), and each per-environment risk comprises an average across per-example losses within the environment, each example contributes its loss to the overall objective in accordance with the size of its assigned environment. For example with two environments e1 and e2 of sizes |e1| and |e2|, we implicitly have the per-example weights of γi =\n1 |e1| for i ∈ e1 and γi = 1 |e2| for i ∈ e2, indicating that examples in the smaller environment count more towards the overall objective. Because EIILv1 is known to discover worst-case environments of unequal sizes, we measure the performance of EIILv1 using only this reweighting, without adding the gradient-norm penalty typically used in IRM (i.e. setting λ = 0). To determine the benefit of worst-case environment discovery, we also measure IRM with random assignment of environments. 
Table 8 shows the results, confirming that both ingredients are required to attain good performance using EIILv1." }, { "heading": "F ADDITIONAL THEORETICAL RESULTS", "text": "F.1 OPTIMAL SOFT PARTITIONS MAXIMIMALLY VIOLATE THE INVARIANCE PRINCPLE\nWe want to show that finding environment assignments that maximize the violation of the softened version of the regularizer from Equation 3 also maximally violates the invariance princple. Because the invaraince principle E[Y |Φ(X), e] = E[Y |Φ(X), e′]∀e, e′ is difficult to quantify for continuous Φ(X), we consider a binned version of the representation, with b denoting the discrete index of the bin in representation space. Let qi ∈ [0, 1] denote the soft assignment of example i to environment 1, and 1−qi denote its converse, the assignment of example i to environment 2. Denote by yi ∈ {0, 1} the binary target for example i, and ŷ ∈ [0, 1] as the model prediction on this example. Assume that ` represents a cross entropy or squared error loss so that∇w`(ŷ, y) = (ŷ − y)Φ(x).\nConsider the IRMv1 regularizer with soft assignment, expressed as\nD(q) = ∑ e ||∇w|w=1.0 1 Ne ∑ i qi(e)`(w ◦ Φ(xi), yi)||2\n= ∑ e || 1 Ne ∑ i qi(e)(ŷi − yi)Φ(xi)||2 = || 1∑′ i q ′ i ∑ i qi(ŷi − yi)Φ(xi)||2 + || 1∑′ i 1− q′i ∑ i (1− qi)(ŷi − yi)Φ(xi)||2\n= || ∑ i qiŷiΦ(xi)∑ i′ qi′ − ∑ i qiyiΦ(xi)∑ i′ qi′ ||2 + || ∑ i(1− qi)ŷiΦ(xi)∑ i′ 1− qi′ − ∑ i(1− qi)yiΦ(xi)∑ i′ 1− qi′ ||2.\n(4)\nNow consider that the space of Φ(X) is discretized into disjoint bins b over its support, using zi,b ∈ {0, 1} to indicate whether example i falls into bin b according to its mapping Φ(xi). Thus we have\nD(q) = ∑ b (|| ∑ i zi,bqiŷiΦ(xi)∑ i′ zi,bqi′ − ∑ i zi,bqiyiΦ(xi)∑ i′ zi,bqi′ ||2 + || ∑ i zi,b(1− qi)ŷiΦ(xi)∑ i′ zi,b(1− qi′) − ∑ i zi,b(1− qi)yiΦ(xi)∑ i′ zi,b(1− qi′) ||2)\n(5)\nThe important point is that within a bin, all examples have roughly the same Φ(xi) value, and the same value for ŷi as well. So denoting K (1) b := ∑ i zi,bqiŷiΦ(xi)∑ i′ zi,bqi′ and K(2)b := ∑ i zi,b(1−qi)ŷiΦ(xi)∑ i′ zi,b(1−qi′ ) as the relevant constant within-bin summations, we have the following objective to be maximized by EIIL:\nD(q) = ∑ b (||K(1)b − ∑ i zi,bqiyiΦ(xi)∑ i′ zi,bqi′ ||2 + ||K(2)b − ∑ i zi,b(1− qi)yiΦ(xi)∑ i′ zi,b(1− qi′) ||2.\nOne way to maximize this is to assign all yi = 1 values to environment 1 (qi = 1 for these examples) and all yi = 0 to the other environment (qi = 0). We can show this is maximized by considering all of the examples except the i-th one have been assigned this way, and then that the loss is maximized by assigneing the i-th example according to this rule.\nNow we want to show that the same assignment maximially violates the invariance principle (showing that this soft EIIL solution provides maximal non-invariance). Intuitively within each bin the difference between E[y|e = 1] and E[y|e = 2] is maximized (within the bin) if one of these expected label distributions is 1 while the other is 0. This can be achieved by assigning all the yi = 1 values to the first environment and the yi = 0 values to the second.\nThus a global optimum for the relaxed version of EIIL (using the IRMv1 regularizer) also maximally violates the invariance principle." } ]
2020
null
SP:ca9a9e8d0066ca55d4cd760df661bec09cdeb8eb
[ "In this paper, the author presented an advanced autoencoder framework LAE. Instead of element-wise MCMC, LAE collected samples from the posterior using the amortized Langevin dynamics of a potential energy distribution. In CLAE, an extended version of LAE, the author used an intractable energy function as the prior, and collected samples using its Langevin function. The author claims that LAE and CLAE are more efficient in large scale data and have better performance compared with traditional autoencoders and variational autoencoders." ]
How can we perform posterior inference for deep latent variable models in an efficient and flexible manner? Markov chain Monte Carlo (MCMC) methods, such as Langevin dynamics, provide sample approximations of such posteriors with an asymptotic convergence guarantee. However, it is difficult to apply these methods to large-scale datasets owing to their slow convergence and datapoint-wise iterations. In this study, we propose amortized Langevin dynamics, wherein datapoint-wise MCMC iterations are replaced with updates of an inference model that maps observations into latent variables. The amortization enables scalable inference from large-scale datasets. Developing a latent variable model and an inference model with neural networks yields Langevin autoencoders (LAEs), a novel Langevin-based framework for deep generative models. Moreover, if we define the latent prior distribution with an unnormalized energy function for more flexible generative modeling, LAEs extend to a more general framework, which we refer to as contrastive Langevin autoencoders (CLAEs). We experimentally show that LAEs and CLAEs can generate sharp image samples. We also report their performance on unsupervised anomaly detection.1
[]
[ { "authors": [ "Jinwon An", "Sungzoon Cho" ], "title": "Variational autoencoder based anomaly detection using reconstruction probability", "venue": "Special Lecture on IE,", "year": 2015 }, { "authors": [ "Christopher M Bishop" ], "title": "Latent variable models", "venue": "In Learning in graphical models,", "year": 1998 }, { "authors": [ "Miguel A Carreira-Perpinan", "Geoffrey E Hinton" ], "title": "On contrastive divergence learning", "venue": "In Aistats,", "year": 2005 }, { "authors": [ "Yilun Du", "Igor Mordatch" ], "title": "Implicit generation and generalization in energy-based models", "venue": "arXiv preprint arXiv:1903.08689,", "year": 2019 }, { "authors": [ "SM Ali Eslami", "Danilo Jimenez Rezende", "Frederic Besse", "Fabio Viola", "Ari S Morcos", "Marta Garnelo", "Avraham Ruderman", "Andrei A Rusu", "Ivo Danihelka", "Karol Gregor" ], "title": "Neural scene representation and rendering", "venue": null, "year": 2018 }, { "authors": [ "Hao Fu", "Chunyuan Li", "Xiaodong Liu", "Jianfeng Gao", "Asli Celikyilmaz", "Lawrence Carin" ], "title": "Cyclical annealing schedule: A simple approach to mitigating kl vanishing", "venue": null, "year": 1903 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Tian Han", "Yang Lu", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Alternating back-propagation for generator network", "venue": "arXiv preprint arXiv:1606.08571,", "year": 2016 }, { "authors": [ "Tian Han", "Erik Nijkamp", "Linqi Zhou", "Bo Pang", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Joint training of variational auto-encoder and latent energy-based model", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Harry H Harman" ], "title": "Modern factor analysis", "venue": "University of Chicago press,", "year": 1976 }, { "authors": [ "Hao He", "Hao Wang", "Guang-He Lee", "Yonglong Tian" ], "title": "Probgan: Towards probabilistic gan with theoretical guarantees", "venue": "In ICLR (Poster),", "year": 2019 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Geoffrey E Hinton" ], "title": "Training products of experts by minimizing contrastive divergence", "venue": "Neural computation,", "year": 2002 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Jonathan Ho", "Xi Chen", "Aravind Srinivas", "Yan Duan", "Pieter Abbeel" ], "title": "Flow++: Improving flowbased generative models with variational dequantization and architecture design", "venue": null, "year": 1902 }, { "authors": [ "Matthew D Hoffman" ], "title": "Learning deep latent gaussian models with markov chain monte carlo", "venue": "In International conference on machine learning,", "year": 2017 }, { "authors": [ "Chin-Wei Huang", "David Krueger", "Alexandre Lacoste", "Aaron Courville" ], "title": "Neural autoregressive flows", "venue": "arXiv preprint arXiv:1804.00779,", "year": 2018 }, { "authors": [ "Rie Johnson", "Tong 
Zhang" ], "title": "Composite functional gradient learning of generative adversarial models", "venue": "arXiv preprint arXiv:1801.06309,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Durk P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Durk P Kingma", "Tim Salimans", "Rafal Jozefowicz", "Xi Chen", "Ilya Sutskever", "Max Welling" ], "title": "Improved variational inference with inverse autoregressive flow", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Peter E Kloeden", "Eckhard Platen" ], "title": "Numerical solution of stochastic differential equations, volume 23", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Ananya Kumar", "SM Eslami", "Danilo J Rezende", "Marta Garnelo", "Fabio Viola", "Edward Lockhart", "Murray Shanahan" ], "title": "Consistent generative query networks", "venue": null, "year": 1807 }, { "authors": [ "Chunyuan Li", "Changyou Chen", "David Carlson", "Lawrence Carin" ], "title": "Preconditioned stochastic gradient langevin dynamics for deep neural networks", "venue": "arXiv preprint arXiv:1512.07666,", "year": 2015 }, { "authors": [ "Yingzhen Li", "Richard E Turner", "Qiang Liu" ], "title": "Approximate inference with amortised mcmc", "venue": "arXiv preprint arXiv:1702.08343,", "year": 2017 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "Radford M Neal" ], "title": "Mcmc using hamiltonian dynamics", "venue": "Handbook of Markov Chain Monte Carlo,", "year": 2011 }, { "authors": [ "Erik Nijkamp", "Mitch Hill", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Learning non-convergent nonpersistent short-run mcmc toward energy-based model", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Bo Pang", "Tian Han", "Erik Nijkamp", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Learning latent space energy-based prior model", "venue": "arXiv preprint arXiv:2006.08205,", "year": 2020 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "arXiv preprint arXiv:1505.05770,", "year": 2015 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "Yunus Saatci", "Andrew G Wilson" ], "title": "Bayesian gan", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi Chen", "Diederik P Kingma" ], "title": "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications", "venue": "arXiv preprint arXiv:1701.05517,", "year": 2017 }, { "authors": [ "Rui Shu", "Hung H Bui", "Shengjia Zhao", "Mykel J Kochenderfer", "Stefano Ermon" ], "title": "Amortized inference regularization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ 
"Michalis K Titsias", "Francisco Ruiz" ], "title": "Unbiased implicit variational inference", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Benigno Uria", "Iain Murray", "Hugo Larochelle" ], "title": "Rnade: The real-valued neural autoregressive density-estimator", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Rianne Van Den Berg", "Leonard Hasenclever", "Jakub M Tomczak", "Max Welling" ], "title": "Sylvester normalizing flows for variational inference", "venue": "In 34th Conference on Uncertainty in Artificial Intelligence 2018,", "year": 2018 }, { "authors": [ "Max Welling", "Yee W Teh" ], "title": "Bayesian learning via stochastic gradient langevin dynamics", "venue": "In Proceedings of the 28th international conference on machine learning", "year": 2011 }, { "authors": [ "Svante Wold", "Kim Esbensen", "Paul Geladi" ], "title": "Principal component analysis", "venue": "Chemometrics and intelligent laboratory systems,", "year": 1987 }, { "authors": [ "Jianwen Xie", "Ruiqi Gao", "Zilong Zheng", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Learning dynamic generator model by alternating back-propagation through time", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Xianglei Xing", "Ruiqi Gao", "Tian Han", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Deformable generator network: Unsupervised disentanglement of appearance and geometry", "venue": "arXiv preprint arXiv:1806.06298,", "year": 2018 }, { "authors": [ "Biao Zhang", "Deyi Xiong", "Jinsong Su", "Hong Duan", "Min Zhang" ], "title": "Variational neural machine translation", "venue": "arXiv preprint arXiv:1605.07869,", "year": 2016 }, { "authors": [ "Jing Zhang", "Jianwen Xie", "Nick Barnes" ], "title": "Learning noise-aware encoder-decoder from noisy labels by alternating back-propagation for saliency detection", "venue": "arXiv preprint arXiv:2007.12211,", "year": 2020 }, { "authors": [ "Yizhe Zhu", "Jianwen Xie", "Bingchen Liu", "Ahmed Elgammal" ], "title": "Learning feature-to-feature translator by alternating back-propagation for generative zero-shot learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Latent variable models are widely used for generative modeling (Bishop, 1998; Kingma & Welling, 2013), principal component analysis (Wold et al., 1987), and factor analysis (Harman, 1976). To learn a latent variable model, it is essential to estimate the latent variables, z, from the observations, x. Bayesian inference is a probabilistic approach for estimation, wherein the estimate is represented as a posterior distribution, i.e., p (z | x) = p (z) p (x | z) /p (x). A major challenge while using the Bayesian approach is that the posterior distribution is typically intractable. Markov chain Monte Carlo (MCMC) methods such as Langevin dynamics (LD) provide sample approximations for posterior distribution with an asymptotic convergence guarantee. However, MCMC methods converge slowly. Thus, it is inefficient to perform time-consuming MCMC iterations for each latent variable, particularly for large-scale datasets. Furthermore, when we obtain new observations that we would like to perform inference for, we would need to re-run the sampling procedure for them.\nIn the context of variational inference, a method to amortize the cost of datapoint-wise optimization known as amortized variational inference (AVI) (Kingma & Welling, 2013; Rezende et al., 2014) was recently proposed. In this method, the optimization of datapoint-wise parameters of variational distributions is replaced with the optimization of an inference model that predicts the variational parameters from observations. This amortization enables posterior inference to be performed efficiently on large-scale datasets. In addition, inference for new observations can be efficiently performed using the optimized inference model. AVI is widely used for the training of deep generative models, and such models are known as variational autoencoders (VAEs). However, methods based on variational inference have less approximation power, because distributions with tractable densities are used for approximations. Although there have been attempts to improve their flexibility (e.g., normalizing flows (Rezende & Mohamed, 2015; Kingma et al., 2016; Van Den Berg et al., 2018; Huang et al., 2018)), such methods typically have constraints in terms of the model architectures (e.g., invertibility in normalizing flows).\n1An implementation is available at: https://bit.ly/2Shmsq3\nTherefore, we propose an amortization method for LD, amortized Langevin dynamics (ALD). In ALD, datapoint-wise MCMC iterations are replaced with updates of an inference model that maps observations into latent variables. This amortization enables simultaneous sampling from posteriors over massive datasets. In particular, when a minibatch training is used for the inference model, the computational cost is constant with data size. Moreover, when inference is performed for new test data, the trained inference model can be used as initialization of MCMC to improve the mixing, because it is expected that the properly trained inference model can map data into the high-density area of the posteriors. We experimentally show that the ALD can accurately perform sampling from posteriors without datapoint-wise iterations. Furthermore, we demonstrate its applicability to the training of deep generative models. Neural networks are used for both generative and inference models to yield Langevin autoencoders (LAEs). 
LAEs can be easily extended for more flexible generative modeling, in which the latent prior distribution, p (z), is also intractable and defined with unnormalized energy function, by combining them with contrastive divergence learning (Hinton, 2002; Carreira-Perpinan & Hinton, 2005). We refer to this extension of LAEs as contrastive Langevin autoencoders (CLAEs). We experimentally show that our LAEs and CLAEs can generate sharper images than existing explicit generative models, such as VAEs. Moreover, we report their performance of unsupervised anomaly detection." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 PROBLEM DEFINITION", "text": "Consider a probabilistic model with observations x, continuous latent variables z, and model parameters θ, as described by the probabilistic graphical model shown in Figure 1(A). Although the posterior distribution over the latent variable is proportional to the product of the prior and likelihood: p (z | x) = p (z) p (x | z) /p (x), this is intractable owing to the normalizing constant p (x) = ∫ p (z) p (x | z) dz. This study aims to approximate the posterior p (z | x) for all n ob-\nservations x(1), . . .x(n) efficiently by obtaining samples from it." }, { "heading": "2.2 LANGEVIN DYNAMICS", "text": "Langevin dynamics (LD) (Neal, 2011) is a sampling algorithm based on the following Langevin equation: dz = −∇zU (x, z) dt+ √\n2β−1dB, (1) where U is a potential function that is Lipschitz continuous and satisfies an appropriate growth condition, β is an inverse temperature parameter, and B is a Brownian motion. This stochastic differential equation has exp (−βU (x, z)) / ∫ exp (−βU (x, z′)) dz′ as its equilibrium distribution. We set β = 1 and define the potential as follows to obtain the target posterior p (z | x) as its equilibrium:\nU (x, z) = − log p (z)− log p (x | z) . (2)\nAlgorithm 1 Amortized Langevin dynamics (training time) φ← Initialize parameters Z(1), . . . ,Z(n) ← ∅ . Initialize sample sets for all n datapoints repeat φ← φ′ ∼ N ( φ′;φ− ηφ ∑n i=1∇φU ( x(i), z(i) = fz|x ( x(i);φ )) , 2ηφI\n) Z(1), . . . ,Z(n) ← Z(1) ∪ { fφ ( x(1) )} , . . . ,Z(N) ∪ { fφ ( x(n) )} . Add samples\nuntil convergence of parameters return Z(1), . . . ,Z(n)\nAlgorithm 2 Amortized Langevin dynamics (test time) z ← fz|x (x;φ∗) . Initialize a sample using a trained inference model Z← ∅ . Initialize a sample set repeat z ← z′ ∼ N (z′; z − η∇zU (x, z) , 2ηI) . Update the sample using traditional LD Z← Z ∪ {z} . Add samples\nuntil convergence of parameters return Z\nWe can obtain samples from the posterior by simulating Eq. (1) using the Euler–Maruyama method (Kloeden & Platen, 2013) as follows:\nz ← z′ ∼ N (z′; z − η∇zU (x, z) , 2ηI) , (3) where η is the step size for the discretization. When the step size is sufficiently small, the samples asymptotically move to the target posterior by repeating this sampling iteration. LD can be applied to any posterior inference problems for continuous latent variables provided the potential energy is differentiable on the latent space. However, to obtain samples of the posterior p (z | x) for all observations x(1), . . .x(n), we should perform an iteration on Eq. (3) per datapoint as shown in Figure 1(B1). It is inefficient particularly if the dataset is large. In the next section, we demonstrate a method that addresses the inefficiency by amortization." }, { "heading": "3 AMORTIZED LANGEVIN DYNAMICS", "text": "In traditional LD, we perform MCMC iterations for each latent variable per datapoint. 
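The training loop of Algorithm 1 can be rendered compactly. The following PyTorch sketch is our own, assuming a standard Gaussian prior and a Gaussian likelihood with fixed variance (the paper marginalizes the observation variance instead); all names and step sizes are placeholders.

import torch

def potential(x, z, decoder, sigma_x=1.0):
    """U(x, z) = -log p(z) - log p(x|z), up to constants, cf. Eq. (2)."""
    log_prior = -0.5 * (z ** 2).sum()                            # N(0, I) prior
    log_lik = -0.5 * ((x - decoder(z)) ** 2).sum() / sigma_x ** 2
    return -(log_prior + log_lik)

def ald_step(x_batch, encoder, decoder, eta_phi=1e-4, n_total=None):
    """One (SG)ALD update of the inference-model parameters phi."""
    scale = 1.0 if n_total is None else n_total / len(x_batch)   # minibatch correction n/m
    U = scale * potential(x_batch, encoder(x_batch), decoder)
    grads = torch.autograd.grad(U, list(encoder.parameters()))
    with torch.no_grad():
        for p, g in zip(encoder.parameters(), grads):
            p.add_(-eta_phi * g + (2 * eta_phi) ** 0.5 * torch.randn_like(p))
    return encoder(x_batch).detach()   # posterior samples: encoder outputs, collected each step

At test time, following Algorithm 2, z would be initialized at the trained encoder's output and then refined with standard per-datapoint Langevin updates.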
This is inefficient particularly if managing massive datasets. As an alternative to performing the simulation of latent dynamics directly, we define an inference model, fz|x, which is a differentiable mapping from observations into latent variables, and consider the dynamics of its parameter φ as follows:\ndφ = − n∑ i=1 ∇φU ( x(i), z(i) = fz|x ( x(i);φ )) + √ 2dB. (4)\nBecause function fz|x outputs latent variables, the stochastic dynamics on the parameter space induces other dynamics on the latent space and is represented as the total gradient of fz|x:\ndz(i) = dimφ∑ k=1 ∂z(i) ∂φk dφk\n= − dimφ∑ k=1 ∂z(i) ∂φk ( ∂ ∂φk U ( x(i), fz|x ( x(i);φ ))) dt\n− dimφ∑ k=1 ∂z(i) ∂φk n∑ j=1,j 6=i ∂ ∂φk U ( x(j), fz|x ( x(j);φ )) dt+√2 dimφ∑ k=1 ∂z(i) ∂φk dB. (5)\nMSE ESS (1 std.) MSCE\nLD 0.00145 570.24 (25.25) 0.00136 ALD 0.00233 634.27 (40.94) 0.00151\nTable 1: Quantitative comparison of the sample quality between traditional LD and our ALD. The mean squared error (MSE) between the true mean and the sample average, the effective sample size (ESS), and the Monte Carlo standard error (MSCE) are provided as evaluation metrics.\nThe first term of Eq. (5) approximates −∇z(i)U ( x(i), z(i) ) dt in Eq. (1), and the remaining terms introduce a random walk behavior to the dynamics as in the Brownian term of Eq. (1). For the simulation of Eq. (4), we use the Euler–Maruyama method, as in traditional LD:\nφ← φ′ ∼ N ( φ′;φ− ηφ\nn∑ i=1 ∇φU ( x(i), z(i) = fz|x ( x(i);φ )) , 2ηφI ) , (6)\nwhere ηφ is the step size. Through the iterations, the posterior sampling is implicitly performed by collecting outputs of the inference model for all datapoints in the training set as described in Algorithm 1. When we perform inference for new test data, the trained inference model can be used as initialization of a MCMC method (e.g., traditional LD) as shown in Algorithm 2, because it is expected that the trained inference model can map data into the high-density area of the posteriors.\nFor minibatch training, we can substitute the minibatch statistics of m datapoints for the derivative for all n data in Eq. (6):\nn∑ i=1 ∇φU ( x(i), z(i) = fz|x ( x(i);φ )) ≈ n m m∑ i=1 ∇φU ( x(i), z(i) = fz|x ( x(i);φ )) . (7)\nIn this case, we refer to the algorithm as stochastic gradient amortized Langevin dynamics (SGALD). SGALD enables us to sample from posteriors of a massive dataset with a constant computational cost. By contrast, performing traditional LD requires a linearly increasing cost with data size. For minibatch training of LD, adaptive preconditioning is known to be effective to improve convergence, which is referred to as preconditioned stochastic gradient Langevin dynamics (pSGLD) (Li et al., 2015). This preconditioning technique is also applicable to our SGALD, and we employ it throughout our experiments.\nFigure 2 shows a simple example of sampling from a posterior distribution, where its prior and likelihood are defined using conjugate bivariate Gaussian distributions (see Appendix F for more details). ALD produces samples that match well the shape of the target distributions. The mean squared error (MSE) between the true mean and the sample average, the effective sample size (ESS), and the Monte Carlo standard error (MSCE) are provided for quantitative comparison, as shown in Table 1. It can be observed that the sample quality of ALD is competitive to standard LD, even though ALD does not perform direct update of samples in the latent space. 
Figure 3 shows the evolution of obtained sample values by traditional LD and our SGALD for posteriors defined by a\nsimple univariate conjugate Gaussian (see Appendix F.2 for more experimental details). SGALD’s samples converges much faster than traditional LD.\nThe advantage of our ALD over amortized variational inference (AVI) is the flexibility of posterior approximation. Figure 4 is an example where the likelihood p (x | z) is defined using a neural network, therefore the posterior p (z | x) is highly multimodal. AVI methods typically approximate posteriors using variational distributions, which have tractable density function (e.g., Gaussian distributions). Hence, their approximation power is limited by the choice of variational distribution family, and they often fail to approximate such complex posteriors. On the other hand, ALD can capture well such posteriors by obtaining samples. The results in other examples are summarized in Appendix F." }, { "heading": "4 LANGEVIN AUTOENCODERS", "text": "Suppose we consider sampling the model parameter θ in addition to the local latent variables z from the joint posterior p (z, θ | x); then, we can ingenuously extend ALD to a full Bayesian approach by combining it with standard Langevin dynamics. Herein, the prior of the model parameter p (θ) is added to the potential U , and θ is sampled using standard LD or its minibatch version, stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011) as follows:\nU (X,Z,θ) = − log p (θ)− n∑ i=1 log p ( z(i) | θ ) + log p ( x(i) | z(i),θ ) , (8)\nθ ← θ′ ∼ N ( θ′;θ − ηθ∇θU ( X,Z = fz|x (X;φ) ) , 2ηθI ) , (9)\nwhere ηθ is a step size. If we omit the Gaussian noise injection in Eq. (9), it corresponds to gradient descent for maximum a posteriori (MAP) estimation of θ; if we additionally use a flat prior for p (θ), it yields the maximum likelihood estimation (MLE). In this study, we assume a flat prior for p (θ) and omit the notation for simplicity.\nTypically, the latent prior p (z | θ) and the likelihood p (x | z, θ) are defined as diagonal Gaussians: p (z | θ) = N ( z;µz,diag ( σ2z )) , p (x | z,θ) = N ( x;µx = fx|z (z;θ) , σ 2 xI ) , (10) where µz,µx and σ2z, σ 2 x are mean and variance parameters of Gaussian distributions respectively. fx|z (z;θ) is a mapping from the latent space to the observation space. The parameters of the latent prior µz and σ2z can be included to θ as learnable model parameters, or be fixed to manually decided values (e.g., µz = 0,σ2z = 1). For the observation variance σ 2 x, many existing works treat it as a hyperparameter. However, its tuning is difficult, and often requires heuristic techniques for proper training (Fu et al., 2019). Instead, here, we apply a different approach, in which the variance parameter is marginalized out, and the likelihood can be calculated only with the mean parameterµx (see Appendix B for further details). Furthermore, when the original data are quantized into discrete representation (e.g., 8-bit RGB images), it is not desirable to use continuous distributions, such as Gaussians, as likelihood functions. Thus, we should map quantized data into the continuous space in advance. This process is often referred to as dequantization (Salimans et al., 2017; Ho et al., 2019). Our ALD is also applicable as a dequantization method by formulating it as the posterior inference problem. Further detailed explanations are provided in Appendix C.\nWe can choose arbitrary differentiable functions for the generative model fx|z and the inference model fz|x. 
If neural networks are chosen for both, we obtain the Langevin autoencoder (LAE), a new deep generative model within the auto-encoding scheme. The LAE algorithm is summarized in Algorithm 3 in the appendix." }, { "heading": "5 CONTRASTIVE LANGEVIN AUTOENCODERS", "text": "So far, we have dealt with the case where the latent prior distribution $p(z \mid \theta)$ is tractable. To enable more flexible modeling, we now consider using an energy-based model (EBM) (Du & Mordatch, 2019; Pang et al., 2020; Han et al., 2020) for the latent prior:
$$p(z \mid \theta) = \frac{\exp(-f_z(z;\theta))}{Z(\theta)}, \quad (11)$$
where $f_z(z;\theta)$ is an energy function that maps a latent variable to a scalar value, and $Z(\theta) = \int \exp(-f_z(z;\theta))\,\mathrm{d}z$ is the normalizing constant. In this case, the derivative of the potential energy $\nabla_\theta U(X, Z, \theta)$ is intractable owing to the normalizing constant. However, we can obtain an unbiased estimator of the derivative from samples of the prior $p(z \mid \theta)$:
$$\nabla_\theta U(X, Z, \theta) = \sum_{i=1}^{n} \left[\nabla_\theta f_z(z^{(i)};\theta) + \nabla_\theta \log Z(\theta) - \nabla_\theta \log p(x^{(i)} \mid z^{(i)}, \theta)\right] \quad (12)$$
$$= \sum_{i=1}^{n} \left[\nabla_\theta f_z(z^{(i)};\theta) - \mathbb{E}_{p(z \mid \theta)}\left[\nabla_\theta f_z(z;\theta)\right] - \nabla_\theta \log p(x^{(i)} \mid z^{(i)}, \theta)\right]. \quad (13)$$
See Appendix D for the derivation. This training algorithm for EBMs is known as contrastive divergence learning (Hinton, 2002; Carreira-Perpinan & Hinton, 2005). To obtain samples from the latent prior, we can use standard LD:
$$z \leftarrow z' \sim \mathcal{N}\left(z';\; z - \eta_z \nabla_z f_z(z;\theta),\; 2\eta_z I\right), \quad (14)$$
where $\eta_z$ is a step size. However, we found that our amortized Langevin algorithm works well even when sampling from an unconditional prior distribution. In the unconditional case, we prepare a sampler function $f_{z|u}(u;\psi)$ that maps its input $u$ to the latent variable $z$. Here the input vector $u$ is fixed because, unlike the posterior inference case, the prior distribution has no conditioning variables other than the model parameters θ. To run multiple MCMC chains in parallel, we prepare $k$ fixed inputs $u^{(1)}, \ldots, u^{(k)}$ and update the function $f_{z|u}$ as follows:
$$\psi \leftarrow \psi' \sim \mathcal{N}\left(\psi';\; \psi - \eta_\psi \sum_{i=1}^{k} \nabla_\psi f_z\left(z^{(i)} = f_{z|u}(u^{(i)};\psi);\theta\right),\; 2\eta_\psi I\right), \quad (15)$$
where $\eta_\psi$ is a step size. Typically, the fixed input vectors are drawn from a standard Gaussian distribution (i.e., $u^{(1)}, \ldots, u^{(k)} \sim \mathcal{N}(u; 0, I)$). Figure 5 shows an example of sampling from a mixture of eight Gaussians using ALD; ALD properly captures the multimodality of the true density and works well in the unconditional case too. For minibatch training, we can substitute a stochastic gradient over $m$ minibatch chains for the gradient over all $k$ chains:
$$\sum_{i=1}^{k} \nabla_\psi f_z\left(z^{(i)} = f_{z|u}(u^{(i)};\psi);\theta\right) \approx \frac{k}{m} \sum_{i=1}^{m} \nabla_\psi f_z\left(z^{(i)} = f_{z|u}(u^{(i)};\psi);\theta\right). \quad (16)$$
The advantage of amortization in the unconditional case is that we can run massive numbers of chains in parallel at a constant computational cost via minibatch training. For simplicity, we assume that the number of chains equals the number of datapoints, i.e., $k = n$.
In summary, the encoder $f_{z|x}$, the decoder $f_{x|z}$, and the latent energy function $f_z$ are trained by minimizing the following loss function $\mathcal{L}$, whereas the latent sampler $f_{z|u}$ is trained by maximizing it; Brownian-motion noise is injected into each update to avoid collapsing to MAP estimates (or MLE):
$$\mathcal{L}(\theta, \varphi, \psi) = \sum_{i=1}^{n} \left[f_z\left(f_{z|x}(x^{(i)};\varphi);\theta\right) - f_z\left(f_{z|u}(u^{(i)};\psi);\theta\right) - \log p\left(x^{(i)} \mid z^{(i)} = f_{z|x}(x^{(i)};\varphi), \theta\right)\right]. \quad (17)$$
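The following is a minimal sketch of one training step on the loss of Eq. (17), with the encoder, decoder, and energy function descending the loss and the sampler ascending it (cf. Eq. (15) and Algorithm 4 in the appendix). The unit-variance Gaussian likelihood and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def langevin_update(module, lr, ascend=False):
    # One Euler-Maruyama step on a module's parameters with Gaussian
    # noise of std sqrt(2 * lr), as in Eqs. (6), (9), and (15).
    with torch.no_grad():
        for p in module.parameters():
            if p.grad is None:
                continue
            direction = p.grad if ascend else -p.grad
            p.add_(lr * direction + torch.randn_like(p) * (2 * lr) ** 0.5)

def clae_step(x, u, enc, dec, energy, sampler, lr=1e-3):
    # One step on the loss of Eq. (17); the Gaussian likelihood term
    # below is an illustrative assumption.
    z_data = enc(x)                              # f_{z|x}(x; phi)
    z_model = sampler(u)                         # f_{z|u}(u; psi)
    log_lik = -0.5 * ((x - dec(z_data)) ** 2).sum(dim=1)
    loss = (energy(z_data).squeeze(-1)
            - energy(z_model).squeeze(-1)
            - log_lik).sum()
    for module in (enc, dec, energy, sampler):
        module.zero_grad()
    loss.backward()
    for module in (enc, dec, energy):
        langevin_update(module, lr)              # theta and phi descend L
    langevin_update(sampler, lr, ascend=True)    # psi ascends L (Eq. (15))
    return loss.item()

# Usage sketch with hypothetical sizes: 2-D observations and latents.
enc = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
dec = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
energy = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
sampler = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
x, fixed_u = torch.randn(32, 2), torch.randn(32, 2)
print(clae_step(x, fixed_u, enc, dec, energy, sampler))
```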
Furthermore, when the energy function $f_z$ and the sampler $f_{z|u}$ are parameterized by neural networks, we refer to the whole model as a contrastive Langevin autoencoder (CLAE). At convergence of CLAE training, the inference model $f_{z|x}$ produces samples from the true posterior $p(z \mid x, \theta)$, and the model parameter θ follows $p(\theta \mid X)$. As the number of datapoints tends to infinity (i.e., $n \to \infty$), the generative model $p(x \mid \theta) = \int p(z \mid \theta)\, p(x \mid z, \theta)\,\mathrm{d}z$ converges to the data distribution $p_{\mathrm{data}}(x)$. Moreover, when the sampler function's outputs follow the marginal latent distribution $\mathbb{E}_{p_{\mathrm{data}}(x)}[p(z \mid x, \theta)]$, the first and second terms on the right-hand side of Eq. (17) cancel out; the energy function $f_z$ and the sampler function $f_{z|u}$ therefore also converge to an equilibrium." }, { "heading": "6 RELATED WORKS", "text": "Amortized inference is well investigated in the context of variational inference, where it is often referred to as amortized variational inference (AVI) (Rezende & Mohamed, 2015; Shu et al., 2018). The basic idea of AVI is to replace the optimization of datapoint-wise variational parameters with the optimization of parameters shared across all datapoints, by introducing an inference model that predicts latent variables from observations. Today, AVI is commonly used in fields such as the training of generative models (Kingma & Welling, 2013), semi-supervised learning (Kingma et al., 2014), anomaly detection (An & Cho, 2015), machine translation (Zhang et al., 2016), and neural rendering (Eslami et al., 2018; Kumar et al., 2018). In the MCMC literature, however, there are few works on such amortization. Han et al. (2016) use traditional LD to obtain samples from posteriors for training deep latent variable models. Such Langevin-based algorithms for deep latent variable models are known as alternating back-propagation (ABP) and are widely applied in several fields (Xie et al., 2019; Zhang et al., 2020; Xing et al., 2018; Zhu et al., 2019). However, ABP requires datapoint-wise Langevin iterations, causing slow convergence. Moreover, when performing inference for new data at test time, ABP requires re-running MCMC iterations from randomly initialized samples. Although Li et al. (2017) and Hoffman (2017) propose amortization methods for MCMC, they only amortize the cost of MCMC initialization using an inference model; they therefore do not completely remove datapoint-wise MCMC iterations.

Autoencoders (AEs) (Hinton & Salakhutdinov, 2006) are a special case of LAEs in which the Gaussian noise injection in the update of the inference model (encoder) is omitted from Eq. (6) and a flat prior is used for $p(z \mid \theta)$. When a different distribution is used as the latent prior, the model is known as a sparse autoencoder (SAE) (Ng et al.). In these cases, the latent dynamics of Eq. (5) are dominated by the gradient $\nabla_\varphi U$, so the latent variables converge to MLE or MAP estimates, $\arg\max_z p(z \mid x)$, or other stationary points. That is, AEs (and SAEs) can be regarded as MLE (and MAP) algorithms for both the parameter θ and the latent variables z. Conversely, LAEs can be considered (S)AEs in which the whole model is trained with SGLD instead of a stochastic optimization method such as stochastic gradient descent (SGD); the snippet below makes this contrast explicit.
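The relationship is easiest to see in code: a plain AE step and an LAE step differ only in the injected Gaussian noise. A minimal sketch, assuming the gradients of the potential $U$ have already been computed:

```python
import torch

def ae_step(params, grads, lr):
    # Plain autoencoder / MAP update: deterministic gradient descent on U.
    for p, g in zip(params, grads):
        p.data.add_(-lr * g)

def lae_step(params, grads, lr):
    # LAE update: the same step plus Brownian noise (cf. Eq. (6)),
    # which turns the optimizer into a posterior sampler.
    for p, g in zip(params, grads):
        p.data.add_(-lr * g + torch.randn_like(p) * (2 * lr) ** 0.5)
```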
Variational Autoencoders (VAEs) are based on AVI: an inference model (encoder) is defined as a variational distribution $q(z \mid x; \varphi)$ using a neural network, and its parameter φ is optimized by maximizing the evidence lower bound (ELBO) $\mathbb{E}_{q(z \mid x;\varphi)}\left[\log \frac{\exp(-U(x,z))}{q(z \mid x;\varphi)}\right] = -\mathbb{E}_{q(z \mid x;\varphi)}\left[U(x,z)\right] + H(q)$. VAEs and LAEs contrast in where stochastic noise enters. In VAEs, noise is used to sample from the variational distribution when computing the potential $U$, i.e., in the forward calculation; in LAEs, noise is used when computing the gradient $\nabla_\varphi U$, i.e., in the backward calculation. This contrast characterizes the two different approaches to posterior approximation: the optimization-based approach of VAEs and the sampling-based approach of LAEs. The advantage of LAEs over VAEs is that LAEs can flexibly approximate complex posteriors by obtaining samples, whereas VAEs' approximation ability is limited by the choice of the variational distribution $q(z \mid x;\varphi)$, which must have a tractable density. Although several methods improve this approximation flexibility, they typically either constrain the model architecture (e.g., invertibility and ease of Jacobian computation in normalizing flows (Rezende & Mohamed, 2015; Kingma et al., 2016; Van Den Berg et al., 2018; Huang et al., 2018; Titsias & Ruiz, 2019)) or incur additional computational cost (e.g., MCMC sampling for the reverse conditional distribution in unbiased implicit variational inference (Titsias & Ruiz, 2019)).

Energy-based models are challenging to train, and many researchers have studied methodology for their stable and practical training. A major challenge is the required MCMC sampling from EBMs, which is difficult in high-dimensional data spaces. Our CLAEs avoid this difficulty by defining the energy function in latent space rather than data space. A similar approach is taken by Pang et al. (2020), but they do not amortize the sampling of the latent prior and posterior as CLAEs do. Han et al. (2020) propose to learn VAEs and EBMs in latent space, but their energy function is defined on the joint distribution of the observation and the latent variable rather than on the latent prior. Taking the more direct approach of defining EBMs in observation space, Du & Mordatch (2019) use spectral normalization (Miyato et al., 2018) on the energy function to smooth its density and stabilize training, and Nijkamp et al. (2019) show that short-run MCMC is effective for training EBMs.

Generative adversarial networks (GANs) (Goodfellow et al., 2014) can be regarded as a special case of CLAEs by interpreting their discriminator and generator as the energy function and the sampler function, respectively (see Appendix E for further details)." }, { "heading": "7 IMAGE GENERATION", "text": "To demonstrate the applicability of our framework to generative model training, we perform experiments on image generation using the binarized MNIST (BMNIST), MNIST, SVHN, CIFAR-10, and CelebA datasets. As baselines, we use VAEs and ABP (Han et al., 2016), an algorithm that trains deep latent variable models using LD without amortization. We also report the performance of deep latent Gaussian models (DLGMs) (Hoffman, 2017), in which VAE-like encoders are used to initialize MCMC for posterior inference, as an alternative amortization approach. For quantitative evaluation, we report the reconstruction error (RE) as a proxy for the marginal likelihood $p(x \mid \theta)$, which cannot be calculated for LAEs and CLAEs.
Because the RE is not a measure of sample quality, we additionally provide the Fréchet Inception Distance (FID) (Heusel et al., 2017) for SVHN, CIFAR-10, and CelebA. The results are summarized in Table 2, and the denoising performance of trained VAEs, LAEs, and CLAEs is given in Table 5 in the appendix. CLAEs consistently outperform the others, and LAEs also provide results competitive with the baselines.
In addition, LAE training is faster than ABP thanks to amortization, as shown in Figure 6. ABP cannot update the inference for datapoints that are not included in a minibatch, whereas LAEs can, through the update of their inference model (encoder). This amortization enables scalable inference for large-scale datasets and accelerates the training of generative models. Qualitatively, images generated by LAEs and CLAEs are sharper than those of VAEs and ABP, as shown in Figure 7. Further examples are given in the appendix." }, { "heading": "8 ANOMALY DETECTION", "text": "In addition to image generation, the potential energy in Eq. (2) can be useful for unsupervised anomaly detection, because it can serve as a measure of probability density. For CLAEs, the potential energy itself cannot be calculated because it includes the logarithm of the normalizing constant of the latent prior, $\log Z(\theta)$. However, this term can be ignored because it is constant with respect to the observation $x$ and the latent $z$. We therefore use the pseudo potential energy $\tilde{U}$ as the anomaly measure:
$$\tilde{U}(x, z) = -\log p(x \mid z; \theta) + f_z(z; \theta). \quad (18)$$
We test the efficacy of our LAEs and CLAEs for anomaly detection using MNIST. We treat each digit class in turn as normal and the remaining nine digits as anomaly examples. We use AEs and VAEs as baselines and report the area under the precision-recall curve (AUPRC) as the comparison metric, using the RE as the anomaly measure for AEs and the negative ELBO for VAEs. As Table 3 shows, our LAEs and CLAEs outperform AEs and VAEs. A short sketch of the anomaly score follows."
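As a concrete illustration, a minimal sketch of the pseudo potential energy score of Eq. (18), again assuming a unit-variance Gaussian likelihood (the experiments use the marginalized Gaussian of Appendix B instead):

```python
import torch

def anomaly_score(x, enc, dec, energy):
    # Pseudo potential energy of Eq. (18): negative log-likelihood plus
    # the latent energy, with the constant log Z(theta) dropped.
    z = enc(x)
    nll = 0.5 * ((x - dec(z)) ** 2).sum(dim=1)  # -log p(x|z) up to a constant
    return nll + energy(z).squeeze(-1)          # higher score = more anomalous
```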
}, { "heading": "Numbers and Arrays", "text": "a A scalar (integer or real)\na A vector\nI Identity matrix with dimensionality implied by context\ndiag(a) A square, diagonal matrix with diagonal entries given by a\na A scalar random variable\na A vector-valued random variable" }, { "heading": "Sets and Graphs", "text": "A A set R The set of real numbers {0, 1} The set containing 0 and 1 {0, 1, . . . , n} The set of all integers between 0 and n [a, b] The real interval including a and b\n(a, b] The real interval excluding a but including b" }, { "heading": "Indexing", "text": "ai or a[i] Element i of vector a, with indexing starting at 1\nai or a[i] Element i of the random vector a\nCalculus\ndy dx Derivative of y with respect to x\n∂y ∂x Partial derivative of y with respect to x ∇xy Gradient of y with respect to x∫ f(x)dx Definite integral over the entire domain of x" }, { "heading": "Probability and Information Theory", "text": "P (a) A probability distribution over a discrete variable\np(a) A probability distribution over a continuous variable, or over a variable whose type has not been specified\na ∼ P Random variable a has distribution P Ex∼P [f(x)] or Ef(x) Expectation of f(x) with respect to P (x) H(x) or H(P ) Shannon entropy of the random variable x that has distri-\nbution P\nDKL(P‖Q) Kullback-Leibler divergence of P and Q N (x;µ,Σ) Gaussian distribution over x with mean µ and covariance\nΣ\nU(x;a, b) Uniform distribution over x with lower range a and upper range b\nGam(x;α, β) Gamma distribution over x with shape α and rate range β" }, { "heading": "Functions", "text": "f : A→ B The function f with domain A and range B f(x;θ) A function of x parametrized by θ. (Sometimes we write\nf(x) and omit the argument θ to lighten notation)\nlog x Natural logarithm of x\nσ(x) Logistic sigmoid, 1 1 + exp(−x) Γ(z) Gamma function, ∫ ∞ 0 tz−1e−tdt bxc Integer part of x, i.e., bxc = max{n ∈ N | n ≤ x} 1condition is 1 if the condition is true, 0 otherwise" }, { "heading": "B MARGINALIZING OUT OBSERVATION VARIANCE", "text": "When we use a diagonal Gaussian distribution for the likelihood function (i.e., p (x | z,θ) = N ( x;µx = fx|z (z;θ) , σ 2 xI ) ), we have to decide the parameter of observation variance σ2x. A simple and popular way is to manually choose the parameter in advance (e.g., σ2x = 1). However, it is difficult to choose a proper value, and it is desirable that the variance is calibrated for each datapoint. To address it, we use an alternative approach, in which the variance is marginalized out and the likelihood can be calculated only with the mean parameter. First, we define the precision parameter, which is the reciprocal of variance, i.e., λx = 1/σ2x. The precision is defined per datapoint, and shared across all dimension of the observation. 
When we define the prior distribution of the precision using an uninformative flat prior (e.g., $p(\lambda_x) = \mathrm{Gam}(\lambda_x; 0, 0)$), the marginal distribution will simply be the integral of the Gaussian likelihood over $\lambda_x$:
$$\int \mathcal{N}\left(x;\, \mu_x = f_{x|z}(z;\theta),\, \lambda_x^{-1} I\right) \mathrm{d}\lambda_x \quad (19)$$
$$= \int \prod_{i=1}^{d} \sqrt{\frac{\lambda_x}{2\pi}} \exp\left(-\frac{\lambda_x (x_i - \mu_x[i])^2}{2}\right) \mathrm{d}\lambda_x \quad (20)$$
$$= \int \left(\frac{\lambda_x}{2\pi}\right)^{d/2} \exp\left(-\frac{\lambda_x \sum_{i=1}^{d}(x_i - \mu_x[i])^2}{2}\right) \mathrm{d}\lambda_x \quad (21)$$
$$= 2\,\pi^{-d/2} \left(\sum_{i=1}^{d} (x_i - \mu_x[i])^2\right)^{-\frac{d+2}{2}} \Gamma\left(\frac{d+2}{2}\right), \quad (22)$$
where $d$ is the dimensionality of $x$ and $\Gamma$ is the gamma function. This marginalized distribution is improper: its integral over $x$ diverges rather than equaling 1. We refer to it as the marginalized Gaussian distribution. Marginalized Gaussians are widely applicable to mean-parameter estimation of a Gaussian distribution, especially when its variance is unknown.

Algorithm 3 Langevin Autoencoders
  θ, φ ← Initialize parameters
  repeat
    θ ← θ′ ∼ N(θ′; θ − η_θ ∇_θ U(X, Z = f_{z|x}(X; φ), θ), 2η_θ I)
    φ ← φ′ ∼ N(φ′; φ − η_φ ∇_φ U(X, Z = f_{z|x}(X; φ), θ), 2η_φ I)
  until convergence of parameters
  return θ, φ

Algorithm 4 Contrastive Langevin Autoencoders
  θ, φ, ψ ← Initialize parameters
  repeat
    θ ← θ′ ∼ N(θ′; θ − η_θ ∇_θ L(θ, φ, ψ), 2η_θ I)
    φ ← φ′ ∼ N(φ′; φ − η_φ ∇_φ L(θ, φ, ψ), 2η_φ I)
    ψ ← ψ′ ∼ N(ψ′; ψ + η_ψ ∇_ψ L(θ, φ, ψ), 2η_ψ I)
  until convergence of parameters
  return θ, φ, ψ" }, { "heading": "C LANGEVIN DEQUANTIZATION", "text": "Many image datasets, such as MNIST and CIFAR-10, are recordings of continuous signals quantized into discrete representations. For example, standard 8-bit RGB images of resolution H × W are represented as elements of $\{0, 1, \ldots, 255\}^{3 \times H \times W}$. If we naively fit a continuous density model to such discrete data, the model will converge to a degenerate solution that places all probability mass on the discrete datapoints (Uria et al., 2013). A common solution to this problem is to first convert the discrete data distribution into a continuous distribution via a process called dequantization, and then model the resulting continuous distribution with the continuous density model. Here, we denote the random variable of the original discretized data by $x \in \mathbb{N}^d$ ($d = 3 \times H \times W$), the dequantized continuous variable by $\tilde{x} \in \mathbb{R}^d$, and the continuous density model by $p(\tilde{x})$. The simplest form of dequantization adds uniform noise to the discrete data and treats the result as a continuous distribution, which we refer to as uniform dequantization:
$$\tilde{x} \sim \mathcal{U}(\tilde{x};\, x,\, x + 1). \quad (23)$$
However, this approach introduces flat, step-wise regions into the data distribution, which are unnatural and difficult to fit with parametric continuous distributions. Moreover, the range of the dequantized data is still bounded (i.e., $\tilde{x} \in (0, 256)^d$), so it is not desirable to fit a continuous density model defined on an unbounded range, e.g., a Gaussian distribution.
To develop a more sophisticated approach, we first consider the quantization process, the inverse of dequantization, in which continuous data $\tilde{x} \in \mathbb{R}^d$ is discretized into $\{0, 1, \ldots, 255\}^d$. This process is represented by a conditional distribution $P(x \mid \tilde{x})$, defined for example as:
$$P(x \mid \tilde{x}) = \mathbb{1}_{x = \lfloor 256 \cdot \sigma(\tilde{x}) \rfloor}. \quad (24)$$
In this definition, the continuous data is first squashed into $(0, 256)^d$ using a scaled logistic sigmoid and then discretized into $\{0, 1, \ldots, 255\}^d$ by taking the integer part. Although the quantization process could be formulated differently, we base our discussion on this formulation.
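To make these maps concrete, here is a small sketch of the quantization process of Eq. (24) together with a sigmoid-based dequantizer in the spirit of the inference model defined next (Eq. (25) below); the helper network g is hypothetical.

```python
import torch

def quantize(x_tilde):
    # P(x | x_tilde) of Eq. (24): squash into (0, 256) with a scaled
    # sigmoid, then take the integer part to land in {0, ..., 255}.
    return torch.floor(256 * torch.sigmoid(x_tilde)).long().clamp(max=255)

def dequantize(x, g):
    # Sigmoid-based inverse parameterization: by construction,
    # quantize(dequantize(x, g)) == x, since sigmoid(g(x)) lies in (0, 1).
    u = (x.float() + torch.sigmoid(g(x.float()))) / 256.0
    return torch.logit(u)  # sigma^{-1}
```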
When we have a density model of the continuous data, $p(\tilde{x})$, the dequantization process can be formulated as the posterior inference problem $p(\tilde{x} \mid x) \propto p(\tilde{x})\, P(x \mid \tilde{x})$. Although this posterior is typically intractable, we can obtain samples from it using our ALD algorithm, in the same way as posterior sampling in latent variable models. If we construct the inference model $f_{\tilde{x}|x} : \{0, 1, \ldots, 255\}^d \to \mathbb{R}^d$ as follows, the likelihood is constant, i.e., $P(x \mid \tilde{x} = f_{\tilde{x}|x}(x;\xi)) = 1$:
$$f_{\tilde{x}|x}(x;\xi) = \sigma^{-1}\left(\frac{x + \sigma(g(x;\xi))}{256}\right), \quad (25)$$
where $g(x;\xi)$ is a mapping from the discretized data $x$ into $\mathbb{R}^d$. The potential energy therefore reduces to the negative log-density, since the log-likelihood term vanishes:
$$U\left(x, \tilde{x} = f_{\tilde{x}|x}(x;\xi)\right) = -\log p\left(\tilde{x} = f_{\tilde{x}|x}(x;\xi)\right) - \underbrace{\log P\left(x \mid \tilde{x} = f_{\tilde{x}|x}(x;\xi)\right)}_{=\,0}. \quad (26)$$
The parameter of the inference model $f_{\tilde{x}|x}$ is updated with our ALD algorithm, as in Eq. (6):
$$\xi \leftarrow \xi' \sim \mathcal{N}\left(\xi';\; \xi - \eta_\xi \sum_{i=1}^{n} \nabla_\xi U\left(x^{(i)},\, \tilde{x}^{(i)} = f_{\tilde{x}|x}(x^{(i)};\xi)\right),\; 2\eta_\xi I\right), \quad (27)$$
where $\eta_\xi$ is a step size.
When we use this Langevin dequantization with latent variable models such as LAEs, the potential energy is rewritten as follows, and all parameters $\{\theta, \varphi, \xi\}$ are trained in an end-to-end fashion:
$$U\left(X, \tilde{X} = f_{\tilde{x}|x}(X;\xi), Z = f_{z|x}(\tilde{X};\varphi), \theta\right) = -\log p(\theta) - \sum_{i=1}^{n}\left[\log p\left(z^{(i)} = f_{z|x}(\tilde{x}^{(i)};\varphi)\right) + \log p\left(\tilde{x}^{(i)} = f_{\tilde{x}|x}(x^{(i)};\xi) \mid z^{(i)} = f_{z|x}(\tilde{x}^{(i)};\varphi)\right)\right]. \quad (28)$$
Figure 8 compares the SVHN data distribution of the top-left corner pixel before and after dequantization. Before dequantization, the data are concentrated on discrete points; after dequantization, the distribution becomes continuous enough to fit continuous density models. Furthermore, Figure 9 compares MNIST samples generated by LAEs with and without Langevin dequantization; dequantization appears to improve the sharpness of the generated samples." }, { "heading": "D DERIVATION OF EQ. (13)", "text": "
$$\nabla_\theta \log Z(\theta) = \frac{1}{Z(\theta)} \nabla_\theta Z(\theta) \quad (29)$$
$$= \frac{1}{Z(\theta)} \int \nabla_\theta \exp(-f_z(z;\theta))\, \mathrm{d}z \quad (30)$$
$$= -\int \frac{\exp(-f_z(z;\theta))}{Z(\theta)}\, \nabla_\theta f_z(z;\theta)\, \mathrm{d}z \quad (31)$$
$$= -\int p(z \mid \theta)\, \nabla_\theta f_z(z;\theta)\, \mathrm{d}z \quad (32)$$
$$= -\mathbb{E}_{z \sim p(z \mid \theta)}\left[\nabla_\theta f_z(z;\theta)\right] \quad (33)$$" }, { "heading": "E ADDITIONAL RELATED WORK", "text": "Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are similar to CLAEs in that both are trained by a minimax game between two functions (the energy function and the sampler function in CLAEs; the discriminator and the generator in GANs). However, there are some differences. First, the minimax game is played in the latent space in CLAEs, whereas it is played in the observation space in GANs; in other words, in GANs the latent variable is identical to the observation (i.e., $p(x \mid z) = \mathbb{1}_{x=z}$).² Second, the loss function differs slightly. In GANs, the loss function is
$$\mathcal{L}_{\mathrm{GAN}}(\theta, \psi) = -\sum_{i=1}^{n}\left[\log D\left(x^{(i)};\theta\right) + \log\left(1 - D\left(G(u^{(i)};\psi);\theta\right)\right)\right], \quad (34)$$
where $G$ denotes the generator, which maps its inputs $u$ into the observation space, $D$ denotes the discriminator, which maps from the observation space into $(0, 1)$, and $u^{(i)} \sim \mathcal{N}(u; 0, I)$. The discriminator is trained to minimize this loss function, whereas the generator is trained to maximize it. The main difference from Eq. (17) is the second term; substituting $-\log D(G(u^{(i)};\psi);\theta)$ for it makes the loss more similar to Eq. (17). This modification is known as the $-\log D$ trick and is often used to stabilize the training of the GAN generator (Goodfellow et al., 2014; Johnson & Zhang, 2018).
In this formulation, the counterparts of the energy function and the sampler function are $-\log D(\cdot;\theta)$ and $G(\cdot;\psi)$, respectively. A remaining difference is that the range of $-\log D(\cdot;\theta)$ is bounded within $(0, \infty)$, whereas the range of the CLAE energy function is unbounded. Another difference between CLAEs and GANs is that the input vectors of the sampler function are fixed throughout training in CLAEs, whereas the generator's inputs change at every iteration by sampling from $\mathcal{N}(u; 0, I)$ in GANs. Furthermore, CLAEs are trained with noise-injected gradients, whereas GANs are trained with a standard stochastic optimization method such as SGD. In CLAE training, as the number of sampler inputs increases, the relative magnitude of the noise in Eq. (15) decreases; in the infinite limit (i.e., $k \to \infty$), Eq. (15) reduces to standard (stochastic) gradient descent. Thus, GANs can be interpreted as the infinite case of CLAEs with regard to generator training. In this infinite case, the generator (sampler) may converge to a solution that always generates maximum-density points rather than samples from the distribution defined by the energy function. This behavior can cause mode collapse, a well-known challenge in GAN training.

²Note that the latent variable z is different from the input of the GAN generator. Here, the generator's input is denoted by u for the analogy with CLAEs.

In GANs, the discriminator is also trained with standard stochastic optimization, corresponding to the MLE case in which the noise injection in Eq. (9) is omitted. Although there have been investigations into applying Bayesian approaches to GANs (Saatci & Wilson, 2017; He et al., 2019), their discriminators are not defined as energy functions.
In summary, GANs with the $-\log D$ trick can be considered a special case of CLAEs in which the latent variable is identical to the observation (i.e., $p(x \mid z) = \mathbb{1}_{x=z}$); the energy function and the sampler function are defined as $-\log D(\cdot;\theta)$ and $G(\cdot;\psi)$, respectively; the number of sampler inputs tends to infinity; and the model parameter θ is point-estimated with MLE.
Wasserstein GANs (WGANs) (Arjovsky et al., 2017) also have a loss function similar to the CLAE loss:
$$\mathcal{L}_{\mathrm{WGAN}}(\theta, \psi) = -\sum_{i=1}^{n}\left[D\left(x^{(i)};\theta\right) - D\left(G(u^{(i)};\psi);\theta\right)\right], \quad (35)$$
where $D$ denotes the WGAN discriminator, which maps from the observation space into $\mathbb{R}$. In this case, the counterpart of the energy function is $-D(x;\theta)$, although $D$ is constrained to be 1-Lipschitz continuous, a constraint the CLAE energy function does not have." }, { "heading": "F EXPERIMENTAL SETTINGS", "text": "Unless explicitly stated otherwise, we use tanh activations instead of ReLU in all experiments, because it is desirable for the whole model to be differentiable at all points when running sampling algorithms based on Langevin dynamics, which require differentiability of the potential energy. Indeed, we found that our ALD empirically performs better with tanh than with ReLU." }, { "heading": "F.1 CONJUGATE BIVARIATE GAUSSIAN EXAMPLE", "text": "For the conjugate bivariate Gaussian example in Section 3, we first generate three synthetic datapoints $x^{(1)}, x^{(2)}, x^{(3)}$, each sampled from a bivariate Gaussian model:
$$p(z) = \mathcal{N}(z; \mu_z, \Sigma_z), \qquad p(x \mid z) = \mathcal{N}(x; z, \Sigma_x).$$
In this experiment, we set $\mu_z = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$, $\Sigma_z = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, and $\Sigma_x = \begin{bmatrix} 0.7 & 0.6 \\ 0.7 & 0.8 \end{bmatrix}$.
We can calculate the exact posterior as
$$p(z \mid x) = \mathcal{N}\left(z;\, \mu_{z|x},\, \Sigma_{z|x}\right), \quad \text{with} \quad \mu_{z|x} = \Sigma_{z|x}\left(\Sigma_z^{-1}\mu_z + \Sigma_x^{-1}x\right), \quad \Sigma_{z|x} = \left(\Sigma_z^{-1} + \Sigma_x^{-1}\right)^{-1}.$$
In this experiment, we draw 10,000 samples using ALD. We use four fully connected layers of 128 units with tanh activations for the inference model and set the step size $\eta_\varphi$ to 0.003." }, { "heading": "F.2 CONJUGATE UNIVARIATE GAUSSIAN EXAMPLE", "text": "For the conjugate univariate Gaussian example in Section 3, we first generate 100 synthetic datapoints $x^{(1)}, x^{(2)}, \ldots, x^{(100)}$, each sampled from a univariate Gaussian model:
$$p(z) = \mathcal{N}(z; \mu_z, \sigma_z^2), \qquad p(x \mid z) = \mathcal{N}(x; z, \sigma_x^2).$$
In this experiment, we set $\mu_z = 0$, $\sigma_z^2 = 1$, and $\sigma_x^2 = 0.01$. In this case, the exact posterior is
$$p(z \mid x) = \mathcal{N}\left(z;\; \frac{1}{\frac{1}{\sigma_z^2} + \frac{1}{\sigma_x^2}}\left(\frac{\mu_z}{\sigma_z^2} + \frac{x}{\sigma_x^2}\right),\; \left(\frac{1}{\sigma_z^2} + \frac{1}{\sigma_x^2}\right)^{-1}\right).$$
In this experiment, we draw 20,000 samples using SGALD. We use four fully connected layers of 128 units with tanh activations for the inference model, set the step size $\eta_\varphi$ to 0.001, and use a batch size of 10." }, { "heading": "F.3 NEURAL LIKELIHOOD EXAMPLE", "text": "We perform an experiment with a complex posterior in which the likelihood is defined by a randomly initialized neural network $f_\theta$. Specifically, we parameterize $f_\theta$ by four fully connected layers of 128 units with ReLU activations and two-dimensional outputs, so that $p(x \mid z) = \mathcal{N}(f_\theta(z), \sigma_x^2 I)$. We initialize the weight and bias parameters with $\mathcal{N}(0, 0.2 I)$ and $\mathcal{N}(0, 0.1 I)$, respectively, and set $\sigma_x = 0.25$. We use the same network architecture for the inference model $f_\varphi$. Other settings are the same as in the previous conjugate Gaussian experiments.
The results are shown in Figure 10. The left three columns show density visualizations of the ground truth and of the approximate posteriors of the AVI methods; the right two columns show 2D histograms and samples obtained using ALD. For the AVI baselines, we use two different models: one uses diagonal Gaussians, i.e., $\mathcal{N}(\mu(x;\varphi), \mathrm{diag}(\sigma^2(x;\varphi)))$, as the variational distribution, and the other uses Gaussians with full covariance, $\mathcal{N}(\mu(x;\varphi), \Sigma(x;\varphi))$. As the ground-truth density visualization shows, the true posterior is multimodal and skewed, which leads to the failure of the Gaussian AVI methods even when covariance is modeled. In contrast, the samples of ALD accurately capture this complex distribution, because ALD need not assume any tractable distribution to approximate the true posterior." }, { "heading": "F.4 IMAGE GENERATION", "text": "In the image generation experiments, we resize the original images to 32 × 32 for all datasets; for MNIST, we pad the original images with zeros to reach 32 × 32. We use a diagonal Gaussian $\mathcal{N}(\mu_z, \mathrm{diag}(\sigma_z^2))$ as the latent prior for VAEs, ABP, DLGMs, and LAEs, treating the parameters $\mu_z, \sigma_z^2$ as learnable, and a diagonal Gaussian as the approximate posterior of VAEs. The network architectures are summarized in Table 4. Conv k x s x p x c denotes a convolutional layer with k × k kernel, s × s stride, p × p padding, and c output channels. ConvTranspose k x s x p x c denotes a transposed convolutional layer with k × k kernel, s × s stride, p × p padding, and c output channels. Upsample denotes nearest-neighbor upsampling with a scale factor of 2.
Linear d is a fully connected layer with output dimension d. We apply a tanh activation after each convolutional, transposed-convolutional, or linear layer except the last. $d_x$, $d_z$, and $d_u$ denote the dimensionality of $x$, $z$, and $u$, respectively; $d_{\mathrm{out}}$ equals $d_z$ for LAEs and CLAEs, and $2 d_z$ for VAEs. For all datasets, we set the minibatch size $m$ to 1000 and the step sizes $(\eta_\theta, \eta_\varphi, \eta_\psi)$ to $0.01/n$, where $n$ is the size of the training set. We set $d_z = 32$ and $d_u = 8$ throughout the experiments, and use the same settings for the anomaly detection experiment." } ]
2020
null
SP:bf07fc882fe3aca4c50e07df79f22d4b8b3abb56
[ "The authors report a novel application of GANs to validate the maximum likelihood estimator (MLE) of the intrinsic dimension (ID) of image data sets. Then they use the MLE ID estimator to characterize the intrinsic dimension of several commonly used computer vision data sets, and link the data set ID to the generalizability of trained classifiers. They provide additional experiments that support the notion that it is intrinsic dimension, and not extrinsic dimension (i.e. # of pixels), that governs the performance of a binary classifier on these data sets. Also, they verify that dimension plays a large role in learning on natural data." ]
It is widely believed that natural image data exhibits low-dimensional structure despite the high dimensionality of conventional pixel representations. This idea underlies a common intuition for the remarkable success of deep learning in computer vision. In this work, we apply dimension estimation tools to popular datasets and investigate the role of low-dimensional structure in deep learning. We find that common natural image datasets indeed have very low intrinsic dimension relative to the high number of pixels in the images. Additionally, we find that low dimensional datasets are easier for neural networks to learn, and models solving these tasks generalize better from training to test data. Along the way, we develop a technique for validating our dimension estimation tools on synthetic data generated by GANs allowing us to actively manipulate the intrinsic dimension by controlling the image generation process. Code for our experiments may be found here.
[ { "affiliations": [], "name": "Phillip Pope" }, { "affiliations": [], "name": "Chen Zhu" }, { "affiliations": [], "name": "Ahmed Abdelkader" }, { "affiliations": [], "name": "Micah Goldblum" }, { "affiliations": [], "name": "Tom Goldstein" } ]
[ { "authors": [ "Alessio Ansuini", "Alessandro Laio", "Jakob H Macke", "Davide Zoccolan" ], "title": "Intrinsic dimension of data representations in deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Franz Besold", "Vladimir Spokoiny" ], "title": "Adaptive manifold clustering", "venue": "arXiv preprint arXiv:1912.04869,", "year": 2019 }, { "authors": [ "Matthew Brand" ], "title": "Charting a manifold", "venue": "In Advances in neural information processing systems,", "year": 2003 }, { "authors": [ "Wieland Brendel", "Matthias Bethge" ], "title": "Approximating cnns with bag-of-local-features models works surprisingly well on imagenet", "venue": "arXiv preprint arXiv:1904.00760,", "year": 2019 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Gunnar Carlsson", "Tigran Ishkhanov", "Vin de Silva", "Afra Zomorodian" ], "title": "On the local behavior of spaces of natural images", "venue": "International Journal of Computer Vision,", "year": 2008 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Li Deng" ], "title": "The MNIST database of handwritten digit images for machine learning research [best of the web", "venue": "IEEE Signal Processing Magazine,", "year": 2012 }, { "authors": [ "Mathieu Desbrun", "Mark Meyer", "Pierre Alliez" ], "title": "Intrinsic Parameterizations of Surface Meshes", "venue": "Computer Graphics Forum,", "year": 2002 }, { "authors": [ "David L. 
Donoho", "Carrie Grimes" ], "title": "Image manifolds which are isometric to euclidean space", "venue": "Journal of Mathematical Imaging and Vision,", "year": 2005 }, { "authors": [ "Elena Facco", "Maria d’Errico", "Alex Rodriguez", "Alessandro Laio" ], "title": "Estimating the intrinsic dimension of datasets by a minimal neighborhood information", "venue": "Scientific Reports,", "year": 2017 }, { "authors": [ "Charles Fefferman", "Sanjoy Mitter", "Hariharan Narayanan" ], "title": "Testing the manifold hypothesis", "venue": "Journal of the American Mathematical Society,", "year": 2016 }, { "authors": [ "Imola K Fodor" ], "title": "A survey of dimension reduction techniques", "venue": "Technical report, Lawrence Livermore National Lab., CA (US),", "year": 2002 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "arXiv preprint arXiv:1811.12231,", "year": 2018 }, { "authors": [ "Marina Gomtsyan", "Nikita Mokrov", "Maxim Panov", "Yury Yanovich" ], "title": "Geometry-aware maximum likelihood estimation of intrinsic dimension", "venue": "In Asian Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sixue Gong", "Vishnu Naresh Boddeti", "Anil K Jain" ], "title": "On the intrinsic dimensionality of image representations", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Francisco J Gonzalez", "Maciej Balajewicz" ], "title": "Deep convolutional recurrent autoencoders for learning low-dimensional feature dynamics of fluid systems", "venue": "arXiv preprint arXiv:1808.01346,", "year": 2018 }, { "authors": [ "Daniele Granata", "Vincenzo Carnevale" ], "title": "Accurate estimation of the intrinsic dimension using graph distances: Unraveling the geometric complexity of datasets", "venue": "Scientific reports,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "W. Ronny Huang", "Zeyad Emam", "Micah Goldblum", "Liam Fowl", "Justin K. Terry", "Furong Huang", "Tom Goldstein" ], "title": "Understanding generalization through visualizations, 2019", "venue": null, "year": 2019 }, { "authors": [ "Jisu Kim", "Alessandro Rinaldo", "Larry Wasserman" ], "title": "Minimax Rates for Estimating the Dimension of a Manifold", "venue": "Journal of Computational Geometry,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "University of Toronto,", "year": 2009 }, { "authors": [ "Daniel C. Laughlin" ], "title": "The intrinsic dimensionality of plant traits and its relevance to community assembly", "venue": "Journal of Ecology,", "year": 2014 }, { "authors": [ "Ann B. Lee", "Kim S. 
Pedersen", "David Mumford" ], "title": "The nonlinear statistics of high-contrast patches in natural images", "venue": "International Journal of Computer Vision,", "year": 2003 }, { "authors": [ "Elizaveta Levina", "Peter J Bickel" ], "title": "Maximum likelihood estimation of intrinsic dimension", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft COCO: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "David J.C. MacKay", "Zoubin Ghahramani" ], "title": "Comments on ‘maximum likelihood estimation of intrinsic dimension", "venue": "by e. levina and p. bickel", "year": 2004 }, { "authors": [ "Hariharan Narayanan", "Sanjoy Mitter" ], "title": "Sample complexity of testing the manifold hypothesis", "venue": "In Advances in neural information processing systems,", "year": 2010 }, { "authors": [ "Hariharan Narayanan", "Partha Niyogi" ], "title": "On the sample complexity of learning smooth cuts on a manifold", "venue": "In COLT,", "year": 2009 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning", "year": 2011 }, { "authors": [ "Bruno A Olshausen", "David J Field" ], "title": "Natural image statistics and efficient coding", "venue": "Network: Computation in Neural Systems,", "year": 1996 }, { "authors": [ "Gabriel Peyré" ], "title": "Manifold models for signals and images", "venue": "Computer Vision and Image Understanding,", "year": 2009 }, { "authors": [ "Sam T Roweis", "Lawrence K Saul" ], "title": "Nonlinear dimensionality reduction by locally linear embedding", "venue": null, "year": 2000 }, { "authors": [ "Daniel L Ruderman" ], "title": "The statistics of natural images. Network: Computation", "venue": "Neural Systems,", "year": 1994 }, { "authors": [ "Arthur Sard" ], "title": "The measure of the critical values of differentiable maps", "venue": "Bulletin of the American Mathematical Society,", "year": 1942 }, { "authors": [ "Bernhard Schölkopf", "Alexander Smola", "Klaus-Robert Müller" ], "title": "Nonlinear component analysis as a kernel eigenvalue problem", "venue": "Neural computation,", "year": 1998 }, { "authors": [ "David Stutz", "Matthias Hein", "Bernt Schiele" ], "title": "Disentangling adversarial robustness and generalization", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 
IEEE Computer Society,", "year": 2019 }, { "authors": [ "Joshua B Tenenbaum", "Vin De Silva", "John C Langford" ], "title": "A global geometric framework for nonlinear dimensionality reduction", "venue": null, "year": 2000 }, { "authors": [ "Jonathan Vacher", "Ruben Coen-Cagli" ], "title": "Combining mixture models with linear mixing updates: multilayer image segmentation and synthesis", "venue": "arXiv preprint arXiv:1905.10629,", "year": 2019 }, { "authors": [ "Jonathan Vacher", "Aida Davila", "Adam Kohn", "Ruben Coen-Cagli" ], "title": "Texture interpolation for probing visual perception", "venue": "arXiv preprint arXiv:2006.03698,", "year": 2020 }, { "authors": [ "Mario Valle", "Artem R. Oganov" ], "title": "Crystal fingerprint space – a novel paradigm for studying crystalstructure sets", "venue": "Acta Crystallographica Section A,", "year": 2010 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Isabelle Lajoie", "Yoshua Bengio", "Pierre-Antoine Manzagol", "Léon Bottou" ], "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "venue": "Journal of machine learning research,", "year": 2010 }, { "authors": [ "Wei Zhu", "Qiang Qiu", "Jiaji Huang", "Robert Calderbank", "Guillermo Sapiro", "Ingrid Daubechies" ], "title": "LDMNet: Low dimensional manifold regularized neural networks", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The idea that real-world data distributions can be described by very few variables underpins machine learning research from manifold learning to dimension reduction (Besold & Spokoiny, 2019; Fodor, 2002). The number of variables needed to describe a data distribution is known as its intrinsic dimension (ID). In applications, such as crystallography, computer graphics, and ecology, practitioners depend on data having low intrinsic dimension (Valle & Oganov, 2010; Desbrun et al., 2002; Laughlin, 2014). The utility of representations which are low-dimensional has motivated a variety of deep learning techniques including autoencoders and regularization methods (Hinton & Salakhutdinov, 2006; Vincent et al., 2010; Gonzalez & Balajewicz, 2018; Zhu et al., 2018).\nIt is also known that dimensionality plays a strong role in learning function approximations and non-linear class boundaries. The exponential cost of learning in high dimensions is easily captured by the trivial case of sampling a function on a cube; in d dimensions, sampling only the cube vertices would require 2d measurements. Similar behaviors emerge in learning theory. It is known that learning a manifold requires a number of samples that grows exponentially with the manifold’s intrinsic dimension (Narayanan & Mitter, 2010). Similarly, the number of samples needed to learn a well-conditioned decision boundary between two classes is an exponential function of the intrinsic dimension of the manifold on which the classes lie (Narayanan & Niyogi, 2009). Furthermore, these learning bounds have no dependence on the ambient dimension in which manifold-structured datasets live.\nIn light of the exponentially large sample complexity of learning high-dimensional functions, the ability of neural networks to learn from image data is remarkable. Networks learn complex decision boundaries from small amounts of image data (often just a few hundred or thousand samples per class). At the same time, generative adversarial networks (GANs) are able to learn image “manifolds” from merely a few thousand samples. The seemingly low number of samples needed to learn these manifolds strongly suggests that image datasets have extremely low-dimensional structure.\nDespite the established role of low dimensional data in deep learning, little is known about the intrinsic dimension of popular datasets and the impact of dimensionality on the performance of neural networks. Computational methods for estimating intrinsic dimension enable these measurements.\nWe adopt tools from the dimension estimation literature to shed light on dimensionality in settings of interest to the deep learning community. Our contributions can be summarized as follows:\n• We verify the reliability of intrinsic dimension estimation on high-dimensional data using generative adversarial networks (GANs), a setting in which we can a priori upper-bound the intrinsic dimension of generated data by the dimension of the latent noise vector.\n• We measure the dimensionality of popular datasets such as MNIST, CIFAR-10, and ImageNet. In our experiments, we find that natural image datasets whose images contain thousands of pixels can, in fact, be described by orders of magnitude fewer variables. 
For example, we estimate that ImageNet, despite containing 224 × 224 × 3 = 150528 pixels per image, only has intrinsic dimension between 26 and 43; see Figure 1.\n• We train classifiers on data, synthetic and real, of various intrinsic dimension and find that this variable correlates closely with the number of samples needed for learning. On the other hand, we find that extrinsic dimension, the dimension of the ambient space in which data is embedded, has little impact on generalization.\nTogether, these results put experimental weight behind the hypothesis that the unintuitively low dimensionality of natural images is being exploited by deep networks, and suggest that a characterization of this structure is an essential building block for a successful theory of deep learning." }, { "heading": "2 RELATED WORK", "text": "While the hypothesis that natural images lie on or near a low-dimensional manifold is controversial, Goodfellow et al. (2016) argue that the low-dimensional manifold assumption is at least approximately correct for images, supported by two observations. First, natural images are locally connected, with each image surrounded by other highly similar images reachable through image transformations (e.g., contrast, brightness). Second, natural images seem to lie on a low-dimensional structure, as the probability distribution of images is highly concentrated; uniformly sampled pixels can hardly assemble a meaningful image. It is widely believed that the combination of natural scenes and sensor properties yields very sparse and concentrated image distributions, as has been supported by several empirical studies on image patches (Lee et al., 2003; Donoho & Grimes, 2005; Carlsson et al., 2008). This observation motivated work on efficient coding (Olshausen & Field, 1996) and served as a prior in computer vision (Peyré, 2009). Further, rigorous experiments have been conducted clearly supporting the low-dimensional manifold hypothesis for many image datasets (Ruderman, 1994; Schölkopf et al., 1998; Roweis & Saul, 2000; Tenenbaum et al., 2000; Brand, 2003); see also (Fefferman et al., 2016) for principled algorithms on verifying the manifold hypothesis.\nThe generalization literature seeks to understand why some models generalize better from training data to test data than others. One line of work suggests that the loss landscape geometry explains why neural networks generalize well (Huang et al., 2019). Other generalization work predicts that data with low dimension, along with other properties which do not include extrinsic dimension,\ncharacterize the generalization difficulty of classification problems (Narayanan & Niyogi, 2009). In the context of deep learning, Gong et al. (2019) found that neural network features are lowdimensional. Ansuini et al. (2019) further found that the intrinsic dimension of features decreases in late layers of neural networks and observed interesting trends in the dimension of features in early layers. In contrast to Gong et al. (2019) and Ansuini et al. (2019), who find that the intrinsic dimension of internal representations is inversely correlated with high performance, we study the dimensionality of data and its impact on performance, and we make a similar finding. Zhu et al. (2018) proposed a regularizer derived from the intrinsic dimension of images augmented with their corresponding feature vectors. 
Another line of work in deep learning has found that neural networks rely heavily on textures, which are low-dimensional (Geirhos et al., 2018; Brendel & Bethge, 2019). Similarly, some have suggested that natural images can be represented as mixtures of textures that lie on a low-dimensional manifold (Vacher & Coen-Cagli, 2019; Vacher et al., 2020)." }, { "heading": "3 INTRINSIC DIMENSION ESTIMATION", "text": "Given a set of sample points $P \subset \mathbb{R}^N$, it is common to assume that $P$ lies on or near a low-dimensional manifold $\mathcal{M} \subseteq \mathbb{R}^N$ of intrinsic dimension $\dim(\mathcal{M}) = d \ll N$. As a measure of the degrees of freedom in a dataset, as well as its information content, the intrinsic dimension $d$ is of great interest to estimate. In the remainder of this section, we briefly describe the dimension estimation method we use in this paper; for further information, see (Kim et al., 2019) and references therein.
One of the main approaches to intrinsic dimension estimation is to examine a neighborhood around each point in the dataset and compute the Euclidean distance to the $k$th nearest neighbor. Assuming that density is constant within small neighborhoods, the Maximum Likelihood Estimation (MLE) approach of Levina & Bickel (2005) uses a Poisson process to model the number of points found by random sampling within a given radius around each sample point. By relating the rate of this process to the surface area of the sphere, the likelihood equations yield an estimate of the ID at a given point $x$ as:
$$\hat{m}_k(x) = \left[\frac{1}{k-1}\sum_{j=1}^{k-1} \log \frac{T_k(x)}{T_j(x)}\right]^{-1}, \quad (1)$$
where $T_j(x)$ is the Euclidean ($\ell_2$) distance from $x$ to its $j$th nearest neighbor. Levina & Bickel (2005) propose to average the local estimates at each point to obtain a global estimate $\bar{m}_k = \frac{1}{n}\sum_{i=1}^{n} \hat{m}_k(x_i)$. MacKay & Ghahramani (2005) suggest a correction based on averaging of inverses,
$$\bar{m}_k = \left[\frac{1}{n}\sum_{i=1}^{n} \hat{m}_k(x_i)^{-1}\right]^{-1} = \left[\frac{1}{n(k-1)}\sum_{i=1}^{n}\sum_{j=1}^{k-1} \log \frac{T_k(x_i)}{T_j(x_i)}\right]^{-1}, \quad (2)$$
where $n$ is the number of samples. We use Equation (2) as our MLE estimator throughout this paper.
Since the geometry of natural images is complex and unknown, we face two challenges when verifying the accuracy of MLE on natural image datasets. First, we need to choose a proper value of $k$. As shown by MacKay & Ghahramani (2005), the positive bias of the corrected estimator in Equation (2) increases as $k$ increases, but the variance decreases. In order to navigate this bias-variance tradeoff, we try various values of $k$ in Section 4. Second, in addition to the aforementioned local uniformity assumption, MLE assumes that the data arise as a sequence of i.i.d. random variables that can be written as a continuous and sufficiently smooth function of a random variable with smooth density, which may or may not be true for natural image datasets. While the truth of these assumptions is unknown for natural images, we verify the accuracy of our MLE estimates in a controlled setting in the following section.
We briefly discuss other notable techniques for dimensionality estimation. GeoMLE (Gomtsyan et al., 2019) attempts to account for non-uniformity of the density and nonlinearity of the manifold using a polynomial regression of the standard MLE based on distances to nearest neighbors in neighborhoods of different sizes. However, GeoMLE approximates averages of $\hat{m}_k(x_i)$ instead of averaging their reciprocals as in Equation (2), resulting in a potentially incorrect maximum likelihood estimator. As a result, we find its estimates deviate significantly from the expected dimensionalities.
TwoNN (Facco et al., 2017) is based on the ratio of the distances to the first and second nearest neighbors. Finally, the approach of Granata & Carnevale (2016) compares the distribution of geodesic distances over the data manifold, approximated by distances through kNN graphs, to the distribution of distances over hyperspheres of varying dimension. Unlike MLE, our preliminary experiments suggest that these techniques do not provide reasonable estimates for some natural and synthetic images that are key to this work; see Appendix A.5 for further discussion." }, { "heading": "4 VALIDATING DIMENSION ESTIMATION WITH SYNTHETIC DATA", "text": "Dimensionality estimators are often applied to "simple" manifolds or toy datasets where the dimensionality is known, so that the accuracy of the methods can be validated. Image manifolds, by contrast, are highly complex, may contain many symmetries and modes, and are of unknown dimension. In principle, there is no reason why MLE-based dimensionality estimates cannot be applied to image datasets. However, because we lack knowledge of the exact dimensionality of image datasets, we cannot directly verify that MLE-based dimensionality estimates scale up to the complexity of image structures.
There is an inherent uncertainty in estimating the ID of a given dataset. First, we cannot be sure whether the dataset actually resembles a sampling of points on or near a manifold. Second, there are typically no guarantees that the sampling satisfies the conditions assumed by the ID estimators we are using.
Towards a principled application of ID estimates in contexts of practical relevance to deep learning, we begin by validating that MLE methods can generate accurate dimensionality estimates for complex image data. We do this by generating synthetic image datasets using generative models for which the intrinsic dimensionality can be upper-bounded a priori. We believe such validations are essential to put recent findings in perspective (Gong et al., 2019; Ansuini et al., 2019).
GAN Images. We use the BigGAN variant with 128 latent entries and outputs of size 128 × 128 × 3, trained on the ImageNet dataset (Deng et al., 2009). Using this GAN, we generate datasets with a varying number of images, where we fix most entries of the latent vectors to zero, leaving only $\bar{d}$ free entries to be chosen at random. As we increase the number of free entries, we expect the intrinsic dimension to increase, with $\bar{d}$ as an upper bound; see Section A.1 for further discussion.
In particular, we create several synthetic datasets of varying intrinsic dimensionality using the ImageNet class basenji and check whether the estimates match our expectation. As seen in Figure 2, we observe increasing diversity with increasing intrinsic dimension. In Figure 3, we show convergence of the MLE estimate on basenji data with dimension bounded above by $\bar{d} = 10$. We observe that the estimates can be sensitive to the choice of $k$, as discussed in prior work; see Appendix A.2 for additional GAN classes.
Scaling to large datasets. We develop a practical approach for estimating the ID of large datasets such as ImageNet. In this approach, we randomly select a fraction $\alpha$ of the dataset as anchors. Then, we evaluate the MLE estimate using only the anchor points, where nearest neighbors are computed over the entire dataset. Note that, when anchors are chosen randomly, this acceleration has no impact on the expected value of the result. See Appendix A.3 for an evaluation of this approach."
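For reference, here is a self-contained sketch of the corrected MLE estimator of Eq. (2), including the random-anchor acceleration described above; the synthetic sphere data at the bottom is purely illustrative.

```python
import numpy as np

def mle_id(data, k=5, anchor_frac=1.0, seed=0):
    # MLE intrinsic-dimension estimate of Eq. (2): nearest-neighbor
    # distances are computed over the full dataset, while the outer sum
    # may be restricted to a random fraction of anchor points.
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    anchors = rng.choice(n, size=max(1, int(anchor_frac * n)), replace=False)
    inv_estimates = []
    for i in anchors:
        dist = np.linalg.norm(data - data[i], axis=1)
        dist = np.sort(dist)[1 : k + 1]   # T_1 .. T_k, excluding the point itself
        inv_estimates.append(np.log(dist[-1] / dist[:-1]).sum() / (k - 1))
    return 1.0 / np.mean(inv_estimates)   # inverse of averaged inverses, Eq. (2)

# Illustrative check: points on the 4-sphere S^4 in R^5, embedded in R^10.
rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 5))
x /= np.linalg.norm(x, axis=1, keepdims=True)
x = np.concatenate([x, np.zeros((2000, 5))], axis=1)
print(mle_id(x, k=10, anchor_frac=0.25))  # expect an estimate near 4
```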
}, { "heading": "5 THE INTRINSIC DIMENSION OF POPULAR DATASETS", "text": "In this section, we measure the intrinsic dimensions of a number of popular datasets including MNIST (Deng, 2012), SVHN (Netzer et al., 2011), CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), ImageNet (Deng et al., 2009), MS-COCO (Lin et al., 2014), and CelebA (Liu et al., 2015). Using three different parameter settings for the MLE ID estimator, we find that the ID is indeed much smaller than the number of pixels; see Table 5. Notice that the rank order of datasets by dimension does not depend on the choice of k. A comparison of state-of-the-art (SOTA) classification accuracy on each respective dataset1 with the dimension estimates suggests a negative correlation\n1Values from https://paperswithcode.com/task/image-classification.\nbetween the intrinsic dimension and test accuracy. In the next section, we take a closer look at this phenomenon through a series of dedicated experiments." }, { "heading": "6 INTRINSIC DIMENSION AND GENERALIZATION", "text": "Learning theory work has established that learning a manifold requires a number of samples that grows exponentially with the manifold’s intrinsic dimension (Narayanan & Mitter, 2010), but the required number of samples is independent of the extrinsic dimension. Specifically, the number of\nsamples needed to learn a well-conditioned decision boundary between two classes is exponential in the intrinsic dimension of the manifold on which the classes lie (Narayanan & Niyogi, 2009).\nWe leverage dimension estimation tools to empirically verify these theoretical findings using a family of binary classification problems defined over both synthetic and real datasets of varying intrinsic dimension. In these experiments, we observe a connection between the intrinsic dimension of data and generalization. Specifically, we find that classification problems on data of lower intrinsic dimensionality are easier to solve." }, { "heading": "6.1 SYNTHETIC GAN DATA: SAMPLE COMPLEXITY DEPENDS ON INTRINSIC (NOT EXTRINSIC) DIMENSIONALITY", "text": "The synthetic GAN data generation technique described in Section 4 provides a unique opportunity to test the relationship between generalization and the intrinsic/extrinsic dimensionality of images. By creating datasets with controlled intrinsic dimensionality, we may compare their sample complexity, that is the number of samples required to obtain a given level of test error. Specifically we test the following two hypotheses (1) data of lower intrinsic dimensionality has lower sample complexity than that of higher intrinsic dimensionality and (2) extrinsic dimensionality is irrelevant for sample complexity.\nTo investigate hypothesis (1), we create four synthetic datasets of varying intrinsic dimensionality: 16, 32, 64, 128, fixed extrinsic dimensionality: 3×32×32, and two classes: basenji and beagle. For each dataset we fix a test set of size N = 1700. For all experiments, we use the ResNet-18 (width = 64) architecture (He et al., 2016). We then train models until they fit their entire training set with increasing amounts of training samples and measure the test error. We show these results in Figure 4. Observing the varying rates of growth, we see that data of higher intrinsic dimension requires more samples to achieve a given test error.\nFor hypothesis (2), we carry out the same experiment with the roles of intrinsic and extrinsic dimension switched. 
For hypothesis (2), we carry out the same experiment with the roles of intrinsic and extrinsic dimension switched: we create synthetic datasets of varying extrinsic dimensionality by resizing the images with nearest-neighbor interpolation. Specifically, we create datasets of square, 3-channel images of sizes 16, 32, 64, 128, and 256, with fixed intrinsic dimensionality of 128 and all other experimental details the same. We show these results in Figure 5. Observing the lack of variable growth rates, we see that extrinsic dimension has little to no effect on sample complexity.
To the best of our knowledge, this is the first experimental demonstration that intrinsic but not extrinsic dimensionality matters for the generalization of deep networks." }, { "heading": "6.2 REAL DATA: INTRINSIC DIMENSIONALITY MATTERS FOR GENERALIZATION", "text": "Next, we examine the sample complexity of binary classification tasks from four common image datasets: MNIST, SVHN, CIFAR-10, and ImageNet. This case differs from the synthetic case in that we have no control over each dataset's intrinsic dimension. Instead, we estimate it via the MLE method discussed in Section 3. To account for the variable difficulty of classes, we randomly sample 5 class pairs from each dataset and run the previously described sample complexity experiment. Note that these subsets differ from those used in Table 5, where the estimates are taken from the entire dataset and across all classes.
On these sampled subsets, we find the MLE estimates as shown in Table 2. Note that these estimates are consistent with expectation, e.g., MNIST is qualitatively simpler than SVHN or CIFAR-10.
We conduct the same sample complexity experiment as in the previous section on these datasets. Because these datasets are ordinarily of varying extrinsic dimensionality, we resize all to size 32 × 32 × 3 (before applying MLE). We report results in Figure 6, where we overall observe trends ordered by intrinsic dimensionality estimate. These results are consistent with expectations of the relative hardness of each dataset. However, there are some notable differences from the synthetic case. Several unexpected cross-over points exist in the low-sample regime, and the gap between SVHN and CIFAR-10 is smaller than one may expect based on their estimated intrinsic dimension.
From these observations we conclude that intrinsic dimensionality is indeed relevant to generalization on real data, but it is not the only feature of data that influences sample complexity." }, { "heading": "6.3 REAL DATA: ADDING NOISE CHANGES DIMENSIONALITY TO AFFECT GENERALIZATION", "text": "In this section, we examine an alternative technique for changing the intrinsic dimension of a real dataset: adding noise to images. Here we leverage the fact that uniformly sampled noise in [0, 1]^d has dimension d. We thus add independent noise, drawn uniformly from a fixed, randomly oriented d-dimensional unit hypercube embedded in pixel space, to each sample in a dataset. This procedure ensures that the dataset has dimension at least d. Since the natural data we use has low dimension, and the hypercubes have high dimension, this procedure specifically increases dimensionality. We note that estimation error may occur when there is an insufficient number of samples to achieve a proper estimate. Since the variation in images in a dataset may still be dominated by non-noise directions, we expect to underestimate the new, increased dimensions of these noised datasets.
Starting with CIFAR-10 data, we add noise of varying dimensions, where we replace pixels at random in the image. We only add noise to an image once to keep the augmented dataset the same size as the original. We use the following noise dimensionalities: 256, 512, 1024, 2048, and 2560. The estimated dimensions of the noised datasets are listed in Table 3. We see that intrinsic dimension increases with increasing noise dimensionality, but dimensionality does not saturate to the maximum true dimension, likely due to a poverty of samples.
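A sketch of one way to realize the hypercube-noise augmentation described above is given below. The text also mentions replacing random pixels, so the subspace embedding and the final clipping here should be read as our assumptions rather than the paper's exact procedure.

```python
import numpy as np

def add_hypercube_noise(images, d, seed=0):
    """Add noise drawn uniformly from a fixed, randomly oriented
    d-dimensional unit hypercube embedded in pixel space.

    images: array of shape (n, D) with flattened pixels in [0, 1].
    The same d-dimensional embedding is shared across the dataset;
    each image receives an independent sample from the cube, so the
    noised dataset has intrinsic dimension at least d.
    """
    n, D = images.shape
    rng = np.random.default_rng(seed)
    # Orthonormal basis of a random d-dim subspace of pixel space.
    basis, _ = np.linalg.qr(rng.standard_normal((D, d)))  # (D, d)
    coords = rng.uniform(0.0, 1.0, size=(n, d))           # cube samples
    # Clipping back to [0, 1] is a simplification for this sketch.
    return np.clip(images + coords @ basis.T, 0.0, 1.0)
```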
On these noisy CIFAR-10 datasets, we again carry out the sample complexity experiment of the previous sections. We show results in Figure 7. We observe sample complexity largely in the same order as intrinsic dimension." }, { "heading": "6.4 MANIPULATING THE INTRINSIC DIMENSIONALITY OF FONTS", "text": "In this section, we describe a final technique for studying the effect of intrinsic dimensionality on sample complexity, on the recently proposed FONTS dataset (Stutz et al., 2019). Beginning with a collection of characters and font types, termed a prototype set by the authors, FONTS datasets are constructed using a fixed set of data augmentations: scaling, translation, rotation, and shearing. In principle, these augmentations each increase the intrinsic dimension of the prototype set, allowing us to synthetically alter the intrinsic dimension by varying the number of augmentations used.
We construct 5 FONTS datasets in this way, FONTS-{0, 1, 2, 3, 4}, where the suffix denotes the number of transformations used in the data generation process. The MLE estimates on each of the datasets are given in Table 4.
Consistent with expectation, we observe that MLE methods consistently resolve the increased dimensionality of the transformed datasets. Carrying out the sample complexity experiment of the previous section, we report results in Figure 8. We observe again that, on the whole, sample complexity is ordered by intrinsic dimension." }, { "heading": "7 DISCUSSION", "text": "In this work, we measure the intrinsic dimension of popular image datasets and show that the intrinsic dimension of data matters for deep learning. While there may be many factors, such as class separation and the number of classes, which determine generalization, we build the case that intrinsic dimension is one of these important factors. Along the way, we introduce a technique for using GANs to synthesize data while manipulating dimensionality. This technique is useful not only for validating dimension estimation methods but also for examining the learning behavior of neural networks in a dimension-controlled environment. In addition to synthetic data, we verify that dimension plays a large role in learning on natural data. Our results support the commonly held belief that low-dimensional structure underpins the success of deep learning on high-resolution data.
These findings raise a number of salient directions for future work. Methods for enhancing neural network learning on high-dimensional data could improve generalization on hard vision problems. To this end, a deeper understanding of the mechanisms by which dimensionality plays a role in learning may enable such methods. Additionally, our work indicates that different dimensionality estimation tools are better suited for different settings. Future work on computing tighter and more reliable estimates specific to image data would allow the community to more precisely study the relationship between the dimensionality of image datasets and learning."
}, { "heading": "ACKNOWLEDGEMENTS", "text": "This work was supported by the DARPA GARD and DARPA QED programs. Further support was provided by the AFOSR MURI program, and the National Science Foundation’s DMS division. Computation resources were funded by the Sloan Foundation." }, { "heading": "3 1.1 2.6 6.1 10.5 16.0 20.0 20.0", "text": "" }, { "heading": "4 1.5 3.6 8.2 14.0 21.0 26.0 26.0", "text": "" }, { "heading": "5 1.7 4.1 9.3 15.7 23.5 28.7 28.5", "text": "" }, { "heading": "6 1.8 4.4 9.9 16.6 24.9 30.3 29.9", "text": "" }, { "heading": "7 1.9 4.6 10.4 17.2 25.8 31.2 30.6", "text": "" }, { "heading": "8 1.9 4.7 10.7 17.6 26.4 31.7 31.1", "text": "" }, { "heading": "9 2.0 4.9 10.9 18.0 26.8 31.9 31.5", "text": "" }, { "heading": "10 2.0 5.0 11.1 18.2 27.1 32.1 31.7", "text": "" }, { "heading": "15 2.1 5.3 11.6 18.8 27.8 32.3 31.7", "text": "" }, { "heading": "20 2.2 5.5 11.8 19.0 27.9 31.9 31.3", "text": "" }, { "heading": "25 2.2 5.7 12.0 19.2 27.9 31.5 30.8", "text": "A.5 COMPARING MLE TO OTHER ESTIMATORS IN A CONTROLLED SETTING\nTo validate MLE in comparison to other estimators, we evaluate three other dimensionality estimation methods: GeoMLE (Gomtsyan et al., 2019), TwoNN (Facco et al., 2017) and kNN graph distances (Granata & Carnevale, 2016). For GeoMLE, we sample a total of 20 bootstrap subsets (M = 20) and use k1 = 20, k2 = 55 as recommended by Gomtsyan et al. (2019). To extend the kNN graph distance method to large datasets, we randomly sample a subset of samples (fixed to 10,000 for datasets with more than 10,000 samples) and use shortest graph distances to their k nearest neighbors to estimate the IDs. Other settings are as default in the implementations of Granata & Carnevale (2016).\nFirst, we validate each method on datasets of uniformly sampled from d-dimensional hypercubes. We report these results in Figure 14. Each method works reasonable on low-dimensional cubes. We observed the Shortest Path method to give erratic estimates on cubes of higher dimension, and have omitted these. TwoNN has poor sample efficiency for higher dimensional cubes. Interestingly, GeoMLE estimates these high dimensional cubes well.\nNext, we report estimation results on basenji 10 for TwoNN, kNN graph distance, and GeoMLE methods in Figure 15. Comparing against the MLE results in Figure 3, we observe that each other method does not achieve an accurate estimate in this sample regime, thus motivating our focus on MLE. Notably, GeoMLE and TwoNN severely overestimate dimension, while the kNN graph distance method severely underestimates dimension.\nOn MNIST, CIFAR-10, CIFAR-100, and SVHN, the results of these other estimation methods also deviate from expectation (Table 6). For example, TwoNN assigns a significantly higher dimension\nestimate to MNIST than to CIFAR-100, which contradicts both intuition and the results of other estimators. We set k = 4 and number of bins to 1000, and use default settings for all other parameters including rMAX." } ]
2021
null
SP:f1af5160de3da8d992ac6bba8fbb7b0086efdb12
[ "The paper explores the impact of different types of data augmentations for protein sequence data, and does a thorough benchmark analysis on them. The authors used a pre-trained transformer model, fine tuned the model on augmented data using two approaches, namely, contrastive learning and masked token prediction. This finetuned model was evaluated with an added linear layer on a range of tasks." ]
While protein sequence data is an emerging application domain for machine learning methods, small modifications to protein sequences can result in difficult-to-predict changes to the protein's function. Consequently, protein machine learning models typically do not use randomized data augmentation procedures analogous to those used in computer vision or natural language, e.g., cropping or synonym substitution. In this paper, we empirically explore a set of simple string manipulations, which we use to augment protein sequence data when fine-tuning semi-supervised protein models. We provide 276 different comparisons to the Tasks Assessing Protein Embeddings (TAPE) baseline models, with Transformer-based models and training datasets that vary from the baseline methods only in the data augmentations and representation learning procedure. For each TAPE validation task, we demonstrate improvements to the baseline scores when the learned protein representation is fixed between tasks. We also show that contrastive learning fine-tuning methods typically outperform masked-token prediction in these models, with increasing amounts of data augmentation generally improving performance for contrastive learning protein methods. We find the most consistent results across TAPE tasks when using domain-motivated transformations, such as amino acid replacement, as well as restricting the Transformer attention to randomly sampled sub-regions of the protein sequence. In rarer cases, we even find that information-destroying augmentations, such as randomly shuffling entire protein sequences, can improve downstream performance.
[]
[ { "authors": [ "Ethan C Alley", "Grigory Khimulya", "Surojit Biswas" ], "title": "Unified rational protein engineering with sequence-based deep representation learning", "venue": "Nature methods,", "year": 2019 }, { "authors": [ "Mohammed AlQuraishi" ], "title": "Proteinnet: a standardized data set for machine learning of protein structure", "venue": "BMC bioinformatics,", "year": 2019 }, { "authors": [ "HM Berman", "J Westbrook", "Z Feng" ], "title": "The protein data bank", "venue": "Nucleic acids research,", "year": 2000 }, { "authors": [ "T Chen", "S Kornblith", "M Norouzi", "G Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In Thirty-seventh International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Kevin Swersky", "Mohammad Norouzi", "Geoffrey E Hinton" ], "title": "Big self-supervised models are strong semi-supervised learners", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "C Coulombe" ], "title": "Text data augmentation made simple by leveraging nlp cloud apis", "venue": "arXiv preprint arXiv:1812.04718,", "year": 2018 }, { "authors": [ "BC Cunningham", "JA Wells" ], "title": "High-resolution epitope mapping of hgh-receptor interactions by alanine-scanning mutagenesis", "venue": null, "year": 1989 }, { "authors": [ "J Devlin", "MW Chang", "K Lee", "K Toutanova" ], "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "A Dosovitskiy", "JT Springenberg", "M Riedmiller", "T Brox" ], "title": "Discriminative unsupervised feature learning with convolutional neural networks", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "S El-Gebali", "J Mistry", "A Bateman" ], "title": "The pfam protein families database in 2019", "venue": "Nucleic acids research,", "year": 2019 }, { "authors": [ "NK Fox", "SE Brenner", "JM Chandonia" ], "title": "Scope: Structural classification of proteins—extended, integrating scop and astral data and classification of new structures", "venue": "Nucleic acids research,", "year": 2013 }, { "authors": [ "S French", "B Robson" ], "title": "What is a conservative substitution", "venue": "Journal of molecular Evolution,", "year": 1983 }, { "authors": [ "S Gidaris", "P Singh", "N Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": null, "year": 2018 }, { "authors": [ "R Hadsell", "S Chopra", "Y LeCun" ], "title": "Dimensionality reduction by learning an invariant mapping", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06),", "year": 2006 }, { "authors": [ "Michael Heinzinger", "Ahmed Elnaggar", "Yu Wang" ], "title": "Modeling aspects of the language of life through transfer-learning protein sequences", "venue": "BMC bioinformatics,", "year": 2019 }, { "authors": [ "MS Klausen", "MC Jespersen", "H Nielsen" ], "title": "Netsurfp-2.0: Improved prediction of protein structural features by integrated deep learning. 
", "venue": "Proteins: Structure, Function, and Bioinformatics", "year": 2019 }, { "authors": [ "J Moult", "K Fidelis", "A Kryshtafovych" ], "title": "Critical assessment of methods of protein structure prediction (CASP)-Round XII", "venue": "Proteins: Structure, Function, and Bioinformatics,", "year": 2018 }, { "authors": [ "M Noroozi", "P Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "R Rao", "N Bhattacharya", "N Thomas" ], "title": "Evaluating protein transfer learning with tape", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "AJ Riesselman", "JE Shin", "AW Kollasch" ], "title": "Accelerating protein design using autoregressive generative models", "venue": "bioRxiv,", "year": 2019 }, { "authors": [ "A Rives", "S Goyal", "J Meier" ], "title": "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences", "venue": "bioRxiv,", "year": 2019 }, { "authors": [ "GJ Rocklin", "TM Chidyausiku", "I Goreshnik" ], "title": "Global analysis of protein folding using massively parallel design, synthesis, and testing", "venue": null, "year": 2017 }, { "authors": [ "KS Sarkisyan", "DA Bolotin", "MV Meer" ], "title": "Local fitness landscape of the green fluorescent protein", "venue": "Nature,", "year": 2016 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to sequence learning with neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "A van den Oord", "Y Li", "O Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "J Wei", "K Zou" ], "title": "Eda: Easy data augmentation techniques for boosting performance on text classification tasks", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Q Xie", "Z Dai", "E Hovy" ], "title": "Unsupervised data augmentation for consistency training", "venue": "arXiv preprint arXiv:1904.12848,", "year": 2019 }, { "authors": [ "Z Xie", "SI Wang", "J Li" ], "title": "Data noising as smoothing in neural network language models", "venue": "In 5th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "X Zhang", "J Zhao", "Y LeCun" ], "title": "Character-level convolutional networks for text classification", "venue": "In Advances in neural information processing systems,", "year": 2015 } ]
[ { "heading": null, "text": "While protein sequence data is an emerging application domain for machine learning methods, small modifications to protein sequences can result in difficult-topredict changes to the protein’s function. Consequently, protein machine learning models typically do not use randomized data augmentation procedures analogous to those used in computer vision or natural language, e.g., cropping or synonym substitution. In this paper, we empirically explore a set of simple string manipulations, which we use to augment protein sequence data when fine-tuning semisupervised protein models. We provide 276 different comparisons to the Tasks Assessing Protein Embeddings (TAPE) baseline models, with Transformer-based models and training datasets that vary from the baseline methods only in the data augmentations and representation learning procedure. For each TAPE validation task, we demonstrate improvements to the baseline scores when the learned protein representation is fixed between tasks. We also show that contrastive learning fine-tuning methods typically outperform masked-token prediction in these models, with increasing amounts of data augmentation generally improving performance for contrastive learning protein methods. We find the most consistent results across TAPE tasks when using domain-motivated transformations, such as amino acid replacement, as well as restricting the Transformer attention to randomly sampled sub-regions of the protein sequence. In rarer cases, we even find that information-destroying augmentations, such as randomly shuffling entire protein sequences, can improve downstream performance." }, { "heading": "1 INTRODUCTION", "text": "Semi-supervised learning has proven to be an effective mechanism to promote generalizability for protein machine learning models, as task-specific labels are generally very sparse. However, with other common data types there are simple transformations that can be applied to the data in order to improve a model’s ability to generalize: for instance, vision models use cropping, rotations, or color distortion; natural language models can employ synonym substitution; and time series data models benefit from window restriction or noise injection. Scientific data, such as a corpus of protein sequences, have few obvious transformations that can be made to it that unambiguously preserve the meaningful information in the data. Often, an easily understood transformation to a protein sequence (e.g., replacing an amino acid with a chemically similar one) will unpredictably produce either a very biologically similar or very biologically different mutant protein.\nIn this paper, we take the uncertainty arising from the unknown effect of simple data augmentations in protein sequence modeling as an empirical challenge that deserves a robust assessment. To our knowledge, no study has been performed to find out whether simple data augmentation techniques improve a suite of protein tasks. We focus on fine-tuning previously published self-supervised models that are typically used for representation learning with protein sequences, viz. the transformerbased methods of Rao et al. (2019) which have shown the best ability to generalize on a set of biological tasks, which are referred to as Tasks Assessing Protein Embeddings (TAPE). 
We test one or more of the following data augmentations: replacing an amino acid with a pre-defined alternative; shuffling the input sequences either globally or locally; reversing the sequence; or subsampling the sequence to focus only on a local region (see Fig. 1).
(Figure 1 panels: (a) Replacement (Dictionary), (b) Replacement (Alanine), (c) Global Random Shuffling.)
We demonstrate that the protein sequence representations learned by fine-tuning the baseline models with data augmentations result in relative improvements between 1% (secondary structure accuracy) and 41% (fluorescence ρ), as assessed with linear evaluation for all TAPE tasks we studied. When fine-tuning the same representations during supervised learning on each TAPE task, we show significant improvement as compared to baseline for 3 out of 4 TAPE tasks, with the fourth (fluorescence) within 1σ in performance. We also study the effect of increasingly aggressive data augmentations: when fine-tuning baseline models with contrastive learning (Hadsell et al., 2006; Chen et al., 2020a) we see a local maximum in downstream performance as a function of the quantity of data augmentation, with “no augmentations” generally under-performing modest amounts of data augmentations. Conversely, performing the same experiments but using masked-token prediction instead of contrastive learning, we detect a minor trend of decreasing performance on the TAPE tasks as we more frequently use data augmentations during fine-tuning. We interpret this as evidence that contrastive learning techniques, which require the use of data augmentation, are important methods that can be used to improve the generalizability of protein models." }, { "heading": "2 RELATED WORKS", "text": "Self-supervised and semi-supervised methods have become the dominant paradigm in modeling protein sequences for use in downstream tasks. Rao et al. (2019) have studied next-token and masked-token prediction, inspired by the BERT natural language model (Devlin et al., 2018). Riesselman et al. (2019) have extended this to autoregressive likelihoods; and Rives et al. (2019), Heinzinger et al. (2019) and Alley et al. (2019) have shown that unsupervised methods trained on unlabeled sequences are competitive with mutation effect predictors using evolutionary features. Of importance to this work are self-supervised learning algorithms employed for other data types that use or learn data augmentations. For example, Gidaris et al. (2018) learn image features through random rotations; Dosovitskiy et al. (2014) and Noroozi & Favaro (2016) study image patches and their correlations to the original samples. van den Oord et al. (2018) uses contrastive methods to predict future values of an input sequence. We consider sequence augmentations in natural language as the most relevant comparison for the data augmentations we study in this paper. Some commonly applied augmentations on strings include Lexical Substitution (Zhang et al., 2015), Back Translation (Xie et al., 2019a), Text Surface Transformation (Coulombe, 2018), Random Noise Injection (Xie et al., 2019b; Wei & Zou, 2019), and Synonym Replacement, Random Swap, Random Deletion (RD) (Wei & Zou, 2019). However, sequence augmentations designed for natural languages often require the preservation of contextual meaning of the sentences, a factor that is less explicit for protein sequences.
Contrastive Learning is a set of approaches that learn representations of data by distinguishing positive data pairs from negative pairs (Hadsell et al., 2006).
SimCLR (v1 & v2) (Chen et al., 2020a;b) describes the current state-of-the-art contrastive learning technique; we use this approach liberally in this paper not only because it performs well, but because it requires data transformations to exist. Since we focus on protein sequence transformations, the contrastive learning component described in Chen et al. (2020a;b) is our focus. Following Chen et al. (2020a;b), we denote input data as x ∈ D, with D being our training set; we then define an embedding function f_ω : x ↦ h with h ∈ R^N, and a mapping function g_θ : h ↦ z with z ∈ R^M, where ω and θ are the learned model weights. For any x, we form two copies x_1 = t_1(x) and x_2 = t_2(x) given functions t_1, t_2 ∼ T, where T denotes the distribution of the augmentation functions. Given that D is of size N, the contrastive loss is written as:

\mathcal{L} = \frac{1}{2N} \sum_{k=1}^{N} \left[ l(z_k^{(1)}, z_k^{(2)}) + l(z_k^{(2)}, z_k^{(1)}) \right], \quad \text{where} \quad l(u, v) \equiv -\log \frac{e^{\mathrm{sim}(u,v)/\tau}}{\sum_{w \neq u} e^{\mathrm{sim}(u,w)/\tau}} \qquad (1)

Here, z_k^{(i)} = g_θ(f_ω(t_i(x_k))), sim(·, ·) is cosine similarity, and τ ∈ (0, ∞) is a scalar temperature; we choose τ = 0.2. By minimizing the contrastive loss, we obtain the learned h as the encoded feature for other downstream tasks. Note that the contrastive loss takes the z's as inputs, whereas the encoded feature is h, i.e., the variable after the function f_ω(·) and before g_θ(·)." }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 EVALUATION PROCEDURE & APPROACH TO EXPERIMENT CONTROL", "text": "Our goal is to demonstrate that training self-supervised protein sequence models, with simple string manipulations as data augmentations, will lead to better performance on downstream tasks. To attempt to control external variables, we study the following restricted setting; we provide the procedural diagram in Figure 2 and the corresponding explanations of the four major steps below (see Appendix A for training setup details):
Baseline.— A self-supervised model M0 is trained on non-augmented sequence data Dseq to do representation learning. To have a consistent baseline, we set M0 to the Transformer-based model trained and published in Rao et al. (2019), without modification. This was trained with masked-token prediction on Pfam protein sequence data (El-Gebali et al., 2019); it has 12 self-attention layers, 8 heads per layer, and 512 hidden dimensions, yielding 38M parameters in total.
Augmented training on validation set.— We fine-tune M0 on augmented subsets Dval ⊂ Dseq, given a set of pre-defined data transformations Taug. We define Maug as the final trained model derived from Taug(Dseq) with M0 as the initial conditions for the model parameters. We explore two different methods of fine-tuning on augmented data — a contrastive task (as in Eq. 1) and a masked-token task (exponentiated cross entropy loss) — as well as different combinations of data augmentations. We use reduced subsets |Dval| ≪ |Dseq| both to reduce the computational cost of running bulk experiments and to protect against overfitting. For consistency, we inherit the choice of Dval from the cross-validation split used to train M0 in Rao et al. (2019). To adapt the same baseline model M0 to different self-supervised losses, we add a loss-specific randomly-initialized layer to the M0 architecture: contrastive learning uses a fully connected layer that outputs 256-dimensional vectors, and masked-token prediction uses fully connected layers with layer normalization to output one-hot vectors for each of the masked letters. We define our different choices of Taug in the next section.
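For concreteness, a minimal PyTorch sketch of the loss in Eq. 1, applied to a batch of projected pairs coming out of the contrastive head described above; the helper name and batching convention are our own.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, tau=0.2):
    """Contrastive loss of Eq. 1 for N augmented pairs.

    z1, z2: (N, M) projections z = g(f(t_i(x))) of the two copies.
    Each row is contrasted against all 2N - 1 other rows; its
    augmented partner is the positive.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # cosine geometry
    sim = z @ z.t() / tau                               # sim(u, w) / tau
    sim.fill_diagonal_(float("-inf"))                   # exclude w == u
    n = z1.size(0)
    # Positive for row i is its augmented copy at i + n (and vice versa).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    # Row-wise cross-entropy implements -log softmax at the positive.
    return F.cross_entropy(sim, targets)
```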
Linear evaluation on TAPE.— To assess the representations learned by Maug, we evaluate performance on four TAPE downstream training tasks (Rao et al., 2019): stability, fluorescence, remote homology, and secondary structure. For consistency, we use the same training, validation, and testing sets. The first two tasks are evaluated by Spearman correlation (ρ) to the ground truth and the latter two by classification accuracy. However, we do not consider the fifth TAPE task, contact map prediction, as this relies only on the single CASP12 dataset, which has an incomplete test set due to data embargoes (AlQuraishi, 2019). Secondary structure prediction is a sequence-to-sequence task where each input amino acid is classified into a particular secondary structure type (helix, beta sheet, or loop), which is evaluated on data from CASP12, TS115, and CB513 (Berman et al., 2000; Moult et al., 2018; Klausen et al., 2019), with “3-class” classification accuracy specifically being the metric in this paper. The remote homology task classifies sequences into one of 1195 classes, representing different possible protein folds, which are further grouped hierarchically into families, then superfamilies; the datasets are derived from Fox et al. (2013). The fluorescence task regresses a protein sequence to a real-valued log-fluorescence intensity measured in Sarkisyan et al. (2016). The stability task regresses a sequence to a real-valued measure of the protein maintaining its fold above a concentration threshold (Rocklin et al., 2017). We perform linear evaluation by training only a single linear layer for each downstream task for each contrastive-learning model Maug, without changing the parameters of Maug, and its corresponding learned encodings, across all tasks. To compare the contrastive learning techniques to further fine-tuning with masked-token prediction, we identify the best-performing data augmentations per task, replace M^CL_aug with the masked-token model trained with the same augmentation, M^MT_aug, and then also do linear evaluation on M^MT_aug.
Full fine-tuning on TAPE.— For the best-performing augmented models in the linear evaluation task (either M^CL_aug or M^MT_aug), we further study how the models improve when allowing the parameters of Maug to vary along with the linear model during the task-specific supervised model-tuning."
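To make the linear evaluation step concrete, here is a minimal PyTorch sketch in which the fine-tuned encoder Maug is frozen and only one linear layer is trained per task; the loader shapes, mean-pooling convention, and hyperparameters are our own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_probe(encoder, loader, num_classes, feat_dim=512, epochs=10):
    """Train a single linear layer on frozen encoder features."""
    encoder.eval()                       # M_aug parameters stay fixed
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    for _ in range(epochs):
        for tokens, labels in loader:    # tokens: (B, L) encoded residues
            with torch.no_grad():
                # Assume encoder returns (B, L, feat_dim); pool residues.
                h = encoder(tokens).mean(dim=1)
            loss = F.cross_entropy(head(h), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```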
}, { "heading": "3.2 DATA AUGMENTATIONS", "text": "We focus on random augmentations to protein primary sequences, both chemically and non-chemically motivated (see Fig. 1). Each of these data augmentations has reasons why it both might help generalize the model and might destroy the information contained in the primary sequence.
Replacement (Dictionary/Alanine) [RD & RA].— We randomly replace, with probability p, the i-th amino acid in the primary sequence S = {A_i}_{i=1}^N with the most similar amino acid to it, A'_i, according to a replacement rule; we do this independently for all i. We treat p as a hyperparameter and will assess how p affects downstream TAPE predictions. For Replacement (Dictionary), following French & Robson (1983), we pair each naturally-occurring amino acid with a partner that belongs to the same class (aliphatic, hydroxyl, cyclic, aromatic, basic, or acidic), but do not substitute anything for proline (only backbone cyclic), glycine (only an H side-chain), tryptophan (indole side chain), or histidine (basic side chain with no size or chemical reaction equivalent). We experimented with different pairings, finding little difference in the results; our best results were obtained with the final mappings: [[A,V], [S,T], [F,Y], [K,R], [C,M], [D,E], [N,Q], [V,I]]. We also study replacing residues with the single amino acid alanine (A), as motivated by alanine-scanning mutagenesis (Cunningham & Wells, 1989). Single mutations to A are used experimentally to probe the importance of an amino acid because alanine resembles a reduction of any amino acid to its C_β, which eliminates the functionality of other amino acids while maintaining a certain backbone rigidity; it is thus considered to be minimally disruptive to the overall fold of a protein, although many interesting exceptions can still occur through these mutations.
Global/Local Random Shuffling [GRS & LRS].— We reshuffle the protein sequence, both globally and locally. For S = {A_i}_{i=1}^N, we define an index range i ∈ [α, β] with α < β ≤ N, then replace the amino acids A_i in this range with a permutation chosen uniformly at random. We define Global Random Shuffling (GRS) with α = 1 and β = N, and Local Random Shuffling (LRS) with the starting points of intervals chosen randomly with α ∈ [1, N − 2] and β = min(N, α + 50), ensuring at least two amino acids get shuffled. While shuffling aggressively destroys protein information, models trained with shuffling can focus more on permutation-invariant features, such as the overall amino acid counts and sequence length of the original protein.
Sequence Reversion & Subsampling [SR & SS].— For Sequence Reversion, we simply reverse the sequence: given S = {A_i}_{i=1}^N, we map i → i' = N − i + 1. Note that a protein sequence is oriented, proceeding sequentially from the N- to the C-terminus; reversing a protein sequence changes the entire structure and function of the protein. However, including reversed sequences might encourage the model to use short-range features more efficiently, as it does for seq2seq LSTM models (Sutskever et al., 2014). For Subsampling, we let the sequence index range over i ∈ [α, β], uniformly sample α ∈ [1, N − 2], and then preserve A_i for i = (α, α + 1, ..., min(N, α + 50)). While many properties pertaining to the global fold of a protein are due to long-range interactions between residues that are well-separated in the primary sequence, properties such as proteasomal cleavage or docking depend more heavily on the local sequence regime, implying that training while prioritizing local features might still improve performance.
Combining Augmentations.— We consider applying augmentations to the fine-tuning of semi-supervised models both individually and together. For Single Augmentations we consider only one of the augmentations at a time. With Leave-One-Out Augmentation we define lists of augmentations to compare together; for each list of augmentations, we iteratively remove one augmentation from the list and apply all other augmentations during training. Finally, in Pairwise Augmentation all pairs of augmentations are considered."
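The string transformations above admit a compact implementation; a sketch follows. The 1-indexed ranges from the text become 0-indexed here, and the dictionary is one reading of the pairing list (V appears in two pairs, and we resolve it as V→I).

```python
import random

# One reading of [[A,V], [S,T], [F,Y], [K,R], [C,M], [D,E], [N,Q], [V,I]];
# P, G, W, and H are intentionally left without substitutes.
DICT_SWAP = {"A": "V", "V": "I", "I": "V", "S": "T", "T": "S",
             "F": "Y", "Y": "F", "K": "R", "R": "K", "C": "M",
             "M": "C", "D": "E", "E": "D", "N": "Q", "Q": "N"}

def replacement(seq, p=0.01, alanine=False):
    """RD/RA: independently replace each residue with probability p."""
    return "".join(
        ("A" if alanine else DICT_SWAP.get(a, a)) if random.random() < p else a
        for a in seq)

def global_shuffle(seq):
    """GRS: permute the whole sequence uniformly at random."""
    s = list(seq)
    random.shuffle(s)
    return "".join(s)

def local_shuffle(seq, window=50):
    """LRS: shuffle a random window of up to `window` residues."""
    a = random.randint(0, max(0, len(seq) - 2))
    b = min(len(seq), a + window)
    s = list(seq)
    chunk = s[a:b]
    random.shuffle(chunk)
    s[a:b] = chunk
    return "".join(s)

def reverse(seq):
    """SR: reverse the sequence (N- to C-terminus order flipped)."""
    return seq[::-1]

def subsample(seq, window=50):
    """SS: keep a random contiguous window of up to `window` residues."""
    a = random.randint(0, max(0, len(seq) - 2))
    return seq[a:min(len(seq), a + window)]
```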
}, { "heading": "4 RESULTS", "text": "Assessing data-augmented representations with linear evaluation.— The core result of this paper uses linear evaluation methods to probe how well data augmentations are able to improve the learned representation of protein sequences. In Table 1 we summarize the best results from various data augmentation procedures, highlighting all the cases that outperform the TAPE Baseline. We compare identical model architectures trained with the same data, but using various data augmentations, to two baselines: (1) the transformer-based self-supervised model from Rao et al. (2019), which we call the TAPE Baseline; and (2) a contrastive learning model trained in the SimCLR approach, but using no data augmentations, i.e., we only use the negative sampling part of SimCLR; we call this the Contrastive Baseline. We also report standard deviations (σ) for the major figures of this paper by bootstrapping the testing results 5,000 times, with convergence after ∼3,000 samples. We see broad improvement when using contrastive learning with data augmentations in comparison to both baselines for the stability, fluorescence, and remote homology tasks, and better-or-similar results for secondary structure prediction. For stability, we find +0.064 in Spearman correlation (ρ) (best augmentation: RD(p = 0.01/0.5)) in comparison to the TAPE Baseline and +0.050 compared to the Contrastive Baseline. For fluorescence and remote homology, we see improvements over both the TAPE and Contrastive Baselines, with the pairwise combinations of RD(p = 0.01) & LRS and RA(p = 0.01) & SS yielding the best results, respectively. Compared to the TAPE Baseline, we obtain +0.105 in ρ for fluorescence and [+1.9%, +9.3%, +2.4%] in classification accuracy for remote homology on the three test sets; compared to the Contrastive Baseline, we find +0.027 in ρ and [+7.3%, +18.9%, +9.2%] in classification accuracy. The masked-token prediction model trained with RA(p = 0.01) & SS performs the best on secondary structure, with [+1.7%, +1.5%, +0.8%] in classification accuracy on the 3 test sets compared to the TAPE Baseline and [+4.9%, +4.6%, +5.7%] compared to the Contrastive Baseline. The best-performing contrastive learning model for secondary structure provides [+3.5%, +3.4%, +4.8%] in classification accuracy compared to the Contrastive Baseline. Our interpretation is that augmentation and contrastive learning provide better encoded feature spaces that help improve the performance on protein downstream tasks. See Appendix B for the complete results of the linear evaluation experiments.
Figure 3 demonstrates our linear evaluation results using contrastive learning based on the composition of pairs of data augmentations. For stability, amino acid replacement (with either a dictionary (RD) or alanine alone (RA)) consistently improves performance compared to the TAPE baseline, as well as to other augmentation strategies, which typically underperform the baseline. Fluorescence sees improvements using all data augmentations, but random shuffling (LRS & GRS) as well as the binomial replacement of both types result in the best individual performance. For remote homology, it is apparent that subsampling plays an important role in model performance given the improvement it introduces on the three testing sets; the “family” homology level is included here and the other remote homology tasks are qualitatively similar. Similarly, we see that data augmentation procedures that use subsampling tend to yield better performance than alternatives, with the best performing approach using subsampling alone. For complete heatmaps for the 6 remote homology and secondary structure testing sets, please refer to Fig. 5 in Appendix B.
Effect of increasing augmentation rates.— Fig. 4 presents results on the effect of varying data augmentations in two cases: (1) increasing the amino acid replacement probability p for the Replacement Dictionary [RD] strategy with contrastive learning (top row); and (2) augmenting increasingly larger fractions of the input data according to the best augmentations (see “Best Aug.” row in Table 2) found in linear evaluation for masked-token prediction (bottom row). We define the augmentation ratio γ as the fraction of the samples in the validation dataset that are randomly augmented per epoch; for contrastive learning fine-tuning, data augmentations are required for every data element.
For masked-token prediction, we see little change in performance for any task as a function of γ using any of the best corresponding augmentation strategies. However, there is a small, but consistent, reduction in performance with increasing γ, implying that masked-token prediction is not always able to significantly improve its performance by using data augmentations. It is clear that large augmentation ratios γ ∼ 1 hurt the model performance on the stability, fluorescence, and remote homology tasks. We also see that the TAPE Baseline model generally performs worse than further training with no data augmentation, indicating that further training of the baseline model with the same data and procedure can improve the performance of Rao et al. (2019).
However, for contrastive learning, we see clear evidence that data augmentations can help generalization. We see increasing Spearman correlation for the stability and fluorescence tasks and increasing accuracy for remote homology and secondary structure with increasing p for p < 0.01. We see a consistent decrease in all metrics, to the lowest seen amount for each task, for replacement probability p = 0.1, and then a recovery to larger values (sometimes the largest seen) for higher replacement p = 0.5. However, no augmentation, p = 0, consistently underperforms as compared to alternative values of p > 0.
However, we observe that the overall best results from Table 1 utilize the combination of contrastive learning with pairs of data augmentations for all tasks besides secondary structure prediction.
Exploring the best performance via full fine-tuning.— We provide results of the best performing fine-tuned models (on downstream tasks) and the comparison to TAPE's original baselines in Table 2, in order to verify whether the learned representations of the best models provide good initialization points for transfer learning. Here, we have done full fine-tuning only on the best performing, per-task models found during the linear evaluation study (see Table 1). Notice that the baseline comparison differs in this table from the linear evaluation results above because we allow the optimization to also adjust the parameters of the self-supervised models for every task (the TAPE baselines in Table 2 are from Rao et al. (2019)). The fine-tuned, data-augmented models outperform the TAPE baseline on stability (+0.018 in Spearman correlation ρ), remote homology, and secondary structure; and they perform within one σ on fluorescence, although the large difference in performance between full fine-tuning and linear evaluation on the fluorescence task indicates that most of the model's predictive capacity is coming from the supervised learning task itself. The random amino acid replacement strategy is a consistent approach that achieved our best performance for all tasks, and subsampling performed well on tasks that depend on the structural properties of proteins (remote homology and secondary structure): [+0.0%, +4.1%, +3.7%] classification accuracy for remote homology and [+0.1%, +0.8%, +0.9%] classification accuracy for secondary structure." }, { "heading": "5 CONCLUSION", "text": "We experimentally verify that relatively naive string manipulations can be used as data augmentations to improve the performance of self-supervised protein sequence models on the TAPE validation tasks. We demonstrate that, in general, augmentations will boost the model performance, in both linear evaluation and model fine-tuning cases. However, different downstream tasks benefit from different protein augmentations; no single augmentation that we studied was consistently the best. Nevertheless, the approach that we have taken, where we fine-tune a pretrained model on the validation set, requires significantly lower computational cost than training on the full training set. Consequently, a modeler interested in a small number of downstream tasks would not be over-burdened to attempt fine-tuning of the semi-supervised model on a broad range of data augmentation transformations." }, { "heading": "A TRAINING DETAILS", "text": "For the augmented training, we focused on training the self-supervised part of the model. Namely, we apply the hyperparameters in Table 3, Row 1, to the self-supervised part of either the SimCLR (contrastive learning) or the masked-token prediction model. Here we train all the models for 30 epochs on the Pfam validation set in order to make a relatively fair comparison. As discussed in the main paper, after the augmented training is finished, we perform linear evaluations on the model pre-trained in the previous steps with the hyperparameters listed in Table 3, Rows 2–5, for the 4 downstream tasks. All of the linear evaluation results shown in the main paper and appendix are based on the best results we find after the augmented training and linear evaluation with the hyperparameters in Table 3.
For model fine-tuning, since models trained with contrastive learning and TAPE's semi-supervised models have different statistics (their parameters differ), we do not use the same set of hyperparameters. Instead, we report the best results in comparison to TAPE's results. The corresponding hyperparameter setups of the best cases described above, including the augmented training and linear evaluation, can be found in Table 4. In addition, the optimizer applied is AdamW, which is identical to the one in TAPE. Given that the NVIDIA V100 GPUs we use have 16 GB of memory, we constrain the sequence length to at most 512 to enable training. The batch sizes we report in the appendix are total batch sizes that account for the number of GPUs. "Gradient Acc" in Table 3 is short for "Gradient Accumulation Steps", which describes how many steps are accumulated per gradient update to the model." }, { "heading": "B LINEAR EVALUATION RESULTS", "text": "Here we provide comprehensive linear evaluation results of contrastive learning with single augmentation, leave-one-out, and pairwise augmentation. To simplify the tables, we use the same abbreviations applied in the main paper to indicate the augmentations. We summarize the best results after the training and evaluation according to Section A for both single and pairwise augmentations in Figure 5. We use a diverging palette with the center (gray/white) being the best linear evaluation baseline results with TAPE's pre-trained model, warmer colors referring to better-than-baseline results, and cooler colors to worse-than-baseline results. The diagonal values come from the single augmentation setup and all other values come from the pairwise augmentation setup. Table 5 and Table 6 also include Figure 5's corresponding values. By checking the values of the figure and tables, we observe the following: (1) For stability, the binomial replacement works well with single augmentation and pairwise augmentation. There is no improvement from leave-one-out augmentation cases. (2) For fluorescence, the pairwise augmentation with binomial replacement and shuffling can improve the model performance. (3) For remote homology, we clearly see improvements coming from subsampling on all of the three testing sets, and this is independent of other augmentations in the pairwise case. (4) For secondary structure, we do not observe gains from either single or pairwise augmentation, which is consistent with the discussion in the main paper. The best case for secondary structure comes from the masked-token prediction model with augmentations. Besides the results above, we also provide leave-one-out results in Table 8 with different leave-one-out cases that contain different augmentations. With leave-one-out augmentations, we observe only a few cases where the leave-one-out cases outperform TAPE's baselines across the 4 downstream tasks.
Nevertheless, the leave-one-out results suggest that one should not combine arbitrary augmentations, given the performance decreases we observe in the leave-one-out cases.
(Figure 5: heatmaps of the linear evaluation results for single augmentations (diagonal) and pairwise augmentations (off-diagonal), with axes RD(0.01), RD(0.5), RA(0.01), GRS, LRS, SR, and SS; panels: Stability, Fluorescence, Remote Homology (Fold), Remote Homology (Family), Remote Homology (Superfamily), and Secondary Structure (CASP12, TS115, CB513). The corresponding per-cell values are given in Tables 5 and 6.)" }, { "heading": "C SUPPLEMENTARY", "text": "In this section, we provide supplementary results. Specifically, we provide the complete comparison table for the masked-token prediction model with different augmentation ratios γ in Table 9; the corresponding plot is Figure 4, with the analysis in the main paper. In addition, Table 7 provides similar information to Table 2 in the main paper, except that the values for remote homology and secondary structure here are the cross entropy, rather than the classification error." } ]
2020
QUENCE MODELS WITH DATA AUGMENTATIONS
SP:52b51e46d40e554920d48625707a433db2dc233c
[ "This paper proposes a way to exploit relationships across tasks in episodic training with the goal of improving the trained models who might be susceptible to poor sampling in for few-shot learning scenarios. The proposed model consists of two components: a cross-attention transformer (CEAM) which is used to observe details across two episodes, and a regularization term (CECR) which imposes that two different instances of the same task (which have the exact same classes) are consistent in terms of prediction. Cross-attention is computed via a scaled-attention transformer using both support and query set. The consistency loss is a knowledge distillation that imposes an agreement on the two episodes. The soft target is chosen among the two predictions selecting the classifier with the highest accuracy." ]
Most recent few-shot learning (FSL) approaches are based on episodic training whereby each episode samples few training instances (shots) per class to imitate the test condition. However, this strict adherence to the test condition has a negative side effect, that is, the trained model is susceptible to the poor sampling of few shots. In this work, for the first time, this problem is addressed by exploiting inter-episode relationships. Specifically, a novel meta-learning via modeling episode-level relationships (MELR) framework is proposed. By sampling two episodes containing the same set of classes for meta-training, MELR is designed to ensure that the meta-learned model is robust against the presence of poorly-sampled shots in the meta-test stage. This is achieved through two key components: (1) a Cross-Episode Attention Module (CEAM) to improve the ability to alleviate the effects of poorly-sampled shots, and (2) a Cross-Episode Consistency Regularization (CECR) to enforce that the two classifiers learned from the two episodes are consistent even when there are unrepresentative instances. Extensive experiments for non-transductive standard FSL on two benchmarks show that our MELR achieves 1.0%–5.0% improvements over the baseline (i.e., ProtoNet) used for FSL in our model and outperforms the latest competitors under the same settings.
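The abstract and the summary above describe CECR as enforcing consistency between the two episodes' classifiers, with the summary noting a knowledge-distillation form whose soft target comes from the more accurate classifier. A rough sketch under those assumptions follows; the names, the temperature, and the exact loss form are ours, not necessarily MELR's.

```python
import torch
import torch.nn.functional as F

def cecr_loss(logits_ep1, logits_ep2, acc1, acc2, T=4.0):
    """Cross-episode consistency as distillation (a hypothetical sketch).

    logits_ep1, logits_ep2: (B, C) query logits from the two episodes
    that share the same C classes; acc1/acc2 are their accuracies.
    The better classifier acts as the (detached) teacher.
    """
    teacher, student = ((logits_ep1, logits_ep2) if acc1 >= acc2
                        else (logits_ep2, logits_ep1))
    p_t = F.softmax(teacher.detach() / T, dim=1)     # soft targets
    log_p_s = F.log_softmax(student / T, dim=1)
    # Standard temperature-scaled KD term (Hinton et al., 2015 style).
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T
```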
[ { "affiliations": [], "name": "FEW-SHOT LEARNING" }, { "affiliations": [], "name": "Nanyi Fei" }, { "affiliations": [], "name": "Zhiwu Lu" }, { "affiliations": [], "name": "Songfang Huang" } ]
[ { "authors": [ "Gagné Christian Afrasiyabi Arman", "Lalonde Jean-François" ], "title": "Associative alignment for few-shot image classification", "venue": null, "year": 2020 }, { "authors": [ "Kelsey R. Allen", "Evan Shelhamer", "Hanul Shin", "Joshua B. Tenenbaum" ], "title": "Infinite mixture prototypes for few-shot learning", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Maria-Luiza Antonie", "Osmar R Zaiane", "Alexandru Coman" ], "title": "Application of data mining techniques for medical image classification", "venue": "In International Conference on Multimedia Data Mining, pp", "year": 2001 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Luca Bertinetto", "João F. Henriques", "Philip H.S. Torr", "Andrea Vedaldi" ], "title": "Meta-learning with differentiable closed-form solvers", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L. Yuille" ], "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "venue": "IEEE TPAMI,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Generating classification weights with GNN denoising autoencoders for few-shot learning", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Spyros Gidaris", "Andrei Bursuc", "Nikos Komodakis", "Patrick Perez", "Matthieu Cord" ], "title": "Boosting few-shot visual learning with self-supervision", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Yiluan Guo", "Ngai-Man Cheung" ], "title": "Attentive weights generation for few shot learning via information maximization", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Bharath Hariharan", "Ross B. Girshick" ], "title": "Low-shot visual recognition by shrinking and hallucinating features", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR, pp", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR, pp", "year": 2016 }, { "authors": [ "Geoffrey E. Hinton", "Oriol Vinyals", "Jeffrey Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "CoRR, abs/1503.02531,", "year": 2015 }, { "authors": [ "Ruibing Hou", "Hong Chang", "Bingpeng Ma", "Shiguang Shan", "Xilin Chen" ], "title": "Cross attention network for fewshot classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jongmin Kim", "Taesup Kim", "Sungwoong Kim", "Chang D. Yoo" ], "title": "Edge-labeling graph neural network for few-shot learning", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "A. Krizhevsky", "I. Sutskever", "G.E. 
Hinton" ], "title": "ImageNet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Kwonjoon Lee", "Subhransu Maji", "Avinash Ravichandran", "Stefano Soatto" ], "title": "Meta-learning with differentiable convex optimization", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Hongyang Li", "David Eigen", "Samuel Dodge", "Matthew Zeiler", "Xiaogang Wang" ], "title": "Finding task-relevant features for few-shot learning by category traversal", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Huai-Yu Li", "Weiming Dong", "Xing Mei", "Chongyang Ma", "Feiyue Huang", "Bao-Gang Hu" ], "title": "Lgm-net: Learning to generate matching networks for few-shot learning", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Kai Li", "Yulun Zhang", "Kunpeng Li", "Yun Fu" ], "title": "Adversarial feature hallucination networks for few-shot learning", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Wenbin Li", "Lei Wang", "Jinglin Xu", "Jing Huo", "Yang Gao", "Jiebo Luo" ], "title": "Revisiting local descriptor based image-to-class measure for few-shot learning", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Zhenguo Li", "Fengwei Zhou", "Fei Chen", "Hang Li" ], "title": "Meta-sgd: Learning to learn quickly for few shot learning", "venue": null, "year": 2017 }, { "authors": [ "Bin Liu", "Yue Cao", "Yutong Lin", "Qi Li", "Zheng Zhang", "Mingsheng Long", "Han Hu" ], "title": "Negative margin matters: Understanding margin in few-shot classification", "venue": null, "year": 2020 }, { "authors": [ "Yanbin Liu", "Juho Lee", "Minseop Park", "Saehoon Kim", "Eunho Yang", "Sung Ju Hwang", "Yi Yang" ], "title": "Learning to propagate labels: Transductive propagation network for few-shot learning", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Jonathan Long", "Evan Shelhamer", "Trevor Darrell" ], "title": "Fully convolutional networks for semantic segmentation", "venue": "In CVPR, pp", "year": 2015 }, { "authors": [ "Van Nhan Nguyen", "Sigurd Lókse", "Kristoffer Wickstróm", "Michael Kampffmeyer", "Davide Roverso", "Robert Jenssen" ], "title": "Sen: A novel feature normalization dissimilarity measure for prototypical few-shot learning", "venue": null, "year": 2020 }, { "authors": [ "Boris Oreshkin", "Pau Rodrı́guez López", "Alexandre Lacoste" ], "title": "Tadam: Task dependent adaptive metric for improved few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Limeng Qiao", "Yemin Shi", "Jia Li", "Yaowei Wang", "Tiejun Huang", "Yonghong Tian" ], "title": "Transductive episodic-wise adaptive metric for few-shot learning", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Siyuan Qiao", "Chenxi Liu", "Wei Shen", "Alan L. Yuille" ], "title": "Few-shot image recognition by predicting parameters from activations", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Avinash Ravichandran", "Rahul Bhotika", "Stefano Soatto" ], "title": "Few-shot learning with embedded class models and shot-free meta training", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Joseph Redmon", "Santosh Kumar Divvala", "Ross B. 
Girshick", "Ali Farhadi" ], "title": "You only look once: Unified, real-time object detection", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Mengye Ren", "Eleni Triantafillou", "Sachin Ravi", "Jake Snell", "Kevin Swersky", "Joshua B. Tenenbaum", "Hugo Larochelle", "Richard S. Zemel" ], "title": "Meta-learning for semi-supervised few-shot classification", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross B. Girshick", "Jian Sun" ], "title": "Faster R-CNN: towards real-time object detection with region proposal networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. Berg", "Li Fei-Fei" ], "title": "ImageNet large scale visual recognition challenge", "venue": null, "year": 2015 }, { "authors": [ "Andrei A. Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Victor Garcia Satorras", "Joan Bruna Estrach" ], "title": "Few-shot learning with graph neural networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Eli Schwartz", "Leonid Karlinsky", "Joseph Shtok", "Sivan Harary", "Mattias Marder", "Abhishek Kumar", "Rogério Schmidt Feris", "Raja Giryes", "Alexander M. Bronstein" ], "title": "Delta-encoder: an effective sample synthesis method for few-shot object recognition", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Christian Simon", "Piotr Koniusz", "Richard Nock", "Mehrtash Harandi" ], "title": "Adaptive subspaces for few-shot learning", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard S. Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Qianru Sun", "Yaoyao Liu", "Tat-Seng Chua", "Bernt Schiele" ], "title": "Meta-transfer learning for few-shot learning", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Flood Sung", "Yongxin Yang", "Li Zhang", "Tao Xiang", "Philip H.S. Torr", "Timothy M. Hospedales" ], "title": "Learning to compare: Relation network for few-shot learning", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Yonglong Tian", "Yue Wang", "Dilip Krishnan", "Joshua B Tenenbaum", "Phillip Isola" ], "title": "Rethinking few-shot image classification: a good embedding is all you need", "venue": null, "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Tim Lillicrap", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "C. Wah", "S. Branson", "P. Welinder", "P. Perona", "S. 
Belongie" ], "title": "The caltech-ucsd birds-200-2011 dataset", "venue": "Technical Report CNS-TR-2011-001, California Institute of Technology,", "year": 2011 }, { "authors": [ "Yu-Xiong Wang", "Ross B. Girshick", "Martial Hebert", "Bharath Hariharan" ], "title": "Low-shot learning from imaginary data", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Ziyang Wu", "Yuwei Li", "Lihua Guo", "Kui Jia" ], "title": "Parn: Position-aware relation networks for few-shot learning", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Chen Xing", "Negar Rostamzadeh", "Boris Oreshkin", "Pedro O O. Pinheiro" ], "title": "Adaptive cross-modal few-shot learning", "venue": "In NIPS,", "year": 2019 }, { "authors": [ "Kelvin Xu", "Jimmy Ba", "Ryan Kiros", "Kyunghyun Cho", "Aaron C. Courville", "Ruslan Salakhutdinov", "Richard S. Zemel", "Yoshua Bengio" ], "title": "Show, attend and tell: Neural image caption generation with visual attention", "venue": "In ICML, pp. 2048–2057,", "year": 2015 }, { "authors": [ "Ling Yang", "Liangliang Li", "Zilun Zhang", "Xinyu Zhou", "Erjin Zhou", "Yu Liu" ], "title": "DPGN: distribution propagation graph network for few-shot learning", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Shulin Yang", "Liefeng Bo", "Jue Wang", "Linda G Shapiro" ], "title": "Unsupervised template learning for fine-grained object recognition", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Han-Jia Ye", "Hexiang Hu", "De-Chuan Zhan", "Fei Sha" ], "title": "Few-shot learning via embedding adaptation with set-to-set functions", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Sung Whan Yoon", "Jun Seo", "Jaekyun Moon" ], "title": "Tapnet: Neural network augmented with task-adaptive projection for few-shot learning", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Chi Zhang", "Yujun Cai", "Guosheng Lin", "Chunhua Shen" ], "title": "Deepemd: Few-shot image classification with differentiable earth mover’s distance and structured classifiers", "venue": "In CVPR,", "year": 2020 } ]
[ { "heading": null, "text": "Most recent few-shot learning (FSL) approaches are based on episodic training whereby each episode samples few training instances (shots) per class to imitate the test condition. However, this strict adhering to test condition has a negative side effect, that is, the trained model is susceptible to the poor sampling of few shots. In this work, for the first time, this problem is addressed by exploiting interepisode relationships. Specifically, a novel meta-learning via modeling episodelevel relationships (MELR) framework is proposed. By sampling two episodes containing the same set of classes for meta-training, MELR is designed to ensure that the meta-learned model is robust against the presence of poorly-sampled shots in the meta-test stage. This is achieved through two key components: (1) a Cross-Episode Attention Module (CEAM) to improve the ability of alleviating the effects of poorly-sampled shots, and (2) a Cross-Episode Consistency Regularization (CECR) to enforce that the two classifiers learned from the two episodes are consistent even when there are unrepresentative instances. Extensive experiments for non-transductive standard FSL on two benchmarks show that our MELR achieves 1.0%–5.0% improvements over the baseline (i.e., ProtoNet) used for FSL in our model and outperforms the latest competitors under the same settings." }, { "heading": "1 INTRODUCTION", "text": "Deep convolutional neural networks (CNNs) have achieved tremendous successes in a wide range of computer vision tasks including object recognition (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Russakovsky et al., 2015; He et al., 2016a), semantic segmentation (Long et al., 2015; Chen et al., 2018), and object detection (Ren et al., 2015; Redmon et al., 2016). For most visual recognition tasks, at least hundreds of labeled training images are required from each class for training a CNN model. However, collecting a large number of labeled training samples is costly and may even be impossible in real-life application scenarios (Antonie et al., 2001; Yang et al., 2012). To reduce the reliance of deep neural networks on large amount of annotated training data, few-shot learning (FSL) has been studied (Vinyals et al., 2016; Finn et al., 2017; Snell et al., 2017; Sung et al., 2018), which aims to recognize a set of novel classes with only a few labeled samples by knowledge transfer from a set of base classes with abundant samples.\n∗Corresponding author.\nRecently, FSL has been dominated by meta-learning based approaches (Finn et al., 2017; Snell et al., 2017; Sung et al., 2018; Lee et al., 2019; Ye et al., 2020), which exploit the ample samples from base classes via episodic training. During meta-training, to imitate an N -way K-shot novel class recognition task, an N -way K-shot episode/meta-task is sampled in each iteration from the base classes, consisting of a support set and a query set. By setting up the meta-training episodes exactly the same way as the meta-test ones (i.e., N -wayK-shot in the support set), the objective is to ensure that the meta-learned model can generalize to novel tasks. However, this also leads to an unwanted side-effect, that is, the model will be susceptible to the poor sampling of the few shots.\nOutlying training instances are prevalence in vision benchmarks which can be caused by various factors such as occlusions or unusual pose/lighting conditions. 
When trained with ample samples, modern CNN-based recognition models are typically robust against abnormal instances as long as they are not dominant. However, when as few as one shot per class is used to build a classifier for FSL, the poorly-sampled few shots could be catastrophic, e.g., when the cat class is represented in the support set by a single image of a half-occluded cat viewed from behind, it would be extremely hard to build a classifier to recognize cats in the query set that are mostly full-body visible and frontal. Existing episodic-training based FSL models do not offer any solution to this problem. The main reason is that different episodes are sampled randomly and independently. When the cat class is sampled in two episodes, these models are not aware that they are the same class, and thus cannot enforce the independently learned classifiers to be consistent with each other, regardless of whether there exist poorly-sampled shots in one of the two episodes.
In this paper, a novel meta-learning via modeling episode-level relationships (MELR) framework is proposed to address the poor sampling problem of the support set instances in FSL. In contrast to the existing episodic training strategy, MELR conducts meta-learning over two episodes deliberately sampled to contain the same set of base classes but different instances. In this way, cross-episode model consistency can be enforced so that the meta-learned model is robust against poorly-sampled shots in the meta-test stage. Concretely, MELR consists of two key components: Cross-Episode Attention Module (CEAM) and Cross-Episode Consistency Regularization (CECR). CEAM is composed of a cross-episode transformer which allows the support set instances to be examined through attention so that unrepresentative support samples can be identified and their negative effects alleviated (especially for computing class prototypes/centers). CECR, on the other hand, exploits the fact that since the two episodes contain the same set of classes, the obtained classifiers (class prototypes) should produce consistent predictions regardless of whether there are any poorly-sampled instances in the support set and/or which episode a query instance comes from. This consistency is enforced via cross-episode knowledge distillation.
Our main contributions are three-fold: (1) For the first time, the poor sampling problem of the few shots is formally tackled by modeling the episode-level relationships in meta-learning based FSL. (2) We propose a novel MELR model with two cross-episode components (i.e., CEAM and CECR) to explicitly enforce that the classifiers of the same classes learned from different episodes be consistent regardless of whether there exist poorly-sampled shots. (3) Extensive experiments for non-transductive standard FSL on two benchmarks show that our MELR achieves significant improvements over the baseline ProtoNet (Snell et al., 2017) and even outperforms the latest competitors under the same settings. We will release the code and models soon." }, { "heading": "2 RELATED WORK", "text": "Few-Shot Learning. Few-shot learning (FSL) has become topical recently. 
Existing methods can be generally divided into four groups: (1) Metric-based methods either learn a suitable embedding space for their chosen/proposed distance metrics (e.g., cosine similarity (Vinyals et al., 2016), Euclidean distance (Snell et al., 2017), and a novel measure SEN (Nguyen et al., 2020)) or directly learn a suitable distance metric (e.g., CNN-based relation module (Sung et al., 2018; Wu et al., 2019), ridge regression (Bertinetto et al., 2019), and graph neural networks (Satorras & Estrach, 2018; Kim et al., 2019; Yang et al., 2020)). Moreover, several approaches (Yoon et al., 2019; Li et al., 2019a; Qiao et al., 2019; Ye et al., 2020; Simon et al., 2020) learn task-specific metrics which are adaptive to each episode instead of learning a shared task-agnostic metric space. (2) Model-based methods (Finn et al., 2017; Nichol et al., 2018; Rusu et al., 2019) learn good model initializations on base classes and then quickly adapt (i.e., finetune) them on novel classes with few shots and a\nlimited number of gradient update steps. (3) Optimization-based methods (Ravi & Larochelle, 2017; Munkhdalai & Yu, 2017; Li et al., 2017) aim to learn to optimize, that is, to meta-learn optimization algorithms suitable for quick finetuning from base to novel classes. (4) Hallucination-based methods (Hariharan & Girshick, 2017; Wang et al., 2018; Schwartz et al., 2018; Li et al., 2020) learn generators on base classes and then hallucinate new novel class data to augment the few shots. Additionally, there are also other methods that learn to predict network parameters given few novel class samples (Qiao et al., 2018; Gidaris & Komodakis, 2019; Guo & Cheung, 2020). Although the metric-based ProtoNet (Snell et al., 2017) is used as our baseline in this paper, our proposed MELR framework can be easily integrated with other episodic-training based methods.\nModeling Episode-Level Relationships. In the FSL area, relatively less effort has been made to explicitly model the relationships across episodes. For modeling such episode-level relationships, there are two recent examples: (1) LGM-Net (Li et al., 2019b) proposes an inter-task normalization strategy, which applies batch normalization to all support samples across a batch of episodes in each training iteration. (2) Among a batch of episodes, Meta-Transfer Learning (Sun et al., 2019) records the class with the lowest accuracy in each episode and then re-samples ‘hard’ meta-tasks from the set of recorded classes. In this work, instead of utilizing the relationships implicitly, we propose to model episode-level relationships (MELR) explicitly by focusing on episodes with the same set of classes. Furthermore, our MELR is specifically designed to cope with the poor sampling of the few shots – an objective very different from those in (Li et al., 2019b; Sun et al., 2019).\nAttention Mechanism. Attention mechanism was first proposed by (Bahdanau et al., 2015) for machine translation and has now achieved great success in natural language processing (Vaswani et al., 2017) and computer vision (Xu et al., 2015). An attention module typically takes a triplet (queries, keys, values) as input and learns interactions between queries and key-value pairs according to certain task objectives. It is referred to as self-attention or cross-attention depending on whether keys and queries are the same. Several recent works (Hou et al., 2019; Guo & Cheung, 2020; Ye et al., 2020) have utilized attention mechanism for meta-learning based FSL. 
CAN (Hou et al., 2019) employs cross-attention between support and query samples to learn better feature representations. AWGIM (Guo & Cheung, 2020) adopts both self- and cross-attention for generating classification weights. FEAT (Ye et al., 2020) only uses self-attention on the class prototypes of the support set. The biggest difference between these methods and our MELR lies in whether attention is modeled within each episode or across episodes. Only MELR allows modeling cross-episode instance attention explicitly so that the meta-learned model can be insensitive to badly-sampled support set instances. In addition, in our MELR, query set instances are also updated using cross-attention, whilst existing models such as FEAT only apply attention to prototypes obtained using support set instances. They thus cannot directly handle instance-level anomalies." }, { "heading": "3 METHODOLOGY", "text": "" }, { "heading": "3.1 PROBLEM DEFINITION", "text": "Let $\mathcal{D}_b = \{(x_i, y_i) \mid y_i \in \mathcal{C}_b, i = 1, 2, \cdots, N_b\}$ denote an abundant meta-training set from base classes $\mathcal{C}_b$, where $x_i$ is the i-th image, $y_i$ denotes the class label of $x_i$, and $N_b$ is the number of images in $\mathcal{D}_b$. Similarly, let $\mathcal{D}_n = \{(x_i, y_i) \mid y_i \in \mathcal{C}_n, i = 1, 2, \cdots, N_n\}$ denote a few-shot sample set from a set of novel classes $\mathcal{C}_n$ (e.g., K-shot means that each novel class has K labeled images and $N_n = K|\mathcal{C}_n|$), where $\mathcal{C}_b \cap \mathcal{C}_n = \emptyset$. We are also given a test set $\mathcal{T}$ from $\mathcal{C}_n$, where $\mathcal{D}_n \cap \mathcal{T} = \emptyset$. By exploiting $\mathcal{D}_b$ and $\mathcal{D}_n$ for training, the objective of few-shot learning (FSL) is to predict the class labels of test images in $\mathcal{T}$." }, { "heading": "3.2 META-LEARNING BASED FSL", "text": "Most FSL methods are based on meta-learning (Vinyals et al., 2016; Finn et al., 2017; Snell et al., 2017; Sung et al., 2018; Lee et al., 2019; Ye et al., 2020), which adopt episodic training on the base class sample set $\mathcal{D}_b$ and test their models over few-shot classification tasks sampled from the novel classes $\mathcal{C}_n$. Concretely, an N-way K-shot Q-query episode $e = (\mathcal{S}_e, \mathcal{Q}_e)$ is generated as follows: (1) We first randomly sample a subset $\mathcal{C}_e$ from base classes $\mathcal{C}_b$ during meta-training (or from novel classes $\mathcal{C}_n$ during meta-test) and re-index it as $\mathcal{C}_e = \{1, 2, \cdots, N\}$. (2) For each class in $\mathcal{C}_e$, K support and Q query images are then randomly sampled to form the support set $\mathcal{S}_e = \{(x_i, y_i) \mid y_i \in \mathcal{C}_e, i = 1, 2, \cdots, N \times K\}$ and the query set $\mathcal{Q}_e = \{(x_i, y_i) \mid y_i \in \mathcal{C}_e, i = 1, 2, \cdots, N \times Q\}$ ($\mathcal{S}_e \cap \mathcal{Q}_e = \emptyset$), respectively. A meta-learning based FSL approach typically designs a few-shot classification loss over the query set $\mathcal{Q}_e$ for each meta-training episode $e$:
$\mathcal{L}_{fsc}(e) = \mathbb{E}_{(x_i, y_i) \in \mathcal{Q}_e}\, \mathcal{L}(y_i, f(\psi(x_i); \mathcal{S}_e))$,  (1)
where $\psi$ denotes a feature extractor with an output dimension $d$, $f(\cdot\,; \mathcal{S}_e): \mathbb{R}^d \to \mathbb{R}^N$ can be any scoring function constructed from the support set $\mathcal{S}_e$, and $\mathcal{L}(\cdot, \cdot)$ is the classification loss (e.g., the widely used cross-entropy). By minimizing the above loss function via back-propagation to update the part of the model to be meta-learned (e.g., $\psi$ in ProtoNet), the model is trained over many meta-training episodes and then evaluated on the meta-test episodes." }, { "heading": "3.3 MODELING EPISODE-LEVEL RELATIONSHIPS (MELR)", "text": "In the FSL area, relatively less effort has been made to explicitly model the relationships across episodes, and many FSL methods define their loss functions within each episode independently. In contrast, with our Modeling Episode-Level Relationships (MELR), two N-way episodes are sampled in each training iteration from exactly the same set of N base classes (a minimal sketch of this paired sampling is given below). 
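To make the paired sampling concrete, here is a minimal sketch (ours, not the authors' released code) of drawing two class-aligned, instance-disjoint N-way K-shot Q-query episodes; `images_by_class`, mapping each base class to its image indices, is an assumed data structure.
```python
import random

def sample_episode_pair(images_by_class, n_way=5, k_shot=1, q_query=15):
    """Sample two episodes over the SAME n_way classes with disjoint instances."""
    classes = random.sample(list(images_by_class.keys()), n_way)
    used = {c: set() for c in classes}  # instances already taken by e(1)
    episodes = []
    for _ in range(2):
        support, query = [], []
        for label, c in enumerate(classes):  # re-index classes as 0..N-1
            pool = [i for i in images_by_class[c] if i not in used[c]]
            picks = random.sample(pool, k_shot + q_query)
            used[c].update(picks)  # keep e(1) and e(2) instance-disjoint
            support += [(i, label) for i in picks[:k_shot]]
            query += [(i, label) for i in picks[k_shot:]]
        episodes.append((support, query))
    return episodes  # [e(1), e(2)] with identical classes, different instances
```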
Cross-Episode Attention Module (CEAM) and Cross-Episode Consistency Regularization (CECR) are then devised to exploit this type of episode-level relationship explicitly (see Figure 1).
Cross-Episode Attention Module (CEAM). We are given two N-way K-shot Q-query episodes $e^{(1)} = (\mathcal{S}_e^{(1)}, \mathcal{Q}_e^{(1)})$ and $e^{(2)} = (\mathcal{S}_e^{(2)}, \mathcal{Q}_e^{(2)})$ sampled from the same subset $\mathcal{C}_e$ of $\mathcal{C}_b$, where $\mathcal{C}_e$ is re-indexed as $\mathcal{C}_e = \{1, 2, \cdots, N\}$, and $e^{(1)} \cap e^{(2)} = \emptyset$. For both episodes, to minimize the negative impact of badly-sampled few shots for a given query instance, we propose CEAM for cross-episode attention modeling, which is detailed below (a PyTorch-style sketch of the module is also given at the end of this subsection).
Concretely, let $S^{(1)} = [\psi(x_i)^\top; x_i \in \mathcal{S}_e^{(1)}] \in \mathbb{R}^{NK \times d}$ (or $S^{(2)}$) and $Q^{(1)} = [\psi(x_i)^\top; x_i \in \mathcal{Q}_e^{(1)}] \in \mathbb{R}^{NQ \times d}$ (or $Q^{(2)}$) denote the feature matrices of support and query samples in $e^{(1)}$ (or $e^{(2)}$), respectively, and let $F^{(1)} = [S^{(1)}; Q^{(1)}] \in \mathbb{R}^{N(K+Q) \times d}$ (or $F^{(2)} = [S^{(2)}; Q^{(2)}]$) be the feature matrix of all samples in $e^{(1)}$ (or $e^{(2)}$). For episode $e^{(1)}$, CEAM takes the triplet $(F^{(1)}, S^{(2)}, S^{(2)})$ as input, which corresponds to the input (queries, keys, values) in a typical attention module:
$\hat{F}^{(1)} = \mathrm{CEAM}(F^{(1)}, S^{(2)}, S^{(2)}) = F^{(1)} + \mathrm{softmax}\left(\frac{F_Q^{(1)} S_K^{(2)\top}}{\sqrt{d}}\right) S_V^{(2)}$,  (2)
where the inputs are first linearly mapped into a latent space with the same dimension as the feature space (using projection matrices $W_Q, W_K, W_V \in \mathbb{R}^{d \times d}$):
$F_Q^{(1)} = F^{(1)} W_Q \in \mathbb{R}^{N(K+Q) \times d}$,  (3)
$S_K^{(2)} = S^{(2)} W_K \in \mathbb{R}^{NK \times d}$,  (4)
$S_V^{(2)} = S^{(2)} W_V \in \mathbb{R}^{NK \times d}$.  (5)
Similarly, for episode $e^{(2)}$, we have (analogous to Eq. (2)):
$\hat{F}^{(2)} = \mathrm{CEAM}(F^{(2)}, S^{(1)}, S^{(1)}) = F^{(2)} + \mathrm{softmax}\left(\frac{F_Q^{(2)} S_K^{(1)\top}}{\sqrt{d}}\right) S_V^{(1)}$,  (6)
where the learnable parameters of the fully connected layers (i.e., $W_Q$, $W_K$ and $W_V$) are shared across Eq. (2) and Eq. (6). We can then obtain the transformed support and query embedding matrices in $e^{(1)}$ (or $e^{(2)}$) from $\hat{F}^{(1)} = [\hat{S}^{(1)}; \hat{Q}^{(1)}]$ (or $\hat{F}^{(2)} = [\hat{S}^{(2)}; \hat{Q}^{(2)}]$).
Cross-Episode Consistency Regularization (CECR). In our MELR model, CEAM utilizes instance-level attention to alleviate the negative effects of poor support set instance sampling so that each query set instance can be assigned to the right class with minimal loss. Our CECR is designed to further reduce the model sensitivity to badly-sampled shots in different episodes by forcing the two classifiers learned over the two episodes to produce consistent predictions. There are various options on how to enforce such consistency. CECR adopts a knowledge distillation based strategy as empirically it is the most effective one (see Section 4.3).
Let $f(\cdot\,; \hat{S}^{(1)}): \mathbb{R}^d \to \mathbb{R}^N$ and $f(\cdot\,; \hat{S}^{(2)}): \mathbb{R}^d \to \mathbb{R}^N$ be the scoring functions of the two classifiers constructed from $\hat{S}^{(1)}$ and $\hat{S}^{(2)}$, respectively. To determine which classifier/scoring function is stronger, we compute the few-shot classification accuracies of the two classifiers on the merged query samples from both episodes. Concretely, let $\hat{\mathcal{Q}}_e^{(1)} = \{(\hat{q}_i^{(1)}, y_i^{(1)}) \mid \hat{q}_i^{(1)} \in \mathbb{R}^d, i = 1, 2, \cdots, NQ\}$ (or $\hat{\mathcal{Q}}_e^{(2)}$) denote the set of transformed embedding vectors of query samples in $e^{(1)}$ (or $e^{(2)}$), where $\hat{q}_i^{(1)}$ (or $\hat{q}_i^{(2)}$) denotes the i-th row of the transformed embedding matrix $\hat{Q}^{(1)}$ (or $\hat{Q}^{(2)}$). With $\hat{\mathcal{Q}}_e^{(1,2)} = \hat{\mathcal{Q}}_e^{(1)} \cup \hat{\mathcal{Q}}_e^{(2)} = \{(\hat{q}_i^{(1,2)}, y_i^{(1,2)}), i = 1, 2, \cdots, 2NQ\}$, we are able to compute the few-shot classification accuracies of the two classifiers w.r.t. the ground-truth labels and the corresponding predicted ones, i.e., $\arg\max_j \sigma_j(f(\hat{q}_i^{(1,2)}; \hat{S}^{(1)}))$ and $\arg\max_j \sigma_j(f(\hat{q}_i^{(1,2)}; \hat{S}^{(2)}))$ ($j = 1, 2, \cdots, N$) for $(\hat{q}_i^{(1,2)}, y_i^{(1,2)}) \in \hat{\mathcal{Q}}_e^{(1,2)}$, where $\sigma_j(\mathbf{v}) \triangleq \frac{\exp(v_j)}{\sum_{j'=1}^{N} \exp(v_{j'})}$ ($\mathbf{v} \in \mathbb{R}^N$) is the softmax function.
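As a reference, the following is a minimal PyTorch-style sketch of the CEAM update in Eq. (2)-(6). It is our own single-head reading of the equations, not the authors' released implementation.
```python
import torch
import torch.nn as nn

class CEAM(nn.Module):
    """Cross-episode attention: one episode's features attend to the other
    episode's support features (Eq. (2)-(6)), with a residual update."""
    def __init__(self, d):
        super().__init__()
        self.d = d
        self.W_Q = nn.Linear(d, d, bias=False)  # Eq. (3)
        self.W_K = nn.Linear(d, d, bias=False)  # Eq. (4)
        self.W_V = nn.Linear(d, d, bias=False)  # Eq. (5)

    def forward(self, F_own, S_other):
        # F_own:   [N(K+Q), d] all samples of one episode (attention 'queries')
        # S_other: [NK, d]     support samples of the other ('keys'/'values')
        Q = self.W_Q(F_own)
        K = self.W_K(S_other)
        V = self.W_V(S_other)
        attn = torch.softmax(Q @ K.t() / self.d ** 0.5, dim=-1)
        return F_own + attn @ V  # residual cross-episode update, Eq. (2)
```
The projections are shared between the two calls (Eq. (2) and Eq. (6)) simply by reusing the same module instance for both episodes.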
The classifier with the higher accuracy is thus considered to be the stronger one and subsequently used as the teacher classifier for the student to behave consistently with it. Without loss of generality, we assume that $f(\cdot\,; \hat{S}^{(1)})$ is stronger than $f(\cdot\,; \hat{S}^{(2)})$. We choose the knowledge distillation loss (Hinton et al., 2015) for CECR, which is stated as:
$\mathcal{L}_{cecr}(e^{(1)}, e^{(2)}; T) = \mathbb{E}_{(\hat{q}_i^{(1,2)}, y_i^{(1,2)}) \in \hat{\mathcal{Q}}_e^{(1,2)}}\, \mathcal{L}'(f(\hat{q}_i^{(1,2)}; \hat{S}^{(1)}), f(\hat{q}_i^{(1,2)}; \hat{S}^{(2)}); T)$,  (7)
where $T$ is the temperature parameter as used in (Hinton et al., 2015). More specifically, when the softmax function $\sigma_j(\mathbf{v}; T) \triangleq \frac{\exp(v_j / T)}{\sum_{j'=1}^{N} \exp(v_{j'} / T)}$ ($\mathbf{v} \in \mathbb{R}^N$, $j = 1, 2, \cdots, N$) is used, we define $\mathcal{L}'(f(\hat{q}_i^{(1,2)}; \hat{S}^{(1)}), f(\hat{q}_i^{(1,2)}; \hat{S}^{(2)}); T)$ in Eq. (7) with the cross-entropy loss:
$\mathcal{L}'(f(\hat{q}_i^{(1,2)}; \hat{S}^{(1)}), f(\hat{q}_i^{(1,2)}; \hat{S}^{(2)}); T) = -\sum_{j=1}^{N} \sigma_j(f(\hat{q}_i^{(1,2)}; \hat{S}^{(1)}); T) \log\left(\sigma_j(f(\hat{q}_i^{(1,2)}; \hat{S}^{(2)}); T)\right)$.  (8)
Note that we cut off the gradients over $f(\cdot\,; \hat{S}^{(1)})$ when back-propagating since the output of the teacher scoring function is treated as the soft target for the student.
Algorithm 1 MELR-based FSL
Input: Our MELR model with the set of all parameters $\Theta$; the base class sample set $\mathcal{D}_b$; hyper-parameters $\lambda$, $T$
Output: The learned model
1: for iteration = 1, 2, · · · , MaxIteration do
2: Randomly sample $e^{(1)}$ and $e^{(2)}$ from $\mathcal{D}_b$, satisfying that $\mathcal{C}_e^{(1)} = \mathcal{C}_e^{(2)}$ and $e^{(1)} \cap e^{(2)} = \emptyset$;
3: Compute $\hat{F}^{(1)}$ for $e^{(1)}$ using CEAM with Eq. (2), and obtain $\hat{F}^{(2)}$ with Eq. (6) similarly;
4: Compute $\mathcal{L}_{fsc}(e^{(1)})$ and $\mathcal{L}_{fsc}(e^{(2)})$ with Eq. (9), respectively;
5: Construct $\hat{\mathcal{Q}}_e^{(1,2)} = \hat{\mathcal{Q}}_e^{(1)} \cup \hat{\mathcal{Q}}_e^{(2)}$ based on the two episodes;
6: Determine the teacher episode $e^{(t)}$ and the student $e^{(s)}$ by computing the few-shot classification accuracies of the two classifiers within $e^{(1)}$ and $e^{(2)}$, respectively;
7: Compute the CECR loss $\mathcal{L}_{cecr}(e^{(t)}, e^{(s)}; T)$ with Eq. (7);
8: Compute the total loss $\mathcal{L}_{total}$ with Eq. (10);
9: Compute the gradients $\nabla_\Theta \mathcal{L}_{total}$;
10: Update $\Theta$ using stochastic gradient descent;
11: end for
12: return The learned model." }, { "heading": "3.4 MELR-BASED FSL ALGORITHM", "text": "As we have mentioned above, in each training iteration, we randomly sample two N-way K-shot Q-query episodes $e^{(1)} = (\mathcal{S}_e^{(1)}, \mathcal{Q}_e^{(1)})$ and $e^{(2)} = (\mathcal{S}_e^{(2)}, \mathcal{Q}_e^{(2)})$, which must have exactly the same set of classes but different instances. We first transform the feature embeddings with the Cross-Episode Attention Module (CEAM) and then compute the few-shot classification loss adopting ProtoNet (Snell et al., 2017) for both episodes ($e \in \{e^{(1)}, e^{(2)}\}$):
$\mathcal{L}_{fsc}(e) = \mathbb{E}_{(\hat{q}_i, y_i) \in \hat{\mathcal{Q}}_e}\, \mathcal{L}(y_i, f_{ProtoNet}(\hat{q}_i; \hat{S})) = \mathbb{E}_{(\hat{q}_i, y_i) \in \hat{\mathcal{Q}}_e} \left[-\log \sigma_{y_i}(f_{ProtoNet}(\hat{q}_i; \hat{S}))\right]$.  (9)
Next we determine the stronger/teacher episode to compute the Cross-Episode Consistency Regularization (CECR) loss between the two episodes. The total loss for MELR is finally given by:
$\mathcal{L}_{total} = \frac{1}{2}\left(\mathcal{L}_{fsc}(e^{(1)}) + \mathcal{L}_{fsc}(e^{(2)})\right) + \lambda \mathcal{L}_{cecr}(e^{(t)}, e^{(s)}; T)$,  (10)
where $e^{(t)} \in \{e^{(1)}, e^{(2)}\}$ denotes the teacher episode, and $e^{(s)} \in \{e^{(1)}, e^{(2)}\}$ is the student. By combining CEAM and CECR for episodic training, our MELR-based FSL algorithm is summarized in Algorithm 1 (a hedged code sketch of one training iteration is also given below). Once learned, with the optimal model found by our algorithm, we randomly sample multiple N-way K-shot meta-test episodes from $\mathcal{C}_n$ for evaluation. 
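As a complement to Algorithm 1, here is a hedged PyTorch-style sketch of one MELR training iteration. It assumes the CEAM sketch above and a hypothetical `proto_logits(queries, support)` returning [num_queries, N] ProtoNet scores (negative distances to class prototypes computed from `support`, whose rows are assumed ordered by class, K per class). The distillation term is written with a KL divergence, whose student gradient matches the cross-entropy of Eq. (8); the defaults for `lam` and `T` follow the 1-shot choices reported in Appendix A.5.
```python
import torch
import torch.nn.functional as F

def melr_step(F1, S1, F2, S2, y_q1, y_q2, ceam, proto_logits, lam=0.1, T=128.0):
    # Cross-episode attention (Eq. (2) and Eq. (6)): each episode's features
    # attend to the other episode's support features.
    F1h, F2h = ceam(F1, S2), ceam(F2, S1)
    S1h, Q1h = F1h[: S1.size(0)], F1h[S1.size(0):]  # split F = [S; Q]
    S2h, Q2h = F2h[: S2.size(0)], F2h[S2.size(0):]
    # Few-shot classification losses (Eq. (9)) on each episode's own queries.
    l1 = F.cross_entropy(proto_logits(Q1h, S1h), y_q1)
    l2 = F.cross_entropy(proto_logits(Q2h, S2h), y_q2)
    # CECR (Eq. (7)-(8)): both classifiers score the merged query set.
    Qm, ym = torch.cat([Q1h, Q2h]), torch.cat([y_q1, y_q2])
    z1, z2 = proto_logits(Qm, S1h), proto_logits(Qm, S2h)
    acc1 = (z1.argmax(1) == ym).float().mean()
    acc2 = (z2.argmax(1) == ym).float().mean()
    teacher, student = (z1, z2) if acc1 >= acc2 else (z2, z1)
    kd = F.kl_div(F.log_softmax(student / T, dim=1),
                  F.softmax(teacher.detach() / T, dim=1),  # teacher gradients cut
                  reduction='batchmean')
    return 0.5 * (l1 + l2) + lam * kd  # total loss, Eq. (10)
```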
In other words, the episode-level relationship is only exploited during meta-training." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS AND SETTINGS", "text": "Datasets. Two widely-used benchmarks are selected: (1) miniImageNet (Vinyals et al., 2016): It contains 100 classes from ILSVRC-12 (Russakovsky et al., 2015). Each class has 600 images. We split it into 64 training classes, 16 validation classes and 20 test classes, as in (Ravi & Larochelle, 2017). (2) tieredImageNet (Ren et al., 2018): It is a larger subset of ILSVRC-12, containing 608 classes and 779,165 images in total. We split it into 351 training classes, 97 validation classes and 160 test classes, as in (Ren et al., 2018). All images of the two datasets are resized to 84 × 84.
Evaluation Protocols. The 5-way 5-shot/1-shot settings are used. Each test episode $e^{(test)} = (\mathcal{S}_e^{(test)}, \mathcal{Q}_e^{(test)})$ has 5 classes randomly sampled from the test split, with 5 or 1 shots and 15 queries per class. We thus have N = 5, K = 5 or 1, and Q = 15 as in previous works. Although we meta-train our MELR with two episodes in each training iteration, we still evaluate it over test episodes one by one, strictly following the standard setting. Moreover, since no cross-episode relationships can be used when testing, we take $(F^{(test)}, S^{(test)}, S^{(test)})$ as the input of CEAM. Note that the meta-test process is non-transductive since the embedding of each query sample in $e^{(test)}$ is independently updated using the keys and values coming from the support set. We report the average 5-way classification accuracy (%, top-1) over 2,000 test episodes as well as the 95% confidence interval.
Implementation Details. Our MELR algorithm adopts Conv4-64 (Vinyals et al., 2016), Conv4-512 and ResNet-12 (He et al., 2016b) as the feature extractors $\psi$ for fair comparison with published results. The output feature dimensions of Conv4-64, Conv4-512 and ResNet-12 are 64, 512, and 640, respectively. To accelerate the entire training process, we pre-train all three backbones on the training split of each dataset as in many previous works (Zhang et al., 2020; Ye et al., 2020; Simon et al., 2020). We use data augmentation during pre-training (as well as meta-training with ResNet-12 on miniImageNet). For ResNet-12, the stochastic gradient descent (SGD) optimizer is employed with an initial learning rate of 1e-4, a weight decay of 5e-4, and a Nesterov momentum of 0.9. For Conv4-64 and Conv4-512, the Adam optimizer (Kingma & Ba, 2015) is adopted with an initial learning rate of 1e-4. The hyper-parameters $\lambda$ and $T$ are respectively selected from {0.02, 0.05, 0.1, 0.2} and {16, 32, 64, 128} according to the validation performances of our MELR algorithm (see Appendix A.5 for more details). The code and models will be released soon." }, { "heading": "4.2 MAIN RESULTS", "text": "We compare our MELR with the representative/state-of-the-art methods for standard FSL on the two benchmark datasets in Table 1. Note that we re-implement our baseline (i.e., ProtoNet, denoted with †) by sampling two episodes in each training iteration since it is still considered as a strong FSL approach, especially when the backbone is deep. We can observe from Table 1 that: (1) Methods trained with ResNet-12 generally perform better than those employing shallower backbones. Also, methods trained with Conv4-512 generally perform better than those employing Conv4-64 even though the two backbones are of the same depth. 
This is expected because deeper and wider backbones have better representation learning abilities. (2) Our MELR achieves the new state-of-the-art performance on both benchmarks under all settings. Particularly, the improvements over the baseline (i.e., ProtoNet†) range from 1.0% to 5.0%, which clearly validates the effectiveness and the strong generalization ability of our MELR. (3) On both benchmarks, the improvements obtained by our MELR over ProtoNet† under the 1-shot setting are significantly larger than those under the 5-shot setting. This demonstrates the superior performance of our MELR for FSL with fewer shots. Again this is expected: FSL with fewer shots is more likely to suffer from the poor sampling of the few support instances; such a challenging problem is exactly what our MELR is designed for." }, { "heading": "4.3 FURTHER EVALUATION", "text": "Ablation Study. To demonstrate the contributions of each cross-episode learning objective in our MELR, we conduct experiments on miniImageNet by adding these learning objectives to the baseline (one at a time) under the 5-way 1-shot and 5-shot settings. Note that ProtoNet without † is trained with one episode in each training iteration while ProtoNet† is trained with two episodes per iteration. The ablation study results in Figure 2(a) show that: (1) Increasing the mini-batch size helps little for ProtoNet, indicating that our MELR benefits from the two cross-episode objectives rather than from doubling the mini-batch size. (2) CEAM or CECR alone clearly improves the performance of the baseline model, and CEAM appears to be more beneficial to FSL than CECR. (3) The combination of the two cross-episode learning objectives in our full model (i.e., MELR) achieves further improvements, suggesting that these two learning objectives are complementary to each other. Moreover, in Appendix A.2, we conduct more ablative experiments when the attention module is applied within each episode, validating the necessity of cross-episode attention.
Comparison to CEAM Alternatives. As we have described, our CEAM takes the support samples as ‘keys’ and ‘values’, and all samples in one episode as ‘queries’ for the attention module (denoted as Support→All). We can also input prototypes (mean representations of support samples from the same class) as ‘keys’ and ‘values’ or input only query samples as ‘queries’ for attention. This results in three other alternatives of CEAM: Prototype→Query, Support→Query, and Prototype→All. Note that under the 5-way 1-shot setting, Prototype→Query is equal to Support→Query and Prototype→All is the same as Support→All. Additionally, we compare to All→All: inputting all samples from the other episode as ‘keys’ and ‘values’ for CEAM when training but still testing as Support→All (not violating the non-transductive setting). From the comparative results of the different choices in Figure 2(b), we can see that Support→All is the best for CEAM. Moreover, All→All works worse than both Prototype→All and Support→All. One possible explanation is that All→All exploits all query set instances during meta-training but only has access to the support set during meta-test to conform to the inductive learning setting. This mismatch reduces its effectiveness.
Comparison to CECR Alternatives. Our consistency regularization loss $\mathcal{L}_{cecr}$ in Eq. 
(7) is defined with the knowledge distillation (KD) loss, which can be easily replaced by the negative cosine similarity (NegCos), symmetric Kullback–Leibler divergence (symKL), or the L2 distance (see Appendix A.1 for more details). The results obtained by ProtoNet†+CECR using different consistency losses are shown in Figure 2(c). It can be seen that ProtoNet†+KD performs slightly better than the other implementations. We thus choose KD as our CECR loss.
Visualizations of Data Distributions and Attention Maps. MELR is designed to alleviate the negative effects of poorly-sampled few shots. To validate this, we further provide some visualization results in Figure 3. (1) We sample one episode in the test split of miniImageNet under the 5-way 5-shot setting and obtain the embeddings of all images using the trained models of ProtoNet†, ProtoNet†+CEAM, and our MELR (i.e., ProtoNet†+CEAM+CECR), respectively. We then apply t-SNE to project these embeddings into a 2-dimensional space in Figure 3(a) – 3(c). Across the three subfigures, samples with the same color belong to the same class and diamonds are class prototypes/centers. We can observe that adding CEAM makes the distribution of different classes more separable (see Figure 3(b) vs. 3(a)), validating the effectiveness of our CEAM. Moreover, the embeddings obtained by our MELR are clearly much more evenly distributed with the prototypes right in the center, indicating fewer outlying instances (see Figure 3(c) vs. 3(b)). This shows that when CECR is combined with CEAM, those badly-sampled shots in Figure 3(a) are now pulled back to the center of the class distributions. (2) We also visualize the attention maps over the meta-test episode using our trained MELR. Since we take all samples in the episode as ‘queries’, and support samples as ‘keys’ and ‘values’ for CEAM when meta-testing, each of the 100 samples has a 25-dimensional weight vector under the 5-way 5-shot setting. For each weight vector, we average the weights of the same class and obtain a 5-dimensional vector. For the 75 query samples, we average the vectors of samples with the same class, resulting in a 5 × 5 instance attention map (see Figure 3(d)). Similarly, we obtain the attention map for the 25 support samples (see Figure 3(e)). It can be seen that the two attention maps are very much alike, indicating that support and query sample embeddings are transformed by our CEAM in a similar way, which thus brings performance improvements for FSL." }, { "heading": "5 CONCLUSION", "text": "We have investigated the challenging problem of how to counter the negative effects of badly-sampled few shots for FSL. For the first time, we propose to exploit the underlying relationships between training episodes with identical sets of classes explicitly for meta-learning. This is achieved by two key components: CEAM is designed to neutralize unrepresentative support set instances, and CECR enforces the prediction consistency of the few-shot classifiers obtained in the two episodes. Extensive experiments for non-transductive standard FSL on two benchmarks show that our MELR achieves 1.0%–5.0% improvements over the baseline (i.e., ProtoNet) used for FSL in our model and outperforms the latest competitors under the same settings." 
}, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported in part by National Natural Science Foundation of China (61976220 and 61832017), Beijing Outstanding Young Scientist Program (BJJWZYJH012019100020098), Open Project Program Foundation of Key Laboratory of Opto-Electronics Information Processing, Chinese Academy of Sciences (OEIP-O-202006), and Alibaba Innovative Research (AIR) Program." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DETAILS ABOUT CECR ALTERNATIVES", "text": "For the cross-episode consistency regularization (CECR) loss Lcecr in Eq. (7), we compare the knowledge distillation (KD) loss to the negative cosine similarity (NegCos), the L2 distance, and the symmetric Kullback-Leibler (KL) divergence (SymKL) in the main paper. Here we give the details about three CECR alternatives. Concretely, let σ(1)(q̂(1,2)i ) (or σ (2)(q̂ (1,2) i )) denote the normalized vector of f(q̂(1,2)i ; Ŝ (1)) (or f(q̂(1,2)i ; Ŝ (2))) using softmax, we then have\nL(NegCos)cecr =E(q̂(1,2)i ,y(1,2)i )∈Q̂(1,2)e NegCos(f(q̂ (1,2) i ; Ŝ (1)), f(q̂ (1,2) i ; Ŝ (2)))\n=E (q̂\n(1,2) i ,y (1,2) i )∈Q̂ (1,2) e − < σ\n(1)(q̂ (1,2) i ), σ (2)(q̂ (1,2) i ) >\n‖σ(1)(q̂(1,2)i )‖2 · ‖σ(2)(q̂ (1,2) i )‖2\n, (11)\nL(L2)cecr =E(q̂(1,2)i ,y(1,2)i )∈Q̂(1,2)e L2(f(q̂ (1,2) i ; Ŝ (1)), f(q̂ (1,2) i ; Ŝ (2)))\n=E (q̂\n(1,2) i ,y (1,2) i )∈Q̂ (1,2) e ‖σ(1)(q̂(1,2)i )− σ (2)(q̂ (1,2) i )‖2, (12)\nL(symKL)cecr =E(q̂(1,2)i ,y(1,2)i )∈Q̂(1,2)e symKL(f(q̂ (1,2) i ; Ŝ (1)), f(q̂ (1,2) i ; Ŝ (2));T ),\n=KL(f(q̂(1,2)i ; Ŝ (1)), f(q̂ (1,2) i ; Ŝ (2))/T )\n+ KL(f(q̂(1,2)i ; Ŝ (2)), f(q̂ (1,2) i ; Ŝ (1))/T ), (13) where < ·, · > is the inner product of two vectors, T is the temperature parameter, and KL(u,v) =∑N j=1 σj(u) log σj(u) σj(v)\n(u, v ∈ RN are two unnormalized scoring vectors, σ denotes the softmax function, and σj(u) denotes the j-th element of σ(u))." }, { "heading": "A.2 MORE ABLATIVE RESULTS", "text": "In Table 2, we show more ablative results when the attention module is applied within each episode independently (named as Intra-Episode Attention Module (IEAM)). Concretely, IEAM is used for ProtoNet†+IEAM († means that ProtoNet is trained with two episodes in each training iteration for fair comparison) as follows (i = 1, 2):\nF̂(i) = IEAM(F(i),S(i),S(i)) = F(i) + softmax( F\n(i) Q S (i)T K√ d )S (i) V . (14)\nWe also add our Cross-Episode Consistency Regularization (CECR) for ProtoNet†+IEAM+CECR to see the performance when IEAM instead of our CEAM is adopted.\nWe can see from Table 2 that adding IEAM to the baseline ProtoNet† also improves its performance, but IEAM is not as beneficial as our CEAM (see ProtoNet†+IEAM vs. ProtoNet†+CEAM). When our CECR is applied on top of ProtoNet†+IEAM (i.e., ProtoNet†+IEAM+CECR), the improvement is rather minor under the 5-way 1-shot setting and the result even gets worse under the 5-shot setting. However, our MELR can still benefit from CECR (see MELR vs. ProtoNet†+CEAM), indicating that CECR is not suitable for IEAM and our CEAM is necessary." }, { "heading": "A.3 MORE VISUALIZATION RESULTS", "text": "Similar to Section 4.3, we provide more visualization results in Figure 4. (1) We sample five episodes (corresponding to five rows in Figure 4) in the test split of miniImageNet under the 5-way 5-shot setting and visualize the data distributions in the first three columns using the trained models of ProtoNet†, ProtoNet†+CEAM, and our MELR (i.e., ProtoNet†+CEAM+CECR), respectively. 
We can observe that adding CEAM makes the distributions of different classes more separable in the first four rows (see the second column vs. the first column), validating the effectiveness of our CEAM. Moreover, the embeddings obtained by our MELR are clearly much more evenly distributed with the prototypes generally right in the center, indicating fewer poorly-sampled instances (see the third column vs. the second column). Specifically, ProtoNet†+CEAM brings an obvious outlying instance in the last row, but adding CECR stabilizes the training of CEAM. In a word, when CECR is combined with CEAM, the badly-sampled shots can be pulled back to the center of the class distributions. (2) We also visualize the attention maps over each meta-test episode using our trained MELR model. For each of the five episodes, we obtain two 5 × 5 attention maps for the query and support sets in the last two subfigures of each row, respectively. It can be seen that the two attention maps in each row are very much alike, indicating that support and query sample embeddings are transformed by our CEAM in a similar way, which thus brings performance improvements for FSL." }, { "heading": "A.4 RESULTS FOR FINE-GRAINED FSL", "text": "To evaluate our MELR under the fine-grained setting, where the poorly-sampled shots may have a greater negative impact since the classes are much closer to each other, we conduct experiments on CUB-200-2011 Birds (CUB) (Wah et al., 2011) with Conv4-64 as the feature extractor. CUB has 200 fine-grained classes of birds and 11,788 images in total. We follow (Ye et al., 2020) and split the dataset into 100, 50, and 50 classes for training, validation, and test, respectively. For direct comparison, we also use the pre-trained backbone model released by (Ye et al., 2020), which is pre-trained on the training set. The comparative results in Table 3 show that: (1) Our MELR achieves the best results and improves over the second-best FEAT by 1.4% – 2.1%, validating the effectiveness of MELR under the fine-grained setting. (2) Our ProtoNet†+CEAM alone outperforms all the competitors, and adding CECR into ProtoNet†+CEAM (i.e., our MELR) further brings noticeable improvements (0.5% – 1.3%), indicating that both CEAM and CECR are crucial for fine-grained FSL." }, { "heading": "A.5 ANALYSIS OF HYPER-PARAMETER SENSITIVITY", "text": "As we have mentioned in Section 4.1, the hyper-parameters $\lambda$ and $T$ are respectively selected from {0.02, 0.05, 0.1, 0.2} and {16, 32, 64, 128} according to the validation performance of our MELR algorithm. Concretely, on miniImageNet (with Conv4-64 as the feature extractor), we choose $\lambda = 0.1$ and $T = 128$ under the 5-way 1-shot setting, and choose $\lambda = 0.05$ and $T = 64$ under the 5-way 5-shot setting. In Figure 5, we further present our hyper-parameter analysis on miniImageNet. The results show that our algorithm is quite insensitive to these parameters." }, { "heading": "A.6 RESULTS BY VARYING THE NUMBER OF EPISODES", "text": "We also conduct experiments by varying the number of episodes $N_e$ in each training iteration. For the implementation of CEAM, we have two slightly different choices. Concretely, for each episode $e^{(i)}$ ($i = 1, \cdots, N_e$), the output of CEAM can be defined as:
Implementation (1): $\hat{F}^{(i)} = \frac{1}{N_e - 1} \sum_{j=1,\cdots,N_e,\, j \neq i} \mathrm{CEAM}(F^{(i)}, S^{(j)}, S^{(j)})$;  (15)
Implementation (2): $\hat{F}^{(i)} = \mathrm{CEAM}(F^{(i)}, [S^{(j)}]_{j=1, j \neq i}^{N_e}, [S^{(j)}]_{j=1, j \neq i}^{N_e})$,  (16)
where $[S^{(j)}]_{j=1, j \neq i}^{N_e} \in \mathbb{R}^{NK(N_e-1) \times d}$ is the concatenation of $S^{(j)} \in \mathbb{R}^{NK \times d}$ ($j = 1, \cdots, i-1, i+1, \cdots, N_e$); a short code sketch of both implementations is given below. 
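Under the assumption that `ceam(F_own, S_other)` is the pairwise module sketched in Section 3.3, the two implementations in Eq. (15) and Eq. (16) could look as follows.
```python
import torch

def ceam_multi(feats, supports, i, ceam, variant=1):
    """feats[j]: [N(K+Q), d] features of episode j; supports[j]: [NK, d]."""
    others = [s for j, s in enumerate(supports) if j != i]
    if variant == 1:
        # Eq. (15): average the pairwise CEAM outputs over the other episodes
        return torch.stack([ceam(feats[i], s) for s in others]).mean(dim=0)
    # Eq. (16): a single CEAM over the concatenation of all other supports
    return ceam(feats[i], torch.cat(others, dim=0))
```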
As for CECR, we determine the episode with the best accuracy as the teacher and distill knowledge to the remaining $N_e - 1$ student episodes. The results on miniImageNet using Conv4-64 in Table 4 show that the performance drops slightly as the number of episodes in each training iteration increases for both implementations. One possible explanation is that too much training data makes the model fit better on the training set but fails to improve its generalization ability on novel classes." }, { "heading": "A.7 COMPARISON REGARDING THE NUMBER OF PARAMETERS", "text": "We select several representative/latest FSL models from Table 1 of the main paper and list their numbers of parameters in Table 5. We can observe that: (1) With an extra CEAM in addition to the backbone (Conv4-64 or ResNet-12), our MELR has about 13% – 15% relatively more parameters than the baseline ProtoNet. Note that CECR (in our MELR) leads to no extra parameters. Considering the (statistically) significant improvements achieved by our MELR over ProtoNet, we think that our MELR is cost-effective because it requires few additional parameters. (2) The number of MELR’s parameters is almost the same as that of FEAT’s and is much smaller than that of PARN’s, but our MELR achieves better results than FEAT and PARN, indicating that our MELR is the most cost-effective among these three methods." }, { "heading": "A.8 SCHEMATIC ILLUSTRATION OF CEAM", "text": "To demonstrate how our proposed CEAM can alleviate the negative effect of the poorly-sampled shots, we present a schematic illustration of CEAM with a pair of episodes as its input in Figure 6. For easy understanding (but without loss of generality), a toy visual example is considered: only one outlying instance $x$ exists in the support set $\mathcal{S}^{(1)}$ of the first episode, but the support set $\mathcal{S}^{(2)}$ of the second episode is properly sampled. Since the two episodes are sampled from the same set of classes, the data distributions of $\tilde{\mathcal{S}}^{(1)} = \mathcal{S}^{(1)} \setminus \{x\}$ and $\mathcal{S}^{(2)}$ are similar. On one hand, when $\mathcal{S}^{(1)}$ (including the outlier $x$) is used as keys and values to update $\mathcal{S}^{(2)}$, the distribution of $\mathcal{S}^{(2)}$ will not be influenced too much by the outlier $x$ since all the shots in $\mathcal{S}^{(2)}$ are far away from $x$ and the weights on $x$ will be very small. That is, our CEAM is insensitive to a few outliers in the keys and values. On the other hand, when $\mathcal{S}^{(1)}$ is transformed based on $\mathcal{S}^{(2)}$, the distribution of $\tilde{\mathcal{S}}^{(1)}$ will be changed little (the data distributions of $\tilde{\mathcal{S}}^{(1)}$ and $\mathcal{S}^{(2)}$ are similar). Particularly, for the outlier $x$, its updated embedding $\hat{\mathbf{x}}$ will be pulled back to $\tilde{\mathcal{S}}^{(1)}$ (i.e., its negative effect is mitigated) since $\hat{\mathbf{x}} = \mathrm{CEAM}(\mathbf{x}, \mathcal{S}^{(2)}, \mathcal{S}^{(2)}) \approx \mathbf{x} + W S^{(2)} \approx \mathbf{x} + \tilde{W} \tilde{S}^{(1)}$, where $\mathbf{x}$ is the original embedding of $x$, $S^{(2)}$ and $\tilde{S}^{(1)}$ are respectively the feature matrices of $\mathcal{S}^{(2)}$ and $\tilde{\mathcal{S}}^{(1)}$, and $W$ and $\tilde{W}$ are two normalized weight matrices. Note that this cannot be done by attending on $x$ with $\mathcal{S}^{(1)}$.
Additionally, we provide the results obtained by an extra CEAM alternative in Table 6: only the embeddings of support samples are transformed (denoted as ‘Support → Support’) instead of transforming all samples as in our choice. We can see from Table 6 and also Figure 2(b) that our choice (i.e., Support → All) achieves the best results among all CEAM alternatives. One possible explanation for why we resort to updating all samples is that transforming support and query samples into the same embedding space is beneficial to the model learning (a toy numerical illustration of the pull-back effect is also given below)." 
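To complement the schematic argument in A.8, the following toy example (ours, with identity projections for simplicity) shows numerically how attending to a well-sampled support cluster pulls an outlier embedding back toward the class mean.
```python
import torch

torch.manual_seed(0)
d = 8
cluster = torch.randn(5, d) * 0.1 + 1.0  # well-sampled shots of one class
outlier = -torch.ones(1, d)              # a badly-sampled shot, far away
attn = torch.softmax(outlier @ cluster.t() / d ** 0.5, dim=-1)
updated = outlier + attn @ cluster       # residual attention update (cf. Eq. (2))
print(torch.dist(outlier, cluster.mean(0)),   # distance before the update
      torch.dist(updated, cluster.mean(0)))   # smaller: outlier pulled back
```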
}, { "heading": "A.9 SIGNIFICANCE ANALYSIS OF CECR", "text": "Since CECR (in our MELR) requires no extra learnable parameters and only computes a loss for consistency constraint, it brings very limited computational cost. Empirically, the meta-training time of MELR is almost the same as that of ProtoNet†+CEAM, indicating that the performance improvement over ProtoNet†+CEAM is obtained by CECR at an extremely low cost.\nTo study when CECR has a significant impact on the final FSL performance, we select 10 meta-test episodes from the 2,000 ones used in the evaluation stage and visualize them in Figure 7. Concretely, for each of the 10 selected meta-test episodes, we visualize three data distributions (from left to right) obtained by ProtoNet†, ProtoNet†+CEAM, and our MELR, respectively. That is, each meta-test episode is denoted by a group of three subfigures. In each subfigure, we compute the test accuracy over query samples and present it as the title. We can see that: (1) When the few-shot classification task is hard (i.e., ProtoNet† obtains relatively low accuracy), CEAM leads to significant improvements (about 3% – 9%). (2) In the same hard situation, CECR further achieves significant improvements (about 5% – 8%) on top of CEAM and shows its great effect on the final FSL performance. This indicates that CECR and CEAM are complementary to each other in hard situations, and thus both are crucial for solving the poor sampling problem in meta-learning based FSL." }, { "heading": "A.10 RESULTS OF TRANSDUCTIVE FSL", "text": "The main difference between standard and transductive FSL is whether query samples are tested one at a time or all simultaneously. As we have mentioned in Section 4.1, we evaluate our MELR model strictly following the non-transductive setting for standard FSL since the embedding of each query sample in the test episode is independently transformed using the keys and values coming from the\nsupport set. However, in this section, we further conduct experiments under the transductive FSL setting to study how well our MELR can make use of the unlabeled query samples.\nConcretely, for each meta-test episode e(test), we input all samples (both support and query ones) as keys and values into the trained CEAM:\nF̂(test) = CEAM(F(test),F(test),F(test)), (17)\nsuch that the relationships of all unlabeled query samples can be taken into consideration. With the transformed embeddings, we make predictions for all query samples based on Semi-ProtoNet (Ren et al., 2018), which utilizes the unlabeled query samples to help construct better class prototypes and then makes predictions similar to ProtoNet. To match the meta-test process, we also make changes to meta-training accordingly. Specifically, for one episode (out of the two) in each training iteration, we use all samples from the other episode as keys and values for CEAM to update all of its own embeddings. This is followed by obtaining the prototypes based on Semi-ProtoNet as well as computing the FSL loss and CECR loss.\nThe results of transductive FSL on miniImageNet are shown in Table 7. It can be seen that: (1) By utilizing the unlabeled query samples under transductive FSL, our MELR achieves further improvements, as compared to MELR under standard FSL. Particularly, the performance improvement under 1-shot (i.e., 6.3%) is more significant than that under 5-shot (i.e., 2.6%), indicating that exploiting unlabeled query samples brings more benefits to FSL with less labeled support samples. 
(2) Our MELR achieves the best results among all the transductive FSL methods. Specifically, MELR outperforms FEAT by a large margin (2.0% – 4.6%). Since FEAT also makes predictions based on Semi-ProtoNet, this clearly validates the effectiveness of our MELR under the transductive setting. (3) Our ProtoNet†+CEAM alone outperforms all the competitors, and adding CECR into ProtoNet†+CEAM (i.e., our MELR is obtained) further brings noticeable improvements (0.6% – 1.4%), indicating that both CEAM and CECR play important roles under transductive FSL.\nA.11 VISUALIZATIONS OF THE GENERALIZATION ABILITY OF MELR\nWe further provide the visualization of the generalization ability of our MELR during meta-test in Figure 8. Concretely, we randomly sample 1,000 episode pairs from the test split of miniImageNet under the 5-way 1-shot setting, where the two episodes in each pair have identical sets of classes. We then compute the average 5-way classification accuracy over all 2,000 episodes (from the 1,000 episode pairs) and the average Lcecr in Eq. (7) over all 1,000 episode pairs at each training epoch. We present the visualization results w.r.t. accuracy and CECR loss in Figure 8. As expected, the accuracy of our MELR is consistently higher than that of our baseline ProtoNet†. Moreover, as compared with ProtoNet†, the CECR loss of our MELR is also lower across the whole training process, indicating that MELR has better performance consistency between two episodes. This provides direct evidence that our CEAM and CECR can boost the generalization ability of the learned model on novel classes." } ]
2021
null
SP:5ee24df635a6659978378a8ff6e0cc41e51b6010
[ "The paper proposes a tensor network for text classification. There are two components: (i) word-GTNs convert word embeddings to m-d probability encoding vectors, and (ii) a sentence-DND takes the word probability encoding vectors as input, combining them using matrix product state (MPS). Experiments on several text classification datasets e.g. SST, CR, MPQA show that the proposed method outperforms existing ones when using word2vec and BERT word embeddings. " ]
As a novel model that bridges machine learning and quantum theory, the tensor network (TN) has recently gained increasing attention and been successfully applied to processing natural images. However, for natural languages, it is unclear how to design a probabilistic encoding architecture to efficiently and accurately learn and classify texts based on TN. This paper proposes a general two-step scheme of text classification based on Tensor Network, which is named TextTN. TextTN first encodes the word vectors in a probabilistic space by a generative TN (word-GTN), and then classifies a text sentence using a discriminative TN (sentence-DTN). Moreover, in sentence-DTN, its hyper-parameter (i.e., the bond-dimension) can be analyzed and selected based on the theoretical property of TextTN's expressive power. In experiments, our TextTN also obtains the state-of-the-art result on the SST-5 sentiment classification task.
[]
[ { "authors": [ "Alexei Baevski", "Sergey Edunov", "Yinhan Liu", "Luke Zettlemoyer", "Michael Auli" ], "title": "Cloze-driven pretraining of self-attention networks", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Quoc V Le", "Christopher D Manning" ], "title": "Electra: Pre-training text encoders as discriminators rather than generators", "venue": "arXiv preprint arXiv:2003.10555,", "year": 2020 }, { "authors": [ "Biyun Dai", "Jinlong Li", "Ruoyi Xu" ], "title": "Multiple positional self-attention network for text classification", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Ivan Glasser", "Ryan Sweke", "Nicola Pancotti", "Jens Eisert", "Ignacio Cirac" ], "title": "Expressive power of tensor-network factorizations for probabilistic modeling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Qipeng Guo", "Xipeng Qiu", "Pengfei Liu", "Yunfan Shao", "Xiangyang Xue", "Zheng Zhang" ], "title": "Startransformer. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", "venue": null, "year": 2019 }, { "authors": [ "Zhao-Yu Han", "Jun Wang", "Heng Fan", "Lei Wang", "Pan Zhang" ], "title": "Unsupervised generative modeling using matrix product states", "venue": "Physical Review X,", "year": 2018 }, { "authors": [ "Minqing Hu", "Bing Liu" ], "title": "Mining and summarizing customer reviews", "venue": "Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2004 }, { "authors": [ "William Huggins", "Piyush Patil", "Bradley Mitchell", "K Birgitta Whaley", "E Miles Stoudenmire" ], "title": "Towards quantum machine learning with tensor networks", "venue": "Quantum Science and technology,", "year": 2019 }, { "authors": [ "Valentin Khrulkov", "Alexander Novikov", "Ivan V. Oseledets" ], "title": "Expressive power of recurrent neural networks", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yoon Kim" ], "title": "Convolutional neural networks for sentence classification", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Yoav Levine", "David Yakira", "Nadav Cohen", "Amnon Shashua" ], "title": "Deep learning and quantum entanglement: Fundamental connections with implications to network design", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yoav Levine", "Or Sharir", "Nadav Cohen", "Amnon Shashua" ], "title": "Quantum entanglement in deep learning architectures", "venue": "Physical review letters,", "year": 2019 }, { "authors": [ "Ding Liu", "Shi-Ju Ran", "Peter Wittek", "Cheng Peng", "Raul Blázquez García", "Gang Su", "Maciej Lewenstein" ], "title": "Machine learning by unitary tensor network of hierarchical tree structure", "venue": "New Journal of Physics,", "year": 2019 }, { "authors": [ "Xuanqing Liu", "Hsiang-Fu Yu", "Inderjit Dhillon", "Cho-Jui Hsieh" ], "title": "Learning to encode position for transformer with continuous dynamical model", "venue": "arXiv preprint arXiv:2003.09229,", "year": 2020 }, { "authors": [ "Jacob Miller", "Guillaume Rabusseau", "John Terilla" ], "title": "Tensor networks for language modeling", "venue": "arXiv preprint arXiv:2003.01039,", "year": 2020 }, { "authors": [ "Ivan Oseledets" ], "title": "Tensor-train decomposition", "venue": "SIAM Journal on Scientific Computing,", "year": 2011 }, { "authors": [ "Bo Pang", "Lillian Lee" ], "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics,", "year": 2004 }, { "authors": [ "Bo Pang", "Lillian Lee" ], "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics,", "year": 2004 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2018 }, { "authors": [ "U Schollwock" ], "title": "The density-matrix renormalization group in the age of matrix product states", "venue": "Annals of Physics,", "year": 2011 }, { "authors": [ "Tao Shen", "Tianyi Zhou", "Guodong Long", "Jing Jiang", "Shirui Pan", "Chengqi Zhang" ], "title": "Disan: Directional self-attention network for rnn/cnn-free language understanding", "venue": "arXiv preprint arXiv:1709.04696,", "year": 2017 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D. Manning", "Andrew Y. 
Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing,", "year": 2013 }, { "authors": [ "E Miles Stoudenmire", "David J Schwab" ], "title": "Supervised learning with quantum-inspired tensor networks", "venue": "arXiv: Machine Learning,", "year": 2016 }, { "authors": [ "Zheng-Zhi Sun", "Cheng Peng", "Ding Liu", "Shi-Ju Ran", "Gang Su" ], "title": "Generative tensor network classification model for supervised machine learning", "venue": "Physical Review B,", "year": 2020 }, { "authors": [ "Deborah K Watson", "Martin Dunn" ], "title": "Rearranging the exponential wall for large n-body systems", "venue": "Physical review letters,", "year": 2010 }, { "authors": [ "Janyce Wiebe", "Theresa Wilson", "Claire Cardie" ], "title": "Annotating expressions of opinions and emotions", "venue": "in language. language resources and evaluation,", "year": 2005 }, { "authors": [ "Felix Wu", "Amauri Souza", "Tianyi Zhang", "Christopher Fifty", "Tao Yu", "Kilian Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Min Yang", "Wei Zhao", "Jianbo Ye", "Zeyang Lei", "Zhou Zhao", "Soufei Zhang" ], "title": "Investigating capsule networks with dynamic routing for text classification", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Lipeng Zhang", "Peng Zhang", "Xindian Ma", "Shuqin Gu", "Zhan Su", "Dawei Song" ], "title": "A generalized language model in tensor space", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Peng Zhang", "Zhan Su", "Lipeng Zhang", "Benyou Wang", "Dawei Song" ], "title": "A quantum many-body wave function inspired language modeling approach", "venue": "In Proceedings of the 27th ACM International Conference on Information and Knowledge Management,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning incorporating with the quantum mechanics forms a novel interdisciplinary field known as quantum machine learning (Huggins et al., 2019; Ran et al., 2020). Tensor network (TN) as a novel model has become prominent in the field of quantum machine learning (Biamonte et al., 2017). On the one hand, tensor network can be used as mathematical tool to enhance the theoretical understanding of existing neural network methods (Levine et al., 2018; 2019). On the other hand, based on tensor network, new machine learning algorithms have been proposed, e.g., discriminative TN (DTN) (Stoudenmire & Schwab, 2016) for supervised tasks and generative TN (GTN) (Han et al., 2018) for unsupervised scenarios (Han et al., 2018). Based on the natural analogy between the quantum concepts (e.g., quantum many-body system (Levine et al., 2018)) and the image representation, many studies and applications are conducted for processing and learning natural pictures (Stoudenmire & Schwab, 2016; Sun et al., 2020; Liu et al., 2019). However, for natural languages, it remains unclear how to design an efficient and effective TN approach, which can accurately learn and classify texts.\nIn the field of natural language processing (NLP), researchers have realized the analogy between the quantum many-body wave function and the word interactions (by the tensor product) in a text sentence, and developed a quantum-inspired language representation (Zhang et al., 2018). Based on the quantum many-body physics and tensor decomposition techniques, Zhang et al. (2018) provided a mathematical understanding of existing convolution neural network (CNN) based text classification methods. Similarly, a tensor space language model (TSLM) has been built based on the tensor network formulation (Zhang et al., 2019). This work shows that TSLM is a more generalized language model compared with n−gram and recurrent neural network (RNN) based language models. In implementation, however, TSLM did not provide a tensor network algorithm. The challenge lies in the high dimensionality of each word vector, which is much higher than the dimensionality of each pixel representation in image scenarios. After the tensor product of a number of word vectors, the resulting high-order tensors will become computationally intractable.\nMore recently, a tensor network algorithm, namely uniform matrix product state (u-MPS) model, has been proposed for probabilistic modeling of a text sequence (Miller et al., 2020). u-MPS is evaluated on a context-free language task, which uses an synthetic data set. However, u-MPS has not been applied in a real-world NLP task, e.g., typical language modeling or text classification task. In addition, the expressive power of u-MPS has not been investigated. The expressive power of tensor\nnetwork is a fundamental property of various TNs and has been systematically studied for tensor network factorizations of multivariate probability distributions (Glasser et al., 2019). This motivates us to make use of the theoretical property of TN’s expressive power, for developing a tensor network based probabilistic model for natural language representation and classification 1.\nTo build such a text tensor network, we need to address two research problems in this paper. First, how to design a probabilistic encoding architecture to efficiently and effectively learn and classify the text. 
Second, how to analyze its expressive power, and how to make use of such analyses for a more theoretical understanding and the practical effectiveness of the text tensor network.

In this paper, we propose a novel tensor network architecture, named TextTN. TextTN encodes each word vector with word-GTNs and classifies the sentence with a sentence-DTN. First, the proposed word-GTNs train a TN for each word and treat each element of a word vector as a node. In this manner, the word-GTNs first map a high-dimensional word vector to a low-dimensional linear space by tensor network operators. Then, the second-layer tensor network, called the sentence-DTN, trains a TN for each sentence, taking the low-dimensional word vectors obtained by the word-GTNs as its input.

In TextTN, a sentence is represented by the tensor product of word vectors. Therefore, the interaction information among different word vectors and among different dimensions is jointly modeled in TextTN. Such interactions are encoded in high-order weight tensors, which represent a high-order semantic space. In both the word-GTNs and the sentence-DTN, the high-order tensor can be solved by a tensor network, i.e., a matrix product state model, which uses the idea of low-rank approximation to conquer the exponential wall problem (Watson & Dunn, 2010).

In the sentence-DTN, the bond-dimension is an important hyper-parameter and reflects the expressive power of TextTN. In this paper, we analyze the upper and lower bounds of the bond-dimension. In particular, its lower bound can be determined by the entanglement entropy, which can be considered a measurement of the communication information encoded in the tensor network. A reference bond-dimension can be set to this lower bound, as we assume that a larger value implies information redundancy and a smaller value indicates an insufficiency of the TN's expressive power. In the experiments, such a reference bond-dimension achieves effective classification results, which indicates TextTN's practical advantage in its potential to save hyper-parameter tuning efforts.

Moreover, the word interaction is taken into account in the sentence-DTN through the joint effect of different words on the later class prediction via the loss function. For the learning algorithm, we observe that different word positions have different weights in a sentence, so that the one-function (for a specific position) training in the original DTN is inappropriate. Therefore, we propose an all-function training process for the sentence-DTN to improve the stability of TextTN.

We have evaluated TextTN on four major text classification datasets (MR, Subj, CR and MPQA). The results show that TextTN outperforms the convolutional neural network (CNN) on all the datasets. This departs from vision tasks where, according to the recent literature, a tensor network has not been reported to outperform CNN (Kim, 2014). In addition, based on the word vectors from the pre-trained BERT model, TextTN achieves better results than the BERT model on the SST-2 and SST-5 tasks, and the accuracy of BERT+TextTN is comparable with the state-of-the-art (SOTA) result on the SST-5 dataset.

1In this paper, we focus on the text classification task. However, the idea and formulation of our proposed approach are general and have potential in other NLP tasks." }, { "heading": "2 BACKGROUND", "text": "We now provide the background on matrix product states (MPS), a family of tensor networks. MPS (Schollwock, 2011) is also known as the tensor-train decomposition (Oseledets, 2011). Because of its low degree of freedom, research based on MPS has been developing rapidly.
At present, tensor networks based on MPS can be roughly divided into two categories: one is the generative tensor network (GTN) (Han et al., 2018; Sun et al., 2020), and the other is the supervised tensor network (also named the discriminative tensor network, DTN) (Stoudenmire & Schwab, 2016). We briefly describe existing GTN and DTN models for image classification tasks.

GTNs are used to model the joint probability distribution of given data. For a picture X with n pixels, each pixel is encoded into a two-dimensional vector x_i = (p_i, 1 − p_i)^T by a feature mapping from the pixel's value (Sun et al., 2020), where i ∈ {1, . . . , n} and p_i is the mapped probability of the pixel x_i. The representation Φ(X) of the picture X is given by the tensor product of these vectors x_i. A joint probability distribution of the picture X is computed by the GTNs:

P_j(X) = |W_j • Φ(X)|^2    (1)

where P_j(X) represents the probability of a picture X with respect to the category j (j ∈ {1, . . . , m}), m is the number of categories of an image classification dataset, and • is the operator of tensor contraction. The MPS decomposition of the j-th n-order weight tensor W_j can be written as:

W_j = ∑_{{α}} A^{α_1}_{s_1} A^{α_1 α_2}_{s_2} · · · A^{α_{n−1}}_{s_n}    (2)

The left of Figure 1 gives an illustrative GTN with two categories (i.e., m = 2). The second- or third-order tensors A_{s_i} (i ∈ {1, . . . , n}) come from the decomposition of the weight tensor W_j. Each 'virtual' index α_k (k ∈ {1, . . . , n − 1}) is the rank obtained from the tensor-train decomposition (Oseledets, 2011), and the 'physical' index s_i is the dimension of the pixel x_i.

A DTN is used to identify the class or label of a picture and computes a conditional probability distribution P(y|X), as follows:

P(y|X) = W_l • Φ(X)    (3)

where the (n+1)-order weight tensor W_l is decomposed into an MPS, and l is an extra order/index representing the label:

W_l = ∑_{{α}} A^{α_1}_{s_1} · · · A^{l; α_{i−1} α_i}_{s_i} · · · A^{α_{n−1}}_{s_n}    (4)

Here P(y|X) is a vector that encodes the conditional probability distribution of outputting y given a picture X. Eq. 3 classifies the input X by choosing the label for which the value in the vector P(y|X) is largest. In practice, the MPS shown in the right of Figure 1 is a supervised learning model (Stoudenmire & Schwab, 2016), and the rank values {α_k} are all set to one equal value, the bond-dimension hyper-parameter of the TN.
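To make the contractions in Eqs. 1-4 concrete, the following is a minimal numpy sketch of evaluating the GTN score of Eq. 1 with the weight tensor W_j stored directly as the MPS cores of Eq. 2. This is our own illustration, not a released implementation; the function name, the core shapes, and the left-to-right contraction order are assumptions.

```python
import numpy as np

def gtn_probability(cores, features):
    """cores: MPS cores of W_j as in Eq. 2; cores[i] has shape
    (r_left, s_i, r_right), with r_left = 1 for the first core and
    r_right = 1 for the last. features: the n pixel vectors x_i,
    e.g. x_i = (p_i, 1 - p_i). Returns |W_j . Phi(X)|^2 of Eq. 1."""
    msg = np.ones((1,))
    for A, x in zip(cores, features):
        # Contract the 'physical' index s_i with x_i and the shared
        # 'virtual' index alpha with the running message.
        msg = np.einsum('a,asb,s->b', msg, A, x)
    return float(msg[0]) ** 2  # squared amplitude
```

Evaluating this for each of the m classes and normalising the resulting squared amplitudes yields the class distribution; the DTN of Eqs. 3-4 differs only in that one core carries the extra label index l.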
}, { "heading": "3 TENSOR NETWORK FOR LANGUAGE ENCODING AND CLASSIFICATION", "text": "Problem setting: Our goal is to develop a tensor network architecture to encode and classify a sentence of words in a TN's probabilistic manner. For a sentence with n words (i.e., S = (w_1, . . . , w_n)), the tensor representation of the sentence can be written as (w_1 ⊗ . . . ⊗ w_i ⊗ . . . ⊗ w_n) (Zhang et al., 2019), where the w_i are word vectors. However, as mentioned in the Introduction, because of the curse of dimensionality, it is infeasible to directly input such a sentence representation into a tensor network. In order to solve this problem while still modeling the word interactions with a high-order tensor (denoted W_l), we formalize the TN model as follows:

P(y|S) = W_l • f(S) = W_l • (f(w_1) ⊗ f(w_2) ⊗ . . . ⊗ f(w_n))    (5)

where f is an operator that encodes a sentence in a low-dimensional probabilistic space.

In our work, word-GTNs are used to encode the word vectors into a low-dimensional space, so that the new word representations can be efficiently input to the tensor network. Specifically, the function f in Eq. 5 is embodied by the word-GTNs (Section 3.1), and W_l is embodied by the sentence-DTN (Section 3.2). In the process of text classification, the word-GTNs and the sentence-DTN are unified into a new TN framework (TextTN), as shown in Figure 2. The bond-dimension is an important hyper-parameter which reflects the expressive power of a TN model and influences the effectiveness of text classification. In Section 3.2, we propose a reference bond-dimension that can be computed based on the entanglement entropy. Besides, to improve the stability of the sentence-DTN, all-function learning on the sentence-DTN is proposed in Section 3.3." }, { "heading": "3.1 WORD ENCODING BASED ON WORD-GTNS", "text": "We unfold TextTN step by step for the sentence representation and classification problem. First, m word-GTNs are used to encode the word vectors into a low-dimensional space, where m (m > 1) corresponds to the dimension of the low-dimensional space. In Figure 2, as an example, two word-GTNs are used to encode each word in a sentence, with parameters shared across words. First, each element of a word embedding vector is mapped by a feature mapping. Then, each word is represented as a d-order tensor by taking the tensor product of the elements after the feature mapping. For example, for a word vector w_i = (θ_1, . . . , θ_d)^T, the tensor product representation is:

φ(w_i) = (cos(π/2 · θ_1), sin(π/2 · θ_1))^T ⊗ . . . ⊗ (cos(π/2 · θ_d), sin(π/2 · θ_d))^T    (6)

where ⊗ is the tensor product and φ(w_i) is the tensor product representation of the word w_i. The feature mapping (i.e., cos(·) and sin(·)) is motivated by the “spin” vectors encoded in quantum systems, which is commonly used in TNs (Stoudenmire & Schwab, 2016; Han et al., 2018).

Then, the two word-GTNs are used to encode the tensor φ(w_i) and obtain the probability distribution v_i, i ∈ {1, . . . , n} (see Figure 2). Specifically, for a word w_i, each word-GTN encodes the word into a probability value P^i_j, j ∈ {1, 2}. In the following, v_i is represented by the 2-dimensional vector (P^i_1, P^i_2)^T, which is calculated as:

v_i = f(w_i) = (P^i_1, P^i_2)^T = (|W_1 • φ(w_i)|^2, |W_2 • φ(w_i)|^2)^T    (7)

where • is the operator of tensor contraction, and W_1 and W_2 are two weight tensors to be learned in the word-GTNs, shared for all word vectors in a sentence. The probability condition P^i_1 + P^i_2 = 1 is achieved by a softmax in practice.

The word-GTNs are different from the existing GTNs described in the Background section: the word-GTNs are used to learn new word representations (rather than text representations), whereas the existing GTNs are used to classify pictures (not pixels). Since a word does not have a class or label, the output of the word-GTNs corresponds to probabilities in a low-dimensional feature space.

Regarding the dimension m of the low-dimensional space of the word-GTNs, we will show in Theorem 1 that the upper bound of the bond-dimension of TextTN is m^⌊n/2⌋. The expressive power of the model increases exponentially with increasing m (Khrulkov et al., 2018). Therefore, the dimension m should be set to a small value (e.g., 2 in this section) to prevent a too large expressive power. This observation is also supported by our experiments."
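The word-GTN encoder of Eqs. 6-7 can be sketched in the same style. The sketch below is our illustration rather than the authors' released code (names and shapes are assumptions); note that the 2^d-dimensional tensor φ(w_i) of Eq. 6 is never built explicitly, since each local (cos, sin) pair is contracted into the MPS cores on the fly.

```python
import numpy as np

def local_features(w):
    # Eq. 6: map each coordinate theta_k of the word vector to a 2-vector.
    return [np.array([np.cos(np.pi / 2 * t), np.sin(np.pi / 2 * t)]) for t in w]

def mps_amplitude(cores, feats):
    # cores[k]: (r_left, 2, r_right), with boundary bond dimensions equal to 1.
    msg = np.ones((1,))
    for A, x in zip(cores, feats):
        msg = np.einsum('a,asb,s->b', msg, A, x)
    return float(msg[0])

def word_gtn_encode(w, cores1, cores2):
    """Eq. 7: the two shared word-GTNs W_1, W_2 map a word vector w to the
    2-dimensional probability encoding v_i = (P_1^i, P_2^i)."""
    feats = local_features(w)
    scores = np.array([mps_amplitude(cores1, feats) ** 2,
                       mps_amplitude(cores2, feats) ** 2])
    e = np.exp(scores - scores.max())  # softmax, so that P_1^i + P_2^i = 1
    return e / e.sum()
```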
}, { "heading": "3.2 ANALYSIS OF BOND-DIMENSION ON SENTENCE-DTN", "text": "After we use word-GTNs, the new representation of a sentence S can be written as (v1 ⊗ . . .⊗ vn). which is a n-order tensor. We now introduce a sentence-DTN to classify sentences. This sentenceDTN is a MPS which is from the tensor-train decomposition (Oseledets, 2011) of high-order tensor. By inputting the sentence vector obtained by word-GTNs, the sentenc-DTN can be formalized as:\nP (y|S) = Wl • (v1 ⊗ . . .⊗ vn) = AS1(v1)AS2(v2) . . . A l Si(vi) . . . ASn(vn)\n(8)\nwhich is a n+1-order MPS. As we introduced in Background, we can set the rank values as an equal value (bond-dimension), i.e., α1=. . .=αj=. . .=αn−1. In other words, the bond dimension is set as r\n(r=αj). Then, each element of Wl can be calculated as follows,\nWlS1,...,Sn = r∑\n{αj=1}\nAα1s1 . . . A l;αj−1αj si . . . A αn−1 sn (9)\nThe bond-dimension r is the hyper-parameter which can reflect the expressive power of a tensor network. We provide an approach to a reference bond-dimension, which is an optimal parameter setting reported in our experiments. In the following, we will derive the bound of the bond-dimension and show that its lower bound is the reference or recommended bond-dimension. This lower-bound, is calculated by the entanglement entropy, which will be described as follows.\nDefinition 1. For an n+1-order weight tensor Wl decomposed to a matrix product states (MPS), λ (λ ≤ r) singular values can be given out in the bn2 c-th node of MPS. The entanglement entropy is calculated by the singular-value vector gk as\nE = − λ∑ k=1 g2k log gk 2 (10)\nwhere λ is the number of non-zero singular values, log is the log to the base 2, and the normalization condition of the singular-value vector ∑R k=1 g 2 k = 1 is satisfied.\nEntanglement entropy measures the amount of communication information in TN (Levine et al., 2019), and we can derive the lower bound of the bond-dimension based on entanglement entropy, as shown in Theorem 1. Theorem 1. For the input {v1, . . . ,vn} of the sentence-DTN, the bond-dimension is bounded as\nmbn/2c ≥ bond−dimension ≥ b2Ec. (11)\nwhere E is the entanglement entropy of TextTN, m is the inputting dimension of sentence-DTN, n is the length of a sentence, and b c indicates rounding down.\nProof. The proof can be found in Appendix B.\nAccording to Eq. 11, the lower bound of bond-dimension can be determined by the entanglement entropy E. If the bond-dimension is less than b2Ec, TextTN can not model sufficient information measured by the entanglement entropy E. Compared with the lower bound b2Ec, choosing a larger value which satisfies Eq.11 will increase the number of parameters and the whole computational cost of the TextTN, but only gain little information from the data. In addition, the increase of the bond-dimension will increase the model complexity, possibly leading to the overfitting problem. Therefore, we recommend the lower bound b2Ec as the reference bond-dimension. About calculating the required entanglement entropy E, we first need to use a sufficiently large bond-dimension. Since the initial bond-dimension is large enough and the entanglement entropy finally converges, the E measures the amount of information that the network can capture from a\ncertain data set (Levine et al., 2019). 
After calculating the entanglement entropy E, according to the proof of Theorem 1, we obtain the lower bound of the bond-dimension shown in Eq. 11, which is the reference bond-dimension.

Moreover, we can derive the upper bound of the bond-dimension. According to Khrulkov et al. (2018), the upper bound of the bond-dimension is m^⌊n/2⌋, where m is the input dimension of each node in the MPS, and n is the number of nodes, i.e., the length of the sentence in TextTN. If the bond-dimension is greater than m^⌊n/2⌋, the network structure no longer meets the constraints of the tensor-train decomposition (Oseledets, 2011) (Appendix B), and TextTN becomes rank deficient." }, { "heading": "3.3 ALL-FUNCTION LEARNING ON SENTENCE-DTN", "text": "In the original DTN, the position of the label bond l in Figure 2 can be moved from A_{S_1} to A_{S_n} in the process of training. This means that any position can be selected for the label bond l, leading to a variance over positions in the processes of training and prediction. The following Claim 1 shows that the original DTN calculates a conditional probability by putting the label bond at one certain position of the model, which corresponds to one certain probability function. However, the effects of putting the label bond at different positions should be taken into consideration.

Claim 1. For an input S = {v_1, v_2, . . . , v_n}, the original DTN (Stoudenmire & Schwab, 2016) computes the conditional probability P(y|S) by putting the label bond l at a certain position, which can be considered as one function from a set T of functions which compute all the possible conditional probabilities. Such a set is defined as:

T = {P(y|v_1; v_2 . . . v_n), . . . , P(y|v_1 . . . v_i; v_{i+1} . . . v_n), . . . , P(y|v_1 . . . v_{n−1}; v_n)}    (12)

where i is the location of the segmentation of the input S, and is also a position of the label bond l.

Proof. The proof can be found in Appendix C.

In the original DTN (Stoudenmire & Schwab, 2016), the loss function is defined as L(W^l) = ∑_{j=1}^{N} l(⟨f(S), W^l⟩, y), based on the label bond l at a certain position, which corresponds to only one probability function. If only one function is learned, the different probability distributions obtained by placing the label bond at different positions cannot be considered. To solve this problem, we propose to model all functions in the set T of Eq. 12, with a new loss defined as follows:

L(W^l, g) = (1/N) ∑_{j=1}^{N} CE(∑_{i=1}^{n} g_i ⟨f(S), W^l_i⟩, y)    (13)

where CE is the cross-entropy, y is the label represented by a one-hot vector, N is the number of samples, n is the length of the sentence, g is a parameter vector with elements g_i, and ⟨f(S), W^l_i⟩ is one function from the function set T. The sentence-DTN with all-function learning is expected to achieve higher accuracy than the original DTN.

Finally, Algorithm 1 describes the training method of the overall TextTN, given a sentence S with n words." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we divide the experiments into two parts based on the theory presented above. The first set of experiments verifies the effectiveness of TextTN. The second set of experiments verifies the effectiveness of the reference bond-dimension, which can be computed by Eq. 11 based on the entanglement entropy for different datasets. A Python implementation of the proposed algorithm is provided in the Supplementary Material."
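As a complement to the implementation in the Supplementary Material, the all-function objective of Eq. 13 (Section 3.3) can be written down in a few lines. The PyTorch sketch below is our illustration (the tensor layout and the softmax over g are our assumptions, not necessarily the exact parameterisation of the paper): the per-position scores ⟨f(S), W^l_i⟩ are mixed by the learnable vector g and a single cross-entropy is taken.

```python
import torch
import torch.nn.functional as F

def all_function_loss(position_logits, g, labels):
    """position_logits: (N, n, C) scores <f(S), W^l_i> for every sample,
    label-bond position i and class; g: learnable (n,) vector; labels: (N,)
    class indices."""
    weights = torch.softmax(g, dim=0)  # keep the mixture over positions normalised
    mixed = (weights.view(1, -1, 1) * position_logits).sum(dim=1)  # (N, C)
    return F.cross_entropy(mixed, labels)  # the CE of Eq. 13
```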
}, { "heading": "4.1 TEXT CLASSIFICATION EVALUATION OF TEXTTN", "text": "Datasets and Benchmarks: Six text classification datasets are used in our experiments. MR (Pang & Lee, 2004b): Movie reviews are divided into positive and negative categories; CR (Hu & Liu,\nAlgorithm 1 Training Algorithm of TextTN\nInput: S = {w1, . . . , wn}, wi ∈ Rd (word embedding vectors), y ∈ Rl (the label of S); Output: W1, W2 and Wl (parameters tensors);\n1: Initialize W1 and W2 ( for word-GTNs); Wl (for sentence-DTNs); 2: Initialized training : the reference bond-dimension for Wl is obtained by the process. 3: repeat 4: Feature mapping: wi ∈ Rd −→ φ(wi) ∈ Rd×2; 5: Generate probability codings: vi = softmax(|W1 • φ(wi)|2, |W2 • φ(wi)|2), vi ∈ R2; 6: Compute conditional probability; y = Wl • (v1, . . . , vn), y ∈ Rl; 7: Use Cross-Entropy method (CE) to calculate loss: min L = CE(y, y); 8: Perform backpropagation by L and update parameters W1, W2 and Wl; 9: until The loss L converges\n2004): Customer reviews set where the task is to predict positive or negative product reviews; Subj: Subjectivity dataset where the target is to classify a text as being subjective or objective; MPQA (Wiebe et al., 2005): Opinion polarity detection subtask; SST-5 (Pang & Lee, 2004a): The movie reviews in the Stanford Sentiment Treebank, which is a fine-grained classification task (negative, somewhat negative, neutral, somewhat positive, positive); SST-2 (Socher et al., 2013): Stanford Sentiment Treebank with binary classes. The evaluation metric is \"Accuracy\" of all tasks.\nFor CR, MR, MPQA, and Subj tasks, we use publicly word vectors (Word2vec) (Pennington et al., 2014) with 300 dimensions. For SST-2 and SST-5, the Bert-Large pre-trained model (Devlin et al., 2019) is used to obtain word vectors, and the dimension of the word vector from the BERT is 1024. More dataset statistics and experimental settings are given out in the Appendix D.1.\nTable 1: The comparison of experimental results between TextTN and other benchmarks. Accuracy is the evaluation metric.\nModel MR CR Subj MPQA\nCNN 81.5 85.0 93.4 89.6 DiSAN – 84.8 94.2 90.1 Capsule-B 82.3 85.1 93.8 – SGC 75.9 – – – MPSAN – 85.4 94.6 90.4\nTextTN 82.2 85.7 95.3 90.4\nTable 2: The comparison between TextTN and other complicated models based on pre-trained model.\nModel SST-5 SST-2\nBCN+ELMo 54.7 – Star-Transformer 53 – FLOATER-large – 96.7 CNN_Large – 94.6 BERT 52.9 94.6\nBERT+TextTN 54.8 95.3\nTen benchmarks are used to compare with TextTN. In Table 1, CNN (Kim, 2014) is used for classification; DiSAN (Shen et al., 2017) is a self-attention based model; Capsule-B (Yang et al., 2018) is a capsule neural network; SGC (Wu et al., 2019) proposes a graph neural network; MPSAN (Dai et al., 2020) designs a novel self-attention network. In Table 2 , BCN+ELMo (Peters et al., 2018) is a a deep bidirectional language model; Star-Transformer (Guo et al., 2019) is a improved Transformer network; BERT (Devlin et al., 2019) , CNN_Large (Baevski et al., 2019) and FLOATER (Liu et al., 2020) are complicated neural network models based on the BERT_Large pre-trained model.\nExperimental Results and Analysis: First, TextTN is experimented on four datasets. 
Experimental Results and Analysis: First, TextTN is evaluated on the four datasets. The results in Table 1 show that TextTN achieves better results than CNN (Kim, 2014) on MR (+0.7%), CR (+0.7%), Subj (+1.9%) and MPQA (+0.8%). Moreover, compared with the state-of-the-art self-attention based model MPSAN (Dai et al., 2020), TextTN outperforms its results on CR by 0.3% and on Subj by 0.7%, and obtains the same result of 90.4% on MPQA.

Figure 3: On the four classification tasks (MR, CR, MPQA and Subj), we calculate the entanglement entropy and evaluate the accuracy of TextTN, respectively, with different bond-dimensions set from 5 to 30.

Next, TextTN is evaluated with the pre-trained word vectors from BERT on SST-2 and SST-5; the results are shown in Table 2. In this experiment, BERT+TextTN achieves a result comparable with the ELECTRA model (Clark et al., 2020), and obtains a higher result than BERT (Devlin et al., 2019) by 0.7% on SST-2. On the SST-5 dataset, BERT+TextTN outperforms the state-of-the-art result of 54.7 by 0.1%, and is higher than BERT (Devlin et al., 2019) by 1.9%.

In order to verify the effectiveness of all-function learning and of the word-GTNs, we perform the ablation experiments shown in Table 3. First, when TextTN is trained without all-function learning (TextTN w/o all-function), the classification accuracy declines to varying degrees on the four datasets: the accuracy values on MR, CR, Subj and MPQA drop by 1.7%, 1.6%, 0.9% and 0.6%, respectively. These results verify that the all-function learning method for text classification is effective and useful.

In addition, in Table 3, we evaluate the effectiveness of the word-GTNs by ablation. TextTN w/o w-GTNs means that a linear function of the word embeddings is used to obtain the low-dimensional word representations, which are then input to the sentence-DTN. In contrast, TextTN uses the word-GTNs to generate the sentence representations. Compared with TextTN, the accuracy of TextTN w/o w-GTNs is largely reduced on all four datasets: in particular, CR drops by 5.5%, while MR, MPQA and Subj decrease by 3.6%, 3.1% and 2.4%, respectively. This shows that the word-GTNs can effectively model word dependence in sentences and provide more useful semantic information to the classifier.

Finally, we analyze the impact of different word vector dimensions on the performance of TextTN; the results are shown in Appendix D.2.
In addition, in Section 3.1, the dimension m of the probability encoding is a hyper-parameter set to 2. In order to verify this choice, we conduct a comparative experiment; the results in Appendix D.3 illustrate that, compared with m = 2, the classification accuracy with m > 2 does not improve and even decreases." }, { "heading": "4.2 EVALUATION OF THE REFERENCE BOND-DIMENSION OF TEXTTN", "text": "In Theorem 1, we show that the lower bound of the bond-dimension can be derived from the entanglement entropy, and we then consider this lower bound to be the reference bond-dimension of TextTN. In order to evaluate such a reference hyper-parameter value, we design experiments as follows. First, we compute the entanglement entropy through the initialized training of TextTN on the text classification datasets (see Appendix D.4 for details). After that, based on the entanglement entropy, the reference bond-dimension can be computed.

For the MR, CR, MPQA and Subj datasets, the accuracy scores are highest when the bond-dimensions of the sentence-DTN are 21, 18, 19 and 20, respectively. These values (i.e., 21, 18, 19 and 20) are exactly the reference bond-dimensions, which are computed from the entanglement entropy of the initialized training. Specifically, these values are computed by Eq. 11, i.e., ⌊2^4.40⌋ = 21, ⌊2^4.20⌋ = 18, ⌊2^4.32⌋ = 19 and ⌊2^4.37⌋ = 20, respectively. The four charts in Figure 3 show that the accuracy values reach their highest points at the reference bond-dimensions 21 (MR), 18 (CR), 19 (MPQA) and 20 (Subj).

As shown in Figure 3, by reducing and increasing the bond-dimension with respect to its reference value, we also observe interesting behaviour of the accuracy. For example, on the MR task, when the bond-dimension is below 21, the accuracy is lower than the accuracy at the reference value 21; when the bond-dimension exceeds 21, the accuracy begins to decline, while the updated entanglement entropy (computed from the non-zero singular values of the sentence-DTN) remains stable. The same behaviour is observed on the other three datasets, i.e., CR, MPQA and Subj.

In summary, the above results show the effectiveness of the reference bond-dimension. However, a more rigorous analysis of the relation between the reference bond-dimension and the classification effectiveness of TextTN remains an open research question." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "In this paper, we propose a text tensor network, named TextTN, which encodes each word (in a sentence) from a high-dimensional vector to a low-dimensional vector by word-GTNs. The resulting sentence representations are then input to the sentence-DTN to carry out the sentence classification. The proposed TextTN achieves better results than CNN, and even better experimental results than some neural network models, such as HCapsNet. In addition, TextTN also achieves experimental results that largely exceed those of GTNs on all datasets. The entanglement entropy defined in TextTN is helpful to identify the reference bond-dimension, which usually achieves the best accuracy in our experiments.

In the future, we would like to further investigate theoretical evidence for, and understanding of, the use of the entanglement entropy to set an optimal bond-dimension. From a practical point of view, we plan to apply TextTN to more NLP tasks and to test the effectiveness of both TextTN and its reference hyper-parameters."
}, { "heading": "A BASIC KNOWLEDGE ABOUT TENSOR NETWORK (TN)", "text": "We now describe some basic knowledge on tensor and tensor contraction, as well as the matrix product states (MPS).\nA.1 TENSOR AND TENSOR CONTRACTION\nA tensor can simply be thought of as multi-dimensional array. The order of a tensor is defined to the number of indexing entries in the array. For example, in Figure 4a, the vector v with the index i ∈ m2, is a one order tensor, and the matrix A with two indexes (or nodes) i ∈ m and j ∈ n is\n2i ∈ [m] also is written as i ∈ {1, 2, . . . ,m}\na two-order tensor. After that, a n-order tensor is drawn in Figure with n nodes, d1, d2, ...,and dn, where each node corresponds to an input on TN.\nTensors have some basic operations. The most fundamental one is tensor contradiction, which means summing over same indices of two tensors. Two examples are presented here in order to illustrate the operation more directly. In Figure 4a, we show the process of a matrix (2-order tensor) contracting with a vector (1-order tensor). In Figure 4b, another contraction process between two matrix A and B is demonstrated. And those high-order tensors, the tensor contraction operations between them follow the same rules with the above examples.\nA.2 MATRIX PRODUCT STATE (MPS)\nA Tensor Network as a weighted graph, where each node corresponds to a tensor. It has a linear architecture with a list of nodes and edges, and the order of the tensor is the degree of the node in the graph. The weight of each edge is referred to as its bond dimension. In (Novikov et al., 2016; Han et al., 2018)), MPS is a kind of TN, which is decomposed by a high-order tensor through Tensor-Train decomposition (Oseledets, 2011). First, as shown in Figure 5a, the high-order tensor W is decomposed into multiple low-order tensors. Then, through tensor contraction, the low-order tensors represented by nodes can be contracted into a MPS network in Figure 5b. The formula is as follows:\nW = ∑ {α} Aα1d1 . . . A αi−1αi di . . . A αN−1 dN\n(14)\nIn the MPS, α1, α2, . . ., αN−1 are the indexes of bond_dimensions (also named as TT-rank in some works), di ∈ Rm (i ∈ [N ]). If the low-order tensors is not trimmed in the process of TT decomposition, that is, the full rank is guaranteed, then α1 ∈ Rm, α2 ∈ Rm 2 , αbN/2c ∈ Rm bN/2c\n, . . . , αN ∈ Rm. That is, the maximum bond-dimension is mbN/2c. Therefore, we avoid directly using the high-order tensor and Tensor-Train decomposition due to its exponentially increasing parameters. During the practical process, we do not construct a highorder tensor, nor do we perform tensor decomposition. Instead, we usually initialize an MPS architecture with random generated low-order tensors, optimizing them while training in order to better approximate feature distribution of data (which can be understood as a high-order tensor). Within the allowable deviation range, the amount of parameters will be dramatically reduced from mN to N ×m× r × r, and r is the value of bond-dimension." }, { "heading": "B PROOF OF THEOREM 1", "text": "In TextTN, the bond-dimension is an important hyper-parameter which directly influences the expressive power of tensor network models (i.e., TextTN). An reference bond-dimension can be given out based on the entanglement entropy which is defined in Definition 1., Therefore, The Theorem 1 shows the relation between the entanglement entropy and the bond-dimension, and gives out the lower-bond and the upper-bond of the bond-dimension. 
}, { "heading": "B PROOF OF THEOREM 1", "text": "In TextTN, the bond-dimension is an important hyper-parameter which directly influences the expressive power of tensor network models (i.e., TextTN). A reference bond-dimension can be given based on the entanglement entropy defined in Definition 1. Theorem 1 therefore shows the relation between the entanglement entropy and the bond-dimension, and gives the lower and upper bounds of the bond-dimension. The proof of inequality 15 is given here.

Theorem 1. For the input {v_1, . . . , v_n} of the sentence-DTN, the bond-dimension is bounded as

m^⌊n/2⌋ ≥ bond-dimension ≥ ⌊2^E⌋    (15)

where E is the entanglement entropy of TextTN, m is the input dimension of the sentence-DTN, n is the length of a sentence, and ⌊·⌋ indicates rounding down.

Proof. As for the upper bound in Eq. 15: as shown in Appendix A.2, the sentence-DTN, as an MPS tensor network, can be represented by a decomposed n-order tensor. If no rank is cut during the decomposition, the bond-dimension (TT-rank) of the sentence-DTN is the same as that of the n-order tensor. Suppose the n-order tensor is

T = (v_1 ⊗ . . . ⊗ v_⌊n/2⌋) ⊗ (v_{⌊n/2⌋+1} ⊗ . . . ⊗ v_n)    (16)

where the v_i are the word probability encodings shown in Eq. 7, and T is an m^n tensor. From the matrix maximum-rank theorem, the maximum rank of T appears at the position ⌊n/2⌋, that is, the maximum rank is m^⌊n/2⌋. Thus, the upper bound of the bond-dimension in the sentence-DTN is also m^⌊n/2⌋.

As for the lower bound in Eq. 15, the entanglement entropy E is first computed in the first initialization training. In practice, the singular values between v_⌊n/2⌋ and v_{⌊n/2⌋+1} in Figure 6 can be obtained during the training process of TextTN, once TextTN converges.

Assume that the bond-dimension of TextTN is equal to k. The maximal entanglement entropy is obtained when the k singular values in TextTN are equal. Eq. 17 gives the maximal entanglement entropy for a fixed bond-dimension k:

E = −∑_{i=1}^{k} (1/k) log (1/k) = log k    (17)

Taking the exponential with base 2 on both sides of the equation, we obtain

2^E = k    (18)

When the k singular values are not equal, the inequality can be written as

k ≥ 2^E    (19)

To ensure that the bond-dimension, represented by k, is an integer, the inequality can be rewritten as k ≥ ⌊2^E⌋, where ⌊·⌋ indicates rounding down. Therefore, the inequality bond-dimension ≥ ⌊2^E⌋ in Eq. 15 is established. Recall that the entanglement entropy E computed by Eq. 17 represents the maximal amount of information that TextTN can model; Eq. 19 implies that this maximal entanglement entropy determines the lower bound of the bond-dimension of TextTN. For the upper bound, a parameter tensor W ∈ R^{m^n} can be viewed as a matrix of shape m^i × m^{n−i}. The maximal rank is obtained when the parameter tensor W is viewed as a matrix of shape m^⌊n/2⌋ × m^{n−⌊n/2⌋}; in this case, the maximal rank equals m^⌊n/2⌋, which is the upper bound of the bond-dimension.

Therefore, both the upper and lower bounds of the bond-dimension are obtained.

Theorem 1 has also been validated in practice. In the process of training TextTN, we can compute the entanglement entropy of TextTN according to Definition 1. When the training process converges, the maximal entanglement entropy is obtained. In addition, TextTNs with different bond-dimensions were experimented with. At the beginning, the classification accuracy of TextTN increases with the bond-dimension. When the bond-dimension increases beyond a certain value, the accuracy of TextTN begins to decline slowly. This value is the reference bond-dimension and can be computed as ⌊2^E⌋ once the training process converges." }, { "heading": "C PROOF OF CLAIM 1", "text": "Claim 1. For an input
S = {v_1, v_2, . . . , v_n}, the original DTN (Stoudenmire & Schwab, 2016) computes the conditional probability P(y|S) by putting the label bond l at a certain position, which can be considered as a certain function from a set T of functions which compute all the possible conditional probabilities. Such a set is defined as:

T = {P(y|v_1; v_2 . . . v_n), . . . , P(y|v_1 . . . v_i; v_{i+1} . . . v_n), . . . , P(y|v_1 . . . v_{n−1}; v_n)}    (20)

where i is the location of the segmentation of the input S, and is also a position of the label bond l.

Proof. For an input S = {v_1, . . . , v_n}, which is the output of the word-GTNs, an n-order tensor, i.e., the input of the sentence-DTN, is given by the tensor product of the vectors v_i, i ∈ {1, . . . , n}. The parameter tensor W^l in the DTN is an (n+1)-order tensor. There is one mode (or index) in the tensor W^l which controls the number of classes in the classification task, namely l. As shown in Figure 7(a), in the process of training, the index l is moved from S_1 to S_n.

When the position of l is S_1, the input S is split into two parts, I = {v_1} and J = {v_2, . . . , v_n}; with the basic events I and J, the classification function f^1(S) can be represented by the conditional probability

f^1(S) = P(y|I, J) = P(y|v_1; v_2 . . . v_n)    (21)

When the position of separation i satisfies 1 < i < n, the input S is separated into the two parts I = {v_1, . . . , v_i} and J = {v_{i+1}, . . . , v_n}, and the function f^i(S) is represented as

f^i(S) = P(y|v_1 . . . v_i; v_{i+1} . . . v_n)    (22)

When the position of separation is i = n − 1, the function f^{n−1}(S) is denoted by Eq. 23, similarly to Eq. 21 and Eq. 22:

f^{n−1}(S) = P(y|v_1 . . . v_{n−1}; v_n)    (23)

Different functions f^i(S) represent different conditional probabilities. In the existing work (Stoudenmire & Schwab, 2016), i.e., the original DTN, one function is used to classify the input S.

In order to model all functions in the process of training, since one-function learning cannot model all conditional probabilities of the input S, we propose the all-function learning method for text classification, which is shown in Figure 7(b). We adopt a weighted summation of these functions (i.e., f(S) = ∑_{i=1}^{n} g_i f^i(S), where g is a parameter vector and g_i is an element of the vector g)." }, { "heading": "D EXPERIMENTAL RESULTS AND ANALYSIS", "text": "In this section, we first describe the datasets used in our experiments.

D.1 DATASETS

The statistics of the six datasets are shown in Table 4. The CR, MR, MPQA and Subj datasets each consist of a single file. Therefore, we randomly select 10% of the file as the validation set and the test set, respectively, and select 80% of the file as the training set. The evaluation metric for all text classification tasks is accuracy.

For the CR, MR, MPQA and Subj tasks, we implement our model with PyTorch 1.2.0 and train it on an Nvidia P40 GPU. As for the learning method, we use the Adam optimizer (Kingma & Ba, 2015) and an exponentially decaying learning rate with a linear warm-up. For the SST-2 and SST-5 datasets, the initial learning rate is 5e-5.

D.2 COMPARISON BETWEEN DIFFERENT DIMENSIONS OF PRE-TRAINED WORD VECTORS

As shown in Table 5, we compare the accuracy of TextTN using word2vec (300-dimensional) and BERT-Large (1024-dimensional) pre-trained word vectors, respectively, for nine text classification tasks (e.g., MR, CR and SST-5). The results show that the accuracy on six tasks increases significantly as the dimension of the word representations in TextTN increases.
Especially for SST-5 and MR, the results with 1024-dimensional word embeddings show over a 5% increase compared with the results with 300-dimensional word embeddings. These comparative experiments show that TextTN learns more effectively from high-dimensional word representations containing more semantic information.

D.3 AN EXPERIMENT ON THE DIMENSION OF THE PROBABILITY ENCODING

In Section 3.1, we analyze why the dimension m of the probability encoding should not exceed 2. In this experiment, we evaluate this conclusion. As shown in Table 6, we compare the accuracy of TextTN with different probability encoding dimensions set from 2 to 10. The reported results are the average of 10 runs. The results illustrate that when m = 2, the classification accuracy is 95.3, and when m > 2, the accuracy decreases significantly. In particular, the accuracy only reaches 94.2 when m = 5, dropping by 1.1%. These experimental results verify the effectiveness of m = 2.

D.4 THE INITIALIZED TRAINING OF TEXTTN

In order to obtain the entanglement entropy of Eq. 10 in the initialized training, we train TextTN for 80 epochs on the four text classification tasks (i.e., MR, CR, MPQA and Subj). In this process, the entanglement entropy, computed according to Definition 1, gradually converges to a determinate value as the number of training epochs increases, as shown in Figure 8." } ]
2020
null
SP:6e300c6a4bdcf32b80cecf2d4526b99deb30912a
[ "In this paper, the authors study the problem of learning node embeddings for hypergraphs. While most of the existing studies consider reducing hyper-graphs into graphs, this paper studies learning embeddings directly on the hypergraphs using two stages of aggregations / sampling. The efficacy of the proposed method is illustrated in a semi-supervised as well as inductive setting, where the method achieves better performances than the baselines. " ]
Graphs are the most ubiquitous form of structured data representation used in machine learning. They model, however, only pairwise relations between nodes and are not designed for encoding the higher-order relations found in many real-world datasets. To model such complex relations, hypergraphs have proven to be a natural representation. Learning the node representations in a hypergraph is more complex than in a graph as it involves information propagation at two levels: within every hyperedge and across the hyperedges. Most current approaches first transform a hypergraph structure to a graph for use in existing geometric deep learning algorithms. This transformation leads to information loss, and sub-optimal exploitation of the hypergraph’s expressive power. We present HyperSAGE, a novel hypergraph learning framework that uses a two-level neural message passing strategy to accurately and efficiently propagate information through hypergraphs. The flexible design of HyperSAGE facilitates different ways of aggregating neighborhood information. Unlike the majority of related work which is transductive, our approach, inspired by the popular GraphSAGE method, is inductive. Thus, it can also be used on previously unseen nodes, facilitating deployment in problems such as evolving or partially observed hypergraphs. Through extensive experimentation, we show that HyperSAGE outperforms state-of-the-art hypergraph learning methods on representative benchmark datasets. We also demonstrate that the higher expressive power of HyperSAGE makes it more stable in learning node representations as compared to the alternatives.
[]
[ { "authors": [ "Sameer Agarwal", "Kristin Branson", "Serge Belongie" ], "title": "Higher order learning with graphs", "venue": "In Proceedings of the 23rd international conference on Machine learning,", "year": 2006 }, { "authors": [ "Devanshu Arya", "Stevan Rudinac", "Marcel Worring" ], "title": "Hyperlearn: a distributed approach for representation learning in datasets with many modalities", "venue": "In Proceedings of the 27th ACM International Conference on Multimedia,", "year": 2019 }, { "authors": [ "Song Bai", "Feihu Zhang", "Philip HS Torr" ], "title": "Hypergraph convolution and hypergraph attention", "venue": "Pattern Recognition,", "year": 2020 }, { "authors": [ "C. Berge", "E. Minieka" ], "title": "Graphs and Hypergraphs", "venue": "URL https://books. google.nl/books?id=ARoVvgAACAAJ", "year": 1976 }, { "authors": [ "Michael M Bronstein", "Joan Bruna", "Yann LeCun", "Arthur Szlam", "Pierre Vandergheynst" ], "title": "Geometric deep learning: going beyond euclidean data", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "T-H Hubert Chan", "Anand Louis", "Zhihao Gavin Tang", "Chenzi Zhang" ], "title": "Spectral properties of hypergraph laplacian and approximation algorithms", "venue": "Journal of the ACM (JACM),", "year": 2018 }, { "authors": [ "I Eli Chien", "Huozhi Zhou", "Pan Li" ], "title": "hŝ2: Active learning over hypergraphs with pointwise and pairwise queries", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Yihe Dong", "Will Sawin", "Yoshua Bengio" ], "title": "HNHN: Hypergraph networks with hyperedge neurons", "venue": "arXiv preprint arXiv:2006.12278,", "year": 2020 }, { "authors": [ "Yifan Feng", "Haoxuan You", "Zizhao Zhang", "Rongrong Ji", "Yue Gao" ], "title": "Hypergraph neural networks", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Marco Gori", "Gabriele Monfardini", "Franco Scarselli" ], "title": "A new model for learning in graph domains", "venue": "In Proceedings", "year": 2005 }, { "authors": [ "Shi Gu", "Muzhi Yang", "John D Medaglia", "Ruben C Gur", "Raquel E Gur", "Theodore D Satterthwaite", "Danielle S Bassett" ], "title": "Functional hypergraph uncovers novel covariant structures over neurodevelopment", "venue": "Human brain mapping,", "year": 2017 }, { "authors": [ "Xuemei Gu", "Lijun Chen", "Mario Krenn" ], "title": "Quantum experiments and hypergraphs: Multiphoton sources for quantum interference, quantum computation, and quantum entanglement", "venue": "Physical Review A,", "year": 2020 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Matthias Hein", "Simon Setzer", "Leonardo Jost", "Syama Sundar Rangapuram" ], "title": "The total variation on hypergraphs-learning on hypergraphs revisited", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Jianwen Jiang", "Yuxuan Wei", "Yifan Feng", "Jingxuan Cao", "Yue Gao" ], "title": "Dynamic hypergraph neural networks", "venue": "In IJCAI,", "year": 2019 }, { 
"authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": null, "year": 2016 }, { "authors": [ "Guohao Li", "Chenxin Xiong", "Ali Thabet", "Bernard Ghanem" ], "title": "Deepergcn: All you need to train deeper gcns", "venue": "arXiv preprint arXiv:2006.07739,", "year": 2020 }, { "authors": [ "Guoyin Li", "Liqun Qi", "Gaohang Yu" ], "title": "The z-eigenvalues of a symmetric tensor and its application to spectral hypergraph theory", "venue": "Numerical Linear Algebra with Applications,", "year": 2013 }, { "authors": [ "Pan Li", "Olgica Milenkovic" ], "title": "Inhomogeneous hypergraph clustering with applications", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard Zemel" ], "title": "Gated graph sequence neural networks", "venue": "arXiv preprint arXiv:1511.05493,", "year": 2015 }, { "authors": [ "Ryan A. Rossi", "Nesreen K. Ahmed" ], "title": "The network data repository with interactive graph analytics and visualization", "venue": "In AAAI,", "year": 2015 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Amnon Shashua", "Ron Zass", "Tamir Hazan" ], "title": "Multi-way clustering using super-symmetric nonnegative tensor factorization", "venue": "In European conference on computer vision,", "year": 2006 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. 
The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Shulong Tan", "Jiajun Bu", "Chun Chen", "Bin Xu", "Can Wang", "Xiaofei He" ], "title": "Using rich social media information for music recommendation via hypergraph model", "venue": "ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM),", "year": 2011 }, { "authors": [ "Michael M Wolf", "Alicia M Klinvex", "Daniel M Dunlavy" ], "title": "Advantages to modeling relational data using hypergraphs versus graphs", "venue": "IEEE High Performance Extreme Computing Conference (HPEC),", "year": 2016 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "S Yu Philip" ], "title": "A comprehensive survey on graph neural networks", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Naganand Yadati", "Madhav Nimishakavi", "Prateek Yadav", "Vikram Nitin", "Anand Louis", "Partha Talukdar" ], "title": "Hypergcn: A new method for training graph convolutional networks on hypergraphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Muhan Zhang", "Yixin Chen" ], "title": "Link prediction based on graph neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Muhan Zhang", "Zhicheng Cui", "Shali Jiang", "Yixin Chen" ], "title": "Beyond link prediction: Predicting hyperlinks in adjacency space", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Muhan Zhang", "Zhicheng Cui", "Marion Neumann", "Yixin Chen" ], "title": "An end-to-end deep learning architecture for graph classification", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Songyang Zhang", "Zhi Ding", "Shuguang Cui" ], "title": "Introducing hypergraph signal processing: theoretical foundation and practical applications", "venue": "IEEE Internet of Things Journal,", "year": 2019 }, { "authors": [ "Dengyong Zhou", "Jiayuan Huang", "Bernhard Schölkopf" ], "title": "Learning with hypergraphs: Clustering, classification, and embedding", "venue": "In Advances in neural information processing systems,", "year": 2007 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graphs are considered the most prevalent structures for discovering useful information within a network, especially because of their capability to combine object-level information with the underlying inter-object relations (Wu et al., 2020). However, most structures encountered in practical applications form groups and relations that cannot be properly represented using pairwise connections alone, hence a graph may fail to capture the collective flow of information across objects. In addition, the underlying data structure might be evolving and only partially observed. Such dynamic higher-order relations occur in various domains, such as social networks (Tan et al., 2011), computational chemistry (Gu et al., 2020), neuroscience (Gu et al., 2017) and visual arts (Arya et al., 2019), among others. These relations can be readily represented with hypergraphs, where an edge can connect an arbitrary number of vertices as opposed to just two vertices in graphs. Hypergraphs thus provide a more flexible and natural framework to represent such multi-way relations (Wolf et al., 2016), however, this requires a representation learning technique that exploits the full expressive power of hypergraphs and can generalize on unseen nodes from a partially observed hypergraph.\nRecent work in the field of geometric deep learning have presented formulations on graph structured data for the tasks of node classification (Kipf & Welling, 2016), link prediction (Zhang & Chen, 2018), or the classification of graphs (Zhang et al., 2018b). Subsequently, for data containing higher-order relations, a few recent papers have presented hypergraph-based learning approaches on similar tasks (Yadati et al., 2019; Feng et al., 2019). A common implicit premise in these papers is that a hypergraph can be viewed as a specific type of regular graph. Therefore, reduction of hypergraph learning problem to that of a graph should suffice. Strategies to reduce a hypergraph to a graph include transforming the hyperedges into multiple edges using clique expansion (Feng et al., 2019; Jiang et al., 2019; Zhang et al., 2018a), converting to a heterogeneous graph using star\nexpansion (Agarwal et al., 2006), and replacing every hyperedge with an edge created using a certain predefined metric (Yadati et al., 2019). Yet these methods are based on the wrong premise, motivated chiefly by a larger availability of graph-based approaches. By reducing a hypergraph to regular graph, these approaches make existing graph learning algorithms applicable to hypergraphs. However, hypergraphs are not a special case of regular graphs. The opposite is true, regular graphs are simply a specific type of hypergraph (Berge & Minieka, 1976). Therefore, reducing the hypergraph problem to that of a graph cannot fully utilize the information available in hypergraph. Two schematic examples outlining this issue are shown in Fig.1. To address tasks based on complex structured data, a hypergraph-based formulation is needed that complies with the properties of a hypergraph.\nA major limitation of the existing hypergraph learning frameworks is their inherently transductive nature. This implies that these methods can only predict characteristics of nodes that were present in the hypergraph at training time, and fail to infer on previously unseen nodes. 
The transductive nature of existing hypegraph approaches makes them inapplicable in, for example, finding the most promising target audience for a marketing campaign or making movie recommendations with new movies appearing all the time. An inductive solution would pave the way to solve such problems using hypergraphs. The inductive learning framework must be able to identify both the node’s local role in the hypergraph, as well as its global position (Hamilton et al., 2017). This is important for generalizing the learned node embeddings that the algorithm has optimized on to a newly observed hypergraph comprising previously unseen nodes, thus, making inductive learning a far more complex problem compared to the transductive learning methods.\nIn this paper, we address the above mentioned limitations of the existing hypergraph learning methods. We propose a simple yet effective inductive learning framework for hypergraphs that is readily applicable to graphs as well. Our approach relies on neural message passing techniques due to which it can be used on hypergraphs of any degree of cardinality without the need for reduction to graphs. The points below highlight the contributions of this paper:\n• We address the challenging problem of representation learning on hypergraphs by proposing HyperSAGE, comprising a message passing scheme which is capable of jointly capturing the intra-relations (within a hyperedge) as well as inter-relations (across hyperedges).\n• The proposed hypergraph learning framework is inductive, i.e. it can perform predictions on previously unseen nodes, and can thus be used to model evolving hypergraphs.\n• HyperSAGE facilitates neighborhood sampling and provides the flexibility in choosing different ways to aggregate information from the neighborhood.\n• HyperSAGE is more stable than state-of-the-art methods, thus provides more accurate results on node classification tasks on hypergraphs with reduced variance in the output." }, { "heading": "2 RELATED WORK", "text": "Learning node representations using graph neural networks has been a popular research topic in the field of geometric deep learning (Bronstein et al., 2017). Graph neural networks can be broadly classified into spatial (message passing) and spectral networks. We focus on a family of spatial message passing graph neural networks that take a graph with some labeled nodes as input and learn embeddings for each node by aggregating information from its neighbors (Xu et al., 2019). Message passing operations in a graph simply propagate information along the edge connecting two nodes. Many variants of such message passing neural networks have been proposed, with some popular ones including Gori et al. (2005); Li et al. (2015); Kipf & Welling (2016); Gilmer et al. (2017); Hamilton et al. (2017).\nZhou et al. (2007) introduced learning on hypergraphs to model high-order relations for semisupervised classification and clustering of nodes. Emulating a graph-based message passing framework for hypergraphs is not straightforward since a hyperedge involves more than two nodes which makes the interactions inside each hyperedge more complex. Representing a hypergraph with a matrix makes it rigid in describing the structures of higher order relations (Li et al., 2013). On the other hand, formulating message passing on a higher dimensional representation of hypergraph using tensors makes it computationally expensive and restricts it to only small datasets (Zhang et al., 2019). 
Several tensor based methods do perform learning on hypergraphs (Shashua et al., 2006; Arya et al., 2019), however they are limited to uniform hypergraphs only.\nTo resolve the above issues, Feng et al. (2019) and Bai et al. (2020) reduce a hypergraph to graph using clique expansion and perform graph convolutions on them. These approaches cannot utilize complete structural information in the hypergraph and lead to unreliable learning performance for e.g. classification, clustering and active learning (Li & Milenkovic, 2017; Chien et al., 2019). Another approach by Yadati et al. (2019), named HyperGCN, replaces a hyperedge with pair-wise weighted edges between vertices (called mediators). With the use of mediators, HyperGCN can be interpreted as an improved approach of clique expansion, and to the best of our knowledge, is also the state-of-the-art method for hypergraph representation learning. However, for many cases such as Fano plane where each hyperedge contains at most three nodes, HyperGCN becomes equivalent to the clique expansion (Dong et al., 2020). In spectral theory of hypergraphs, methods have been proposed that fully exploit the hypergraph structure using non-linear Laplacian operators (Chan et al., 2018; Hein et al., 2013). In this work, we focus on message passing frameworks. Drawing inspiration from GraphSAGE (Hamilton et al., 2017), we propose to eliminate matrix (or tensor) based formulations in our neural message passing frameworks, which not only facilitates utilization of all the available information in a hypergraph, but also makes the entire framework inductive in nature." }, { "heading": "3 PROPOSED MODEL: HYPERSAGE", "text": "The core concept behind our approach is to aggregate feature information from the neighborhood of a node spanning across multiple hyperedges, where the edges can have varying cardinality. Below, we first define some preliminary terms, and then describe our generic aggregation framework. This framework performs message passing at two-levels for a hypergraph. Further, for any graphstructured data, our framework emulates the one-level aggregation similar to GraphSAGE (Hamilton et al., 2017). Our approach inherently allows inductive learning, which makes it also applicable on hypergraphs with unseen nodes." }, { "heading": "3.1 PRELIMINARIES", "text": "Definition 1 (Hypergraph). A general hypergraph H can be represented as H = (V,E,X), where V = {v1, v2, ..., vN} denotes a set of N nodes (vertices) and E = {e1, e2, ..., eK} denotes a set of hyperedges, with each hyperedge comprising a non-empty subset from V. X ∈ RN×d denote the feature matrix, such that xi ∈ X is the feature vector characterizing node vi ∈ V. The maximum cardinality of the hyperedges in H is denoted as M = max\ne∈E |e|.\nUnlike in a graph, the hyperedges of H can contain different number of nodes and M denotes the largest number. From the definition above, we see that graphs are a special case of hypergraphs with\nM=2. Thus, compared to graphs, hypergraphs are designed to model higher-order relations between nodes. Further, we define three types of neighborhoods in a hypergraph:\nDefinition 2 (Intra-edge neighborhood). The intra-edge neighborhood of a node vi ∈ V for any hyperedge e ∈ E is defined as the set of nodes vj belonging to e and is denoted by N(vi, e) Further, let E(vi) = {e ∈ E | vi ∈ e} be the sets of hyperedges that contain node vi. Definition 3 (Inter-edge neighborhood). 
The inter-edge neighborhood of a node vi ∈ V, also referred to as its global neighborhood, is defined as the neighborhood of vi spanning across the set of hyperedges E(vi), and is represented by N(vi) = ⋃_{e∈E(vi)} N(vi, e).
Definition 4 (Condensed neighborhood). The condensed neighborhood of any node vi ∈ V is a sampled set of α ≤ |e| nodes from a hyperedge e ∈ E(vi), denoted by N(vi, e; α) ⊂ N(vi, e)." }, { "heading": "3.2 GENERALIZED MESSAGE PASSING FRAMEWORK", "text": "We propose to interpret the propagation of information in a given hypergraph as a two-level aggregation problem, where the neighborhood of any node is divided into intra-edge neighbors and inter-edge neighbors. For message aggregation, we define an aggregation function F(·) as a permutation-invariant set function on a hypergraph H = (V, E, X) that takes as input a countable unordered message set and outputs a reduced or aggregated message. Further, for two-level aggregation, let F1(·) and F2(·) denote the intra-edge and inter-edge aggregation functions, respectively. A schematic representation of the two aggregation functions is provided in Fig. 2. Similar to X, we also define Z as the encoded feature matrix built using the outputs zi of the aggregation functions. Message passing at node vi for aggregation of information at the l-th layer can then be stated as
$$x^{(e)}_{i,l} \leftarrow \mathcal{F}_1\big(\{x_{j,l-1} \mid v_j \in N(v_i, e; \alpha)\}\big), \quad (1)$$
$$x_{i,l} \leftarrow x_{i,l-1} + \mathcal{F}_2\big(\{x^{(e)}_{i,l} \mid e \in E(v_i)\}\big), \quad (2)$$
where x^{(e)}_{i,l} refers to the aggregated feature set at vi obtained with intra-edge aggregation for edge e.
The combined two-level message passing is achieved using the nested aggregation function F = F2. To ensure that the expressive power of a hypergraph is preserved, or at least that the loss is minimized, the choice of aggregation function should comply with certain properties.
Firstly, the aggregation function should be able to capture the features of neighborhood vertices in a manner that is invariant to the permutation of the nodes and hyperedges. Many graph representation learning methods use permutation-invariant aggregation functions, such as mean, sum and max (Xu et al., 2019). These aggregations have proven to be successful for node classification problems. For the existing hypergraph frameworks, reduction to simple graphs along with a matrix-based message passing framework limits the possibilities of using different types of feature aggregation functions, and hence curtails the potential to explore unique node representations.
Secondly, the aggregation function should also preserve the global neighborhood invariance at the ‘dominant nodes’ of the graph. Here, dominant nodes refer to nodes that contain important features, thereby impacting the learning process relatively more than their neighbors. The aggregation function should ideally be insensitive to whether the provided hypergraph contains a few large hyperedges, or a larger number of smaller ones obtained from splitting them. Generally, a hyperedge would be split in a manner such that the dominant nodes are shared across the resulting hyperedges. In such cases, global neighborhood invariance implies that the aggregated output at these nodes before and after the splitting of any associated hyperedge stays the same; otherwise, the learned representation of a node will change significantly with each hyperedge split.
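To make Eqs. (1)–(2) concrete, below is a minimal NumPy sketch (our own illustration, not the paper's implementation) of a single two-level message passing step. Plain means stand in for F1 and F2 here; the paper's actual aggregators are the generalized means of Eqs. (3)–(4) below, and all function and variable names are ours.

```python
import numpy as np

def message_passing_step(X, hyperedges, alpha=None, rng=np.random.default_rng(0)):
    """One two-level aggregation step (Eqs. (1)-(2)): intra-edge aggregation (F1)
    within each hyperedge, then inter-edge aggregation (F2) across E(v_i), plus
    a skip connection. Plain means stand in for F1 and F2."""
    N, _ = X.shape
    X_new = X.copy()
    edges_of = [[e for e in hyperedges if i in e] for i in range(N)]  # E(v_i)
    for i in range(N):
        per_edge = []
        for e in edges_of[i]:
            nbrs = list(e)                                  # N(v_i, e)
            if alpha is not None and len(nbrs) > alpha:     # condensed N(v_i, e; alpha)
                nbrs = list(rng.choice(nbrs, size=alpha, replace=False))
            per_edge.append(X[nbrs].mean(axis=0))           # Eq. (1): F1 over the hyperedge
        if per_edge:
            X_new[i] = X[i] + np.mean(per_edge, axis=0)     # Eq. (2): skip + F2 over E(v_i)
    return X_new

# toy hypergraph: 4 nodes, two hyperedges of different cardinality
X = np.eye(4, 3)
hyperedges = [{0, 1, 2}, {2, 3}]
print(message_passing_step(X, hyperedges, alpha=2))
```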
Based on these considerations, we define the following properties for a generic message aggregation function that should hold for accurate propagation of information through hypergraphs.
Property 1 (Hypergraph Isomorphic Equivariance). A message aggregation function F(·) is equivariant to hypergraph isomorphism if, for two isomorphic hypergraphs H = (V, E, X) and H∗ = (V∗, E∗, X∗) with H∗ = σ • H, and with Z and Z∗ the encoded feature matrices obtained using F(·) on H and H∗, the condition Z∗ = σ • Z holds. Here, σ denotes a permutation operator on hypergraphs.
Property 2 (Global Neighborhood Invariance). A message aggregation scheme F(·) satisfies global neighborhood invariance at a node vi ∈ V for a given hypergraph H = (V, E, X) if, for any operation Γ(·) such that H∗ = Γ(H), and with zi and z∗i denoting the encoded feature vectors obtained using F(·) at node vi on H and H∗, the condition z∗i = zi holds. Here, Γ(H) could refer to operations such as hyperedge contraction or expansion.
The flexibility of our message passing framework allows us to go beyond simple aggregation functions on hypergraphs without violating Property 1. We introduce a series of power mean functions as aggregators, which have recently been shown to generalize well on graphs (Li et al., 2020). We perform message aggregation in hypergraphs using these generalized means, denoted by Mp, and provide a study of their performance in Section 4.2. We also show that with appropriate combinations of the intra-edge and inter-edge aggregations, Property 2 is also satisfied. This property ensures that the representation of a node after message passing is invariant to the cardinality of the hyperedges, i.e., the aggregation scheme is not sensitive to hyperedge contraction or expansion, as long as the global neighborhood of the node remains the same in the hypergraph.
Aggregation Functions. One major advantage of our strategy is that the message passing module is decoupled from the choice of the aggregation itself. This allows our approach to be used with a broad set of aggregation functions. We discuss below a few such possible choices.
Generalized means. Also referred to as power means, this class of functions is very commonly used for obtaining an aggregated measure over a given set of samples. Mathematically, generalized means can be expressed as $M_p = \big(\frac{1}{n}\sum_{i=1}^{n} x_i^p\big)^{1/p}$, where n refers to the number of samples in the aggregation and p denotes its power. The choice of p allows different interpretations of the aggregation function. For example, p = 1 denotes arithmetic mean aggregation, p = 2 refers to the mean squared estimate, and a large value of p corresponds to max pooling over the group. Similarly, Mp can be used for the geometric and harmonic means with p → 0 and p = −1, respectively. Similar to the recent work of Li et al. (2020), we use generalized means for intra-edge as well as inter-edge aggregation. The two functions F1(·) and F2(·) for aggregation at node vi are defined as
$$\mathcal{F}^{(i)}_1(s) = \Bigg( \sum_{v_j \in N(v_i, e)} \frac{1}{|N(v_i, e)|\,|N(v_i)|} \bigg( \sum_{m=1}^{|E(v_i)|} \frac{1}{|N(v_i, e_m)|} \bigg)^{-1} x_j^{p} \Bigg)^{1/p} \quad (3)$$
$$\mathcal{F}^{(i)}_2(s) = \Bigg( \frac{1}{|E(v_i)|} \sum_{e \in E(v_i)} \big(\mathcal{F}_1(s)\big)^{p} \Bigg)^{1/p} \quad (4)$$
where we use ‘s’ as a concise representation of the unordered input set, as shown in Eq. (1).
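As a companion to Eqs. (3)–(4), the snippet below sketches the generalized mean M_p together with its limiting cases; it assumes strictly positive inputs (required for p → 0 and p = −1) and is an illustration rather than the paper's code.

```python
import numpy as np

def generalized_mean(x, p):
    """Power mean M_p(x) = ((1/n) * sum_i x_i^p)^(1/p) for positive inputs.
    p = 1: arithmetic mean, p = 2: quadratic mean, p -> 0: geometric mean,
    p = -1: harmonic mean; large p approaches max pooling."""
    x = np.asarray(x, dtype=float)
    if abs(p) < 1e-12:                         # limiting case: geometric mean
        return float(np.exp(np.mean(np.log(x))))
    return float(np.mean(x ** p) ** (1.0 / p))

x = [1.0, 2.0, 4.0]
for p in [1, 2, -1, 0.01, 10]:                 # p = 0.01 and p = -1 appear in Section 4.2
    print(p, generalized_mean(x, p))           # larger p moves the output toward max(x) = 4
```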
Here and henceforth in this paper, we drop the superscript index ‘(i)’ for the sake of clarity, and further occurrences of the two aggregation functions should be interpreted in terms of node vi. Note that in Eq. (3) and Eq. (4) we have chosen the power term p to be the same for F1 and F2, so as to satisfy the global neighborhood invariance stated in Property 2. Note also that the scaling term in F1 balances the bias in the weighting introduced by intra-edge aggregation due to the varying cardinality across the hyperedges. These restrictions ensure that the joint aggregation F2(·) satisfies the property of global neighborhood invariance at all times. The proof that the two aggregations satisfy Property 2 is stated in Appendix B.
Sampling-based Aggregation. Our neural message passing scheme provides the flexibility to adapt the message aggregation module to fit a desired computational budget by aggregating information from only a subset N(vi, e; α) of the full neighborhood N(vi, e), if needed. We propose to apply sub-sampling only on the nodes from the training set, and to use information from the full neighborhood for the test set. The advantages of this are twofold. First, the reduced number of samples per aggregation at training time reduces the relative computational burden. Second, similar to dropout (Srivastava et al., 2014), it serves to add regularization to the optimization process. Using the full neighborhood on test data avoids randomness in the test predictions and generates consistent output." }, { "heading": "3.3 INDUCTIVE LEARNING ON HYPERGRAPHS", "text": "HyperSAGE is a general framework for learning node representations on hypergraphs, even on unseen nodes. Our approach uses a neural network comprising L layers, and feature aggregation is performed at each of these layers, as well as across the hyperedges.
Algorithm 1 HyperSAGE Message Passing
Input: H = (V, E, X); depth L; weight matrices Wl for l = 1 . . . L; non-linearity σ; intra-edge aggregation function F1(·); inter-edge aggregation function F2(·)
Output: Node embeddings zi | vi ∈ V
h^0_i ← xi ∈ X | vi ∈ V
for l = 1 . . . L do
  for e ∈ E do
    h^l_i ← h^{l−1}_i
    for vi ∈ e do
      h^l_i ← h^l_i + F^(i)_2(s)
    end
  end
  h^l_i ← σ(Wl (h^l_i / ||h^l_i||_2)) | vi ∈ V
end
zi ← h^L_i | vi ∈ V
Algorithm 1 describes the forward propagation mechanism, which implements the aggregation function F(·) = F2(·) described above. At each iteration, nodes first aggregate information from their neighbors within a specific hyperedge. This is repeated over all the hyperedges across all the L layers of the network. The trainable weight matrices Wl for l = 1 . . . L are used to aggregate information across the feature dimension and propagate it through the various layers of the hypergraph.
Generalizability of HyperSAGE. HyperSAGE can be interpreted as a generalized formulation that unifies various existing graph-based as well as hypergraph formulations, identifying each of these as special variants/cases of our method. We briefly discuss the two popular algorithms here.
Graph Convolution Networks (GCN). The GCN approach proposed by Kipf & Welling (2016) is a graph-based method that can be derived as a special case of HyperSAGE with maximum cardinality |M| = 2 and the aggregation function F2 = Mp with p = 1. This being a graph-based method, F1 is not used.
GraphSAGE. Our approach, when reduced to graphs using |M| = 2, is similar to GraphSAGE. For an exact match, the aggregation function F2 should be one of mean, max, or LSTM; a minimal sketch of this graph reduction is given below.
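As a sanity check on this graph reduction (our own illustration), the snippet below applies the message_passing_step sketch from Section 3.2 above to a 2-uniform hypergraph, i.e., an ordinary graph, and verifies that the two-level update collapses to a one-level neighbor mean plus a skip connection, in the spirit of GraphSAGE mean aggregation.

```python
import numpy as np

# With |M| = 2 every hyperedge is an ordinary edge, so the two-level update of
# Eqs. (1)-(2) collapses to a single neighbor aggregation per node.
X = np.eye(4, 3)
edges = [{0, 1}, {1, 2}, {2, 3}]          # a path graph seen as a 2-uniform hypergraph
out = message_passing_step(X, edges)      # defined in the sketch of Section 3.2

# Equivalent one-level form: x_i <- x_i + mean over edges e containing i of mean(x_j, j in e)
for i in range(4):
    nbr_means = [X[list(e)].mean(axis=0) for e in edges if i in e]
    assert np.allclose(out[i], X[i] + np.mean(nbr_means, axis=0))
print(out)
```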
Further, the sampling term α can be adjusted to match the number of samples per aggregation as in GraphSAGE." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "For the experiments in this paper, we use co-citation and co-authorship network datasets: CiteSeer, PubMed, Cora (Sen et al., 2008) and DBLP (Rossi & Ahmed, 2015). The task for each dataset is to predict the topic to which a document belongs (multi-class classification). For these datasets, xi corresponds to a bag of words such that xi,j ∈ xi represents the normalized frequency of occurence of the jth word. Additional details related to the hypergraph topology are presented in Appendix\nA.2. Further, for all experiments, we use a neural network with 2 layers. All models are implemented in Pytorch and trained using Adam optimizer. See Appendix A.2 for implementation details." }, { "heading": "4.2 SEMI-SUPERVISED NODE CLASSIFICATION ON HYPERGRAPHS", "text": "Performance comparison with existing methods. We implemented HyperSAGE for the task of semi-supervised classification of nodes on a hypergraph, and the results are compared with stateof-the art methods. These include (a) Multi-layer perceptron with explicit hypergraph Laplacian regularisation (MLP + HLR), (b) Hypergraph Neural Networks (HGNN) (Feng et al., 2019) which uses a clique expansion, and (c) HyperGCN and its variants (Yadati et al., 2019) that collapse the hyperedges using mediators. For HyperSAGE method, we use 4 variants of generalized means Mp with p = 1, 2,−1 and 0.01 with complete neighborhood i.e., α = |e|. For all the cases, 10 data splits over 8 random weight initializations are used, totalling 80 experiments per method and for every dataset. The data splits are the same as in HyperGCN described in Appendix A.1.\nTable 1 shows the results obtained for the node classification task. We see that the different variants of HyperSAGE consistently show better scores across our benchmark datasets, except Cora cocitation where no improvement is observed compared to HGNN. Cora co-citation data is relatively small in size with a cardinality of 3.0± 1.1, and we speculate that there does not exist enough scope of improving with HyperSAGE beyond what HGNN can express with the clique expansion.\nFor the larger datasets such as DBLP and Pubmed, we see that the improvements obtained in performance with HyperSAGE over the best baselines are 6.3% and 4.3% respectively. Apart from its superior performance, HyperSAGE is also stable, and is less sensitive to the choice of data split and initialization of the weights. This is evident from the scores of standard deviation (SD) for the various experiments in Table 1. We see that the SD scores for our method are lower than other methods, and there is a significant gain in performance compared to HyperGCN. Another observation is that the HyperGCN method is very sensitive to the data splits as well as initializations with very large errors in the predictions. This is even higher for the FastHyperGCN variant. Also, we have found that all the 4 choices of p work well with HyperSAGE for these datasets. We further perform a more comprehensive study analyzing the effect of p on model performance later in this section.\nStability analysis. We further study the stability of our method in terms of the variance observed in performance for different ratios of train and test splits, and compare results with that of HyperGCN implemented under similar settings. Fig. 
3 shows results for the two learning methods on 5 different train-test ratios. We see that the performance of both models improves when a higher fraction of data is used for training, and the performances are approx-\nimately the same at the train-test ratio of 1/3. However, for smaller ratios, we see that HyperSAGE outperforms HyperGCN by a significant margin across all datasets. Further, the standard deviation for the predictions of HyperSAGE are significantly lower than that of HyperGCN. Clearly, this implies that HyperSAGE is able to better exploit the information contained in the hypergraph compared to HyperGCN, and can thus produce more accurate and stable predictions. Results on Cora and Citeseer can be found in Appendix C.\nEffect of generalized mean aggregations and neighborhood sampling. We study here the effect of different choices of the aggregation functions F1(·) and F2(·) on the performance of the model. Further, we also analyze how the number of samples chosen for aggregation affect its performance. Aggregation functions from Mp are chosen with p = 1, 2, 3, 4, 5, 0.01 and −1, and to comply with global neighborhood invariance, we use aggregation function as in Eq. 4. The number of neighbors α for intra-edge aggregation are chosen to be 2, 3, 5 and 10. Table 2 shows the accuracy scores obtained for different choices of p and α on DBLP and Pubmed datasets. For most cases, higher value of p reduces the performance of the model. For α = 2 on DBLP, performance seems to be independent of the choice of p. A possible explanation could be that the number of neighbors is very small, and change in p does not affect the propagation of information significantly. An exception is p = −1, where the performance drops for all cases. For Pubmed, the choice of p seems to be very important, and we find that p = 0.01 seems to fit best.\nWe also see that the number of samples per aggregation can significantly affect the performance of the model. For DBLP, model performance increases with increasing value of α. However, for Pubmed, we observe that performance improves up to α = 5, but then a slight drop is observed for larger sets of neighbors. Note that for Pubmed, the majority of the hyperedges have cardinality less than or equal to 10. This means that during aggregation, information will most often be aggregated from all the neighbors, thereby involving almost no stochastic sampling. Stochastic sampling of nodes could serve as a regularization mechanism and reduce the impact of noisy hyperedges. However, at α = 10, it is almost absent, due to which the noise in the data affects the performance of the model which is not the case in DBLP." }, { "heading": "4.3 INDUCTIVE LEARNING ON EVOLVING GRAPHS", "text": "For inductive learning experiment, we consider the case of evolving hypergraphs. We create 4 inductive learning datasets from DBLP, Pubmed, Citeseer and Core (co-citation) by splitting each\nof the datasets into a train-test ratio of 1:4. Further, the test data is split into two halves: seen and unseen. The seen test set comprises nodes that are part of the hypergraph used for representation learning. Further, unseen nodes refer to those that are never a part of the hypergraph during training. To study how well HyperSAGE generalizes for inductive learning, we classify the unseen nodes and compare the performance with the scores obtained on the seen nodes. Further, we also compare our results on unseen nodes with those of MLP+HLR. The results are shown in Table 3. 
We see that results obtained with HyperSAGE on unseen nodes are significantly better than the baseline method. Further, these results seem to not differ drastically from those obtained on the seen nodes, thereby confirming that HyperSAGE can work with evolving graphs as well." }, { "heading": "5 CONCLUSION", "text": "We have proposed HyperSAGE, a generic neural message passing framework for inductive learning on hypergraphs. The proposed approach fully utilizes the inherent higher-order relations in a hypergraph structure without reducing it to a regular graph. Through experiments on several representative datasets, we have shown that HyperSAGE outperforms the other methods for hypergraph learning. Several variants of graph-based learning algorithm such as GCN and GraphSAGE can be derived from the flexible aggregation and neighborhood sampling framework, thus making HyperSAGE a universal framework for learning node representations on hypergraphs as well as graphs." }, { "heading": "A EXPERIMENTS: ADDITIONAL DETAILS", "text": "We perform multi-class classification on co-authorship and co-citation datasets, where the task is to predict the topic (class) for each document.\nA.1 DATASET DESCRIPTION\nHypergraphs are created on these datasets by assigning each document as a node and each hyperedge represents (a) all documents co-authored by an author in co-authorship dataset and (b) all documents cited together by a document in co-citation dataset. Each document (node) is represented by bagof-words features. The details about nodes, hyperedges and features is shown in Table 4. We use the same dataset and train-test splits as provided by Yadati et al. (2019) in their publically available implementation 1.\nA.2 IMPLEMENTATION DETAILS\nWe use the following set of hyperparameters similar to the prior work by Kipf & Welling (2016) for all the models.\n• hidden layer size: 32 • dropout rate: 0.5 • learning rate: 0.01 • weight decay: 0.0005 • number of training epochs: 150 • λ for explicit Laplacian regularisation: 0.001" }, { "heading": "B CHOICE OF INTER-EDGE AND INTRA-EDGE AGGREGATIONS", "text": "Proof. For any given hypergraph H1 = (V,E1,X), let vi denote a node at which global neighborhood equivariance exists. The aggregation output F1(s) at vi can then be written using generalized means Mp as\nF1(s) = 1 |N(vi, e)| ∑ vj∈N(vi,e) xp1j 1p1 . (5) To reiterate here, s denotes the unordered set of input as shown in Eq. 5. Further, the inter-edge aggregation F2(·) can be stated as\n1HyperGCN Implementation: https://github.com/malllabiisc/HyperGCN\nF2(s) = 1|E(vi)| ∑ e∈E(vi) 1 |N(vi, e)| ∑ vj∈N(vi,e) xp1j p2 p1 1 p2\n(6)\nThis equation can be rewritten as\nF2(s) = 1|E(vi)| 1 |N(vi, eq)| ∑\nvj∈N(vi,eq)\nxp1j\n p2 p1 + ∑\ne∈E(vi),e6=eq\n 1 |N(vi, e)| ∑ vj∈N(vi,e) xp1j p2 p1 1 p2\n(7) Further, let\nΨ = ∑\ne∈E(vi),e6=eq\n 1 |N(vi, e)| ∑ vj∈N(vi,e) xp1j p2 p1 , (8)\nthen Eq. 7 can be rewritten as\nF2(s) = 1|E(vi)| 1 |N(vi, eq)| ∑ vj∈N(vi,eq) xp1j p2 p1 + Ψ 1 p2\n(9)\nLet us assume now that hyperedge eq is split into r hyperedges given by E(vi, eq) = {eq1 , eq2 . . . eqr}. Stating the aggregation on the new set of hyperedges as F̃2(s), we assemble the contribution from this new set of hyperedges with added weight terms wj as stated below.\nF̃2(s) = 1|E(vi)| ∑\ne∈E(vi,eq)\n 1 |N(vi, e)| ∑ vj∈N(vi,e) wjx p1 j p2 p1 + Ψ 1 p2\n(10)\nFor the property of global neighborhood invariance to hold at vi, the following condition should be satisfied: F2(vi) = F̃2(vi). 
Based on this, we would like to solve for the weights wj . For this, we equate the two terms and obtain 1\n|N(vi, eq)| ∑\nvj∈N(vi,eq)\nxp1j\n p2 p1 = ∑\ne∈E(vi,eq)\n 1 |N(vi, e)| ∑ vj∈N(vi,e) wjx p1 j p2 p1\n(11)\nWe further solve for the variables p1, p2 and wj where Eq. 11 holds. For the sake of clarity, we first simplify Eq. 11 using the following substitutions: α = p2p1 , β = 1 |N(vi,eq)| and βmj = wj |N(vi,em)| , where the index m here is used to refer to the mth hyperedge from among the r hyperedges obtained on splitting eq . Further, let zj = x p1 j for vj ∈ N(vi, eq) and zmj = x p1 j for vj ∈N(vi, em) and em ∈ E(vi, eq). Based on these substitutions, Eq. 11 can be restated as\nβα(z1 + z2 + . . .+ zN ) α = (β11z1 + β12z2 + . . .+ β1jzj + . . .+ β1NzN ) α\n+ (β21z1 + β22z2 + . . .+ β2jzj + . . .+ β2NzN ) α+\n... + (βr1z1 + βr2z2 + . . .+ βrjzj + . . .+ βrNzN ) α. (12)\nWe seek general solutions for wj and α which holds for all values of zj ∈ [0, 1] since every element in the normalized feature vectors xj lies in [0, 1].\nFor a generalized solution, the coefficients of zj on the right should be equal to the coefficient of zj on the left. The term on the left can be reformulated as\nβα(z1 + z2 + . . .+ zN ) α = βα(z1 + (z2 + z3 + . . .+ zN )) α (13)\nConsider the case when |z1| ≤ |z2 + z3 + . . . |, we expand Eq. 13. using binomial expansion for real co-efficients,\nβα(z1 + (z2 + z3 + . . .)) α = βα(\n( α\n0\n) zα1 + ( α\n1\n) zα−11 (z2 + z3 + . . .+ zN )+\n...\n+\n( α\nα− 1\n) z1(z2 + z3 + . . .+ zN ))\n= βα(zα1 + α(z α−1 1 z2 + z α−1 1 z3 + . . .+ z α−1 1 zN )+\n...\n+ αz1(z2 + z3 + . . .+ zN ) α−1) (14)\nWithout any loss of generality, we consider splitting of hyperedge eq into r hyperedges such that nodes vγ1 and vγ2 are not contained in the same hyperedge anymore. This implies that RHS in Eq. 14 should not contain product terms of z1 and z2. Hence, the term zα−11 z2 should be such that\nα− 1 = 0⇒ α = 1⇒ p1 = p2 (15)\nPutting α = 1 and comparing the coefficients in Eq.12, we get β = β11 + β12 + . . .+ β21 + β22 . . .+ βr1 + βr2 + . . .\n1\n|N(vi, eq)| = r∑ m=1 wj |N(vi, em)|\n(16)\nwj = 1\n|N(vi, eq)| ∗\n( r∑\nm=1\n1\n|N(vi, em)|\n)−1 (17)\nThus, if an edge eq is split into multiple edges E(vi, eq), then for the two aggregations to hold, the conditions are p1 = p2 and wj = 1|N(vi,eq)| ∗ (∑r m=1 1 |N(vi,em)| )−1 ∀ e ∈ E(vi, eq).\nWhile we provide above a description related to splitting a certain hyperedge eq into r hyperedges, the derived results can be used to compute global neighborhood itself on any given node vi. Similar to eq above, node vi together with its global neighborhood (counted as N(vi)) can be interpreted as a virtual hyperedge that has been split into a number of hyperedges that actually exist and contain vi. These resultant hyperdges are equivalent to the r hyperdges obtained after splitting, as stated above." }, { "heading": "C STABILITY TEST ON CORA AND CITESEER", "text": "" } ]
2020
null
SP:1d63d99b43018556784ecc4a3ee494c71e7ef06e
[ "The paper studies the memorization capacity of deep networks as a function of the number of parameters. Many prior works have shown that to memorize $N$ examples $O(N)$ parameters are sufficient and that to memorize any set of $N$ examples $\\Omega(N)$ parameters are necessary. This work shows that under very mild and commonly satisfied conditions, $O(N^{\\frac{2}{3}})$ parameters and layers are sufficient for memorizing $N$ examples, a significant improvement over prior results. Additionally, they show that even very narrow width-bounded networks can memorize with sub-linear parameters, but the same is not true for depth-bounded networks, demonstrating a new capability unlocked by deeper networks. Finally, they characterize the properties sufficient for memorizing $N$ examples by a given network." ]
It is known that O(N) parameters are sufficient for neural networks to memorize arbitrary N input-label pairs. By exploiting depth, we show that O(N^{2/3}) parameters suffice to memorize N pairs, under a mild condition on the separation of input points. In particular, deeper networks (even with width 3) are shown to memorize more pairs than shallow networks, which also agrees with the recent line of work on the benefits of depth for function approximation. We also provide empirical results that support our theoretical findings.
[]
[ { "authors": [ "Raman Arora", "Amitabh Basu", "Poorya Mianjy", "Anirbit Mukherjee" ], "title": "Understanding deep neural networks with rectified linear units", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Peter L. Bartlett", "Nick Harvey", "Christopher Liaw", "Abbas Mehrabian" ], "title": "Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Eric B. Baum" ], "title": "On the Capabilities of Multilayer Perceptrons", "venue": "Journal of Complexity,", "year": 1988 }, { "authors": [ "Eric B. Baum", "David Haussler" ], "title": "What size net gives valid generalization", "venue": "In Advances in Neural Information Processing Systems,", "year": 1989 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Vaggos Chatziafratis", "Sai Ganesh Nagarajan", "Ioannis Panageas" ], "title": "Better Depth-Width Trade-offs for Neural Networks through the lens of Dynamical Systems", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Vaggos Chatziafratis", "Sai Ganesh Nagarajan", "Ioannis Panageas", "Xiao Wang" ], "title": "Depth-width trade-offs for relu networks via sharkovsky’s theorem", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "George Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "Mathematics of Control, Signals and Systems,", "year": 1989 }, { "authors": [ "Ronen Eldan", "Ohad Shamir" ], "title": "The power of depth for feedforward neural networks", "venue": "In Conference on Learning Theory,", "year": 2016 }, { "authors": [ "Walter Gautschi" ], "title": "Some elementary inequalities relating to the gamma and incomplete gamma function", "venue": "Journal of Mathematics and Physics,", "year": 1959 }, { "authors": [ "Boris Hanin", "Mark Sellke" ], "title": "Approximating continuous functions by ReLU nets of minimal width", "venue": "arXiv preprint arXiv:1710.11278,", "year": 2017 }, { "authors": [ "Moritz Hardt", "Tengyu Ma" ], "title": "Identity matters in deep learning", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Guang-Bin Huang" ], "title": "Learning capability and storage capacity of two-hidden-layer feedforward networks", "venue": "IEEE Transactions on Neural Networks,", "year": 2003 }, { "authors": [ "Guang-Bin Huang", "Haroon A Babri" ], "title": "Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions", "venue": "IEEE Transactions on Neural Networks,", "year": 1998 }, { "authors": [ "Jesse Johnson" ], "title": "Deep, skinny neural networks are not universal approximators", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Marek Karpinski", "Angus Macintyre" ], "title": "Polynomial bounds for vc dimension of sigmoidal and general pfaffian neural networks", "venue": "Journal 
of Computer and System Sciences,", "year": 1997 }, { "authors": [ "Patrick Kidger", "Terry Lyons" ], "title": "Universal approximation with deep narrow networks", "venue": "In Conference on Learning Theory,", "year": 2020 }, { "authors": [ "A. Krizhevsky", "G. Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, University of Toronto,", "year": 2009 }, { "authors": [ "Henry W Lin", "Max Tegmark", "David Rolnick" ], "title": "Why does deep and cheap learning work so well", "venue": "Journal of Statistical Physics,", "year": 2017 }, { "authors": [ "Zhou Lu", "Hongming Pu", "Feicheng Wang", "Zhiqiang Hu", "Liwei Wang" ], "title": "The expressive power of neural networks: A view from the width", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Wolfgang Maass" ], "title": "Bounds for the computational power and learning complexity of analog neural nets", "venue": "SIAM Journal on Computing,", "year": 1997 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever" ], "title": "Deep Double Descent: Where Bigger Models and More Data Hurt", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Y. Netzer", "T. Wang", "A. Coates", "A. Bissacco", "B. Wu", "A.Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NeurIPS Deep Learning and Unsupervised Feature Learning Workshop,", "year": 2011 }, { "authors": [ "Quynh Nguyen", "Matthias Hein" ], "title": "Optimization Landscape and Expressivity of Deep CNNs", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Sejun Park", "Chulhee Yun", "Jaeho Lee", "Jinwoo Shin" ], "title": "Minimum Width for Universal Approximation", "venue": "arXiv preprint arXiv:2006.08859,", "year": 2020 }, { "authors": [ "Allan Pinkus" ], "title": "Approximation theory of the MLP model in neural networks", "venue": "Acta Numerica,", "year": 1999 }, { "authors": [ "Eduardo D. 
Sontag" ], "title": "Shattering all sets of k points in “general position", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Matus Telgarsky" ], "title": "Benefits of depth in neural networks", "venue": "In Conference on Learning Theory,", "year": 2016 }, { "authors": [ "Roman Vershynin" ], "title": "Memory capacity of neural networks with threshold and ReLU activations", "venue": "arXiv preprint 2001.06938,", "year": 2020 }, { "authors": [ "Masami Yamasaki" ], "title": "The lower bound of the capacity for a neural network with multiple hidden layers", "venue": "In International Conference on Artificial Neural Networks,", "year": 1993 }, { "authors": [ "Dmitry Yarotsky" ], "title": "Optimal approximation of continuous functions by very deep relu networks", "venue": "In Conference on Learning Theory,", "year": 2018 }, { "authors": [ "Hyunho Yeo", "Youngmok Jung", "Jaehong Kim", "Jinwoo Shin", "Dongsu Han" ], "title": "Neural Adaptive Content-aware Internet Video Delivery", "venue": "In USENIX Symposium on Operating Systems Design and Implementation,", "year": 2018 }, { "authors": [ "Chulhee Yun", "Suvrit Sra", "Ali Jadbabaie" ], "title": "Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In International Conference on Learning Representations,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The modern trend of over-parameterizing neural networks has shifted the focus of deep learning theory from analyzing their expressive power toward understanding the generalization capabilities of neural networks. While the celebrated universal approximation theorems state that over-parameterization enables us to approximate the target function with a smaller error (Cybenko, 1989; Pinkus, 1999), the theoretical gain is too small to satisfactorily explain the observed benefits of over-parameterizing already-big networks. Instead of “how well can models fit,” the question of “why models do not overfit” has become the central issue (Zhang et al., 2017).\nIronically, a recent breakthrough on the phenomenon known as the double descent (Belkin et al., 2019; Nakkiran et al., 2020) suggests that answering the question of “how well can models fit” is in fact an essential element in fully characterizing their generalization capabilities. In particular, the double descent phenomenon characterizes two different phases according to the capability/incapability of the network size for memorizing training samples. If the network size is insufficient for memorization, the traditional bias-variance trade-off occurs. However, after the network reaches the capacity that memorizes the dataset, i.e., “interpolation threshold,” larger networks exhibit better generalization. Under this new paradigm, identifying the minimum size of networks for memorizing finite input-label pairs becomes a key issue, rather than function approximation that considers infinite inputs.\nThe memory capacity of neural networks is relatively old literature, where researchers have studied the minimum number of parameters for memorizing arbitrary N input-label pairs. Existing results showed that O(N) parameters are sufficient for various activation functions (Baum, 1988; Huang and Babri, 1998; Huang, 2003; Yun et al., 2019; Vershynin, 2020). On the other hand, Sontag (1997) established the negative result that for any network using analytic definable activation functions with o(N) parameters, there exists a set of N input-label pairs that the network cannot memorize. The sub-linear number of parameters also appear in a related topic, namely the VC-dimension of neural networks. It has been proved that there exists a set of N inputs such that a neural network with o(N) parameters can “shatter,” i.e., memorize arbitrary labels (Maass, 1997; Bartlett et al., 2019). Comparing the two results on o(N) parameters, Sontag (1997) showed that not all sets of N inputs can be memorized for arbitrary labels, whereas Bartlett et al. (2019) showed that at least one set of N inputs can be shattered. This suggests that there may be a reasonably large family of N input-label pairs that can be memorized with o(N) parameters, which is our main interest." }, { "heading": "1.1 SUMMARY OF RESULTS", "text": "In this paper, we identify a mild condition satisfied by many practical datasets, and show that o(N) parameters suffice for memorizing such datasets. In order to bypass the negative result by Sontag (1997), we introduce a condition to the set of inputs, called the ∆-separateness.\nDefinition 1. For a set X ⊂ Rdx , we say X is ∆-separated if sup\nx,x′∈X :x 6=x′ ‖x− x′‖2 < ∆× inf x,x′∈X :x 6=x′ ‖x− x′‖2.\nThis condition requires that the ratio of the maximum distance to the minimum distance between distinct points is bounded by ∆. Note that the condition is milder when ∆ is bigger. 
By Definition 1, any given finite set of (distinct) inputs is ∆-separated for some ∆, so one might ask why ∆- separateness is different from having distinct inputs in a dataset. The key difference is that even if the number of data points N grows, the ratio of the maximum to the minimum should remain bounded by ∆. Given the discrete nature of computers, there are many practical datasets that satisfy ∆-separateness, as we will see shortly. Also, this condition is more general than the minimum distance assumptions (∀i, ‖xi‖2 = 1, ∀i 6= j, ‖xi − xj‖2 ≥ ρ > 0) that are employed in existing theoretical results (Hardt and Ma, 2017; Vershynin, 2020). To see this, note that the minimum distance assumption implies 2/ρ-separateness. In our theorem statements, we will use the phrase “∆-separated set of input-label pairs” for denoting the set of inputs is ∆-separated.\nIn our main theorem sketched below, we prove the sufficiency of o(N) parameters for memorizing any ∆-separated set of N pairs (i.e., any ∆-separated set of N inputs with arbitrary labels) even for large ∆. More concretely, our result is of the following form: Theorem 1 (Informal). For any w ∈ ( 23 , 1], there exists aO(N\n2−2w/ logN+log ∆)-layer, O(Nw+ log ∆)-parameter fully-connected network with sigmoidal or RELU activation that can memorize any ∆-separated set of N input-label pairs.\nWe note that log has base 2. Theorem 1 states that if the number of layers increases with the number of pairs N , then any ∆-separated set of N pairs can be memorized by a network with o(N) parameters. Here, we can check from Definition 1 that the log ∆ term does not usually dominate the depth or the number of parameters, especially for modern deep architectures and practical datasets. For example, it is easy to check that any dataset consisting of 3-channel images (values from {0, 1, . . . , 255}) of size a× b satisfies log ∆ < (9 + 12 log(ab)) (e.g., log ∆ < 17 for the ImageNet dataset), which is often much smaller than the depth of modern deep architectures.\nFor practical datasets, we can show that networks with parameters fewer than the number of pairs can successfully memorize the dataset. For example, in order to perfectly classify one million images in ImageNet dataset1 with 1000 classes, our result shows that 0.7 million parameters are sufficient. The improvement is more significant for large datasets. To memorize 15.8 million bounding boxes in Open Images V62 with 600 classes, our result shows that only 4.5 million parameters suffice.\nTheorem 1 improves the sufficient number of parameters for memorizing a large class of N pairs (i.e., 2O(N\nw)-separated) from O(N) down to O(Nw) for any w ∈ ( 23 , 1), for deep networks. Then, it is natural to ask whether the depth increasing with N is necessary for memorization with a sub-linear number of parameters. The following existing result on the VC-dimension implies that this is indeed necessary for memorization with o(N/ logN) parameters, at least for RELU networks. Theorem [Bartlett et al. (2019)]. (Informal) For L-layer RELU networks, Ω(N/(L logN)) parameters are necessary for memorizing at least a single set of N inputs with arbitrary labels.\nThe above theorem implies that for RELU networks of constant depth, Ω(N/ logN) parameters are necessary for memorizing at least one set of N inputs with arbitrary labels. In contrast, by increasing depth with N , Theorem 1 shows that there is a large class of datasets that can be memorized with o(N/ logN) parameters. 
Combining these two results, one can conclude that increasing depth is necessary and sufficient for memorizing a large class of N pairs with o(N/ logN) parameters.\nGiven that the depth is critical for the memorization power, is the width also critical? We prove that it is not the case, via the following theorem. Theorem 2 (Informal). For a fully-connected network of width 3 with a sigmoidal or RELU activation function, O(N2/3 + log ∆) parameters (i.e., layers) suffice for memorizing any ∆-separated set of N input-label pairs.\nTheorem 2 states that under 2O(N 2/3)-separateness of inputs, the network width does not necessarily have to increase with N for memorization with sub-linear parameters. Furthermore, it shows that 1http://www.image-net.org/ 2https://storage.googleapis.com/openimages/web/index.html\neven a surprisingly narrow network of width 3 has a superior memorization power than a fixed-depth network, requiring only O(N2/3) parameters.\nTheorems 1 and 2 show the existence of network architectures that memorize N points with o(N) parameters, under the condition of ∆-separateness. This means that these theorems do not answer the question of how many such data points can a given network memorize. We provide generic criteria for identifying the maximum number of points given general networks (Theorem 3). In a nutshell, our criteria indicate that to memorize more pairs under the same budget for the number of parameters, a network must have a deep and narrow architecture at the final layers of the network. In contrast to the prior results that the number of arbitrary pairs that can be memorized is at most proportional to the number of parameters (Yamasaki, 1993; Yun et al., 2019; Vershynin, 2020), our criteria successfully incorporate with the characteristics of datasets, the number of parameters, and the architecture, which enable us to memorize ∆-separated datasets with number of pairs super-linear in the number of parameters.\nFinally, we provide empirical results corroborating our theoretical findings that deep networks often memorize better than their shallow counterparts with a similar number of parameters. Here, we emphasize that better memorization power does not necessarily imply better generalization. We indeed observe that shallow and wide networks often generalize better than deep and narrow networks, given the same (or similar) training accuracy.\nOrganization. We first introduce related works in Section 2. In Section 3, we introduce necessary notation and the problem setup. We formally state our main results and discuss them in Section 4. In Section 6, we provide empirical observations on the effect of depth and width in neural networks. Finally, we conclude the paper in Section 7." }, { "heading": "2 RELATED WORKS", "text": "" }, { "heading": "2.1 NUMBER OF PARAMETERS FOR MEMORIZATION", "text": "Sufficient number of parameters for memorization. Identifying the sufficient number of parameters for memorizing arbitrary N pairs has a long history. Earlier works mostly focused on bounding the number of hidden neurons of shallow networks for memorization. Baum (1988) proved that for 2-layer STEP3 networks, O(N) hidden neurons (i.e., O(N) parameters) are sufficient for memorizing arbitrary N pairs when inputs are in general position. Huang and Babri (1998) showed that the same bound holds for any bounded and nonlinear activation function σ satisfying that either limx→−∞ σ(x) or limx→∞ σ(x) exists, without any condition on inputs. 
The O(N) bounds on the number of hidden neurons was improved to O( √ N) by exploiting an additional hidden layer by Huang (2003); nevertheless, this construction still requires O(N) parameters.\nWith the advent of deep learning, the study has been extended to modern activation functions and deeper architectures. Zhang et al. (2017) proved that O(N) hidden neurons are sufficient for 2-layer RELU networks to memorize arbitrary N pairs. Yun et al. (2019) showed that for deep RELU (or hard tanh) networks having at least 3 layers, O(N) parameters are sufficient. Vershynin (2020) proved a similar result for STEP (or RELU) networks with an additional logarithmic factor, i.e., Õ(N) parameters are sufficient, for memorizing arbitrary {xi : ‖xi‖2 = 1}Ni=1 satisfying ‖xi − xj‖22 = Ω( log log dmax log dmin ) and N, dmax = eO(d 1/5 min) where dmax and dmin denote the maximum and the minimum hidden dimensions, respectively.\nIn addition, the memorization power of modern network architectures has also been studied. Hardt and Ma (2017) showed that RELU networks consisting of residual blocks with O(N) hidden neurons can memorize arbitrary {xi : ‖xi‖2 = 1}Ni=1 satisfying ‖xi − xj‖2 ≥ ρ for some absolute constant ρ > 0. Nguyen and Hein (2018) studied a broader class of layers and proved that O(N) hidden neurons suffice for convolutional neural networks consisting of fully-connected, convolutional, and max-pooling layers for memorizing arbitrary N pairs having different patches.\nNecessary number of parameters for memorization. On the other hand, the necessary number of parameters for memorization has also been studied. Sontag (1997) showed that for any neural network using analytic definable activation functions, Ω(N) parameters are necessary for memorizing arbitrary\n3STEP denotes the binary threshold activation function: x 7→ 1[x ≥ 0].\nN pairs. Namely, given any network using analytic definable activation with o(N) parameters, there exists a set of N pairs that the network cannot memorize.\nThe Vapnik-Chervonenkis (VC) dimension is also closely related to the memorization power of neural networks. While the memorization power studies the number of parameters for memorizing arbitrary N pairs, the VC-dimension studies the number of parameters for memorizing at least a single set of N inputs with arbitrary labels. Hence, it naturally provides the lower bound on the necessary number of parameters for memorizing arbitrary N pairs. The VC-dimension of neural networks has been studied for various types of activation functions. For memorizing at least a single set of N inputs with arbitrary labels, it is known that Θ(N/ logN) parameters are necessary (Baum and Haussler, 1989) and sufficient (Maass, 1997) for STEP networks. Similarly, Karpinski and Macintyre (1997) proved that Ω( √ N/U) parameters are necessary for sigmoid networks of U neurons. Recently, Bartlett et al. (2019) showed that Θ(N/(L̄ logN)) parameters are necessary and sufficient for L-layer networks using any piecewise linear activation function where L̄ := 1WL ∑L `=1W` and W` denotes the number of parameters up to the `-th layer." }, { "heading": "2.2 BENEFITS OF DEPTH IN NEURAL NETWORKS", "text": "To understand deep learning, researchers have investigated the advantages of deep neural networks compared to shallow neural networks with a similar number of parameters. 
Initial results discovered examples of deep neural networks that cannot be approximated by shallow neural networks without using exponentially many parameters (Telgarsky, 2016; Eldan and Shamir, 2016; Arora et al., 2018). Recently, it is discovered that deep neural networks require fewer parameters than shallow neural networks to represent or approximate a class of periodic functions (Chatziafratis et al., 2020a;b). For approximating continuous functions, Yarotsky (2018) proved that the number of required parameters for RELU networks of constantly bounded depth are square to that for deep RELU networks." }, { "heading": "3 NOTATION AND PROBLEM SETUP", "text": "In this section, we introduce notation and the problem setup. We use log to denote the logarithm to the base 2. We let RELU be the function x 7→ max{x, 0}. For X ⊂ R, we denote bXc := {bxc : x ∈ X}. For n ∈ N and a set X , we denote (X n ) := {S ⊂ X : |S| = n} and [n] := {0, . . . , n− 1}. For x ≥ 0 and y > 0, we denote x mod y := x− y · bxy c.\nThroughout this paper, we consider fully-connected feedforward networks. In particular, we consider the following setup: Given an activation function σ, we define a neural network fθ of L layers (or equivalently L − 1 hidden layers), input dimension dx, output dimension 1, and hidden layer dimensions d1, . . . , dL−1 parameterized by θ as fθ := tL−1◦σ◦· · ·◦t1◦σ◦t0. Here, t` : Rd` → Rd`+1 is an affine transformation parameterized by θ.4 We denote a neural network using an activation function σ by a “σ network.” We define the width of fθ as max{d1, . . . , dL−1}. As we introduced in Section 1.1, our main results hold for any sigmoidal activation function and RELU. Here, we formally define the sigmoidal functions as follows. Definition 2. We say a function σ : R→ R is sigmoidal if the following conditions are satisfied.\n• Both limx→−∞ σ(x), limx→∞ σ(x) exist and limx→−∞ σ(x) 6= limx→∞ σ(x).\n• There exists z ∈ R such that σ is continuously differentiable at z and σ′(z) 6= 0.\nA class of sigmoidal functions covers many activation functions including sigmoid, tanh, hard tanh, etc.5 Furthermore, since hard tanh can be represented as a combination of two RELU functions, all results for sigmoidal activation functions hold for RELU as well.6\nLastly, we formally define the memorization as follows. Definition 3. Given C, dx ∈ N, a set of inputs X ⊂ Rdx , a label function y : X → [C], and a neural network fθ : Rdx → R parameterized by θ, we say fθ can memorize {(x, y(x)) : x ∈ X} in dx dimension with C classes if for any ε > 0, there exists θ such that |fθ(x)− y(x)| ≤ ε for all x ∈ X .\n4We set d0 := dx and dL := 1. 5hard tanh activation function: x 7→ −1[x ≤ −1] + x · 1[−1 < x ≤ 1] + 1[x > 1]. 6hard tanh(x) = RELU(x+ 1)− RELU(x− 1)− 1 = RELU ( 2− RELU(1− x) ) − 1\nDefinition 3 defines the memorizability as the ability of a network fθ to fit a set of input-label pairs. While existing results often define memorization only for the binary labels, we consider arbitrary C classes, and prove our results for general multi-class classification problems. We often write “fθ can memorize arbitrary N pairs” without “in dx dimension with C classes” throughout the paper." }, { "heading": "4 MAIN RESULTS", "text": "" }, { "heading": "4.1 MEMORIZATION VIA SUB-LINEAR PARAMETERS", "text": "Efficacy of depth for memorization. Now, we are ready to introduce our main theorem on memorizing N pairs with o(N) parameters. The proof of Theorem 1 is presented in Section 5.\nTheorem 1. 
" }, { "heading": "4 MAIN RESULTS", "text": "" }, { "heading": "4.1 MEMORIZATION VIA SUB-LINEAR PARAMETERS", "text": "Efficacy of depth for memorization. Now, we are ready to introduce our main theorem on memorizing $N$ pairs with $o(N)$ parameters. The proof of Theorem 1 is presented in Section 5.\nTheorem 1. For any $C, N, d_x \in \mathbb{N}$, for any $w \in [2/3, 1]$, for any $\Delta \ge 1$, and for any sigmoidal activation function $\sigma$, there exists a $\sigma$ network $f_\theta$ of $O\big(\log d_x + \log\Delta + \frac{N^{2-2w}}{1+(1.5w-1)\log N}\log C\big)$ hidden layers and $O(d_x + \log\Delta + N^w + N^{1-w/2}\log C)$ parameters such that $f_\theta$ can memorize any $\Delta$-separated set of $N$ pairs in $d_x$ dimension with $C$ classes.\nNote that Theorem 1 covers $w = 2/3$, which is not included in its informal version presented in Section 1.1. However, we could only achieve $O(N^{2/3})$ parameters at $w = 2/3$, as the $\log N$ term disappears. In addition, while we only address sigmoidal activation functions in the statement of Theorem 1, the same conclusion naturally holds for RELU, as we described in Section 3.\nIn Theorem 1, $\Delta$ only incurs an $O(\log\Delta)$ overhead in the number of layers and the number of parameters. As we introduced in Section 1.1, $\log\Delta$ for modern datasets is often very small. Furthermore, $\log\Delta$ is small for random inputs. For example, a set of $d_x$-dimensional i.i.d. standard normal random vectors of size $N$ satisfies $\log\Delta = O\big(\frac{1}{d_x}\log(N/\sqrt{\delta})\big)$ with probability at least $1-\delta$ (see Section C). Hence, the $\Delta$-separateness condition is often negligible.\nSuppose that $d_x$ and $C$ are treated as constants, as also assumed in existing results. Then, Theorem 1 implies that if $\log\Delta = O(N^w)$ for some $w < 1$, then $\Theta(N^w)$ (i.e., sub-linear in $N$) parameters are sufficient for sigmoidal or RELU networks to memorize an arbitrary $\Delta$-separated set of $N$ pairs. Furthermore, if $\log\Delta \le O(N^{2-2w}/\log N)$ and $w \in (\frac{2}{3}, 1)$, then the network construction in Theorem 1 has $O(N^{2-2w}/\log N)$ layers and $O(N^w)$ parameters. Note that the condition $\log\Delta \le O(N^{2-2w}/\log N)$ is very loose for many practical datasets, especially for those with huge $N$. Combined with the lower bound $\Omega(N/\log N)$ on the necessary number of parameters for RELU networks of constant depth (Bartlett et al., 2019), Theorem 1 implies that depth growing in $N$ is necessary and sufficient for memorizing a large class (i.e., $\Delta$-separated) of $N$ pairs with $o(N/\log N)$ parameters. In other words, deeper RELU networks have more memorization power.\nUnimportance of width for memorization. While depth is critical for the memorization power, we show that width is not. In particular, we prove that extremely narrow networks of width 3 can memorize with $O(N^{2/3})$ layers (i.e., $O(N^{2/3})$ parameters), as stated in the following theorem. The proof of Theorem 2 is presented in Section F.\nTheorem 2. For any $C, N, d_x \in \mathbb{N}$, for any $\Delta \ge 1$, and for any sigmoidal activation function $\sigma$, a $\sigma$ network of $\Theta(N^{2/3}\log C)$ hidden layers and width 3 can memorize any $\Delta$-separated set of $N$ pairs in $d_x$ dimension with $C$ classes.\nThe statement of Theorem 2 might be somewhat surprising, since the network width required for memorization does not depend on the input dimension $d_x$. This is in contrast with recent universal approximation results showing that width at least $d_x + 1$ is necessary for approximating functions with a $d_x$-dimensional domain (Lu et al., 2017; Hanin and Sellke, 2017; Johnson, 2019; Park et al., 2020). The main difference follows from the fundamental difference between the two approximation problems, i.e., approximating a function at finitely many inputs versus infinitely many inputs, e.g., the unit cube. Any set of $N$ input vectors can be easily mapped to $N$ distinct scalar values by a simple projection using an inner product; a small sketch of such a projection follows below. Hence, memorizing finite input-label pairs in $d_x$ dimension can be easily translated into memorizing finite input-label pairs in one dimension. 
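As a quick illustration of this projection argument, the following script (our own, not the paper's code) draws a random unit vector $u$ and confirms that the $N$ projections $u^\top x_i$ are distinct; Lemmas 5 and 14 below make this quantitative.

```python
# A sketch (ours): project N distinct vectors to N distinct scalars u^T x
# via a random unit direction u (cf. Lemma 5 and Lemma 14).
import numpy as np

rng = np.random.default_rng(0)
N, d_x = 1000, 64
X = rng.standard_normal((N, d_x))      # N distinct inputs

u = rng.standard_normal(d_x)
u /= np.linalg.norm(u)                 # random unit vector
proj = X @ u                           # N scalar values u^T x_i

gaps = np.abs(proj[:, None] - proj[None, :])
min_gap = gaps[~np.eye(N, dtype=bool)].min()
print(f"all distinct: {min_gap > 0}, smallest gap: {min_gap:.2e}")
```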
In other words, the dimensionality ($d_x$) of the inputs is not very important as long as they can be translated to distinct scalar values. In contrast, there is no “natural” way to design an injection from the $d_x$-dimensional unit cube to a lower-dimensional space. Namely, in order not to lose the “information” in the inputs, width $d_x$ for preserving the inputs is necessary. Therefore, function approximation in general cannot be done with width independent of $d_x$.\nExtension to regression problem. The results of Theorem 1 and Theorem 2 can be easily applied to the regression problem, i.e., when labels are from $[0, 1]$. This is because one can simply translate the regression problem with some $\varepsilon > 0$ error tolerance to the classification problem with $\lceil 1/\varepsilon\rceil$ classes. Here, each class $c \in \{0, 1, \dots, \lceil 1/\varepsilon\rceil\}$ corresponds to the target value $c\varepsilon$. Hence, the regression problem can also be solved within $\varepsilon$ error with $o(N)$ parameters, where the sufficient number of layers and the sufficient number of parameters are identical to the numbers in Theorem 1 and Theorem 2 with $\log C$ replaced by $\log(1/\varepsilon)$.\nRelation with benefits of depth in neural networks. Our observation that deeper RELU networks have more memorization power is closely related to the recent studies on the benefits of depth in neural networks (Telgarsky, 2016; Eldan and Shamir, 2016; Lin et al., 2017; Arora et al., 2018; Yarotsky, 2018; Chatziafratis et al., 2020a;b). While our observation indicates that depth is critical for the memorization power, these works mostly focused on showing the importance of depth for approximating functions. Here, the existing results on the benefits of depth for function approximation cannot directly imply the benefits of depth for memorization, since they often focus on specific classes of functions or require parameters far beyond $O(N)$." }, { "heading": "4.2 GENERIC CRITERIA FOR IDENTIFYING MEMORIZATION POWER", "text": "While Theorem 1 proves the existence of networks having $o(N)$ parameters for memorization, the following theorem states generic criteria for verifying the memorization power of network architectures. The proof of Theorem 3 is presented in Section G.\nTheorem 3. For some sigmoidal activation function $\sigma$, let $f_\theta$ be a $\sigma$ network of $L$ hidden layers having $d_\ell$ neurons at the $\ell$-th hidden layer. Then, for any $C, N, d_x \in \mathbb{N}$ and $\Delta \ge 1$, $f_\theta$ can memorize any $\Delta$-separated set of $N$ pairs in $d_x$ dimension with $C$ classes if the following statement holds:\nThere exist $0 < L_1 < \cdots < L_K < L$ for some $2 \le K \le \log N$ satisfying conditions 1–4 below.\n1. $d_\ell \ge 3$ for all $\ell \le L_K$ and $d_\ell \ge 7$ for all $L_K < \ell$.\n2. $\prod_{\ell=1}^{L_1} \lfloor (d_\ell+1)/2 \rfloor \ge \Delta\sqrt{2\pi d_x}$.\n3. $\sum_{\ell=L_{i-1}+1}^{L_i} (d_\ell - 2) \ge 2^{i+3}$ for all $1 < i \le K-1$.\n4. $2^K \cdot \big(\sum_{\ell=L_{K-1}+1}^{L_K} (d_\ell - 2)\big) \cdot \big\lfloor (L - L_K - 1)/\lceil\log C\rceil \big\rfloor - 4 \ge N^2$.\nOur criteria in Theorem 3 require that the layers of the network can be “partitioned” into $K+1$ distinct parts characterized by $L_1, \dots, L_K$ for some $K \ge 2$. Under this partition, we describe the four conditions in Theorem 3 in detail. The first condition suggests that the network width is not very critical. We note that $d_\ell \ge 7$ for $L_K < \ell$ does not contradict Theorem 2, as we highly optimize the network architecture to fit in width 3 for Theorem 2, while we provide generic criteria here.\nThe second condition considers the first $L_1$ hidden layers. In order to satisfy this condition, deep and narrow architectures are better than shallow and wide architectures under a similar budget for parameters, due to the product form $\prod_{\ell=1}^{L_1} \lfloor (d_\ell+1)/2 \rfloor$. 
Nevertheless, the architecture of the first $L_1$ hidden layers is not very critical, as only $\log\Delta + \frac{1}{2}\log(2\pi d_x)$ layers are sufficient even with width 3 (e.g., $\log\Delta < 17$ for the ImageNet dataset).\nThe third condition is closely related to the last condition: as $K$ increases, the LHS of the last condition increases. However, the third condition states that increasing $K$ requires more hidden neurons. Simply put, increasing $K$ by one requires doubling the hidden neurons from the $(L_1+1)$-th hidden layer to the $L_{K-1}$-th hidden layer. Nevertheless, this doubling of hidden neurons doubles the LHS of the last condition as well.\nThe last condition is simple. As we explained, $2^K$ is approximately proportional to the number of hidden neurons from the $(L_1+1)$-th hidden layer to the $L_{K-1}$-th hidden layer. The second term in the LHS of the last condition, $\sum_{\ell=L_{K-1}+1}^{L_K}(d_\ell - 2)$, is even simpler; it only requires counting hidden neurons. On the other hand, the last term counts the number of layers. This indicates that to satisfy the conditions in Theorem 3 using few parameters, the last layers of the network should be deep and narrow. In particular, we note that such a deep and narrow architecture in the last layers is indeed necessary for RELU networks to memorize with $o(N)$ parameters (Bartlett et al., 2019).\nNow, we describe how to show memorization with $o(N)$ parameters using our criteria. For simplicity, consider the network with the minimum width stated in the first condition, i.e., $d_\ell = 3$ for all $\ell \le L_K$ and $d_\ell = 7$ for all $L_K < \ell$. As we explained, the second condition can be easily satisfied. For the third condition, choosing $K = \log(N^{2/3})$, $\Theta(N^{2/3})$ hidden neurons (i.e., $\Theta(N^{2/3})$ hidden layers) would be sufficient, i.e., $L_{K-1} - L_1 = \Theta(N^{2/3})$. Likewise, we choose the remaining $L_i$ to satisfy $L_K - L_{K-1} = \Theta(N^{2/3})$ and $L - L_K = \Theta(N^{2/3})$. Then, the last condition is naturally satisfied while using only $\Theta(N^{2/3})$ parameters. A small script that mechanically checks these four conditions for a given architecture follows below.
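The following checker is our own code (not from the paper); it takes a width profile $d_1, \dots, d_L$ and cut points $L_1 < \cdots < L_K$ and evaluates the four conditions of Theorem 3 directly, which makes it easy to experiment with such architecture choices.

```python
# A checker (ours) for the four conditions of Theorem 3. `d` lists the hidden
# widths d_1..d_L; `cuts` lists L_1 < ... < L_K as 1-indexed layer indices.
import math

def theorem3_conditions(d, cuts, N, C, Delta, d_x):
    L, K, LK = len(d), len(cuts), cuts[-1]
    c1 = all(w >= 3 for w in d[:LK]) and all(w >= 7 for w in d[LK:])
    c2 = math.prod((w + 1) // 2 for w in d[:cuts[0]]) >= Delta * math.sqrt(2 * math.pi * d_x)
    # Condition 3 ranges over theorem indices 1 < i <= K-1 (vacuous for K = 2);
    # the loop variable i below is the theorem index minus one.
    c3 = all(sum(w - 2 for w in d[cuts[i - 1]:cuts[i]]) >= 2 ** (i + 4)
             for i in range(1, K - 1))
    last = sum(w - 2 for w in d[cuts[-2]:cuts[-1]])
    c4 = 2 ** K * last * ((L - LK - 1) // math.ceil(math.log2(C))) - 4 >= N * N
    return c1 and c2 and c3 and c4

# Example (assumes C >= 2): K = 2 with a deep, narrow tail.
print(theorem3_conditions([3] * 40 + [7] * 60, cuts=[10, 40],
                          N=30, C=2, Delta=2.0, d_x=16))  # True
```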
" }, { "heading": "5 PROOF OF THEOREM 1", "text": "In this section, we give a constructive proof of Theorem 1: we design a $\sigma$ network which can memorize $N$ inputs in $d_x$ dimension with $C$ classes using only $o(N)$ parameters. We note that the proofs of Theorem 2 and Theorem 3 also utilize similar constructions. To construct networks, we first introduce the following lemma, motivated by the RELU networks achieving nearly tight VC-dimension (Bartlett et al., 2019). The proof of Lemma 4 is presented in Section E.1.\nLemma 4. For any $C, V \in \mathbb{N}$ and $p \in [1/2, 1]$, there exists a $\sigma$ network $f_\theta$ of $O\big(\frac{V^{1-p}}{1+(p-0.5)\log V}\big)$ hidden layers and $O(V^p + V^{1/2}\log C)$ parameters such that $f_\theta$ can memorize any $\mathcal{X} \in \binom{\mathbb{R}}{V}$ with $C$ classes satisfying $\lfloor\mathcal{X}\rfloor = [V]$.\nRoughly speaking, Lemma 4 states the following: suppose that the $V$ inputs are scalar and well-separated, in the sense that exactly one input falls into each interval $[i, i+1)$. Then, such well-separated inputs can be mapped to the corresponding labels using only $O(V^{1/2})$ parameters (with $p = 1/2$). Thus, what remains is to build a network mapping $N$ arbitrary inputs to some well-separated set bounded by some $V = o(N^2)$, again using $o(N)$ parameters.\nProjecting input vectors to scalar values. Now, we introduce our network mapping a $\Delta$-separated set of $N$ inputs to some $\mathcal{Z}$ satisfying $\lfloor\mathcal{Z}\rfloor \in \binom{[V]}{N}$. First, we map the input vectors to scalar values using the following lemma. The proof of Lemma 5 is presented in Section E.2.\nLemma 5. For any $\Delta$-separated $\mathcal{X} \in \binom{\mathbb{R}^{d_x}}{N}$, there exist $v \in \mathbb{R}^{d_x}$ and $b \in \mathbb{R}$ such that $\lfloor\{v^\top x + b : x \in \mathcal{X}\}\rfloor \in \binom{[O(N^2\Delta d_x^{1/2})]}{N}$.\nLemma 5 states that any $\Delta$-separated $N$ vectors can be mapped to well-separated scalar values bounded by $O(N^2\Delta d_x^{1/2})$ by a simple projection using $O(d_x)$ parameters. Note that the direct combination of Lemma 4 and Lemma 5 gives us a $\sigma$ network of $O(N\Delta^{1/2}d_x^{1/4})$ parameters which can memorize any $\Delta$-separated $N$ pairs. However, this combination is limited in at least two senses: the required number of parameters (1) has a $\Delta^{1/2}d_x^{1/4}$ multiplicative factor and (2) is not sub-linear in $N$. In what follows, we introduce our techniques for resolving these two issues.\nReducing the upper bound to $O(N^2)$. First, we introduce the following lemma for improving the $\Delta^{1/2}d_x^{1/4}$ multiplicative factor in the number of parameters to $O(\log\Delta + \log d_x)$ additional parameters. The proof of Lemma 6 is presented in Section E.3.\nLemma 6. For any $\mathcal{X} \in \binom{\mathbb{R}}{N}$ such that $\lfloor\mathcal{X}\rfloor \in \binom{[K]}{N}$ for some $K \in \mathbb{N}$, there exists a $\sigma$ network $f$ of 1 hidden layer and width 3 such that $\lfloor f(\mathcal{X})\rfloor \in \binom{[T]}{N}$ where $T := \max\{\lceil K/2\rceil, \lfloor N^2/4 + 1\rfloor\}$.\nFrom Lemma 6, a network of $O(\log\Delta + \log d_x)$ hidden layers and width 3 can decrease the upper bound $O(N^2\Delta d_x^{1/2})$ to $\lfloor N^2/4 + 1\rfloor$, which enables us to drop the dependence on $\Delta d_x^{1/2}$ in the upper bound using $O(\log\Delta + \log d_x)$ parameters. The intuition behind Lemma 6 is that if the number of target intervals $T$ is large enough compared to the number of inputs $N$, then the inputs can be mapped without collision (i.e., no two inputs are mapped into the same interval) by some simple network with a small number of parameters. For example, we construct $f$ in Lemma 6 as\n$$f(x) := \begin{cases} x & \text{if } x \in [0, T) \\ x + b \bmod T & \text{if } x \in [T, K) \end{cases} \quad (1)$$\nfor some $b \in [T]$. However, if $T$ is not large enough compared to $N$ (i.e., $T < \lfloor N^2/4 + 1\rfloor$), then our network (1) cannot avoid collisions between inputs, i.e., $b \in [T]$ satisfying $\lfloor f(\mathcal{X})\rfloor \in \binom{[T]}{N}$ may not exist. Hence, combining Lemmas 4–6 only gives us a $\sigma$ network of $O(N + \log\Delta + \log d_x)$ parameters which can memorize any $\Delta$-separated $N$ pairs, i.e., the number of parameters is not sub-linear in $N$.\nReducing the upper bound to $o(N^2)$. To resolve this issue, we further decrease the upper bound using the following lemma. The proof of Lemma 7 is presented in Section E.4.\nLemma 7. For any $\mathcal{X} \in \binom{\mathbb{R}}{N}$ such that $\lfloor\mathcal{X}\rfloor \in \binom{[K]}{N}$ for some $K \in \mathbb{N}$, there exists a $\sigma$ network $f$ of 1 hidden layer and width $O(N^2/K)$ such that $\lfloor f(\mathcal{X})\rfloor \in \binom{[T]}{N}$ where $T := \max\{\lceil K/2\rceil, N\}$.\nThe network in Lemma 7 can reduce the upper bound by approximately half, beyond $\lfloor N^2/4 + 1\rfloor$; however, the required number of parameters doubles each time the current upper bound $K$ decreases by half. Hence, in order to decrease the upper bound from $O(N^2)$ to $V = o(N^2)$ using $\Theta(\log(N^2/V))$ applications of Lemma 7, we need $\Theta(\log(N^2/V))$ layers and $\Theta(N^2/V)$ parameters. Here, we construct each application of Lemma 7 using two hidden layers: one hidden layer of $\Theta(N^2/K)$ hidden neurons for implementing the function, and the other hidden layer of one hidden neuron for the output.\nDeriving Theorem 1. Now, we choose $V = \Theta(N^{2-w})$ for some $w \in [\frac{2}{3}, 1]$. Then, from Lemmas 5–7, $O(\log d_x + \log\Delta + \log N)$ hidden layers and $O(d_x + \log\Delta + N^w)$ parameters suffice for mapping any $\Delta$-separated set of size $N$ to some $\mathcal{Z}$ satisfying $\lfloor\mathcal{Z}\rfloor \in \binom{[V]}{N}$. Finally, from Lemma 4 with $p = \frac{w}{2-w}$, an additional $O\big(\frac{N^{2-2w}}{1+(1.5w-1)\log N}\log C\big)$ hidden layers and $O(N^w + N^{1-w/2}\log C)$ parameters suffice for mapping $\mathcal{Z}$ to its labels. This completes the proof of Theorem 1. A small numerical illustration of the compression map (1) follows below.
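The following toy script is our own, not the paper's code: it folds $[0, K)$ into $[0, T)$ with $T = \max\{\lceil K/2\rceil, \lfloor N^2/4+1\rfloor\}$ and finds a collision-free offset $b$ by brute force, which Lemma 6 guarantees to exist when $T > N^2/4$.

```python
# A toy version (ours) of the compression map (1): fold [0, K) into [0, T)
# by f(x) = x if x < T else (x + b) mod T, searching b in [T] for a
# collision-free fold.
import random

def find_offset(points, T):
    for b in range(T):
        out = [x if x < T else (x + b) % T for x in points]
        if len(set(out)) == len(points):   # no two points share an interval
            return b, out
    return None

random.seed(0)
N, K = 20, 200
T = max(-(-K // 2), N * N // 4 + 1)     # T = max(ceil(K/2), floor(N^2/4 + 1))
points = random.sample(range(K), N)     # N distinct integers in [0, K)
b, folded = find_offset(points, T)
print(f"b = {b}, K = {K} -> T = {T}, still distinct: {len(set(folded)) == N}")
```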
" }, { "heading": "6 EXPERIMENTS", "text": "In this section, we study the effect of depth and width. In particular, we empirically verify whether our theoretical findings extend to practice: can deep and narrow networks memorize more training pairs than their shallow and wide counterparts under a similar number of parameters? For the experiments, we use residual networks (He et al., 2016) having the same number of channels for each layer; a sketch of this architecture is given below, and the detailed experimental setups are presented in Section A. In the following experiments, we observe the training and test accuracy of networks by varying the number of channels ($c$) and the number of residual blocks ($b$)." }, { "heading": "6.1 DEPTH-WIDTH TRADE-OFF IN MEMORIZATION", "text": "We verify the memorization power of different network architectures having a similar number of parameters. Figure 1 illustrates the training and test accuracy of five different architectures with approximately 50000 parameters for classifying the CIFAR-10 dataset (Krizhevsky and Hinton, 2009) and the SVHN dataset (Netzer et al., 2011). One can observe that as the network architecture becomes deeper and narrower, the training accuracy increases. Namely, deep and narrow networks memorize better than shallow and wide networks under a similar number of parameters. This observation agrees with Theorem 1, which states that increasing depth reduces the required number of parameters for memorizing the same number of pairs.\nHowever, more memorization power does not always imply better generalization. In Figure 1b, as the depth increases, the test accuracy also increases for the SVHN dataset. In contrast, the test accuracy decreases for the CIFAR-10 dataset as the depth increases in Figure 1a. In other words, overfitting occurs for the CIFAR-10 dataset, while classifying the SVHN data receives benefits from more expressive power. Note that a similar observation has also been made in the recent double descent phenomenon (Belkin et al., 2019; Nakkiran et al., 2020): more expressive power can both hurt and improve generalization.\nIn addition, this observation can provide guidance on the design of network architectures in applications where the training accuracy and a small number of parameters are critical. For example, recent developments in video streaming services try to reduce traffic by compressing a video with neural networks and sending the compressed video together with the decoder network (Yeo et al., 2018). Here, the corresponding decoder is often trained only for the video to be sent; hence, only the training accuracy/loss matters, while the decoder's number of parameters should also be small to limit the traffic." }
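A sketch of the residual architecture used in these experiments follows (our own PyTorch code; Section A specifies only the block structure, so the 3x3 kernels with padding 1 are our assumption).

```python
# A sketch (ours) of the Section A architecture: a 3->c convolution + RELU,
# then b residual blocks x -> RELU(CONV(RELU(CONV(x))) + x) preserving shape,
# then average pooling and a fully-connected layer. Kernel size is assumed.
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.conv1 = nn.Conv2d(c, c, 3, padding=1)
        self.conv2 = nn.Conv2d(c, c, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv2(self.relu(self.conv1(x))) + x)

def make_net(c, b, num_classes=10):
    return nn.Sequential(
        nn.Conv2d(3, c, 3, padding=1), nn.ReLU(),
        *[ResBlock(c) for _ in range(b)],
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(c, num_classes),
    )

net = make_net(c=15, b=20)   # e.g., 15 channels and 20 residual blocks
print(sum(p.numel() for p in net.parameters()))  # parameter count
```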
, { "heading": "6.2 EFFECT OF WIDTH AND DEPTH", "text": "In this section, we observe the effect of depth and width by varying both. Figure 2 reports the training and test accuracy for the CIFAR-10 dataset, varying the number of channels from 5 to 30 and the number of residual blocks from 5 to 50. We present the same experimental results for the SVHN dataset in Section B. First, we observe that a network of 15 channels with feature map size $32\times 32$ successfully memorizes (i.e., over 99% training accuracy). This is much narrower than modern network architectures, e.g., ResNet-18 has 64 channels at the first hidden layer (He et al., 2016). On the other hand, networks that are too narrow (e.g., 5 channels) fail to memorize. This result does not contradict Theorem 2, since testing memorization in experiments/practice involves stochastic gradient descent. We note that similar phenomena are observed for the SVHN dataset.\nFurthermore, once a network memorizes, we observe that increasing width is more effective than increasing depth for improving test accuracy. These results indicate that width is not very critical for the memorization power, while it can be effective for generalization. Note that similar observations have been made in Zhang et al. (2017)." }, { "heading": "7 CONCLUSION", "text": "In this paper, we prove that $\Theta(N^{2/3})$ parameters are sufficient for memorizing arbitrary $N$ input-label pairs under the mild $\Delta$-separateness condition. This significantly improves upon the prior results showing the sufficiency of $\Theta(N)$ parameters with/without conditions on pairs. In addition, Theorem 1 shows that deeper networks have more memorization power. This result coincides with the recent study on the benefits of depth for function approximation. On the other hand, Theorem 2 shows that network width is not important for the memorization power. We also provide generic criteria for identifying the memorization power of networks. Finally, we empirically confirm our theoretical results." }, { "heading": "A EXPERIMENTAL SETUP", "text": "In this section, we describe the details of the residual network architectures and hyper-parameter setups.\nWe use residual networks of the following structure. First, a convolutional layer and RELU map a 3-channel input image to a $c$-channel feature map. Here, the size of the feature map is identical to the size of the input images. Then, we apply $b$ residual blocks, where each residual block maps $x \mapsto \mathrm{RELU}(\mathrm{CONV} \circ \mathrm{RELU} \circ \mathrm{CONV}(x) + x)$ while preserving the number of channels and the size of the feature map. Finally, we apply an average pooling layer and a fully-connected layer. We train the model for $5\times 10^5$ iterations with batch size 64 by stochastic gradient descent. We use an initial learning rate of 0.1, weight decay $10^{-4}$, and learning rate decay at the $1.5\times 10^5$-th iteration and the $3.5\times 10^5$-th iteration by a multiplicative factor of 0.1. All presented results are averaged over three independent trials." }, { "heading": "B TRAINING AND TEST ACCURACY FOR SVHN DATASET", "text": "" }, { "heading": "C ∆-SEPARATENESS OF GAUSSIAN RANDOM VECTORS", "text": "While we mentioned in Section 1.1 that the digital nature of data enables the $\Delta$-separateness of inputs with small $\Delta$, random inputs are also $\Delta$-separated with small $\Delta$ with high probability. In particular, we prove the following lemma. Lemma 8. For any $d_x \in \mathbb{N}$, consider a set of $N$ vectors $\mathcal{X} = \{x_1, \dots, x_N\} \subset \mathbb{R}^{d_x}$ where each entry of $x_i$ is drawn from the i.i.d. standard normal distribution. Then, for any $\delta > 0$, $\mathcal{X}$ is $\big((N/\sqrt{\delta})^{2/d_x}\sqrt{3e + \frac{5e}{d_x}\ln(N/\sqrt{\delta})}\big)$-separated with probability at least $1-\delta$.\nLemma 8 implies that Theorem 1 and Theorem 2 can be successfully applied to random Gaussian input vectors, as the $O\big((N/\sqrt{\delta})^{2/d_x}\sqrt{\ln(N/\sqrt{\delta})}\big)$-separateness condition in Lemma 8 is much weaker than our $2^{O(N^{2/3})}$-separateness condition for memorization with $o(N)$ parameters. A quick Monte Carlo check of this bound follows below.
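Before the formal proof, the following sanity check is our own script (not the paper's code); it takes $\Delta$ to be the max-to-min pairwise-distance ratio, as in the proof below, and compares it against the Lemma 8 bound.

```python
# A Monte Carlo check (ours) of Lemma 8: for i.i.d. standard normal inputs,
# the empirical Delta = (max pairwise distance) / (min pairwise distance)
# stays below (N/sqrt(delta))^{2/d_x} * sqrt(3e + (5e/d_x) ln(N/sqrt(delta))).
import numpy as np

rng = np.random.default_rng(0)
N, d_x, delta = 500, 32, 0.1
X = rng.standard_normal((N, d_x))

D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
pair = D[np.triu_indices(N, k=1)]
Delta = pair.max() / pair.min()
t = np.log(N / np.sqrt(delta))
bound = (N / np.sqrt(delta)) ** (2 / d_x) * np.sqrt(3 * np.e + 5 * np.e * t / d_x)
print(f"empirical Delta = {Delta:.2f}, log2(Delta) = {np.log2(Delta):.2f}, "
      f"Lemma 8 bound = {bound:.2f}")
```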
Proof of Lemma 8. To begin with, we first prove that for $i \ne j$, the following bound holds:\n$$P\Big(\sqrt{\tfrac{2}{e}}\big(\tfrac{N}{\sqrt{\delta}}\big)^{-2/d_x} \le \tfrac{1}{\sqrt{d_x}}\|x_i - x_j\|_2 \le \sqrt{6 + \tfrac{10}{d_x}\ln\tfrac{N}{\sqrt{\delta}}}\Big) = P\Big(\tfrac{2}{e}\big(\tfrac{N}{\sqrt{\delta}}\big)^{-4/d_x} \le \tfrac{1}{d_x}\|x_i - x_j\|_2^2 \le 6 + \tfrac{10}{d_x}\ln\tfrac{N}{\sqrt{\delta}}\Big) = P\Big(\tfrac{2}{e}\big(\tfrac{N}{\sqrt{\delta}}\big)^{-4/d_x} \le \tfrac{2}{d_x}X \le 6 + \tfrac{10}{d_x}\ln\tfrac{N}{\sqrt{\delta}}\Big)$$\n$$= 1 - P\Big(X < \tfrac{1}{e}\big(\tfrac{N}{\sqrt{\delta}}\big)^{-4/d_x} d_x\Big) - P\Big(X > \big(3 + \tfrac{5}{d_x}\ln\tfrac{N}{\sqrt{\delta}}\big) d_x\Big)$$\n$$\ge 1 - \Big(\tfrac{1}{e}\big(\tfrac{N}{\sqrt{\delta}}\big)^{-4/d_x} e^{1 - \frac{1}{e}(\frac{N}{\sqrt{\delta}})^{-4/d_x}}\Big)^{d_x/2} - \Big(\big(3 + \tfrac{5}{d_x}\ln\tfrac{N}{\sqrt{\delta}}\big) e^{-2}\big(\tfrac{N}{\sqrt{\delta}}\big)^{-5/d_x}\Big)^{d_x/2}$$\n$$= 1 - \tfrac{\delta}{N^2}\Big(e^{-\frac{1}{e}(\frac{N}{\sqrt{\delta}})^{-4/d_x}}\Big)^{d_x/2} - \tfrac{\delta}{N^2}\Big(\big(3 + \tfrac{5}{d_x}\ln\tfrac{N}{\sqrt{\delta}}\big) e^{-2}\big(\tfrac{N}{\sqrt{\delta}}\big)^{-1/d_x}\Big)^{d_x/2}$$\n$$\ge 1 - \tfrac{\delta}{N^2} - \tfrac{\delta}{N^2}\Big(\tfrac{3}{e^2 N^{1/d_x}} + \tfrac{5}{e^2}\cdot\tfrac{\ln(N/\sqrt{\delta})^{1/d_x}}{(N/\sqrt{\delta})^{1/d_x}}\Big)^{d_x/2} \ge 1 - \tfrac{\delta}{N^2} - \tfrac{\delta}{N^2}\Big(\tfrac{3}{e^2} + \tfrac{5}{e^3}\Big)^{d_x/2} \ge 1 - \tfrac{\delta}{N^2} - \tfrac{\delta}{N^2} = 1 - \tfrac{2\delta}{N^2} \quad (2)$$\nwhere $X$ denotes a chi-square random variable with $d_x$ degrees of freedom. For the first inequality in (2), we use the inequalities\n$$P(X < z d_x) \le \inf_{t>0} \tfrac{E[e^{-tX}]}{e^{-t z d_x}} = \inf_{t>0} \tfrac{(1+2t)^{-d_x/2}}{e^{-t z d_x}} = (z e^{1-z})^{d_x/2} \text{ for } 0 < z < 1,$$\n$$P(X > z d_x) \le \inf_{t>0} \tfrac{E[e^{tX}]}{e^{t z d_x}} = \inf_{0<t<1/2} \tfrac{(1-2t)^{-d_x/2}}{e^{t z d_x}} = (z e^{1-z})^{d_x/2} \text{ for } z > 1,$$\nwhich directly follow from the Chernoff bound for the chi-square distribution. For the third inequality in (2), we use the fact that $\max_{x>0} (\ln x)/x = 1/e$ and $N \ge 1$. For the last inequality in (2), we use $\frac{3}{e^2} + \frac{5}{e^3} < 1$.\nThen, $\mathcal{X}$ is $\big((N/\sqrt{\delta})^{2/d_x}\sqrt{3e + \frac{5e}{d_x}\ln(N/\sqrt{\delta})}\big)$-separated with probability at least $1-\delta$, as the following bound holds:\n$$P\Big(\sqrt{\tfrac{2}{e}}\,\delta^{1/d_x} N^{-2/d_x} \le \tfrac{1}{\sqrt{d_x}}\|x_i - x_j\|_2 \le \sqrt{6 + \tfrac{10}{d_x}\ln\tfrac{N}{\sqrt{\delta}}},\ \forall i \ne j\Big) = 1 - P\Big(\exists i \ne j \text{ s.t. } \tfrac{1}{\sqrt{d_x}}\|x_i - x_j\|_2 \text{ violates these bounds}\Big)$$\n$$\ge 1 - \sum_{i \ne j}\Big(1 - P\Big(\sqrt{\tfrac{2}{e}}\,\delta^{1/d_x} N^{-2/d_x} \le \tfrac{1}{\sqrt{d_x}}\|x_i - x_j\|_2 \le \sqrt{6 + \tfrac{10}{d_x}\ln\tfrac{N}{\sqrt{\delta}}}\Big)\Big) \ge 1 - \tfrac{N(N-1)}{2}\cdot\tfrac{2\delta}{N^2} \ge 1 - \delta,$$\nwhere the first inequality follows from the union bound and the second inequality follows from (2). This completes the proof of Lemma 8." }, { "heading": "D MAIN PROOF IDEA: MEMORIZATION USING STEP AND ID", "text": "" }, { "heading": "D.1 MAIN IDEA", "text": "In the proofs of Theorem 1, Theorem 2, and Theorem 3, we use each sigmoidal activation for approximating either the identity function ($\mathrm{ID}: x \mapsto x$) or the binary step function ($\mathrm{STEP}: x \mapsto \mathbb{1}[x \ge 0]$). Thus, we construct our networks for the proofs using only ID and STEP activation functions, which directly provides the network construction using sigmoidal activation functions (see Section D.2 and Section D.3 for details)." }, { "heading": "D.2 TOOLS", "text": "We present the following claims, which are important for the proofs. In particular, Claim 9 and Claim 10 show how to approximate ID and STEP by a single sigmoidal neuron. Claim 9 [Kidger and Lyons (2020, Lemma 4.1)]. For any sigmoidal activation function $\sigma$, for any bounded interval $\mathcal{I} \subset \mathbb{R}$, and for any $\varepsilon > 0$, there exist $a, b, c, d \in \mathbb{R}$ such that $|a\sigma(cx + d) + b - x| < \varepsilon$ for all $x \in \mathcal{I}$. Claim 10. For any sigmoidal activation function $\sigma$, for any $\varepsilon, \delta > 0$, there exist $a, b, c \in \mathbb{R}$ such that $|a\sigma(cx) + b - \mathbb{1}[x \ge 0]| < \varepsilon$ for all $x \notin [-\delta, \delta]$.\nProof of Claim 10. We assume that $\alpha := \lim_{x\to-\infty}\sigma(x) < \lim_{x\to\infty}\sigma(x) =: \beta$; the case $\beta < \alpha$ can be proved in a similar manner. From the definition of $\alpha, \beta$, there exists $k > 0$ such that $|\sigma(x) - \alpha| < (\beta-\alpha)\varepsilon$ if $x < -k$ and $|\sigma(x) - \beta| < (\beta-\alpha)\varepsilon$ if $x > k$. Then, choosing $a = \frac{1}{\beta-\alpha}$, $b = -\frac{\alpha}{\beta-\alpha}$, and $c = \frac{k}{\delta}$ completes the proof of Claim 10. A small numerical illustration of Claims 9 and 10 follows below.
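The following script is our own, instantiating Claims 9 and 10 with $\sigma = \tanh$, one admissible sigmoidal function: a single neuron $a\sigma(cx + d) + b$ recovers ID on a bounded interval (small $c$) and STEP outside $[-\delta, \delta]$ (large $c$).

```python
# A numerical illustration (ours) of Claims 9 and 10 with sigma = tanh.
import numpy as np

x = np.linspace(-2, 2, 2001)

# Claim 9 (ID): tanh(c*x)/c -> x as c -> 0, i.e., a = 1/c and b = d = 0.
c_id = 1e-3
id_err = np.max(np.abs(np.tanh(c_id * x) / c_id - x))

# Claim 10 (STEP): 0.5*tanh(c*x) + 0.5 -> 1[x >= 0] off [-delta, delta];
# here alpha = -1 and beta = 1, so a = b = 1/2 with c large.
delta, c_step = 0.05, 1e4
mask = np.abs(x) > delta
step_err = np.max(np.abs(0.5 * np.tanh(c_step * x[mask]) + 0.5 - (x[mask] >= 0)))

print(f"ID error on [-2, 2]: {id_err:.1e}; STEP error off [-0.05, 0.05]: {step_err:.1e}")
```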
Claim 11. For any $a, x \in \mathbb{R}$ such that $a \ne 0$, and for any $b \in \mathbb{N}$, it holds that $\big\lceil\frac{x}{ab}\big\rceil = \big\lceil\frac{\lceil x/a\rceil}{b}\big\rceil$.\nProof of Claim 11. It trivially holds that $\lceil\frac{x}{ab}\rceil \le \lceil\frac{\lceil x/a\rceil}{b}\rceil$. Now, we derive a contradiction from $\lceil\frac{x}{ab}\rceil < \lceil\frac{\lceil x/a\rceil}{b}\rceil$. Suppose that $\lceil\frac{x}{ab}\rceil < \lceil\frac{\lceil x/a\rceil}{b}\rceil$. Then, there exists an integer $m$ such that\n$$\frac{x}{ab} \le m < \frac{\lceil x/a\rceil}{b}$$\nand hence\n$$\frac{x}{a} \le bm < \Big\lceil\frac{x}{a}\Big\rceil.$$\nSince $bm$ is an integer at least $\frac{x}{a}$, this implies $\lceil\frac{x}{a}\rceil \le bm < \lceil\frac{x}{a}\rceil$, which is a contradiction. This completes the proof of Claim 11." }, { "heading": "D.3 TRANSFORMING STEP+ID NETWORK TO SIGMOIDAL NETWORK", "text": "In this section, we describe how to transform a STEP + ID network into a sigmoidal network within arbitrary error. Formally, we prove the following lemma. Lemma 12. For any finite set of inputs $\mathcal{X}$, for any STEP + ID network $f$, for any $\varepsilon > 0$, and for any sigmoidal activation function $\sigma$, there exists a $\sigma$ network $g$ having the same architecture as $f$ such that $|f(x) - g(x)| < \varepsilon$ for all $x \in \mathcal{X}$.\nThe construction in Lemma 12 enables us to construct STEP + ID networks instead of networks using a sigmoidal activation function for proving Theorem 1, Theorem 2, and Theorem 3.\nProof of Lemma 12. Without loss of generality, we first assume that for any STEP + ID network $h$, in the evaluation of $h(x)$, all inputs to STEP neurons (i.e., $\mathbb{1}[x \ge 0]$) are non-zero for all $x \in \mathcal{X}$. This assumption can easily be satisfied by adding some small bias to the inputs of the STEP neurons, where such a bias always exists since $|\mathcal{X}| < \infty$. Furthermore, introducing this assumption does not change $h(x)$ for any $x \in \mathcal{X}$. Now, we describe our construction of $g$. Let $\delta > 0$ be a number such that the absolute value of every input to a STEP neuron in the evaluation of $f(x)$, over all $x \in \mathcal{X}$, is at least $\delta$. Such a $\delta$ always exists due to the assumption we made. Let $L$ be the number of hidden layers in $f$. Starting from $f$, we iteratively substitute the STEP and ID hidden neurons with the sigmoidal activation $\sigma$, from the last hidden layer to the first hidden layer. In particular, by using Claim 9 and Claim 10, we replace ID and STEP neurons in the $\ell$-th hidden layer by $\sigma$ neurons approximating ID and STEP.\nFirst, let $g_L$ be a network identical to $f$ except for its $L$-th hidden layer, which consists of $\sigma$ neurons approximating the ID and STEP neurons of $f$. Here, we approximate the ID and STEP neurons by $\sigma$ accurately enough, using Claim 9 and Claim 10, so that $|g_L(x) - f(x)| < \varepsilon/L$ for all $x \in \mathcal{X}$. Note that such an approximation always exists due to the existence of $\delta > 0$. Next, let $g_{L-1}$ be a network identical to $g_L$ except for its $(L-1)$-th hidden layer, which consists of $\sigma$ neurons approximating the ID and STEP neurons of $g_L$. Here, we also approximate the ID and STEP neurons by $\sigma$ accurately enough, using Claim 9 and Claim 10, so that $|g_L(x) - g_{L-1}(x)| < \varepsilon/L$ for all $x \in \mathcal{X}$. If we repeat this procedure until replacing the first hidden layer, then $g := g_1$ is the desired network satisfying $|f(x) - g(x)| < \varepsilon$ for all $x \in \mathcal{X}$. This completes the proof of Lemma 12." }, { "heading": "E PROOF OF LEMMAS FOR THEOREM 1", "text": "For proving each of Lemmas 4–7, we prove a stronger technical lemma, as stated in Sections E.1–E.4, and transform the STEP + ID networks to $\sigma$ networks using Lemma 12." }, { "heading": "E.1 PROOF OF LEMMA 4", "text": "To prove Lemma 4, we introduce the following lemma. We note that Lemma 13 follows the construction for proving the VC-dimension lower bound of RELU networks (Bartlett et al., 2019). The proof of Lemma 13 is presented in Section H.4.
Lemma 13. For any $A, B, D, K, R \in \mathbb{N}$ such that $AB \ge K$, there exists a STEP + ID network $f_\theta$ of $2\lceil\frac{BD}{R}\rceil + 2$ hidden layers and $4A + \big((2R+5)2^R + 2R^2 + 8R + 7\big)\lceil\frac{BD}{R}\rceil - R2^R - R^2 + 3$ parameters satisfying the following property: for any finite set $\mathcal{X} \subset [0, K)$, for any $y : [K] \to [2^D]$, there exists $\theta$ such that $f_\theta(x) = y(\lfloor x\rfloor)$ for all $x \in \mathcal{X}$.\nBy choosing $K \leftarrow V$, $A \leftarrow \Theta(V^p)$, $B \leftarrow \Theta(V^{1-p})$, $R \leftarrow 1 + (2p-1)\log V$, and $D \leftarrow \lceil\log C\rceil$ in Lemma 13, there exists a STEP + ID network of $O\big(\frac{V^{1-p}}{1+(p-0.5)\log V}\log C\big)$ hidden layers and $O(V^p)$ parameters which can memorize any $\mathcal{X} \subset \mathbb{R}$ of size $V$ with $C$ classes satisfying $\lfloor\mathcal{X}\rfloor = [V]$. Combining this with Lemma 12 completes the proof of Lemma 4." }, { "heading": "E.2 PROOF OF LEMMA 5", "text": "Since the proof is trivial when $d_x = 1$, we consider $d_x \ge 2$. To this end, we first project all $x \in \mathcal{X}$ to $u^\top x \in \mathbb{R}$ by choosing some unit vector $u \in \mathbb{R}^{d_x}$ so that\n$$\frac{\max_{\{x,x'\}\in\binom{\mathcal{X}}{2}} |u^\top(x - x')|}{\min_{\{x,x'\}\in\binom{\mathcal{X}}{2}} |u^\top(x - x')|} < N^2\Delta\sqrt{\frac{\pi d_x}{8}}. \quad (3)$$\nSuch a unit vector $u$ always exists due to the following lemma. The proof of Lemma 14 is presented in Section H.1.\nLemma 14. For any $N, d_x \in \mathbb{N}$, for any $\mathcal{X} \in \binom{\mathbb{R}^{d_x}}{N}$, there exists a unit vector $u \in \mathbb{R}^{d_x}$ such that $\sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2}\|x - x'\|_2 \le |u^\top(x - x')| \le \|x - x'\|_2$ for all $x, x' \in \mathcal{X}$.\nFinally, we construct the desired map by\n$$x \mapsto \frac{u^\top x - \min\{u^\top x : x \in \mathcal{X}\}}{\min_{\{x,x'\}\in\binom{\mathcal{X}}{2}} |u^\top(x - x')|} =: v^\top x + b$$\nfor some $v \in \mathbb{R}^{d_x}$ and $b \in \mathbb{R}$, so that $\lfloor\{v^\top x + b : x \in \mathcal{X}\}\rfloor \in \binom{[\lceil N^2\Delta\sqrt{\pi d_x/8}\rceil]}{N}$. This completes the proof of Lemma 5." }, { "heading": "E.3 PROOF OF LEMMA 6", "text": "To prove Lemma 6, we introduce the following lemma. The proof of Lemma 15 is presented in Section H.2.\nLemma 15. For any $N, K, d \in \mathbb{N}$, for any $\mathcal{X}$ such that $\lfloor\mathcal{X}\rfloor \in \binom{[K]}{N}$, there exists a STEP + ID network $f$ of 1 hidden layer and width $d$ such that $\lfloor f(\mathcal{X})\rfloor \in \binom{[T]}{N}$ where $T := \max\big\{\big\lceil\frac{K}{\lfloor(d+1)/2\rfloor}\big\rceil, \lfloor\frac{N^2}{4} + 1\rfloor\big\}$.\nFrom Lemma 15 with $d = 3$, a STEP + ID network of 1 hidden layer consisting of 3 hidden neurons can map any $\mathcal{Z}$ such that $\lfloor\mathcal{Z}\rfloor \in \binom{[K]}{N}$ to $\mathcal{Z}'$ such that $\lfloor\mathcal{Z}'\rfloor \in \binom{[T]}{N}$ where $T := \max\{\lceil K/2\rceil, \lfloor N^2/4 + 1\rfloor\}$. Combining Lemma 15 and Lemma 12 completes the proof of Lemma 6." }, { "heading": "E.4 PROOF OF LEMMA 7", "text": "In the compression 2 step, we further improve the bound $U := \lfloor\frac{N^2}{4} + 1\rfloor$ on the hidden feature values to $V = \Theta(N^{2-w})$. Namely, we map $\mathcal{X}_3$ to $\mathcal{X}_4$ such that $\lfloor\mathcal{X}_4\rfloor \in \binom{[V]}{N}$. To construct such a mapping, we introduce the following lemma. The proof of Lemma 16 is presented in Section H.3. Lemma 16. For any $N, K, L, d_1, \dots, d_L \in \mathbb{N}$ such that $N < K$ and $d_\ell \ge 3$ for all $\ell$, for any $\mathcal{X}$ such that $\lfloor\mathcal{X}\rfloor \in \binom{[K]}{N}$, there exists a STEP + ID network $f$ of $L$ hidden layers having $d_\ell$ neurons at the $\ell$-th hidden layer such that $\lfloor f(\mathcal{X})\rfloor \in \binom{[T]}{N}$ where $T := \min\big\{K, \max\big\{N\lceil\frac{N}{C}\rceil, \lceil\frac{K}{2}\rceil\big\}\big\}$ and $C := \lfloor\frac{1}{2} + \frac{1}{2}\sum_{\ell=1}^{L}(d_\ell - 2)\rfloor$.\nFrom Lemma 16, a STEP + ID network $f$ of one hidden layer having $\Theta\big(\frac{N^2}{K}\big)$ hidden neurons can map any $\mathcal{X}$ such that $\lfloor\mathcal{X}\rfloor \in \binom{[K]}{N}$ to $f(\mathcal{X})$ such that $\lfloor f(\mathcal{X})\rfloor \in \binom{[T]}{N}$ where $T := \max\{N, \lceil\frac{K}{2}\rceil\}$. Combining this and Lemma 12 completes the proof of Lemma 7." }, { "heading": "F PROOF OF THEOREM 2", "text": "In this proof, we construct a STEP + ID network and transform it to a $\sigma$ network using Lemma 12, as in the proof of Theorem 1 in Section 5 and Section E. In particular, Theorem 2 is a direct consequence of Lemma 14, Lemma 15, Lemma 16, and Lemma 17, presented as follows. The proof of Lemma 17 is presented in Section H.7.
Lemma 17. For any $A, B, D, K \in \mathbb{N}$ such that $AB \ge K$, there exists a STEP + ID network $f_\theta$ of $A + (2D+1)B$ hidden layers and width 3 satisfying the following property: for any finite set $\mathcal{X} \subset [0, K)$, for any $y : [K] \to [2^D]$, there exists $\theta$ such that $f_\theta(x) = y(\lfloor x\rfloor)$ for all $x \in \mathcal{X}$.\nFrom Lemma 14, Lemma 15, and Claim 11, a STEP + ID network of $\lceil\log(\Delta\sqrt{2\pi d_x})\rceil$ hidden layers and width 3 can map a $\Delta$-separated set of inputs $\mathcal{X}_1$ to $\mathcal{X}_2$ such that $\lfloor\mathcal{X}_2\rfloor \in \binom{[U]}{N}$ where $U := \lfloor\frac{N^2}{4} + 1\rfloor$. From Lemma 16, a STEP + ID network of $\sum_{i=1}^{\lceil\log(U/V)\rceil-1}\big(\big\lceil\frac{2N}{\lfloor U/(2^i N)\rfloor}\big\rceil - 1\big) + \big\lceil\frac{2N}{\lfloor V/N\rfloor}\big\rceil - 1$ hidden layers and width 3 can map $\mathcal{X}_2$ to $\mathcal{X}_3$ such that $\lfloor\mathcal{X}_3\rfloor \in \binom{[V]}{N}$ for some $N \le V \le U$. Here, we will choose $V = \Theta(N^{4/3})$. Finally, from Lemma 17, for any $A, B \in \mathbb{N}$ such that $AB \ge V$, a STEP + ID network of $A + (2D+1)B$ hidden layers and width 3 can map $\mathcal{X}_3$ to their labels, where $D := \lceil\log C\rceil$. We will choose $A, B = \Theta(N^{2/3})$ to satisfy $AB \ge V = \Theta(N^{4/3})$. Hence, for any $A, B, V \in \mathbb{N}$ such that $N \le V \le U$ and $AB \ge V$, a STEP + ID network of $\lceil\log(\Delta\sqrt{2\pi d_x})\rceil + \sum_{i=1}^{\lceil\log(U/V)\rceil-1}\big(\big\lceil\frac{2N}{\lfloor U/(2^i N)\rfloor}\big\rceil - 1\big) + \big\lceil\frac{2N}{\lfloor V/N\rfloor}\big\rceil + A + (2D+1)B - 1$ hidden layers and width 3 can memorize an arbitrary $\Delta$-separated set of size $N$. Note that combining functions does not require an additional hidden layer, as the linear map constructing the output of one function can be absorbed into the first linear map of the next function.\nFinally, substituting $V \leftarrow \lceil N^{4/3}\rceil$, $A \leftarrow \lceil\sqrt{(2D+1)V}\rceil$, and $B \leftarrow \lceil\sqrt{V/(2D+1)}\rceil$ and using Lemma 12 results in the statement of Theorem 2. This completes the proof of Theorem 2." }, { "heading": "G PROOF OF THEOREM 3", "text": "The proof of Theorem 3 has the same structure as the proof of Theorem 1, consisting of four steps: projection, compression 1, compression 2, and learning. In particular, we construct a STEP + ID network and use Lemma 12, as in the proofs of Theorem 1 and Theorem 2.\nFor the network construction, we divide the function of $f_\theta$ into four disjoint parts, as in Section 5. The first part does not utilize any hidden layers and projects the input vectors to scalar values. The second part, corresponding to the first $L_1$ hidden layers, decreases the upper bound on the scalar values to $O(N^2)$. The third part, corresponding to the next $L_{K-1} - L_1$ hidden layers, further decreases the upper bound to $o(N^2)$. The last part, corresponding to the remaining hidden layers, constructs a network mapping hidden features to their labels.\nNow, we describe our construction in detail. To begin with, let us denote a $\Delta$-separated set of inputs by $\mathcal{X}_1$. First, from Lemma 14, one can project $\mathcal{X}_1$ to $\mathcal{X}_2$ such that $\lfloor\mathcal{X}_2\rfloor \in \binom{[\lceil N^2\Delta\sqrt{\pi d_x/8}\rceil]}{N}$. Note that the projection step does not require hidden layers, as it can be absorbed into the linear map before the next hidden layer. Then, from Lemma 15, the first $L_1$ hidden layers can map $\mathcal{X}_2$ to $\mathcal{X}_3$ such that $\lfloor\mathcal{X}_3\rfloor \in \binom{[\lfloor N^2/4+1\rfloor]}{N}$ since\n$$\prod_{\ell=1}^{L_1}\Big\lfloor\frac{d_\ell+1}{2}\Big\rfloor \ge \Delta\sqrt{2\pi d_x} \;\Rightarrow\; \prod_{\ell=1}^{L_1}\Big\lfloor\frac{d_\ell+1}{2}\Big\rfloor \ge \frac{N^2\Delta\sqrt{\pi d_x/8}}{N^2/4} \;\Rightarrow\; \frac{N^2}{4} \ge \frac{N^2\Delta\sqrt{\pi d_x/8}}{\prod_{\ell=1}^{L_1}\lfloor(d_\ell+1)/2\rfloor} \;\Rightarrow\; \Big\lfloor\frac{N^2}{4}+1\Big\rfloor \ge \Bigg\lceil\frac{N^2\Delta\sqrt{\pi d_x/8}}{\prod_{\ell=1}^{L_1}\lfloor(d_\ell+1)/2\rfloor}\Bigg\rceil.$$\nNote that we also utilize Claim 11 for the sequential application of Lemma 15. 
Consecutively, from Lemma 16, the next $L_{K-1} - L_1$ hidden layers can map $\mathcal{X}_3$ to $\mathcal{X}_4$ such that $\lfloor\mathcal{X}_4\rfloor \in \binom{[\lceil U/2^{K-2}\rceil]}{N}$, since the third condition holds and\n$$\sum_{\ell=L_{i-1}+1}^{L_i}(d_\ell - 2) \ge 2^{i+3} \;\Rightarrow\; \Big\lfloor\frac{1}{2} + \frac{1}{2}\sum_{\ell=L_{i-1}+1}^{L_i}(d_\ell - 2)\Big\rfloor \ge 2^{i+2} \;\Rightarrow\; \Bigg\lceil\frac{N}{\big\lfloor\frac{1}{2} + \frac{1}{2}\sum_{\ell=L_{i-1}+1}^{L_i}(d_\ell - 2)\big\rfloor}\Bigg\rceil \le \frac{N}{2^{i+1}} \;\Rightarrow\; N\Bigg\lceil\frac{N}{\big\lfloor\frac{1}{2} + \frac{1}{2}\sum_{\ell=L_{i-1}+1}^{L_i}(d_\ell - 2)\big\rfloor}\Bigg\rceil \le \frac{N^2/4}{2^{i-1}} \le \frac{\lfloor N^2/4+1\rfloor}{2^{i-1}} \le \Bigg\lceil\frac{\lfloor N^2/4+1\rfloor}{2^{i-1}}\Bigg\rceil,$$\nwhere we use the inequality $\lceil a\rceil \le 2a$ for $a \ge 1/2$ and the assumption $K \le \log N$, i.e., $N/2^{i+1} \ge 1$ for $i \le \log N - 1$. Here, we also utilize Claim 11 for the sequential application of Lemma 16. Now, we reinterpret the last condition as follows:\n$$2^K\Big(\sum_{\ell=L_{K-1}+1}^{L_K}(d_\ell - 2)\Big)\Big\lfloor\frac{L - L_K - 1}{\lceil\log C\rceil}\Big\rfloor - 4 \ge N^2 \;\Rightarrow\; \Big(\sum_{\ell=L_{K-1}+1}^{L_K}(d_\ell - 2)\Big)\Big\lfloor\frac{L - L_K - 1}{\lceil\log C\rceil}\Big\rfloor \ge \frac{N^2/4 + 1}{2^{K-2}} \;\Rightarrow\; \Big(\sum_{\ell=L_{K-1}+1}^{L_K}(d_\ell - 2)\Big)\Big\lfloor\frac{L - L_K - 1}{\lceil\log C\rceil}\Big\rfloor \ge \Bigg\lceil\frac{\lfloor N^2/4+1\rfloor}{2^{K-2}}\Bigg\rceil.$$\nFinally, using the following lemma and the above inequality, the remaining hidden layers can map $\mathcal{X}_4$ to their corresponding labels (by choosing $L' := L_K$). The proof of Lemma 18 is presented in Section H.8. Lemma 18. For $D, K, L, d_1, \dots, d_L \in \mathbb{N}$, suppose that there exist $0 < L' < L$ and $r_{L'+1}, \dots, r_{L-1} \in \mathbb{N}$ satisfying, for $r_{L'} = r_L = 1$,\n$$\sum_{\ell=1}^{L'}(d_\ell - 2) \cdot \Bigg\lfloor\frac{\sum_{\ell=L'+1}^{L-1} r_\ell}{D}\Bigg\rfloor \ge K \quad\text{and}\quad 2^{r_\ell} + r_\ell + r_{\ell-1} + 3 \le d_\ell \text{ for all } L'+1 \le \ell \le L.$$\nThen, there exists a STEP + ID network $f_\theta$ of $L$ hidden layers having $d_\ell$ hidden neurons at the $\ell$-th hidden layer such that for any finite $\mathcal{X} \subset [0, K)$, for any $y : [K] \to [2^D]$, there exists $\theta$ satisfying $f_\theta(x) = y(\lfloor x\rfloor)$ for all $x \in \mathcal{X}$.\nChoosing $K \leftarrow \lceil U/2^{K-2}\rceil$ and $r_\ell = 1$ for all $\ell$ in Lemma 18 completes our construction of the STEP + ID network. Note that we utilize the condition that $d_\ell \ge 7$ for all $L_K < \ell$ here. Using Lemma 12 completes the proof of Theorem 3." }, { "heading": "H PROOFS OF TECHNICAL LEMMAS", "text": "" }, { "heading": "H.1 PROOF OF LEMMA 14", "text": "Since the upper bound holds for any unit vector $u \in \mathbb{R}^{d_x}$, we focus on the lower bound. In addition, if $N = 1$, the result of Lemma 14 trivially holds. Hence, we assume $N \ge 2$, i.e., $\sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2} < 1$. In this proof, we show that given any vector $v \in \mathbb{R}^{d_x}$ such that $\|v\|_2 \ne 0$, a random unit vector $u \in \mathbb{R}^{d_x}$ from the uniform distribution satisfies\n$$P\bigg(\frac{|u^\top v|}{\|v\|_2} < \sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2}\bigg) < \frac{2}{N^2}. \quad (4)$$\nThis implies that there exists a unit vector $u \in \mathbb{R}^{d_x}$ such that $\sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2}\|x - x'\|_2 \le |u^\top(x - x')|$ for all $x, x' \in \mathcal{X}$, due to the following union bound: for $\mathcal{V} = \{x - x' : \{x, x'\} \in \binom{\mathcal{X}}{2}, x \le x'\}$ for some total order $\le$ on $\mathcal{X}$,\n$$P\bigg(\bigcup_{v\in\mathcal{V}}\bigg\{\frac{|u^\top v|}{\|v\|_2} < \sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2}\bigg\}\bigg) \le \sum_{v\in\mathcal{V}} P\bigg(\frac{|u^\top v|}{\|v\|_2} < \sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2}\bigg) < \frac{N(N-1)}{2}\times\frac{2}{N^2} < 1.$$\nNow we prove (4). To begin with, we note that the following equality holds for any $v \in \mathbb{R}^{d_x}$ with $\|v\|_2 \ne 0$: $P\big(\frac{|u^\top v|}{\|v\|_2} < \sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2}\big) = P\big(|u_1| < \sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2}\big)$. Here, the equality follows from choosing $v/\|v\|_2 = (1, 0, \dots, 0)$ using symmetry. 
Furthermore, $P\big(|u_1| < \sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2}\big)$ can be bounded by\n$$P\bigg(|u_1| < \sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2}\bigg) = 2\,P\bigg(0 < u_1 < \sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2}\bigg) = \frac{2}{\mathrm{Area}_{d_x}(1)}\int_{\arccos\big(\sqrt{8/(\pi d_x)}/N^2\big)}^{\pi/2} \mathrm{Area}_{d_x-1}(\sin\phi)\, d\phi = 2\cdot\frac{\mathrm{Area}_{d_x-1}(1)}{\mathrm{Area}_{d_x}(1)}\int_{\arccos\big(\sqrt{8/(\pi d_x)}/N^2\big)}^{\pi/2} \sin^{d_x-2}\phi\, d\phi$$\n$$= 2\cdot\frac{2\pi^{\frac{d_x-1}{2}}/\Gamma(\frac{d_x-1}{2})}{2\pi^{\frac{d_x}{2}}/\Gamma(\frac{d_x}{2})}\int_{\arccos\big(\sqrt{8/(\pi d_x)}/N^2\big)}^{\pi/2} \sin^{d_x-2}\phi\, d\phi < 2\sqrt{\frac{d_x}{2\pi}}\int_{\arccos\big(\sqrt{8/(\pi d_x)}/N^2\big)}^{\pi/2} 1\, d\phi = \sqrt{\frac{2d_x}{\pi}}\bigg(\frac{\pi}{2} - \arccos\bigg(\sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2}\bigg)\bigg)$$\n$$= \sqrt{\frac{2d_x}{\pi}}\arcsin\bigg(\sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2}\bigg) \le \sqrt{\frac{2d_x}{\pi}}\cdot\frac{\pi}{2}\cdot\sqrt{\frac{8}{\pi d_x}}\frac{1}{N^2} = \frac{2}{N^2},$$\nwhere $\mathrm{Area}_d(r) := 2\pi^{d/2} r^{d-1}/\Gamma(\frac{d}{2})$ denotes the surface area of a hypersphere of radius $r$ in $\mathbb{R}^d$ and $\Gamma(x)$ denotes the gamma function. Here, the second equality follows from symmetry and $P(u_1 = 0) = 0$. The first inequality follows from $\sin\phi \le 1$ and $\frac{\Gamma(d_x/2)}{\Gamma((d_x-1)/2)} < \sqrt{\frac{d_x}{2}}$ from Gautschi's inequality (see Lemma 19). The second inequality follows from $\phi \le \frac{\pi}{2}\sin\phi$ for $0 \le \phi \le \frac{\pi}{2}$. This completes the proof of Lemma 14.\nLemma 19 [Gautschi's inequality (Gautschi, 1959)]. For any $x > 0$, for any $s \in (0, 1)$,\n$$x^{1-s} < \frac{\Gamma(x+1)}{\Gamma(x+s)} < (x+1)^{1-s}.$$" }, { "heading": "H.2 PROOF OF LEMMA 15", "text": "In this proof, we assume that $K > N^2/4$ and $T = \big\lceil\frac{K}{\lfloor(d+1)/2\rfloor}\big\rceil$; the other cases trivially follow from this case. To begin with, we define a network $f_b : [0, K) \to [0, T)$ for $b = (b_i)_{i=1}^{\lfloor(d-1)/2\rfloor} \in [T]^{\lfloor(d-1)/2\rfloor}$ as\n$$f_b(x) := \begin{cases} x & \text{if } x < T \\ x + b_1 \bmod T & \text{if } T \le x < 2T \\ x + b_2 \bmod T & \text{if } 2T \le x < 3T \\ \vdots & \\ x + b_{\lfloor(d-1)/2\rfloor} \bmod T & \text{if } \lfloor\frac{d-1}{2}\rfloor T \le x \end{cases} = x - T\cdot\mathbb{1}[x \ge T] + \sum_{i=1}^{\lfloor(d-1)/2\rfloor}\bigg(\Big(b_i - \sum_{j=1}^{i-1} b_j\Big)\cdot\mathbb{1}[x \ge iT] - T\cdot\mathbb{1}[x + b_i \ge (i+1)T]\bigg). \quad (5)$$\nOne can easily observe that $f_b$ can be implemented by a STEP + ID network of 1 hidden layer and width $d$, as the term $T\cdot\mathbb{1}[x \ge T]$ in (5) can be absorbed into $\big(b_i - \sum_{j=1}^{i-1} b_j\big)\cdot\mathbb{1}[x \ge iT]$ in (5) for $i = 1$.\nNow, we show that if $T > N^2/4$, then there exists $b \in [T]^{\lfloor(d-1)/2\rfloor}$ such that $|\lfloor f_b(\mathcal{X})\rfloor| = N$, which completes the proof. Our proof utilizes mathematical induction on $i$: if there exist $b_1, \dots, b_{i-1} \in [T]$ such that\n$$\lfloor f_b(\{x \in \mathcal{X} : x < iT\})\rfloor \in \binom{[T]}{|\lfloor\{x \in \mathcal{X} : x < iT\}\rfloor|}, \quad (6)$$\nthen there exists $b_i \in [T]$ such that\n$$\lfloor f_b(\{x \in \mathcal{X} : x < (i+1)T\})\rfloor \in \binom{[T]}{|\lfloor\{x \in \mathcal{X} : x < (i+1)T\}\rfloor|}. \quad (7)$$\nHere, one can observe that the statement trivially holds for the base case, i.e., for $\{x \in \mathcal{X} : x < T\}$. Now, using the induction hypothesis, suppose that there exist $b_1, \dots, b_{i-1} \in [T]$ satisfying (6). We prove that there exists $b_i \in [T]$ such that $\mathcal{S}_{b_i} := \lfloor f_b(\{x \in \mathcal{X} : iT \le x < (i+1)T\})\rfloor$ does not intersect $\mathcal{T} := \lfloor f_b(\{x \in \mathcal{X} : x < iT\})\rfloor$, i.e., (7) holds. Consider the following inequality:\n$$\sum_{b_i\in[T]} |\mathcal{S}_{b_i} \cap \mathcal{T}| = |\mathcal{S}_{b_i}| \times |\mathcal{T}| \le \frac{N^2}{4},$$\nwhere the equality follows from the fact that for each $x \in \{x \in \mathcal{X} : iT \le x < (i+1)T\}$, there exist exactly $|\mathcal{T}|$ values of $b_i$ such that $\lfloor x \bmod T\rfloor \in \mathcal{T}$. However, since the number of possible choices of $b_i$ is $T$, if $T > N^2/4$, then there exists $b_i \in [T]$ such that $\mathcal{S}_{b_i} \cap \mathcal{T} = \emptyset$, i.e., (7) holds. This completes the proof of Lemma 15." }, { "heading": "H.3 PROOF OF LEMMA 16", "text": "The proof of Lemma 16 is similar to that of Lemma 15. To begin with, we define a network $f_b : [0, K) \to [0, T)$ for $b = (b_i)_{i=1}^{C} \in [T]^{C}$ and $T =: M_1 < M_2 < \cdots < M_{C+1} := K$ so that $|\{x \in \mathcal{X} : M_i \le x < M_{i+1}\}| \le \lceil\frac{N}{C}\rceil \le \lfloor\frac{T}{N}\rfloor$ for all $i$, as
$$f_b(x) := \begin{cases} x & \text{if } x < T \\ x + b_1 \bmod T & \text{if } M_1 = T \le x < M_2 \\ x + b_2 \bmod T & \text{if } M_2 \le x < M_3 \\ \vdots & \\ x + b_C \bmod T & \text{if } M_C \le x \end{cases} = x - 2T\cdot\mathbb{1}[x \ge T] + \sum_{i=1}^{C}\bigg(\Big(T + b_i - \sum_{j=1}^{i-1} b_j\Big)\cdot\mathbb{1}[x \ge M_i] - T\cdot\mathbb{1}\big[x \ge \min\{2T - b_i, M_{i+1}\}\big]\bigg), \quad (8)$$\nwhere (8) holds as $T \ge \lceil\frac{K}{2}\rceil$. Here, one can implement $f_b$ by a STEP + ID network of $L$ hidden layers with $d_\ell$ neurons at the $\ell$-th hidden layer by utilizing one neuron for storing the input $x$, one neuron for storing the temporary output, and the remaining neurons for implementing the indicator functions at each layer. This is because one does not need to store $x$ in the last hidden layer and there are $2C$ indicator functions to implement (the term $T\cdot\mathbb{1}[x \ge T]$ in (8) can be absorbed into $\big(T + b_i - \sum_{j=1}^{i-1} b_j\big)\cdot\mathbb{1}[x \ge M_i]$ in (8) for $i = 1$).\nNow, we show that if $T \ge N$, then there exists $b \in [T]^{C}$ such that $|\lfloor f_b(\mathcal{X})\rfloor| = N$, which completes the proof. Our proof utilizes mathematical induction on $i$: if there exist $b_1, \dots, b_{i-1} \in [T]$ such that\n$$\lfloor f_b(\{x \in \mathcal{X} : x < M_i\})\rfloor \in \binom{[T]}{|\lfloor\{x \in \mathcal{X} : x < M_i\}\rfloor|}, \quad (9)$$\nthen there exists $b_i \in [T]$ such that\n$$\lfloor f_b(\{x \in \mathcal{X} : x < M_{i+1}\})\rfloor \in \binom{[T]}{|\lfloor\{x \in \mathcal{X} : x < M_{i+1}\}\rfloor|}. \quad (10)$$\nHere, one can observe that the statement trivially holds for the base case, i.e., for $\{x \in \mathcal{X} : x < T\}$. From the induction hypothesis, suppose that there exist $b_1, \dots, b_{i-1} \in [T]$ satisfying (9). We prove that there exists $b_i \in [T]$ such that $\mathcal{S}_{b_i} := \lfloor f_b(\{x \in \mathcal{X} : M_i \le x < M_{i+1}\})\rfloor$ does not intersect $\mathcal{T} := \lfloor f_b(\{x \in \mathcal{X} : x < M_i\})\rfloor$, i.e., (10) holds. Consider the following inequality:\n$$\sum_{b_i\in[T]} |\mathcal{S}_{b_i} \cap \mathcal{T}| \le \Big\lfloor\frac{T}{N}\Big\rfloor\times\Big(N - \Big\lfloor\frac{T}{N}\Big\rfloor\Big) < T,$$\nwhere the first inequality follows from the facts that $|\mathcal{S}_{b_i}| \le \lfloor\frac{T}{N}\rfloor$, that $|\mathcal{T}| \le N - \lfloor\frac{T}{N}\rfloor$, and that for each $x \in \{x \in \mathcal{X} : M_i \le x < M_{i+1}\}$, there exist exactly $|\mathcal{T}|$ values of $b_i$ such that $\lfloor x \bmod T\rfloor \in \mathcal{T}$. However, since the number of possible choices of $b_i$ is $T$, there exists $b_i \in [T]$ such that $\mathcal{S}_{b_i} \cap \mathcal{T} = \emptyset$, i.e., (10) holds. This completes the proof of Lemma 16." }, { "heading": "H.4 PROOF OF LEMMA 13", "text": "In this proof, we explicitly construct $f_\theta$ satisfying the desired property stated in Lemma 13. To begin with, we describe the high-level idea of the construction. First, we construct a map $g : x \mapsto \big(\lfloor\frac{x}{B}\rfloor, x \bmod B\big)$ transforming an input $x \in [0, K)$ to a pair $(a, b)$ such that $a \in [A]$ and $\lfloor b\rfloor \in [B]$. Here, we give labels to the pair $(a, b)$ corresponding to the input $x$ as $y(a, \lfloor b\rfloor) := y(\lfloor x\rfloor)$. Note that this label is well-defined: if $\lfloor x_1\rfloor \ne \lfloor x_2\rfloor$, then $\lfloor g(x_1)\rfloor \ne \lfloor g(x_2)\rfloor$. Now, we construct parameters $w_0, \dots, w_{A-1}$ containing the label information of $\mathcal{X}$ as\n$$w_a := \sum_{c\in[B]} y(a, c)\times 2^{-(c+1)D}, \quad (11)$$\ni.e., the bits from the $(\lfloor b\rfloor D + 1)$-th to the $(\lfloor b\rfloor D + D)$-th of the $a$-th parameter $w_a$ contain the label information of $(a, \lfloor b\rfloor)$. Under this construction, we recover the label of $x \in \mathcal{X}$ by first mapping $x$ to the pair $(a, b)$ and extracting the $a$-th parameter $w_a$. Then, using STEP, we extract the bits from the $(\lfloor b\rfloor D + 1)$-th to the $(\lfloor b\rfloor D + D)$-th of $w_a$ to recover the label.\nNow, we explicitly construct the above procedure. To this end, we introduce the following lemma. The proof of Lemma 20 is presented in Section H.5. Lemma 20. For any $A, B, K, L, d_1, \dots, d_L \in \mathbb{N}$ such that $AB \ge K$ and $d_\ell \ge 2$ for all $\ell$, for any $w_0, \dots, w_{A-1} \in \mathbb{R}$, for any finite set $\mathcal{X} \subset [0, K)$, if $\sum_{\ell=1}^{L}(d_\ell - 2) \ge A$, then there exists a STEP + ID network $f$ of $L$ layers and $d_\ell$ neurons at the $\ell$-th layer such that $f(x) = (w_{\lfloor x/B\rfloor}, x \bmod B)$ for all $x \in \mathcal{X}$.\nFrom Lemma 20, a STEP + ID network of 1 hidden layer consisting of $A + 2$ hidden neurons can map $x \in \mathcal{X}$ to $(w_{\lfloor x/B\rfloor}, x \bmod B)$. A toy numerical version of the bit-packing (11) follows below. 
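The following toy script is our own, not the paper's code: it packs $B$ labels into the dyadic expansion of a single parameter as in (11) and reads them back by bit windows; we use exact integer arithmetic for the dyadic expansion to avoid floating-point loss.

```python
# A toy version (ours) of Eq. (11): store B labels y(a, c) in [2^D] in
# disjoint D-bit windows of one parameter w_a = sum_c y(a, c) * 2^{-(c+1)D},
# represented as the integer w_num = w_a * 2^{BD}, then read the windows back.
D, B = 3, 5
labels = [5, 0, 7, 2, 6]                     # y(a, c) for c in [B], each < 2^D

w_num = sum(y << (D * (B - 1 - c)) for c, y in enumerate(labels))

def read_window(w_num, c):
    """Bits (c*D + 1) .. (c*D + D) of w_a, as in the proof of Lemma 13."""
    return (w_num >> (D * (B - 1 - c))) & ((1 << D) - 1)

print([read_window(w_num, c) for c in range(B)])  # recovers `labels`
```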
Note that the Lemma 20 network requires $4A + 10$ parameters overall ($3A + 6$ edges and $A + 4$ biases).\nFinally, we introduce the following lemma for extracting the bits from the $(\lfloor b\rfloor D + 1)$-th to the $(\lfloor b\rfloor D + D)$-th of $w_a$. The proof of Lemma 21 is presented in Section H.6. Lemma 21. For any $D, B, R \in \mathbb{N}$, for any finite set $\mathcal{X} \subset [0, B)$, there exists a STEP + ID network $f$ of $2\lceil\frac{BD}{R}\rceil$ hidden layers and $\big((2R+5)2^R + 2R^2 + 8R + 7\big)\lceil\frac{BD}{R}\rceil - R2^R - R^2 + 3$ parameters satisfying the following property: for any $w = \sum_{i=1}^{BD} u_i\times 2^{-i}$ such that $u_i \in \{0, 1\}$, $f(x, w) = \sum_{i=1}^{D} u_{\lfloor x\rfloor D + i}\times 2^{D-i}$ for all $x \in \mathcal{X}$.\nFrom Lemma 21, this bit extraction can be implemented by a STEP + ID network of $2\lceil\frac{BD}{R}\rceil$ hidden layers and $\big((2R+5)2^R + 2R^2 + 8R + 7\big)\lceil\frac{BD}{R}\rceil - R2^R - R^2 + 3$ parameters. Hence, by combining Lemma 20 and Lemma 21, $f_\theta$ can be implemented by a STEP + ID network of $2\lceil\frac{BD}{R}\rceil + 2$ hidden layers and $4A + \big((2R+5)2^R + 2R^2 + 8R + 7\big)\lceil\frac{BD}{R}\rceil - R2^R - R^2 + 3$ parameters. This completes the proof of Lemma 13." }, { "heading": "H.5 PROOF OF LEMMA 20", "text": "We design $f$ as $f(x) := f_L \circ \cdots \circ f_1(0, x)$ where each $f_\ell$ represents the function of the $\ell$-th layer consisting of $d_\ell$ neurons. In particular, we construct $f_\ell$ as follows:\n$$f_\ell(w, x) := \bigg(w + w_0\cdot\mathbb{1}[\ell = 1] + \sum_{i=1}^{d_\ell-2}(w_{c_\ell+i} - w_{c_\ell+i-1})\cdot\mathbb{1}[x \ge iB],\; x - \sum_{i=1}^{d_\ell-2} B\cdot\mathbb{1}[x \ge iB]\bigg)$$\nwhere $w_{-1} := 0$ and $c_\ell := \sum_{i=1}^{\ell-1}(d_i - 2)$. Then, $f$ is the desired function, and each $f_\ell$ can be implemented by a STEP + ID network of 1 hidden layer consisting of $d_\ell$ hidden neurons (two neurons for storing $x, w$ and the remaining $d_\ell - 2$ neurons for the indicator functions). This completes the proof of Lemma 20." }, { "heading": "H.6 PROOF OF LEMMA 21", "text": "We construct $f(x, w) := 2^{R\lceil BD/R\rceil + D}\times f_{\lceil BD/R\rceil} \circ \cdots \circ f_1(x, w)$ where $f_\ell$ is defined as\n$$f_\ell(x, v) := \begin{cases} \Big(x,\; v - \sum_{i=1}^{R} u_{(\ell-1)R+i}\times 2^{-(\ell-1)R-i} + \sum_{i=1}^{R}\big(u_{(\ell-1)R+i} \wedge \mathbb{1}[m_{i,\ell} \le x < m_{i,\ell}+1]\big)\times 2^{-r_{i,\ell}}\Big) & \text{if } \ell < \lceil\frac{BD}{R}\rceil \\ v - \sum_{i=1}^{R} u_{(\ell-1)R+i}\times 2^{-(\ell-1)R-i} + \sum_{i=1}^{R}\big(u_{(\ell-1)R+i} \wedge \mathbb{1}[m_{i,\ell} \le x < m_{i,\ell}+1]\big)\times 2^{-r_{i,\ell}} & \text{if } \ell = \lceil\frac{BD}{R}\rceil \end{cases}$$\nwhere $u_i$ denotes the $i$-th bit of $w$ in the binary representation, $\wedge$ denotes the binary ‘and’ operation, and $m_{i,\ell}, r_{i,\ell}$ are defined as\n$$m_{i,\ell} := \Big\lfloor\frac{(\ell-1)R + i}{D}\Big\rfloor, \qquad r_{i,\ell} := \Big\lceil\frac{BD}{R}\Big\rceil R + \big((\ell-1)R + i - 1 \bmod D\big) + 1.$$\nNamely, each $f_\ell$ extracts $R$ bits from the input $w$, and it stores the extracted bits in the last bits of $v$ if the extracted bits lie between the $(\lfloor x\rfloor D + 1)$-th bit and the $(\lfloor x\rfloor D + D)$-th bit of $w$. Thus, $f(x, w)$ is the desired function for Lemma 21.\nTo implement each $f_\ell$ by a STEP + ID network, we introduce Lemma 22. Note that we extract $u_i$ from $w$ in Lemma 22, i.e., we do not assume that $u_i$ is given. From Lemma 22, a STEP + ID network of $2\lceil\frac{BD}{R}\rceil$ hidden layers, consisting alternately of $2^R + R + 1$ and $R + 2$ hidden neurons, can map $(x, w)$ to $\sum_{i=1}^{D} u_{\lfloor x\rfloor D + i}\times 2^{D-i}$ for all $x \in \mathcal{X}$. Considering the input dimension 2 and the output dimension 1, this network requires $\big((2R+5)2^R + 2R^2 + 8R + 7\big)\lceil\frac{BD}{R}\rceil - R2^R - R^2 + 3$ parameters ($\big((2R+4)2^R + 2R^2 + 6R + 4\big)\lceil\frac{BD}{R}\rceil - R2^R - R^2 + 2$ edges and $(2^R + 2R + 3)\lceil\frac{BD}{R}\rceil + 1$ biases). This completes the proof of Lemma 21. Lemma 22. A STEP + ID network of 2 hidden layers having $2^R + R + 1$ and $R + 2$ hidden neurons at the first and the second hidden layer, respectively, can implement $f_\ell$.\nProof of Lemma 22. We construct $f_\ell := g_3 \circ (g_2 \oplus g_1)$ where $g_2 \oplus g_1$ denotes the function concatenating the outputs of $g_1, g_2$. In this proof, we mainly focus on constructing $f_\ell$ for $\ell < \lceil\frac{BD}{R}\rceil$, since $f_{\lceil BD/R\rceil}$ can be implemented similarly. 
We define $g_1, g_2, g_3$ as\n$$g_1(x, v) := \bigg(x,\; v,\; \sum_{i=0}^{2^R-1}\eta_{1,i}\cdot\mathbb{1}\big[i\cdot 2^{-\ell R} \le v < (i+1)\cdot 2^{-\ell R}\big],\; \dots,\; \sum_{i=0}^{2^R-1}\eta_{R,i}\cdot\mathbb{1}\big[i\cdot 2^{-\ell R} \le v < (i+1)\cdot 2^{-\ell R}\big]\bigg) = (x, v, u_{(\ell-1)R+1}, \dots, u_{\ell R}),$$\n$$g_2(x, v) := \big(\mathbb{1}[m_{1,\ell} \le x < m_{1,\ell}+1], \dots, \mathbb{1}[m_{R,\ell} \le x < m_{R,\ell}+1]\big),$$\n$$g_3 \circ (g_1 \oplus g_2) := \bigg(x,\; v - \sum_{i=1}^{R} u_{(\ell-1)R+i}\times 2^{-(\ell-1)R-i} + \sum_{i=1}^{R}\mathbb{1}\big[u_{(\ell-1)R+i} + \mathbb{1}[m_{i,\ell} \le x < m_{i,\ell}+1] \ge 2\big]\times 2^{-r_{i,\ell}}\bigg)$$\n$$= \bigg(x,\; v - \sum_{i=1}^{R} u_{(\ell-1)R+i}\times 2^{-(\ell-1)R-i} + \sum_{i=1}^{R}\big(u_{(\ell-1)R+i} \wedge \mathbb{1}[m_{i,\ell} \le x < m_{i,\ell}+1]\big)\times 2^{-r_{i,\ell}}\bigg),$$\nwhere $\eta_{r,i}$ is a constant such that $\eta_{r,i} = 1$ if $i\cdot 2^{-\ell R} \le v < (i+1)\cdot 2^{-\ell R}$ implies that the $((\ell-1)R + r)$-th bit of $v$ is 1, and $\eta_{r,i} = 0$ otherwise. Here, one can easily observe that $g_1$ can be implemented by linear combinations of $\mathbb{1}[v \ge 2^{-\ell R}], \dots, \mathbb{1}[v \ge (2^R - 1)\cdot 2^{-\ell R}]$, since $v \ge 0$ and $v < 2^{-(\ell-1)R}$ trivially hold, i.e., $2^R - 1$ indicator functions are enough for $g_1$. Hence, $g_1$ can be implemented by a STEP + ID network of 1 hidden layer consisting of $2^R + 1$ hidden neurons, where the additional 2 neurons are for passing $x, v$. In addition, $g_2$ can be implemented by a STEP + ID network of $R$ hidden neurons. Finally, $g_3$ can be implemented by a STEP + ID network of 1 hidden layer consisting of $R + 2$ hidden neurons ($R$ neurons for the $R$ indicator functions and 2 neurons for passing $x, v$).\nTherefore, $f_\ell$ can be implemented by a STEP + ID network of 2 hidden layers consisting of $2^R + R + 1$ hidden neurons in the first hidden layer and $R + 2$ hidden neurons in the second hidden layer. Note that implementation within two hidden layers is possible since the outputs of $g_1, g_2$ are simply linear combinations of their hidden activation values and, hence, can be absorbed into the linear map between hidden layers. This completes the proof of Lemma 22." }, { "heading": "H.7 PROOF OF LEMMA 17", "text": "The main idea of the proof of Lemma 17 is identical to that of Lemma 13. Recall $A, B$ and $w_0, \dots, w_{A-1} \in \mathbb{R}$ from the proof of Lemma 13. From Lemma 20, for any finite set $\mathcal{X} \subset [0, K)$, for any $w_0, \dots, w_{A-1} \in \mathbb{R}$, a STEP + ID network of $A$ hidden layers and width 3 can map $x$ to $(w_{\lfloor x/B\rfloor}, x \bmod B)$ for all $x \in \mathcal{X}$. Now, we introduce the following lemma replacing Lemma 21.\nLemma 23. For any $D, B \in \mathbb{N}$, for any finite set $\mathcal{X} \subset [0, B)$, for any $w = \sum_{i=1}^{DB} u_i\times 2^{-i}$ for some $u_i \in \{0, 1\}$, there exists a STEP + ID network $f$ of $(2D+1)B$ hidden layers and width 3 such that $f(x, w) = \sum_{i=1}^{D} u_{\lfloor x\rfloor D+i}\times 2^{-i}$ for all $x \in \mathcal{X}$.\nUsing Lemma 20 and Lemma 23, one can easily find a STEP + ID network of $A + (2D+1)B$ hidden layers and width 3 satisfying the condition in Lemma 17. This completes the proof of Lemma 17.\nProof of Lemma 23. We construct $f(x, w) := 2^{D(B+1)}\times f_{DB} \circ g_{DB} \circ h_{DB} \circ \cdots \circ f_1 \circ g_1 \circ h_1$ where $f_\ell, g_\ell, h_\ell$ are defined as\n$$h_\ell(x, w) := \begin{cases} (x, w) & \text{if } \ell \bmod D \ne 1 \\ \big(x - 1 + 3B\cdot\mathbb{1}[x < 0],\; w\big) & \text{if } \ell \bmod D = 1 \end{cases}$$\n$$g_\ell(x, w) := \big(x,\; w - 2^{-\ell}\cdot\mathbb{1}[w \ge 2^{-\ell}],\; \mathbb{1}[w \ge 2^{-\ell}]\big)$$\n$$f_\ell(x, w, n) := \big(x,\; w + 2^{-DB-d(\ell)}\cdot\mathbb{1}[x - n < -1]\big)$$\nwhere $d(\ell) := \ell - \lfloor\frac{\ell-1}{D}\rfloor + 1$. Now, we explain the constructions of $f_\ell, g_\ell, h_\ell$. Let $b \in [0, B)$ and $v \in [0, 1)$ be the inputs of $f$, i.e., consider $f(b, v)$. First, the indicator function in $h_\ell$ is activated only at $\ell = (\lfloor b\rfloor + 1)D + 1$, as $b < B$. In particular, the first entry of the output of $h_{(\lfloor b\rfloor+1)D+1}$ is greater than $2B$, and this is the maximum value of the first entry of the output of $h_{(\lfloor b\rfloor+1)D+1}$, as it monotonically decreases as $\ell$ grows. The indicator function in $g_\ell$ extracts and outputs the $\ell$-th bit of $u$. Lastly, $f_\ell$ adds the $\ell$-th bit of $u$ if and only if $\ell \in \{\lfloor b\rfloor D + 1, \dots, (\lfloor b\rfloor + 1)D\}$. This is because $x \in [-1, 0)$ if and only if $\ell \in \{\lfloor b\rfloor D + 1, \dots, (\lfloor b\rfloor + 1)D\}$. 
Here, $h_\ell$ at $\ell \bmod D = 1$ can be implemented by a STEP + ID network of 1 hidden layer and width 3, $g_\ell$ can be implemented by a STEP + ID network of 1 hidden layer and width 3, and $f_\ell$ can be implemented by a STEP + ID network of 1 hidden layer and width 3. Hence, $f$ can be implemented by a STEP + ID network of $(2D+1)B$ hidden layers and width 3. This completes the proof of Lemma 23." }, { "heading": "H.8 PROOF OF LEMMA 18", "text": "The proof of Lemma 18 is almost identical to the proof of Lemma 13. Recall $A, B$ and $w_0, \dots, w_{A-1} \in \mathbb{R}$ as in the proof of Lemma 13. From Lemma 20, for any finite set $\mathcal{X} \subset [0, K)$, for any $w_0, \dots, w_{A-1} \in \mathbb{R}$, the first $L'$ hidden layers of the STEP + ID network $f_\theta$ can map $x$ to $(w_{\lfloor x/B\rfloor}, x \bmod B)$ for all $x \in \mathcal{X}$. Now, we introduce the following lemma replacing Lemma 21. Using Lemma 24 completes the proof of Lemma 18. Lemma 24. For any $D, B, L, d_1, \dots, d_L \in \mathbb{N}$, for any finite set $\mathcal{X} \subset [0, B)$, suppose that there exist $r_1, \dots, r_{L-1} \in \mathbb{N}$ satisfying, for $r_0 = r_L = 1$,\n$$\sum_{\ell=1}^{L-1} r_\ell \ge BD \quad\text{and}\quad 2^{r_\ell} + r_\ell + r_{\ell-1} + 3 \le d_\ell \text{ for all } 1 \le \ell \le L.$$\nThen, there exists a STEP + ID network $f$ of $L$ hidden layers having $d_\ell$ hidden neurons at the $\ell$-th hidden layer satisfying the following property: for any $w = \sum_{i=1}^{BD} u_i\times 2^{-i}$ such that $u_i \in \{0, 1\}$, $f(x, w) = \sum_{i=1}^{D} u_{\lfloor x\rfloor D+i}\times 2^{D-i}$ for all $x \in \mathcal{X}$.\nProof of Lemma 24. The proof of Lemma 24 utilizes the network constructions in the proofs of Lemma 21 and Lemma 22. In particular, we construct $f(x, w) := 2^{D+\sum_{\ell=1}^{L-1} r_\ell}\times f_{L-1} \circ \cdots \circ f_1(x, w)$, where $f_\ell$ is defined as\n$$f_\ell(x, v) := \begin{cases} \Big(x,\; v - \sum_{i=1}^{r_\ell} u_{R_\ell+i}\times 2^{-R_\ell-i} + \sum_{i=1}^{r_\ell}\big(u_{R_\ell+i} \wedge \mathbb{1}[m_{i,\ell} \le x < m_{i,\ell}+1]\big)\times 2^{-s_{i,\ell}}\Big) & \text{if } \ell < L-1 \\ v - \sum_{i=1}^{r_\ell} u_{R_\ell+i}\times 2^{-R_\ell-i} + \sum_{i=1}^{r_\ell}\big(u_{R_\ell+i} \wedge \mathbb{1}[m_{i,\ell} \le x < m_{i,\ell}+1]\big)\times 2^{-s_{i,\ell}} & \text{if } \ell = L-1 \end{cases}$$\nwhere $u_i$ denotes the $i$-th bit of $w$ in the binary representation, $\wedge$ denotes the binary ‘and’ operation, and $R_\ell, m_{i,\ell}, s_{i,\ell}$ are defined as\n$$R_\ell := \sum_{i=1}^{\ell-1} r_i, \qquad m_{i,\ell} := \Big\lfloor\frac{R_\ell + i}{D}\Big\rfloor, \qquad s_{i,\ell} := R_L + \big(R_\ell + i - 1 \bmod D\big) + 1.$$\nNote that $R_1 = 0$, as the summation starts from $i = 1$, and $R_L = \sum_{\ell=1}^{L-1} r_\ell$. Namely, each $f_\ell$ extracts $r_\ell$ bits from the input $w$, and it stores the extracted bits in the last bits of $v$ if the extracted bits lie between the $(\lfloor x\rfloor D + 1)$-th bit and the $(\lfloor x\rfloor D + D)$-th bit of $w$. Thus, $f(x, w)$ is the desired function for Lemma 24.\nWe construct $f_\ell(x, v)$ using the $\ell$-th hidden layer and the $(\ell+1)$-th hidden layer, i.e., there exists an overlap between the constructions of $f_\ell(x, v)$ and $f_{\ell+1}(x, v)$. In particular, Lemma 22 directly allows us to obtain such a construction under the assumption in Lemma 24 that\n$$2^{r_\ell} + r_\ell + r_{\ell-1} + 3 \le d_\ell \text{ for all } 1 \le \ell \le L.$$\nThis completes the proof of Lemma 24." } ]
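Finally, the ceiling identity of Claim 11, used repeatedly above for chaining lemma applications, can be verified numerically by brute force (our own script; we restrict to positive $a$ here, as in its applications):

```python
# A brute-force check (ours) of Claim 11: ceil(x/(a*b)) == ceil(ceil(x/a)/b)
# for b a positive integer.
import math

ok = all(
    math.ceil(x / (a * b)) == math.ceil(math.ceil(x / a) / b)
    for a in (1, 2, 3, 7)
    for b in range(1, 10)
    for x in [k / 4 for k in range(-80, 81)]
)
print(ok)  # True
```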
2020
null
SP:aed4e9af07b32dc6f38e851db17287e7a29f6f09
[ "This paper addresses the visual question answering in a multi-turn or conversational setting. Given a video (series of frames or images), a model has to reason across space and time to arrive at a correct answer for a given question. This task involves understanding the content and context of dialogue turns, i.e., given a question and N dialogue turns, only M<<N of the dialogue turns are strongly related to the question posed. This paper proposes to simulate the dependencies between dialogue turns, forming a reasoning path, to answer a given question. In a way, the proposed approach selects relevant dialogue turns that are useful to answer the question. " ]
Compared to traditional visual question answering, video-grounded dialogues require additional reasoning over dialogue context to answer questions in a multi-turn setting. Previous approaches to video-grounded dialogues mostly use dialogue context as a simple text input without modelling the inherent information flows at the turn level. In this paper, we propose a novel framework of Reasoning Paths in Dialogue Context (PDC). PDC model discovers information flows among dialogue turns through a semantic graph constructed based on lexical components in each question and answer. PDC model then learns to predict reasoning paths over this semantic graph. Our path prediction model predicts a path from the current turn through past dialogue turns that contain additional visual cues to answer the current question. Our reasoning model sequentially processes both visual and textual information through this reasoning path and the propagated features are used to generate the answer. Our experimental results demonstrate the effectiveness of our method and provide additional insights on how models use semantic dependencies in a dialogue context to retrieve visual cues.
[ { "affiliations": [], "name": "VIDEO-GROUNDED DIALOGUES" }, { "affiliations": [], "name": "Hung Le" }, { "affiliations": [], "name": "Nancy F. Chen" }, { "affiliations": [], "name": "Steven C.H. Hoi" } ]
[ { "authors": [ "Huda Alamri", "Vincent Cartillier", "Abhishek Das", "Jue Wang", "Stefan Lee", "Peter Anderson", "Irfan Essa", "Devi Parikh", "Dhruv Batra", "Anoop Cherian", "Tim K. Marks", "Chiori Hori" ], "title": "Audio-visual sceneaware dialog", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Stanislaw Antol", "Aishwarya Agrawal", "Jiasen Lu", "Margaret Mitchell", "Dhruv Batra", "C Lawrence Zitnick", "Devi Parikh" ], "title": "Vqa: Visual question answering", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Akari Asai", "Kazuma Hashimoto", "Hannaneh Hajishirzi", "Richard Socher", "Caiming Xiong" ], "title": "Learning to retrieve reasoning paths over wikipedia graph for question answering", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Satanjeev Banerjee", "Alon Lavie" ], "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization", "venue": null, "year": 2005 }, { "authors": [ "Regina Barzilay", "Mirella Lapata" ], "title": "Modeling local coherence: An entity-based approach", "venue": "Computational Linguistics,", "year": 2008 }, { "authors": [ "Filip Boltužić", "Jan Šnajder" ], "title": "Back up your stance: Recognizing arguments in online discussions", "venue": "In Proceedings of the First Workshop on Argumentation Mining, pp", "year": 2014 }, { "authors": [ "Zi Chai", "Xiaojun Wan" ], "title": "Learning to ask more: Semi-autoregressive sequential question generation under dual-graph interaction", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 225–237,", "year": 2020 }, { "authors": [ "Tuhin Chakrabarty", "Christopher Hidey", "Smaranda Muresan", "Kathy McKeown", "Alyssa Hwang" ], "title": "AMPERSAND: argument mining for persuasive online discussions", "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing,", "year": 2019 }, { "authors": [ "Yun-Wei Chu", "Kuan-Yen Lin", "Chao-Chun Hsu", "Lun-Wei Ku" ], "title": "Multi-step joint-modality attention network for scene-aware dialogue system", "venue": "DSTC Workshop @ AAAI,", "year": 2020 }, { "authors": [ "Kevin Clark", "Christopher D. 
Manning" ], "title": "Deep reinforcement learning for mention-ranking coreference models", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Abhishek Das", "Satwik Kottur", "Khushi Gupta", "Avi Singh", "Deshraj Yadav", "José MF Moura", "Devi Parikh", "Dhruv Batra" ], "title": "Visual dialog", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Nicola De Cao", "Wilker Aziz", "Ivan Titov" ], "title": "Question answering by reasoning across documents with graph convolutional networks", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association", "year": 2019 }, { "authors": [ "Ming Ding", "Chang Zhou", "Qibin Chen", "Hongxia Yang", "Jie Tang" ], "title": "Cognitive graph for multi-hop reading comprehension at scale", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Rory Duthie", "Katarzyna Budzynska" ], "title": "A deep modular rnn approach for ethos mining", "venue": "In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Vanessa Wei Feng", "Graeme Hirst" ], "title": "Classifying arguments by scheme. In Proceedings of the 49th annual meeting of the association for computational linguistics", "venue": "Human language technologies,", "year": 2011 }, { "authors": [ "Vanessa Wei Feng", "Ziheng Lin", "Graeme Hirst" ], "title": "The impact of deep hierarchical discourse structures in the evaluation of text coherence", "venue": "In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers,", "year": 2014 }, { "authors": [ "Peng Gao", "Zhengkai Jiang", "Haoxuan You", "Pan Lu", "Steven CH Hoi", "Xiaogang Wang", "Hongsheng Li" ], "title": "Dynamic fusion with intra-and inter-modality attention flow for visual question answering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Shijie Geng", "Peng Gao", "Chiori Hori", "Jonathan Le Roux", "Anoop Cherian" ], "title": "Spatio-temporal scene graphs for video dialog", "venue": "arXiv preprint arXiv:2007.03848,", "year": 2020 }, { "authors": [ "Deepanway Ghosal", "Navonil Majumder", "Soujanya Poria", "Niyati Chhaya", "Alexander Gelbukh" ], "title": "DialogueGCN: A graph convolutional neural network for emotion recognition in conversation", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Debanjan Ghosh", "Aquila Khanam", "Yubo Han", "Smaranda Muresan" ], "title": "Coarse-grained argumentation features for scoring persuasive essays", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "year": 2016 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Barbara J. Grosz", "Scott Weinstein", "Aravind K. Joshi" ], "title": "Centering: A framework for modeling the local coherence of discourse", "venue": "Comput. 
Linguist.,", "year": 1995 }, { "authors": [ "Camille Guinaudeau", "Michael Strube" ], "title": "Graph-based local coherence modeling. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2013 }, { "authors": [ "Ivan Habernal", "Iryna Gurevych" ], "title": "Argumentation mining in user-generated web discourse", "venue": "Computational Linguistics,", "year": 2017 }, { "authors": [ "C. Hori", "H. Alamri", "J. Wang", "G. Wichern", "T. Hori", "A. Cherian", "T.K. Marks", "V. Cartillier", "R.G. Lopes", "A. Das", "I. Essa", "D. Batra", "D. Parikh" ], "title": "End-to-end audio visual scene-aware dialog using multimodal attention-based video features", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Chiori Hori", "Anoop Cherian", "Tim K Marks", "Takaaki Hori" ], "title": "Joint student-teacher learning for audio-visual scene-aware dialog", "venue": "Proc. Interspeech 2019,", "year": 2019 }, { "authors": [ "Wenpeng Hu", "Zhangming Chan", "Bing Liu", "Dongyan Zhao", "Jinwen Ma", "Rui Yan" ], "title": "Gsn: A graph-structured network for multi-party dialogues", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Yunseok Jang", "Yale Song", "Youngjae Yu", "Youngjin Kim", "Gunhee Kim" ], "title": "Tgif-qa: Toward spatiotemporal reasoning in visual question answering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Shiyan Jiang", "Kexin Yang", "Chandrakumari Suvarna", "Pooja Casula", "Mingtong Zhang", "Carolyn Rosé" ], "title": "Applying Rhetorical Structure Theory to student essays for providing automated writing feedback", "venue": "In Proceedings of the Workshop on Discourse Relation Parsing and Treebanking", "year": 2019 }, { "authors": [ "Yohan Jo", "Seojin Bang", "Emaad Manzoor", "Eduard Hovy", "Chris Reed" ], "title": "Detecting attackable sentences in arguments", "venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2020 }, { "authors": [ "Gi-Cheon Kang", "Jaeseo Lim", "Byoung-Tak Zhang" ], "title": "Dual attention networks for visual reference resolution in visual dialog", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Seokhwan Kim", "Michel Galley", "Chulaka Gunasekara", "Sungjin Lee", "Adam Atkinson", "Baolin Peng", "Hannes Schulz", "Jianfeng Gao", "Jinchao Li", "Mahmoud Adada" ], "title": "The eighth dialog system technology challenge", "venue": null, "year": 1911 }, { "authors": [ "Diederick P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Shachi H Kumar", "Eda Okur", "Saurav Sahay", "Jonathan Huang", "Lama Nachman" ], "title": "Leveraging topics and audio features with multimodal attention for audio visual scene-aware dialog. 
3rd Visually Grounded Interaction and Language (ViGIL) Workshop, NeurIPS, 2019", "venue": null, "year": 2019 }, { "authors": [ "Souvik Kundu", "Tushar Khot", "Ashish Sabharwal", "Peter Clark" ], "title": "Exploiting explicit paths for multi-hop reading comprehension", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Hung Le", "Steven C.H. Hoi" ], "title": "Video-grounded dialogues with pretrained generation language models", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5842–5848,", "year": 2020 }, { "authors": [ "Hung Le", "Doyen Sahoo", "Nancy Chen", "Steven Hoi" ], "title": "Multimodal transformer networks for endto-end video-grounded dialogue systems", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Hwanhee Lee", "Seunghyun Yoon", "Franck Dernoncourt", "Doo Soon Kim", "Trung Bui", "Kyomin Jung" ], "title": "Dstc8-avsd: Multimodal semantic transformer network with retrieval style word generator", "venue": "DSTC Workshop", "year": 2020 }, { "authors": [ "Jialu Li", "Esin Durmus", "Claire Cardie" ], "title": "Exploring the role of argument structure in online debate persuasion", "venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2020 }, { "authors": [ "Zekang Li", "Zongjia Li", "Jinchao Zhang", "Yang Feng", "Cheng Niu", "Jie Zhou" ], "title": "Bridging text and video: A universal multimodal transformer for video-audio scene-aware dialog", "venue": "DSTC Workshop @ AAAI,", "year": 2020 }, { "authors": [ "Chin-Yew Lin" ], "title": "Rouge: A package for automatic evaluation of summaries", "venue": "Text Summarization Branches Out,", "year": 2004 }, { "authors": [ "Ziheng Lin", "Hwee Tou Ng", "Min-Yen Kan" ], "title": "Automatically evaluating text coherence using discourse relations", "venue": "In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies,", "year": 2011 }, { "authors": [ "Thao Minh Le", "Vuong Le", "Svetha Venkatesh", "Truyen Tran" ], "title": "Dynamic language binding in relational visual reasoning", "venue": "Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Gaku Morio", "Katsuhide Fujita" ], "title": "End-to-end argument mining for discussion threads based on parallel constrained pointer architecture", "venue": "In Noam Slonim and Ranit Aharonov (eds.), Proceedings of the 5th Workshop on Argument Mining, ArgMining@EMNLP", "year": 2018 }, { "authors": [ "Akiko Murakami", "Rudy Raymond" ], "title": "Support or oppose? classifying positions in online debates from reply activities and opinion expressions", "venue": "In Coling 2010: Posters,", "year": 2010 }, { "authors": [ "Dat Tien Nguyen", "Shikhar Sharma", "Hannes Schulz", "Layla El Asri" ], "title": "From film to video: Multi-turn question answering with multi-modal context", "venue": "AAAI", "year": 2019 }, { "authors": [ "Vlad Niculae", "Joonsuk Park", "Claire Cardie" ], "title": "Argument mining with structured SVMs and RNNs. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2017 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th annual meeting on association for computational linguistics,", "year": 2002 }, { "authors": [ "Andreas Peldszus", "Manfred Stede" ], "title": "Joint prediction in mst-style discourse parsing for argumentation mining", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Isaac Persing", "Vincent Ng" ], "title": "End-to-end argumentation mining in student essays", "venue": "In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2016 }, { "authors": [ "Jan Wira Gotama Putra", "Takenobu Tokunaga" ], "title": "Evaluating text coherence based on semantic similarity graph", "venue": "In Proceedings of TextGraphs-11: the Workshop on Graph-based Methods for Natural Language Processing,", "year": 2017 }, { "authors": [ "Lin Qiu", "Yunxuan Xiao", "Yanru Qu", "Hao Zhou", "Lei Li", "Weinan Zhang", "Yong Yu" ], "title": "Dynamically fused graph network for multi-hop reasoning", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Ramon Sanabria", "Shruti Palaskar", "Florian Metze" ], "title": "Cmu sinbad’s submission for the dstc7 avsd challenge", "venue": "In DSTC7 at AAAI2019 workshop,", "year": 2019 }, { "authors": [ "Idan Schwartz", "Alexander G Schwing", "Tamir Hazan" ], "title": "A simple baseline for audio-visual sceneaware dialog", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Idan Schwartz", "Seunghak Yu", "Tamir Hazan", "Alexander G Schwing" ], "title": "Factor graph attention", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Zhouxing Shi", "Minlie Huang" ], "title": "A deep sequential model for discourse parsing on multi-party dialogues", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Gunnar A Sigurdsson", "Gül Varol", "Xiaolong Wang", "Ali Farhadi", "Ivan Laptev", "Abhinav Gupta" ], "title": "Hollywood in homes: Crowdsourcing data collection for activity understanding", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Richard Socher", "Andrej Karpathy", "Quoc V. Le", "Christopher D. Manning", "Andrew Y. 
Ng" ], "title": "Grounded compositional semantics for finding and describing images with sentences", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2014 }, { "authors": [ "Christian Stab", "Iryna Gurevych" ], "title": "Identifying argumentative discourse structures in persuasive essays", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Kai Sun", "Dian Yu", "Jianshu Chen", "Dong Yu", "Yejin Choi", "Claire Cardie" ], "title": "Dream: A challenge data set and models for dialogue-based reading comprehension", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Reid Swanson", "Brian Ecker", "Marilyn Walker" ], "title": "Argument mining: Extracting arguments from online dialogue", "venue": "In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue,", "year": 2015 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Chenhao Tan", "Vlad Niculae", "Cristian Danescu-Niculescu-Mizil", "Lillian Lee" ], "title": "Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions", "venue": "In Proceedings of the 25th international conference on world wide web,", "year": 2016 }, { "authors": [ "Zeyun Tang", "Yongliang Shen", "Xinyin Ma", "Wei Xu", "Jiale Yu", "Weiming Lu" ], "title": "Multi-hop reading comprehension across documents with path-based graph convolutional network", "venue": "Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Ming Tu", "Guangtao Wang", "Jing Huang", "Yun Tang", "Xiaodong He", "Bowen Zhou" ], "title": "Multi-hop reading comprehension across multiple documents by reasoning over heterogeneous graphs", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Ramakrishna Vedantam", "C Lawrence Zitnick", "Devi Parikh" ], "title": "Cider: Consensus-based image description evaluation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Johannes Welbl", "Pontus Stenetorp", "Sebastian Riedel" ], "title": "Constructing datasets for multi-hop reading comprehension across documents", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Huiyuan Xie", "Ignacio Iacobacci" ], "title": "Audio visual scene-aware dialog system using dynamic memory networks", "venue": null, "year": 2020 }, { "authors": [ "Zhilin Yang", "Peng Qi", "Saizheng Zhang", "Yoshua Bengio", "William Cohen", "Ruslan Salakhutdinov", "Christopher D. 
Manning" ], "title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Zhilin Yang", "Peng Qi", "Saizheng Zhang", "Yoshua Bengio", "William W. Cohen", "Ruslan Salakhutdinov", "Christopher D. Manning" ], "title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering", "venue": "In Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2018 }, { "authors": [ "Koichiro Yoshino", "Chiori Hori", "Julien Perez", "Luis Fernando D’Haro", "Lazaros Polymenakos", "Chulaka Gunasekara", "Walter S Lasecki", "Jonathan K Kummerfeld", "Michel Galley", "Chris Brockett" ], "title": "Dialog system technology challenge", "venue": "arXiv preprint arXiv:1901.03461,", "year": 2019 }, { "authors": [ "Zilong Zheng", "Wenguan Wang", "Siyuan Qi", "Song-Chun Zhu" ], "title": "Reasoning visual dialogs with structural and partial observations", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Henghui Zhu", "Feng Nan", "Zhiguo Wang", "Ramesh Nallapati", "Bing Xiang" ], "title": "Who did they respond to? conversation structure modeling using masked hierarchical transformer", "venue": "In The Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Traditional visual question answering (Antol et al., 2015; Jang et al., 2017) involves answering questions about a given image. Extending from this line of research, recently Das et al. (2017); Alamri et al. (2019) add another level of complexity by positioning each question and answer pair in a multi-turn or conversational setting (See Figure 1 for an example). This line of research has promising applications to improve virtual intelligent assistants in multi-modal scenarios (e.g. assistants for people with visual impairment). Most state-of-the-part approaches in this line of research (Kang et al., 2019; Schwartz et al., 2019b; Le et al., 2019) tackle the additional complexity in the multi-turn setting by learning to process dialogue context sequentially turn by turn. Despite the success of these approaches, they often fail to exploit the dependencies between dialogue turns of long distance, e.g. the 2nd and 5th turns in Figure 1. In long dialogues, this shortcoming becomes more obvious and necessitates an approach for learning long-distance dependencies between dialogue turns.\nTo reason over dialogue context with long-distance dependencies, recent research in dialogues discovers graph-based structures at the turn level to predict the speaker’s emotion (Ghosal et al., 2019) or generate sequential questions semi-autoregressively (Chai & Wan, 2020). Recently Zheng et al. (2019) incorporate graph neural models to connect the textual cues between all pairs of dialogue turns. These methods, however, involve a fixed graphical structure of dialogue turns, in which only a small number of nodes contains lexical overlap with the question of the current turn, e.g. the 1st, 3rd, and 5th turns in Figure 1. These methods also fail to factor in the temporality of dialogue turns as the graph structures do not guarantee the sequential ordering among turns. In this paper, we propose a novel framework of Reasoning Paths in Dialogue Context (PDC). PDC model learns a reasoning path that traverses through dialogue turns to propagate contextual cues that are densely related to the semantics of the current questions. Our approach balances between a sequential and graphical process to exploit dialogue information.\nOur work is related to the long-studied research domain of discourse structures, e.g. (Barzilay & Lapata, 2008; Feng & Hirst, 2011; Tan et al., 2016; Habernal & Gurevych, 2017). A form of discourse structure is argument structures, including premises and claims and their relations. Argument structures have been studied to assess different characteristics in text, such as coherence, persuasiveness, and susceptibility to attack. However, most efforts are designed for discourse study in monologues and much less attention is directed towards conversational data. In this work, we investigate a form of discourse structure through semantic graphs built upon the overlap of component representations among dialogue turns. We further enhance the models with a reasoning path learning model to learn the best information path for the next utterance generation.\nTo learn a reasoning path, we incorporate our method with bridge entities, a concept often seen in reading comprehension research, and earlier used in entity-based discourse analysis (Barzilay & Lapata, 2008). In reading comprehension problems, bridge entities denote entities that are common between two knowledge bases e.g. Wikipedia paragraphs in HotpotQA (Yang et al., 2018b). 
In discourse analysis, entities and their locations in text are used to learn linguistic patterns that indicate certain qualities of a document. In our method, we first reconstruct each dialogue turn (including question and answer) into a set of component sub-nodes (e.g. entities, action phrases) using common syntactic dependency parsers. Each resulting dialogue turn contains sub-nodes that can be used as bridge entities. Our reasoning path learning approach contains two phases: (1) at each dialogue turn, a graph network is constructed at the turn level, where any two turns are connected if they have an overlapping sub-node or if two of their sub-nodes are semantically similar; (2) a path generator is trained to predict a path from the current dialogue turn to past dialogue turns that provide additional and relevant cues to answer the current question. The predicted path is used as a skeleton layout to propagate visual features through each step of the path.
Specifically, in PDC, we adopt non-parameterized approaches (e.g. cosine similarity) to construct the edges in graph networks, and each sub-node is represented by pre-trained word embedding vectors. Our path generator is a transformer decoder that autoregressively generates the next turn index conditioned on the previously generated turn sequence. Our reasoning model is a combination of a vanilla graph convolutional network (Kipf & Welling, 2017) and a transformer encoder (Vaswani et al., 2017). In each traversal step, we retrieve visual features conditioned on the corresponding dialogue turn and propagate the features to the next step. Finally, the propagated multimodal features are used as input to a transformer decoder to predict the answer.
Our experimental results show that our method improves the results on the Audio-Visual Scene-Aware Dialog (AVSD) generation settings (Alamri et al., 2019), outperforming previous state-of-the-art methods. We evaluate our approach through comprehensive ablation analysis and a qualitative study. The PDC model also provides additional insights on how the inherent contextual cues in dialogue context are learned in neural networks in the form of a reasoning path." }, { "heading": "2 RELATED WORK", "text": "Discourses in monologues. Related to our work is research on discourse structures. A long-studied line of research in this domain focuses on argument mining to identify the structure of arguments, claims and premises, and the relations between them (Feng & Hirst, 2011; Stab & Gurevych, 2014; Peldszus & Stede, 2015; Persing & Ng, 2016; Habernal & Gurevych, 2017). More recently, Ghosh et al. (2016); Duthie & Budzynska (2018); Jiang et al. (2019) propose to learn argument structures in student essays and official debates. In earlier approaches, Barzilay & Lapata (2008); Lin et al. (2011); Feng et al. (2014) study discourses to derive coherence assessment methods through entity-based representations of text. These approaches are motivated by linguistic theories surrounding entity patterns in discourses, i.e. how they are introduced and discussed (Grosz et al., 1995). Guinaudeau & Strube (2013); Putra & Tokunaga (2017) extend prior work with graphical structures in which sentence similarity is calculated based on semantic vectors representing those sentences. These lines of research show that studying discourse structures is useful in many tasks, such as document ranking and discrimination. However, most of these approaches are designed for monologues rather than dialogues.
Discourses in dialogues. 
More related to our problem setting is discourse research on text in a multi-turn setting. Murakami & Raymond (2010); Boltužić & Šnajder (2014); Swanson et al. (2015); Tan et al. (2016); Niculae et al. (2017); Morio & Fujita (2018); Chakrabarty et al. (2019) introduce new corpora and different methods to mine arguments in online discussion forums. Their models are trained to extract claims and premises in each user post and to identify the relations between argument components in each pair of user posts. More recently, Li et al. (2020a); Jo et al. (2020) extend argument mining in online threads to identify attackability and persuasiveness in online posts.
In this work, we address the problem of video-grounded dialogue, in which dialogue turns are often semantically connected by a common grounding information source, a video. In this task, a discourse-based approach enables dialogue models to learn to anticipate the upcoming textual information in future dialogue turns. However, directly applying prior work on discourse or argument structures to video-grounded dialogues is not straightforward, due to the inherent difference between online discussion posts and video-grounded dialogues. In video-grounded dialogues, the language is often closer to spoken language and there are fewer clear argument structures to be learned. Moreover, the presence of video necessitates the interaction between multiple modalities, text and vision, and incorporating traditional discourse structures to model cross-modality interaction is not straightforward. In this work, we propose to model dialogue context by using compositional graphical structures and constructing information traversal paths through dialogue turns.
Graph-based dialogue models. Related to our work is research that investigates different types of graph structures in dialogue. Hu et al. (2019); Shi & Huang (2019); Zhu et al. (2020) address the “reply_to” relationship among multi-party dialogues through graph networks that incorporate conversational flows in comment threads on social networks, e.g. Reddit and Ubuntu IRC, and in online games. Zheng et al. (2019) propose a fully connected graph structure at the turn level for visual dialogues. Concurrently, Ghosal et al. (2019) also propose a fully connected graph structure with heterogeneous edges to detect the emotion of participating speakers. All of these methods discover graph structures connecting pairs of dialogue turns with little lexical overlap, resulting in sub-optimal feature propagation. This drawback becomes more significant in question answering problems in multi-turn settings. Our approach instead constructs graph networks based on compositional similarities.
Reasoning path learning. Our method is also motivated by recent research on machine reading comprehension, e.g. WikiHop (Welbl et al., 2018) and HotpotQA (Yang et al., 2018a). De Cao et al. (2019); Qiu et al. (2019) construct graph networks of supporting documents with entity nodes that are connected based on different kinds of relationships. Tu et al. (2019); Tang et al. (2020) enhance these methods with additional edges connecting output candidates and documents. Extended from these methods are path-based approaches that learn to predict a reasoning path through supporting documents. Kundu et al. (2019); Asai et al. (2020) score and rank path candidates that connect entities in the question to the target answer. A common strategy among these methods is the use of bridge entities. 
However, unlike reading comprehension, dialogues are normally not entity-centric, and it is not trivial to directly adopt bridge entities in dialogue context.
Cross-modality feature learning. Our work is related to studies that integrate visual and linguistic information representations. A line of research in this domain is the problem of visual QA, e.g. (Minh Le et al., 2020; Gao et al., 2019). Closer to our method are approaches that adopt compositionality in textual features. Specifically, Socher et al. (2014) introduce image and language representation learning by detecting the component lexical parts in sentences and combining them with image features. The main difference between these approaches and our work is the study of cross-modalities in a multi-turn setting. Our approach directly tackles the embedded sequential order in dialogue utterances and examines how cross-modality features are passed from turn to turn." }, { "heading": "3 METHOD", "text": "To describe our PDC model, we introduce a new graph-based method (Section 3.2) that constructs a graph structure to connect turn-level representations in dialogue context based on their compositional semantics. The compositional semantics consists of sub-nodes detected through syntactic dependency parsing. We enhance our approach with a path-based propagation method (Section 3.3) to narrow down the contextual information that facilitates question answering at the current turn. Our approach integrates a strong strategy to model dialogue flows in the form of graphical and path-based information, such that contextual linguistic information is exploited to propagate relevant visual features (Section 3.4). Figure 2 gives an overview of our method." }, { "heading": "3.1 PROBLEM DEFINITION", "text": "The inputs to a question answering problem in a multi-turn setting consist of a dialogue $\mathcal{D}$ and the visual input of a video $\mathcal{I}$. Each dialogue contains a sequence of dialogue turns, each of which is a pair of question $\mathcal{Q}$ and answer $\mathcal{A}$. At each dialogue turn $t$, we denote the dialogue context $\mathcal{C}_t$ as all previous dialogue turns, $\mathcal{C}_t = \{(\mathcal{Q}_i, \mathcal{A}_i)\}_{i=1}^{t-1}$. Since it is positioned in a dialogue, the question of turn $t$, $\mathcal{Q}_t$, might be dependent on a subset of the dialogue context $\mathcal{C}_t$. The output is the answer of the current turn, $\hat{\mathcal{A}}_t$. Each textual component, i.e. $\mathcal{Q}$ and $\mathcal{A}$, is represented as a sequence of token or word indices $\{w_m\}_{m=1}^{L}$ with $w_m \in \mathcal{V}$, where $L$ is the sequence length and $\mathcal{V}$ is the vocabulary set. The objective of the task is to generate the answer of the current turn:
$$\hat{\mathcal{A}}_t = \arg\max_{\mathcal{A}_t} P(\mathcal{A}_t \mid \mathcal{I}, \mathcal{C}_t, \mathcal{Q}_t; \theta) = \arg\max_{\mathcal{A}_t} \prod_{m=1}^{L_A} P_m(w_m \mid \mathcal{A}_{t,1:m-1}, \mathcal{I}, \mathcal{C}_t, \mathcal{Q}_t; \theta) \quad (1)$$" }, { "heading": "3.2 COMPOSITIONAL SEMANTIC GRAPH OF DIALOGUE CONTEXT", "text": "The semantic relations between dialogue turns are decomposed into semantic relations between the sub-nodes that constitute each turn. These composition relations serve as strong clues to determine how a dialogue turn is related to another. We first employ a co-reference resolution system, e.g. (Clark & Manning, 2016), to replace pronouns with the original entities. We then use the Stanford parser system1 to discover sub-nodes. The parser decomposes each sentence into grammatical components, where a word and its modifier are connected in a tree structure. For each dialogue turn, we concatenate the question and answer of that turn as input to the parser. 
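As an illustration, the following is a minimal sketch of this turn decomposition step. It is not the paper's implementation: spaCy is used as a stand-in for the Stanford parser, co-reference resolution is omitted, and the prune-and-merge step described next is folded into a crude heuristic that keeps noun chunks and verb lemmas (all of these are our assumptions).

# A simplified sketch of decomposing a dialogue turn into sub-nodes.
# Assumption: spaCy's "en_core_web_sm" model stands in for the Stanford
# parser; co-reference resolution is skipped for brevity.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_sub_nodes(question: str, answer: str) -> set:
    """Concatenate a turn's question and answer, parse it, and return a
    crude set of sub-nodes (noun chunks and verb lemmas)."""
    doc = nlp(question + " " + answer)
    sub_nodes = {chunk.text.lower() for chunk in doc.noun_chunks}
    sub_nodes |= {tok.lemma_.lower() for tok in doc if tok.pos_ == "VERB"}
    return sub_nodes

# e.g. extract_sub_nodes("what does she do with the bag ?",
#                        "she puts the bag on the cushion .")
# -> roughly {"she", "the bag", "the cushion", "do", "put"}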
The output dependency tree is pruned to remove unimportant constituents, and adjacent nodes are merged to form semantic units.
1 v3.9.2, retrieved at https://nlp.stanford.edu/software/lex-parser.shtml
A graph structure $\mathcal{G}$ is then constructed. Any two turns are connected if a sub-node of one is semantically similar to a sub-node of the other. To calculate the similarity score, we obtain their pre-trained word2vec embeddings2 and compute the cosine similarity score. Algorithm 1 details the procedure to automatically construct a semantic graph. Note that our approach can also be applied with other co-reference resolution systems, parsers, or pre-trained embeddings. Unlike graph structures in machine reading comprehension, such as the Wikipedia graph, the semantic graph $\mathcal{G}$ is not fixed throughout the sample population but is constructed for each dialogue and at each turn.
Algorithm 1: Compositional semantic graph of dialogue context
Data: Dialogue context $\mathcal{C}_t$, question of the current turn $\mathcal{Q}_t$
Result: Semantic graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$
1 begin
2   $\mathcal{T} \leftarrow \emptyset$; $\mathcal{G} = (\mathcal{V}, \mathcal{E})$; $\mathcal{E} \leftarrow \emptyset$; $\mathcal{V} \leftarrow \emptyset$; $\mathcal{S} \leftarrow \emptyset$;
3   $\mathcal{H} \leftarrow$ Coreference_Resolution($[\mathcal{C}_t; \mathcal{Q}_t]$);
4   for each dialogue turn $h \in \mathcal{H}$ do
5     $T_h \leftarrow$ Merge_Nodes(Prune_Tree(Dependency_Parse($h$))); $\mathcal{T} \leftarrow \mathcal{T} \cup \{T_h\}$;
6     $\mathcal{V} \leftarrow \mathcal{V} \cup \{h\}$; $\mathcal{E} \leftarrow \mathcal{E} \cup \{\langle$Turn_Position($h$), Turn_Position($h$)$\rangle\}$
7   for each dependency tree $T = (V_T, E_T) \in \mathcal{T}$ do $\mathcal{S} \leftarrow \mathcal{S} \cup \{V_T\}$
8   for each sub-node $s_i \in \mathcal{S}$ do
9     for each sub-node $s_j \in \mathcal{S}$ do
10      if not In_Same_Turn($s_i$, $s_j$) and Is_Similar($s_i$, $s_j$) then
11        $\mathcal{E} \leftarrow \mathcal{E} \cup \{\langle$Get_Dial_Turn($s_i$), Get_Dial_Turn($s_j$)$\rangle\}$
12        $\mathcal{E} \leftarrow \mathcal{E} \cup \{\langle$Get_Dial_Turn($s_j$), Get_Dial_Turn($s_i$)$\rangle\}$
13  return $\mathcal{G}$" }, { "heading": "3.3 LEARNING TO GENERATE REASONING PATHS", "text": "Our compositional approach to constructing a semantic graph over dialogue context ensures lexical overlap with the question, but the graph structure does not guarantee the temporal order of dialogue turns. To ensure this sequential information is maintained, we train a generator to predict reasoning paths that traverse from the current dialogue turn to past dialogue turns.
We use a Transformer decoder to model the reasoning paths from the current turn $t$. The first position of the path, $z_0$, is initialized with the turn-level position embedding of $t$. The next turn index is generated autoregressively by conditioning on the previously generated path sequence:
$$z_0 = \mathrm{Embed}(t) \in \mathbb{R}^{d} \quad (2)$$
$$Z_{0:m-1} = \mathrm{Embed}([t; \hat{r}_1, \ldots, \hat{r}_{m-1}]) \quad (3)$$
where $\hat{r}_i$ denotes a predicted dialogue turn index. The dialogue context and the question of the current turn are represented by embedding vectors of their component tokens. Following Vaswani et al. (2017), their representations are enhanced with the sine-cosine positional encoding PosEncode:
$$Q_t = \mathrm{Embed}(\mathcal{Q}_t) + \mathrm{PosEncode}(\mathcal{Q}_t) \in \mathbb{R}^{L_{Q_t} \times d} \quad (4)$$
$$C_t = \mathrm{Embed}(\mathcal{C}_t) + \mathrm{PosEncode}(\mathcal{C}_t) \in \mathbb{R}^{L_{C_t} \times d} \quad (5)$$
Note that the dialogue context representation $C_t$ is the embedding of dialogue turns up to the last turn $t-1$, excluding the answer embedding of the current turn $\mathcal{A}_t$. We denote a Transformer attention block as Transformer(query, key, value). The path generator incorporates contextual information through attention layers on the dialogue context and the question:
$$D^{(1)}_{\mathrm{path}} = \mathrm{Transformer}(Z_{0:m-1}, Z_{0:m-1}, Z_{0:m-1}) \in \mathbb{R}^{m \times d} \quad (6)$$
$$D^{(2)}_{\mathrm{path}} = \mathrm{Transformer}(D^{(1)}_{\mathrm{path}}, Q_t, Q_t) \in \mathbb{R}^{m \times d} \quad (7)$$
$$D^{(3)}_{\mathrm{path}} = \mathrm{Transformer}(D^{(2)}_{\mathrm{path}}, C_t, C_t) \in \mathbb{R}^{m \times d} \quad (8)$$
2 https://code.google.com/archive/p/word2vec/
At the $m$-th decoding step ($m \geq 1$), our model selects the next dialogue turn among the set of dialogue turns that are adjacent, in the semantic graph, to the one selected at the $(m-1)$-th decoding step.
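Before describing how this adjacency constraint is enforced, a schematic sketch of the attention stack in Equations 6-8 may help make the decoder concrete. The sketch is in PyTorch; the use of nn.MultiheadAttention, the layer sizes, and the omission of residual connections, layer normalization, feed-forward sublayers, and the causal mask over Z are all simplifying assumptions rather than details from the paper.

# A schematic sketch of the path generator's attention stack (Eqs. 6-8).
import torch
import torch.nn as nn

class PathDecoderBlock(nn.Module):
    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.q_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.c_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, Z, Q, C):
        # Eq. 6: self-attention over the embedded path prefix Z_{0:m-1}.
        d1, _ = self.self_attn(Z, Z, Z)
        # Eq. 7: cross-attention on the current question representation Q_t.
        d2, _ = self.q_attn(d1, Q, Q)
        # Eq. 8: cross-attention on the dialogue context representation C_t.
        d3, _ = self.c_attn(d2, C, C)
        return d3  # D^(3)_path, shape (batch, m, d_model)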
This adjacency constraint is enforced by masking the softmax output scores, in which non-adjacent turn indices are assigned a very low scalar $s_{\mathrm{masked}}$. We denote the adjacency matrix of the semantic graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ as a square matrix $A$ of size $|\mathcal{V}| \times |\mathcal{V}|$, where $A_{i,j} = 1$ if $\langle i, j \rangle \in \mathcal{E}$, and $A_{i,i} = 1$ for all $i = 1, \ldots, |\mathcal{V}|$. The probability of decoded turns at the $m$-th decoding step is:
$$P_m = \mathrm{softmax}(D^{(3)}_{\mathrm{path},m} W_{\mathrm{path}}) \in \mathbb{R}^{|\mathcal{V}|}, \qquad P_{m,i} = s_{\mathrm{masked}} \;\; \forall i \text{ such that } A_{\hat{r}_{m-1}, i} = 0 \quad (9)$$
where $W_{\mathrm{path}} \in \mathbb{R}^{d \times |\mathcal{V}|}$. The decoding process terminates when the next decoded token is an [EOP] (end-of-path) token. At inference time, we adopt a greedy decoding approach. Due to the small size of $\mathcal{V}$, we found that a greedy approach performs as well as beam search methods. The computational cost of generating reasoning paths over dialogue context is thus dependent only on the average path length, which is bounded by the maximum number of dialogue turns.
Data Augmentation. We train our path generator in a supervised manner. At each dialogue turn $t$ with a semantic graph $\mathcal{G}$, we use a graph traversal method, e.g. BFS, to find all paths that start from the current turn and end at any past turn. We maintain ground-truth paths with dialogue temporal order by keeping the dialogue turn index at path position $m$ lower than the turn index at path position $m-1$. We further narrow down the ground-truth paths based on their total lexical overlap with the expected output answers. Using the dialogue in Figure 1 as an example, BFS results in three potential path candidates: 5→4, 5→2, and 5→4→2. We select 5→4→2 as the ground-truth path because it covers the most sub-nodes of the expected answer. If two paths have the same number of lexical overlaps, we select the one with the shorter length. If two paths are equivalent, we randomly sample one path from a uniform distribution at each training step. Ground-truth reasoning paths are appended with an [EOP] token at the final position as the termination condition. The objective for training the path generator is the generation objective of the reasoning path at each dialogue turn:
$$\hat{\mathcal{R}}_t = \arg\max_{\mathcal{R}_t} P(\mathcal{R}_t \mid \mathcal{C}_t, \mathcal{Q}_t; \phi) = \arg\max_{\mathcal{R}_t} \prod_{m=1}^{L_{\mathrm{path}}} P_m(r_m \mid \mathcal{R}_{t,1:m-1}, \mathcal{C}_t, \mathcal{Q}_t; \phi) \quad (10)$$" }, { "heading": "3.4 MULTIMODAL REASONING FROM REASONING PATHS", "text": "The graph structure $\mathcal{G}$ and the generated path $\hat{\mathcal{R}}_t$ are used as a layout to propagate features of both textual and visual inputs. For each dialogue turn in $\mathcal{V}$, we obtain the corresponding embeddings and apply mean pooling to get a vector representation. We denote the turn-level representations of $\mathcal{V}$ as $V \in \mathbb{R}^{d \times |\mathcal{V}|}$. We use attention to retrieve turn-dependent visual features from the visual input:
$$M = \mathrm{Transformer}(V, I, I) \in \mathbb{R}^{d \times |\mathcal{V}|} \quad (11)$$
where $I$ is a two-dimensional feature representation of the visual input $\mathcal{I}$. We define a new multimodal graph based on the semantic graph $\mathcal{G}$: $\mathcal{G}_{mm} = (\mathcal{V}_{mm}, \mathcal{E}_{mm})$, where $\mathcal{V}_{mm} = M$ and $\langle m_i, m_j \rangle \in \mathcal{E}_{mm}$ for all $i, j$ such that $\langle i, j \rangle \in \mathcal{E}$. We employ a vanilla graph convolutional network (Kipf & Welling, 2017) to update the turn-level multimodal representations through message passing along all edges:
$$e_k = \frac{1}{|\Omega_k|} \sum_{m_j \in \Omega_k} f(m_k, m_j), \qquad e = \frac{1}{|\mathcal{V}|} \sum_k e_k, \qquad \tilde{m}_k = g(m_k, e_k, e) \quad (12)$$
where $\Omega_k$ is the set of nodes adjacent to $m_k$, and $f(\cdot)$ and $g(\cdot)$ are non-linear layers, e.g. MLPs, whose inputs are simply concatenated. To propagate features along a reasoning path $\hat{\mathcal{R}}_t$, we utilize the updated turn-level multimodal representations $\tilde{M}$ and traverse the path sequentially through the representation of the corresponding turn index $r_m$ at each traversal step. Specifically, we obtain $G = \{\tilde{m}_{\hat{r}_0}, \tilde{m}_{\hat{r}_1}, \ldots\} \in \mathbb{R}^{L_{\mathrm{path}} \times d}$. 
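As an illustration of the message passing in Equation 12, the following PyTorch sketch treats f and g as single-hidden-layer MLPs over concatenated inputs; the exact layer shapes and activations are our assumptions.

# A sketch of the turn-level message passing in Equation 12.
import torch
import torch.nn as nn

class TurnGCNLayer(nn.Module):
    def __init__(self, d: int = 128):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())  # pairwise message
        self.g = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU())  # node update

    def forward(self, M, A):
        # M: turn-level multimodal features, shape (|V|, d)
        # A: binary (float) adjacency matrix of the semantic graph, (|V|, |V|)
        n, d = M.shape
        # pairwise messages f(m_k, m_j) for all (k, j)
        pairs = torch.cat([M.unsqueeze(1).expand(n, n, d),
                           M.unsqueeze(0).expand(n, n, d)], dim=-1)
        msgs = self.f(pairs)                              # (n, n, d)
        deg = A.sum(dim=1, keepdim=True).clamp(min=1)     # |Omega_k|
        e_k = (A.unsqueeze(-1) * msgs).sum(dim=1) / deg   # neighbourhood mean
        e = e_k.mean(dim=0, keepdim=True).expand(n, d)    # graph-level mean
        return self.g(torch.cat([M, e_k, e], dim=-1))     # updated turn features

The dense pairwise computation is used purely for clarity; it remains cheap here because a dialogue has at most ten turns.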
The traversal over $G$ can then be done through a recurrent network or a transformer encoder; we use the latter:
$$\tilde{G} = \mathrm{Transformer}(G, G, G) \in \mathbb{R}^{L_{\mathrm{path}} \times d} \quad (13)$$
To incorporate the propagated features into the target response, we adopt a state-of-the-art decoder model from (Le et al., 2019) that exploits multimodal attention over contextual features. Specifically, we integrate both $\tilde{M}$ and $\tilde{G}$ at each response decoding step through two separate attention layers.
We also experiment with integrating the propagated features into decoders based on Transformer language models, which have recently shown impressive performance in generation tasks by transferring language representations pretrained on massive data (Radford et al., 2019). To integrate, we simply concatenate $\tilde{M}$ and $\tilde{G}$ to the input sequence embeddings of the language model, similarly to (Le & Hoi, 2020; Li et al., 2020b).
Optimization. The multimodal reasoning model is learned jointly with the other model components. All model parameters are optimized through the objectives of both Equation 1 and Equation 10. We use the standard cross-entropy loss, which calculates the logarithm of the softmax score at each decoding position of $\hat{\mathcal{A}}_t$ and $\hat{\mathcal{R}}_t$." }, { "heading": "4 EXPERIMENTS", "text": "Dataset. We use the Audio-Visual Scene-Aware Dialog (AVSD) benchmark developed by Alamri et al. (2019). The benchmark focuses on dialogues grounded in videos from the Charades dataset (Sigurdsson et al., 2016). Each dialogue can have up to 10 dialogue turns, which makes it an appropriate choice to evaluate our approach of reasoning paths over dialogue context. We used the standard I3D visual features to represent the video input. We experimented with the test splits used in the 7th Dialogue System Technology Challenge (DSTC7) (Yoshino et al., 2019) and in DSTC8 (Kim et al., 2019). Please see Appendix A for our experimental setup.
Overall Results. The dialogues in the AVSD benchmark focus on question answering over multiple turns and entail less semantic variance than open-domain dialogues. Therefore, we report the objective scores, including BLEU (Papineni et al., 2002), METEOR (Banerjee & Lavie, 2005), ROUGE-L (Lin, 2004), and CIDEr (Vedantam et al., 2015), which are found to have strong correlation with human subjective scores (Alamri et al., 2019). In Tables 1 and 3, we present the test results of our models in comparison with previous models on DSTC7 and DSTC8, respectively. In both test splits, our models achieve very strong performance against models that do not use pre-trained language models. Compared with models using pre-trained language models and additional fine-tuning, our models achieve competitive performance in both test splits. The performance gain of our models when using GPT2 indicates the sensitivity of current models to the language model used as the generator. A unique benefit of our models over prior approaches is the insight into how the models exploit information from dialogue turns in the form of reasoning paths (please see example outputs in Figure 3).
Ablation Analysis. In Table 4 we report the results of path learning on a global semantic graph. In these graphs, we do not decompose each dialogue turn into component sub-nodes (line 5 in Algorithm 1) but directly compute the similarity score based on the whole sentence embedding. In this case, to train the path generator, we obtain the ground-truth path by using BFS to traverse to the node with the highest sentence-level similarity score to the expected answer. 
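To make the supervision concrete, here is a sketch of ground-truth path selection via BFS as described in the Data Augmentation paragraph of Section 3.3; scoring by sub-node overlap corresponds to the local-semantics variant, while the global variant just discussed would instead score whole-sentence similarity. The data structures and the first-found tie-breaking (the paper samples uniformly among equivalent paths) are our assumptions.

# A sketch of ground-truth reasoning-path selection by BFS.
from collections import deque

def ground_truth_path(t, adj, turn_nodes, answer_nodes):
    """Enumerate temporally ordered paths starting at the current turn t
    and keep the path whose turns cover the most sub-nodes of the expected
    answer (shorter path preferred on ties).
    t: current turn index; adj: dict turn -> set of adjacent turns;
    turn_nodes: dict turn -> set of sub-nodes; answer_nodes: answer sub-nodes."""
    best, best_score = [t], -1
    queue = deque([[t]])
    while queue:
        path = queue.popleft()
        covered = set().union(*(turn_nodes[i] for i in path))
        score = len(covered & answer_nodes)
        if score > best_score or (score == best_score and len(path) < len(best)):
            best, best_score = path, score
        for nxt in adj[path[-1]]:
            if nxt < path[-1]:  # temporal order: turn indices must decrease
                queue.append(path + [nxt])
    return best  # e.g. [5, 4, 2] for the dialogue in Figure 1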
From these results, we observe that: (1) models that learn paths based on component lexical overlaps result in better performance than paths based on global lexical overlaps on most of the objective metrics. (2) Propagation by reasoning path alone, without using a GCN, does not result in better performance. This can be explained by the fact that the information at each traversal step is not independent but still contains semantic dependencies on other turns. This differs from standard reading comprehension problems, where each knowledge base is independent and it is not necessary to propagate features through a graph structure to obtain contextual updates. Please see Appendix B for additional analysis of Table 4.
Impacts of Reasoning Path Learning. We compare models that learn reasoning paths against those that use a fixed propagation path through the past dialogue turns. From Table 5, we observe that: (1) learning dynamic instance-based reasoning paths outperforms all models that propagate through a default path. This is achieved by using the reasoning path as a skeleton for feature propagation, as well as by adopting the joint training strategy. We can consider a dynamically learned path as an ideal traversal path for propagating visual cues among all possible paths within the semantic graph of the dialogue context. (2) Our path generator generates reasoning paths well, and the model with learned paths performs as well as one using the oracle paths. (3) Due to the short length of reasoning paths (limited by the maximum dialogue length), either a beam search or a greedy decoding approach is good enough to generate paths; the greedy approach has the advantage of a much lower computational cost.
Qualitative Analysis. In Figure 3, we demonstrate some examples of our predicted responses and the corresponding reasoning paths. Specifically, we showcase samples in which the reasoning paths are 2-hop (Examples A and B) and 3-hop (Examples C and D), and where the distance of each hop can be one dialogue turn (Examples B and D) or more (Examples A and C). The example reasoning paths are able to connect a sequence of dialogue turns that are most relevant to the question of the current turn. For instance, in Example A, the reasoning path connects the 7th and 9th turns to the current turn, as they contain lexical overlaps, i.e. “the bag” and “the cushion”. The path skips the 8th turn, which is not relevant to the current question. Likewise, in Example C, the path skips the 4th-8th turns. All examples show that dialogue context can be used to extract additional visual cues relevant to the current turn. Information from dialogues thus deserves more attention than just being used as a background text input. Please see Appendix C for additional analysis." }, { "heading": "5 CONCLUSION", "text": "We proposed PDC, a novel approach to learning a reasoning path over dialogue turns for video-grounded dialogues. Our approach exploits the compositional semantics in each dialogue turn to construct a semantic graph, which is then used to derive an optimal path for feature propagation. Our experiments demonstrate that our model can learn to retrieve paths that are most relevant to the current question. We hope our approach can motivate further study of reasoning over multiple turns, especially in complex settings with interconnected dialogue flows (Sun et al., 2019)." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank all reviewers for their insightful feedback on the manuscript of this paper. 
The first author of this paper is supported by the Agency for Science, Technology and Research (A*STAR) Computing and Information Science scholarship." }, { "heading": "A EXPERIMENTAL SETUP", "text": "We experiment with the Adam optimizer (Kingma & Ba, 2015). The models are trained with a warm-up learning rate period of 5 epochs before the learning rate decays, and training runs for up to 50 epochs. The best model is selected by the average loss on the validation set. All model parameters, except the decoder parameters when using pre-trained language models, are initialized from a uniform distribution (Glorot & Bengio, 2010). The Transformer hyper-parameters are tuned by validation results over $d = \{128, 256\}$, $h = \{1, 2, 4, 8, 16\}$, and a dropout rate from 0.1 to 0.5. Label smoothing (Szegedy et al., 2016) is applied to the labels of $\hat{\mathcal{A}}_t$ (label smoothing does not help when optimizing over $\hat{\mathcal{R}}_t$, as the labels are limited by the maximum length of dialogues, i.e. 10 in AVSD)." }, { "heading": "B IMPACTS OF COMPOSITIONAL SEMANTIC GRAPH", "text": "We experiment with model variants based on different types of graph structures. Specifically, we compare our compositional semantic graph against a graph built upon turn-level global semantics. In these graphs, we do not decompose each dialogue turn into component sub-nodes (line 5 in Algorithm 1) but directly compute the similarity score based on the whole sentence embedding. We also experiment with a fully connected graph structure. For each graph structure, we experiment with temporally ordered edges (TODirect). This is enforced by adding a check of whether Get_Dial_Turn($s_j$) > Get_Dial_Turn($s_i$) in line 11 and removing line 12 in Algorithm 1. From the results in Table 4, we observe that: (1) based on the CIDEr metric, the best performing graph structure is the compositional semantic graph, while the global semantic graph and the fully connected graph structure are almost equivalent. This is consistent with the previous insight in machine reading comprehension research that entity lexical overlaps between knowledge bases are often overlooked by global embeddings (Ding et al., 2019), and that it is not reliable to construct a knowledge graph based on global representations alone. (2) Regarding the direction of edges, bidirectional edges and temporally ordered edges perform similarly, indicating that processing dialogue turns in temporal order provides enough information and backward processing is only supplementary." }, { "heading": "C ADDITIONAL QUALITATIVE ANALYSIS", "text": "In Figure 4, we demonstrate example outputs of reasoning paths and dialogue responses, and we have the following observations:
• For questions that do not involve actions and can be answered from a single frame, there is typically no reasoning path, i.e. the path only includes the current turn (Examples A and B). These questions are usually simple and are rarely involved in multiple dialogue turns.
• In many cases, the dialogue agent predicts an appropriate path but still does not generate the correct answer (Examples D and G). These paths are able to connect turns that are most relevant to the current turn, but these past turns contain no, or only very limited, clues to the expected answers. For example, in Example F, the 2nd and 4th turns are linked by the lexical component for “the woman”. However, they do not have useful information relevant to the current turn, i.e. 
her clothes.
• Finally, our approach shows that the current benchmark, AVSD, typically contains one-hop (Examples C, D, E) to two-hop (Examples F, G, H) reasoning paths over dialogue context. We hope future dialogue benchmarks will factor in the complexity of dialogue context in terms of reasoning hops, to facilitate better research on intelligent dialogue systems.
Discussion of failure cases. From the above observations, we identify the following scenarios that our models are susceptible to, and we propose potential directions for improvement.
• Long complex utterances. One limitation of our method is its dependence on syntactic parsers to decompose a sentence into sub-nodes. In most dialogues, this problem is not too serious due to the short length of utterances, usually just a single sentence. However, in cases where the utterance contains multiple sentences/clauses or exhibits spoken language with loose syntax, the parser may fail to decompose it properly. For instance, in Example G in Figure 4, the ground-truth answer contains a causality-based clause (“because”), making it harder to identify sub-nodes such as “sneeze” or “dusty”.
• Contextualized semantic similarity. Another way to improve this method is to inject some form of sentence-level contextual cues into each sub-node to improve its semantic representation. For instance, in a hypothetical dialogue that involves two question utterances such as the 2nd turn in Example A and the 6th turn in Example E in Figure 4, our method might not detect the connection between the two, as they do not have overlapping component sub-nodes. However, they are both related to the audio aspect of the video, and a reasoning path between these two turns would be appropriate." }, { "heading": "D STATISTICS OF LOCAL VS. GLOBAL SEMANTIC GRAPHS", "text": "In Table 6, we report the statistics of graph structures constructed from local and global semantics on all data splits of the AVSD benchmark. We observe that constructing graphs with local semantics results in fewer instances with no reasoning paths than constructing graphs with global semantics. This is due to the compositionality of our method, which results in higher lexical overlap between dialogue turns. With our method, the number of sub-nodes per dialogue turn is more than 4 on average, making it easier to connect dialogue turns. This also leads to a larger and more diverse set of reasoning paths for supervised learning. In local semantic graphs, the average number of reasoning paths per dialogue turn is 2 to 3, higher than this number in global semantic graphs. Although our method requires additional computational effort to construct these graphs, it is scalable with the size of the dialogue, i.e. the number of dialogue turns. To efficiently construct these graphs in a dialogue, the semantic graph of a dialogue turn can be built on top of the semantic graph of the previous turn. This is done by simply adding the new sub-nodes to the previous turn’s semantic graph and defining new edges adjacent to these sub-nodes only. In this way, the complexity of our graph construction method is linear in the number of dialogue turns." } ]
2021
Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues
SP:d8c1eee3aad4cbe04e5602c4a4da1da44a8ca9d3
[ "This paper proposes a framework called GEBM that combine an implicit generator and an EBM to define a probabilistic model on low dimensional manifold. Specifically, the implicit generator defines the base distribution, and the EBM refines the base. Finally, this method is equivalent to define an EBM on the latent space of the implicit generator together with a mapping from the latent space to the data space. The authors propose to use the KALE to train the generator, and provide a theoretical guarantee about the validness of the KALE." ]
We introduce the Generalized Energy Based Model (GEBM) for generative modelling. These models combine two trained components: a base distribution (generally an implicit model), which can learn the support of data with low intrinsic dimension in a high dimensional space; and an energy function, to refine the probability mass on the learned support. Both the energy function and base jointly constitute the final model, unlike GANs, which retain only the base distribution (the "generator"). GEBMs are trained by alternating between learning the energy and the base. We show that both training stages are well-defined: the energy is learned by maximising a generalized likelihood, and the resulting energy-based loss provides informative gradients for learning the base. Samples from the posterior on the latent space of the trained model can be obtained via MCMC, thus finding regions in this space that produce better quality samples. Empirically, the GEBM samples on image-generation tasks are of much better quality than those from the learned generator alone, indicating that all else being equal, the GEBM will outperform a GAN of the same complexity. When using normalizing flows as base measures, GEBMs succeed on density modelling tasks, returning comparable performance to direct maximum likelihood of the same networks.
[ { "affiliations": [], "name": "Michael Arbel" }, { "affiliations": [], "name": "Liang Zhou" } ]
[ { "authors": [ "M. Arbel", "A. Gretton" ], "title": "Kernel Conditional Exponential Family", "venue": "International Conference on Artificial Intelligence and Statistics, pages 1337–1346.", "year": 2018 }, { "authors": [ "M. Arbel", "A. Gretton", "W. Li", "G. Montufar" ], "title": "Kernelized Wasserstein Natural Gradient", "venue": null, "year": 2019 }, { "authors": [ "M. Arbel", "D. Sutherland", "M. Binkowski", "A. Gretton" ], "title": "On gradient regularizers for mmd gans", "venue": "Advances in Neural Information Processing Systems 31. Curran Associates, Inc.", "year": 2018 }, { "authors": [ "M. Arjovsky", "S. Chintala", "L. Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, International Convention Centre, Sydney, Australia. PMLR.", "year": 2017 }, { "authors": [ "S. Arora", "R. Ge", "Y. Liang", "T. Ma", "Y. Zhang" ], "title": "Generalization and equilibrium in generative adversarial nets (GANs)", "venue": "Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 224–232. PMLR.", "year": 2017 }, { "authors": [ "S. Azadi", "C. Olsson", "T. Darrell", "I. Goodfellow", "A. Odena" ], "title": "Discriminator rejection sampling", "venue": "International Conference on Learning Representations.", "year": 2019 }, { "authors": [ "D. Belanger", "A. McCallum" ], "title": "Structured prediction energy networks", "venue": "International Conference on Machine Learning, pages 983–992.", "year": 2016 }, { "authors": [ "M. Betancourt", "S. Byrne", "S. Livingstone", "M. Girolami" ], "title": "The geometric foundations of Hamiltonian Monte Carlo", "venue": "Bernoulli, 23(4A):2257–2298.", "year": 2017 }, { "authors": [ "M. Bińkowski", "D.J. Sutherland", "M. Arbel", "A. Gretton" ], "title": "Demystifying MMD GANs", "venue": "International Conference on Learning Representations.", "year": 2018 }, { "authors": [ "L. Bottou", "M. Arjovsky", "D. Lopez-Paz", "M. Oquab" ], "title": "Geometrical insights for implicit generative modeling", "venue": "Braverman Readings in Machine Learning.", "year": 2017 }, { "authors": [ "J. Brehmer", "K. Cranmer" ], "title": "Flows for simultaneous manifold learning and density estimation", "venue": "arXiv preprint arXiv:2003.13913.", "year": 2020 }, { "authors": [ "A. Brock", "J. Donahue", "K. Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096.", "year": 2018 }, { "authors": [ "T. Che", "R. Zhang", "J. Sohl-Dickstein", "H. Larochelle", "L. Paull", "Y. Cao", "Y. Bengio" ], "title": "Your GAN is secretly an energy-based model and you should use discriminator driven latent sampling", "venue": null, "year": 2020 }, { "authors": [ "T. Chen", "X. Zhai", "M. Ritter", "M. Lucic", "N. Houlsby" ], "title": "Self-supervised gans via auxiliary rotation loss", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12154–12163.", "year": 2019 }, { "authors": [ "X. Cheng", "N.S. Chatterji", "P.L. Bartlett", "M.I. Jordan" ], "title": "Underdamped langevin mcmc: A non-asymptotic analysis", "venue": "arXiv preprint arXiv:1707.03663.", "year": 2017 }, { "authors": [ "C. Chu", "K. Minami", "K. Fukumizu" ], "title": "Smoothness and stability in gans", "venue": "International Conference on Learning Representations.", "year": 2020 }, { "authors": [ "R. 
Cornish", "A.L. Caterini", "G. Deligiannidis", "A. Doucet" ], "title": "Relaxing bijectivity constraints with continuously indexed normalising flows", "venue": null, "year": 2020 }, { "authors": [ "K. Cranmer", "J. Pavez", "G. Louppe" ], "title": "Approximating likelihood ratios with calibrated discriminative classifiers", "venue": null, "year": 2016 }, { "authors": [ "B. Dai", "H. Dai", "A. Gretton", "L. Song", "D. Schuurmans", "N. He" ], "title": "Kernel exponential family estimation via doubly dual embedding", "venue": "Proceedings of Machine Learning Research, volume 89 of Proceedings of Machine Learning Research, pages 2321–2330. PMLR.", "year": 2019 }, { "authors": [ "B. Dai", "Z. Liu", "H. Dai", "N. He", "A. Gretton", "L. Song", "D. Schuurmans" ], "title": "Exponential Family Estimation via Adversarial Dynamics Embedding", "venue": "arXiv:1904.12083 [cs, stat]. arXiv: 1904.12083.", "year": 2019 }, { "authors": [ "D. Davis", "D. Drusvyatskiy" ], "title": "Stochastic subgradient method converges at the rate $O(k^{-1/4})$ on weakly convex functions", "venue": "arXiv:1802.02988 [cs, math].", "year": 2018 }, { "authors": [ "P. Del Moral", "A. Doucet", "A. Jasra" ], "title": "Sequential monte carlo samplers", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(3):411–436.", "year": 2006 }, { "authors": [ "Y. Deng", "A. Bakhtin", "M. Ott", "A. Szlam", "M. Ranzato" ], "title": "Residual energy-based models for text generation", "venue": "arXiv preprint arXiv:2004.11714.", "year": 2020 }, { "authors": [ "X. Ding", "Z.J. Wang", "W.J. Welch" ], "title": "Subsampling Generative Adversarial Networks: Density Ratio Estimation in Feature Space with Softplus Loss", "venue": null, "year": 2019 }, { "authors": [ "L. Dinh", "J. Sohl-Dickstein", "S. Bengio" ], "title": "Density estimation using real nvp", "venue": null, "year": 2016 }, { "authors": [ "J. Donahue", "K. Simonyan" ], "title": "Large Scale Adversarial Representation Learning", "venue": "arXiv:1907.02544 [cs, stat]. arXiv: 1907.02544.", "year": 2019 }, { "authors": [ "M.D. Donsker", "S.R.S. Varadhan" ], "title": "Asymptotic evaluation of certain markov process expectations for large time, i", "venue": "28(1):1–47. _eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/cpa.3160280102.", "year": 1975 }, { "authors": [ "A. Doucet", "Freitas", "N. d.", "N. Gordon" ], "title": "Sequential Monte Carlo Methods in Practice", "venue": "Information Science and Statistics. Springer-Verlag, New York.", "year": 2001 }, { "authors": [ "Y. Du", "I. Mordatch" ], "title": "Implicit generation and modeling with energy based models", "venue": "Advances in Neural Information Processing Systems 32, pages 3608–3618. Curran Associates, Inc.", "year": 2019 }, { "authors": [ "J. Duchi", "E. Hazan", "Y. Singer" ], "title": "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization", "venue": "Journal of Machine Learning Research, 12(Jul):2121–2159.", "year": 2011 }, { "authors": [ "A. Eberle", "A. Guillin", "R. Zimmer" ], "title": "Couplings and quantitative contraction rates for Langevin dynamics", "venue": "The Annals of Probability.", "year": 2017 }, { "authors": [ "I. Ekeland", "R. Témam" ], "title": "Convex Analysis and Variational Problems", "venue": "Classics in Applied Mathematics. Society for Industrial and Applied Mathematics.", "year": 1999 }, { "authors": [ "J. Feydy", "T. Séjourné", "Vialard", "F.-X.", "Amari", "S.-i.", "A. Trouvé", "G. 
Peyré" ], "title": "Interpolating between optimal transport and mmd using sinkhorn divergences", "venue": "The 22nd International Conference on Artificial Intelligence and Statistics, pages 2681–2690.", "year": 2019 }, { "authors": [ "I. Goodfellow", "J. Pouget-Abadie", "M. Mirza", "B. Xu", "D. Warde-Farley", "S. Ozair", "A. Courville", "Y. Bengio" ], "title": "Generative adversarial nets", "venue": "Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc.", "year": 2014 }, { "authors": [ "W. Grathwohl", "Wang", "K.-C", "Jacobsen", "J.-H", "D. Duvenaud", "M. Norouzi", "K. Swersky" ], "title": "Your classifier is secretly an energy based model and you", "venue": null, "year": 2020 }, { "authors": [ "A. Grover", "J. Song", "A. Kapoor", "K. Tran", "A. Agarwal", "E.J. Horvitz", "S. Ermon" ], "title": "Bias correction of learned generative models using likelihood-free importance weighting", "venue": "Advances in Neural Information Processing Systems 32. Curran Associates, Inc.", "year": 2019 }, { "authors": [ "I. Gulrajani", "F. Ahmed", "M. Arjovsky", "V. Dumoulin", "A. Courville" ], "title": "Improved training of wasserstein gans", "venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, Red Hook, NY, USA. Curran Associates Inc.", "year": 2017 }, { "authors": [ "M.U. Gutmann", "A. Hyvärinen" ], "title": "Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics", "venue": "The Journal of Machine Learning Research, 13(null):307–361.", "year": 2012 }, { "authors": [ "M. Haugh" ], "title": "Mcmc and bayesian modeling", "venue": "IEOR E4703 Monte-Carlo Simulation, Columbia University.", "year": 2017 }, { "authors": [ "M. Heusel", "H. Ramsauer", "T. Unterthiner", "B. Nessler", "S. Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "Advances in Neural Information Processing Systems 30, pages 6626–6637. Curran Associates, Inc.", "year": 2017 }, { "authors": [ "G.E. Hinton" ], "title": "Training products of experts by minimizing contrastive divergence", "venue": "Neural Computation, 14(8):1771–1800.", "year": 2002 }, { "authors": [ "J. Ho", "S. Ermon" ], "title": "Generative adversarial imitation learning", "venue": "Advances in neural information processing systems, pages 4565–4573.", "year": 2016 }, { "authors": [ "A. Hyvärinen" ], "title": "Estimation of Non-Normalized Statistical Models by Score Matching", "venue": "The Journal of Machine Learning Research, 6:695–709.", "year": 2005 }, { "authors": [ "T. Kanamori", "T. Suzuki", "M. Sugiyama" ], "title": "f -divergence estimation and two-sample homogeneity test under semiparametric density-ratio models", "venue": "IEEE Transactions on Information Theory, 58(2):708–720.", "year": 2011 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "arXiv:1412.6980", "year": 2014 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-encoding variational Bayes", "venue": "ICLR.", "year": 2014 }, { "authors": [ "A. Klenke" ], "title": "Probability Theory: A Comprehensive Course", "venue": "World Publishing Corporation.", "year": 2008 }, { "authors": [ "N. Kodali", "J. Abernethy", "J. Hays", "Z. Kira" ], "title": "On Convergence and Stability of GANs", "venue": "arXiv:1705.07215 [cs]. arXiv: 1705.07215.", "year": 2017 }, { "authors": [ "A. 
Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, University of Toronto.", "year": 2009 }, { "authors": [ "J. Lawson", "G. Tucker", "B. Dai", "R. Ranganath" ], "title": "Energy-inspired models: Learning with sampler-induced distributions", "venue": "Advances in Neural Information Processing Systems 32, pages 8501–8513. Curran Associates, Inc.", "year": 2019 }, { "authors": [ "Y. LeCun", "S. Chopra", "R. Hadsell", "M. Ranzato", "Huang", "F.-J." ], "title": "Predicting Structured Data, chapter A Tutorial on Energy-Based Learning", "venue": "MIT Press.", "year": 2006 }, { "authors": [ "Li", "C.-L.", "Chang", "W.-C.", "Y. Cheng", "Y. Yang", "B. Poczos" ], "title": "Mmd gan: Towards deeper understanding of moment matching network", "venue": "Advances in Neural Information Processing Systems 30, pages 2203–2213. Curran Associates, Inc.", "year": 2017 }, { "authors": [ "S. Liu", "O. Bousquet", "K. Chaudhuri" ], "title": "Approximation and Convergence Properties of Generative Adversarial Learning", "venue": null, "year": 2017 }, { "authors": [ "Z. Liu", "P. Luo", "X. Wang", "X. Tang" ], "title": "Deep learning face attributes in the wild", "venue": null, "year": 2015 }, { "authors": [ "P. Milgrom", "I. Segal" ], "title": "Envelope Theorems for Arbitrary Choice Sets", "venue": "Econometrica, 70.", "year": 2002 }, { "authors": [ "T. Miyato", "T. Kataoka", "M. Koyama", "Y. Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "International Conference on Learning Representations.", "year": 2018 }, { "authors": [ "V. Nagarajan", "J.Z. Kolter" ], "title": "Gradient descent gan optimization is locally stable", "venue": null, "year": 2017 }, { "authors": [ "R.M. Neal" ], "title": "Mcmc using hamiltonian dynamics", "venue": "Handbook of Markov Chain Monte Carlo.", "year": 2010 }, { "authors": [ "K. Neklyudov", "E. Egorov", "D. Vetrov" ], "title": "The implicit metropolis-hastings", "venue": null, "year": 2019 }, { "authors": [ "T. Nguyen", "T. Le", "H. Vu", "D. Phung" ], "title": "Dual discriminator generative adversarial nets", "venue": "Advances in Neural Information Processing Systems, pages 2670–2680.", "year": 2017 }, { "authors": [ "X. Nguyen", "M.J. Wainwright", "M.I. Jordan" ], "title": "Estimating divergence functionals and the likelihood ratio by convex risk minimization", "venue": "IEEE Transactions on Information Theory, 56(11):5847–5861.", "year": 2010 }, { "authors": [ "S. Nowozin", "B. Cseke", "R. Tomioka" ], "title": "f-gan: Training generative neural samplers using variational divergence minimization", "venue": "Advances in Neural Information Processing Systems 29, pages 271–279. Curran Associates, Inc.", "year": 2016 }, { "authors": [ "Oord", "A. v. d.", "N. Kalchbrenner", "K. Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "arXiv preprint arXiv:1601.06759.", "year": 2016 }, { "authors": [ "G. Ostrovski", "W. Dabney", "R. Munos" ], "title": "Autoregressive quantile networks for generative modeling", "venue": "arXiv preprint arXiv:1806.05575.", "year": 2018 }, { "authors": [ "G. Papamakarios", "T. Pavlakou", "I. Murray" ], "title": "Masked autoregressive flow for density estimation", "venue": "NIPS.", "year": 2017 }, { "authors": [ "A. Radford", "L. Metz", "S. Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434.", "year": 2015 }, { "authors": [ "M. Raginsky", "A. 
Rakhlin", "M. Telgarsky" ], "title": "Non-convex learning via stochastic gradient langevin dynamics: a nonasymptotic analysis", "venue": null, "year": 2017 }, { "authors": [ "J.R. Retherford" ], "title": "Review: J", "venue": "diestel and j. j. uhl, jr., vector measures. Bull. Amer. Math. Soc., 84(4):681–685.", "year": 1978 }, { "authors": [ "D.J. Rezende", "S. Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, pages 1530–1538. JMLR.org.", "year": 2015 }, { "authors": [ "D.J. Rezende", "S. Mohamed", "D. Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "ICML, pages 1278–1286.", "year": 2014 }, { "authors": [ "R.T. Rockafellar" ], "title": "Convex analysis", "venue": "Princeton Mathematical Series. Princeton University Press, Princeton, N. J.", "year": 1970 }, { "authors": [ "O. Russakovsky", "J. Deng", "H. Su", "J. Krause", "S. Satheesh", "S. Ma", "Z. Huang", "A. Karpathy", "A. Khosla", "M. Bernstein", "A.C. Berg", "L. Fei-Fei" ], "title": "ImageNet Large Scale Visual Recognition Challenge", "venue": "arXiv:1409.0575 [cs]. arXiv: 1409.0575.", "year": 2014 }, { "authors": [ "M. Sachs", "B. Leimkuhler", "V. Danos" ], "title": "Langevin Dynamics with Variable Coefficients and Nonconservative Forces: From Stationary States to Numerical Methods", "venue": "Entropy, 19.", "year": 2017 }, { "authors": [ "M. Sanjabi", "J. Ba", "M. Razaviyayn", "J.D. Lee" ], "title": "On the convergence and robustness of training gans with regularized optimal transport", "venue": "Advances in Neural Information Processing Systems 31, pages 7091–7101. Curran Associates, Inc.", "year": 2018 }, { "authors": [ "D. Siegmund" ], "title": "Importance sampling in the monte carlo study of sequential tests", "venue": "The Annals of Statistics, pages 673–684.", "year": 1976 }, { "authors": [ "Simon-Gabriel", "C.-J.", "B. Scholkopf" ], "title": "Kernel distribution embeddings: Universal kernels, characteristic kernels and kernel metrics on distributions", "venue": "Journal of Machine Learning Research, 19(44):1–29.", "year": 2018 }, { "authors": [ "U. Simsekli", "L. Zhu", "Y.W. Teh", "M. Gurbuzbalaban" ], "title": "Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise", "venue": "arXiv:2002.05685 [cs, stat]. arXiv: 2002.05685.", "year": 2020 }, { "authors": [ "B. Sriperumbudur", "K. Fukumizu", "R. Kumar", "A. Gretton", "A. Hyvärinen" ], "title": "Density estimation in infinite dimensional exponential families", "venue": "Journal of Machine Learning Research.", "year": 2017 }, { "authors": [ "M. Sugiyama", "T. Suzuki", "T. Kanamori" ], "title": "Density ratio estimation in machine learning", "venue": "Cambridge University Press.", "year": 2012 }, { "authors": [ "D. Sutherland", "H. Strathmann", "M. Arbel", "A. Gretton" ], "title": "Efficient and principled score estimation with Nystrom kernel exponential families", "venue": "International Conference on Artificial Intelligence and Statistics, pages 652–660.", "year": 2018 }, { "authors": [ "A. Tanaka" ], "title": "Discriminator optimal transport", "venue": "Advances in Neural Information Processing Systems 32. Curran Associates, Inc.", "year": 2019 }, { "authors": [ "K.K. Thekumparampil", "P. Jain", "P. Netrapalli", "S. 
Oh" ], "title": "Efficient algorithms for smooth minimax optimization", "venue": "Advances in Neural Information Processing Systems 32, pages 12680–12691. Curran Associates, Inc.", "year": 2019 }, { "authors": [ "L. Thiry", "M. Arbel", "E. Belilovsky", "E. Oyallon" ], "title": "The unreasonable effectiveness of patches in deep convolutional kernels methods", "venue": "International Conference on Learning Representations.", "year": 2021 }, { "authors": [ "Y. Tsuboi", "H. Kashima", "S. Hido", "S. Bickel", "M. Sugiyama" ], "title": "Direct density ratio estimation for large-scale covariate shift adaptation", "venue": "Journal of Information Processing, 17:138–155.", "year": 2009 }, { "authors": [ "L. Tu", "K. Gimpel" ], "title": "Learning approximate inference networks for structured prediction", "venue": "arXiv preprint arXiv:1803.03376.", "year": 2018 }, { "authors": [ "R. Turner", "J. Hung", "E. Frank", "Y. Saatchi", "J. Yosinski" ], "title": "Metropolis-Hastings generative adversarial networks", "venue": "Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 6345–6353, Long Beach, California, USA. PMLR.", "year": 2019 }, { "authors": [ "C. Villani" ], "title": "Optimal transport: Old and new", "venue": "Technical report.", "year": 2009 }, { "authors": [ "L. Wenliang", "D. Sutherland", "H. Strathmann", "A. Gretton" ], "title": "Learning deep kernels for exponential family densities", "venue": "International Conference on Machine Learning, pages 6737–6746.", "year": 2019 }, { "authors": [ "Y. Wu", "J. Donahue", "D. Balduzzi", "K. Simonyan", "T. Lillicrap" ], "title": "LOGAN: Latent Optimisation for Generative Adversarial Networks", "venue": "arXiv:1912.00953 [cs, stat]. arXiv: 1912.00953.", "year": 2019 }, { "authors": [ "Y. Wu", "M. Rosca", "T. Lillicrap" ], "title": "Deep compressed sensing", "venue": "Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 6850–6860, Long Beach, California, USA. PMLR.", "year": 2019 }, { "authors": [ "J. Xie", "Y. Lu", "R. Gao", "Y.N. Wu" ], "title": "Cooperative learning of energy-based model and latent variable model via mcmc teaching", "venue": "AAAI, volume 1, page 7.", "year": 2018 }, { "authors": [ "J. Xie", "Y. Lu", "R. Gao", "Zhu", "S.-C.", "Y.N. Wu" ], "title": "Cooperative training of descriptor and generator networks", "venue": "IEEE transactions on pattern analysis and machine intelligence, 42(1):27–45.", "year": 2018 }, { "authors": [ "J. Xie", "Y. Lu", "Zhu", "S.-C.", "Y. Wu" ], "title": "A theory of generative convnet", "venue": "International Conference on Machine Learning, pages 2635–2644.", "year": 2016 }, { "authors": [ "J. Xie", "Z. Zheng", "R. Gao", "W. Wang", "Zhu", "S.-C.", "Y. Nian Wu" ], "title": "Learning descriptor networks for 3d shape synthesis and analysis", "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8629–8638.", "year": 2018 }, { "authors": [ "J. Xie", "Zhu", "S.-C.", "Y. Nian Wu" ], "title": "Synthesizing dynamic patterns by spatial-temporal generative convnet", "venue": "Proceedings of the ieee conference on computer vision and pattern recognition, pages 7093–7101.", "year": 2017 }, { "authors": [ "J. Xie", "Zhu", "S.-C.", "Y.N. 
Wu" ], "title": "Learning energy-based spatial-temporal generative convnets for dynamic patterns", "venue": "IEEE transactions on pattern analysis and machine intelligence.", "year": 2019 }, { "authors": [ "K. Xu", "C. Du", "C. Li", "J. Zhu", "B. Zhang" ], "title": "Learning implicit generative models by teaching density estimators", "venue": "arXiv preprint arXiv:1807.03870.", "year": 2018 }, { "authors": [ "F. Yu", "A. Seff", "Y. Zhang", "S. Song", "T. Funkhouser", "J. Xiao" ], "title": "LSUN: Construction of a large-scale image dataset using deep learning with humans", "venue": null, "year": 2015 }, { "authors": [ "L. Yu", "Y. Song", "J. Song", "S. Ermon" ], "title": "Training deep energy-based models with f-divergence minimization", "venue": null, "year": 2020 }, { "authors": [ "F. Zenke", "B. Poole", "S. Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": "Proceedings of machine learning research, 70:3987.", "year": 2017 }, { "authors": [ "P. Zhang", "Q. Liu", "D. Zhou", "T. Xu", "X. He" ], "title": "On the Discrimination-Generalization Tradeoff in GANs", "venue": "arXiv:1711.02771 [cs, stat]. arXiv: 1711.02771.", "year": 2017 }, { "authors": [ "Nguyen" ], "title": "2016) derived a variational formulation for the KL using Fenchel duality. By the duality theorem (Rockafellar, 1970), the convex and lower semi-continuous function ζ :u 7→ulog(u) that appears", "venue": null, "year": 1970 }, { "authors": [ "Nguyen" ], "title": "provided the variational formulation for the reverse KL using a different choice for ζ: (ζ(u) =−log(u)). We refer to (Nowozin et al., 2016) for general f -divergences. Choosing a smaller set of functionsH in the variational objective (16) will lead to a lower bound on the KL", "venue": null, "year": 2010 }, { "authors": [ "Sachs" ], "title": "Figure 4: Samples from the GEBM at different stages of sampling using Algorithm 3 and inverse temperature β= 1, on Cifar10 and LSUN (Right). Each row represents a sampling trajectory from early stages (leftmost images) to later stages (rightmost images)", "venue": "Sampling In Algorithm 3,", "year": 2021 }, { "authors": [ "Tanaka" ], "title": "Introducing the minus sign in (31) leads to a degradation in performance", "venue": "We perform", "year": 2019 }, { "authors": [ "Turner" ], "title": "MCMC chain for 1000 iterations. G.2 DENSITY ESTIMATION Pre-processing We use code and pre-processing steps from Wenliang et al. (2019) which we describe here for completeness. For RedWine and WhiteWine, we added uniform noise with support equal to the median distances", "venue": null, "year": 2019 }, { "authors": [ "Papamakarios" ], "title": "We split all datasets, except HepMass into three splits. The test split consists of 10% of the total data. For the validation set, we use 10% of the remaining data with an upper limit of 1000 to reduce the cost of validation at each iteration", "venue": "For Hepmass and MiniBoone,", "year": 2017 }, { "authors": [ "Feydy" ], "title": "2019) between each model and the data distribution", "venue": "In all experiments,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Energy-based models (EBMs) have a long history in physics, statistics and machine learning (LeCun et al., 2006). They belong to the class of explicit models, and can be described by a family of energies E which define probability distributions with density proportional to exp(−E). Those models are often known up to a normalizing constantZ(E), also called the partition function. The learning task consists of finding an optimal function that best describes a given system or target distribution P. This can be achieved using maximum likelihood estimation (MLE), however the intractability of the normalizing partition function makes this learning task challenging. Thus, various methods have been proposed to address this (Hinton, 2002; Hyvärinen, 2005; Gutmann and Hyvärinen, 2012; Dai et al., 2019a;b). All these methods estimate EBMs that are supported over the whole space. In many applications, however, P is believed to be supported on an unknown lower dimensional manifold. This happens in particular when there are strong dependencies between variables in the data (Thiry et al., 2021), and suggests incorporating a low-dimensionality hypothesis in the model .\nGenerative Adversarial Networks (GANs) (Goodfellow et al., 2014) are a particular way to enforce low dimensional structure in a model. They rely on an implicit model, the generator, to produce samples supported on a low-dimensional manifold by mapping a pre-defined latent noise to the sample space using a trained function. GANs have been very successful in generating high-quality samples on various tasks, especially for unsupervised image generation (Brock et al., 2018). The generator is trained adversarially against a discriminator network whose goal is to distinguish samples produced by the generator from the target data. This has inspired further research to extend the training procedure to more general losses (Nowozin et al., 2016; Arjovsky et al., 2017; Li et al., 2017; Bińkowski et al., 2018; Arbel et al., 2018) and to improve its stability (Miyato et al., 2018; Gulrajani et al., 2017; Nagarajan and Kolter, 2017; Kodali et al., 2017). While the generator of a GAN has effectively a low-dimensional support, it remains challenging to refine the distribution of mass on that support using pre-defined latent noise. For instance, as shown by Cornish et al. (2020) for normalizing flows, when the latent distribution is unimodal and the target distribution possesses multiple disconnected low-dimensional components, the generator, as a continuous map, compensates for this mismatch using steeper slopes. In practice, this implies the need for more complicated generators.\n∗Correspondence: michael.n.arbel@gmail.com.\nIn the present work, we propose a new class of models, called Generalized Energy Based Models (GEBMs), which can represent distributions supported on low-dimensional manifolds, while offering more flexibility in refining the mass on those manifolds. GEBMs combine the strength of both implicit and explicit models in two separate components: a base distribution (often chosen to be an implicit model) which learns the low-dimensional support of the data, and an energy function that can refine the probability mass on that learned support. We propose to train the GEBM by alternating between learning the energy and the base, analogous to f -GAN training (Goodfellow et al., 2014; Nowozin et al., 2016). 
The energy is learned by maximizing a generalized notion of likelihood, which we relate to the Donsker-Varadhan lower bound (Donsker and Varadhan, 1975) and Fenchel duality, as in (Nguyen et al., 2010; Nowozin et al., 2016). Although the partition function is intractable in general, we propose a method to learn it in an amortized fashion without introducing additional surrogate models, as done in variational inference (Kingma and Welling, 2014; Rezende et al., 2014) or by Dai et al. (2019a;b). The resulting maximum likelihood estimate, the KL Approximate Lower-bound Estimate (KALE), is then used as a loss for training the base. When the class of energies is rich and smooth enough, we show that KALE leads to a meaningful criterion for measuring weak convergence of probabilities. Following recent work by Chu et al. (2020); Sanjabi et al. (2018), we show that KALE possesses well-defined gradients w.r.t. the parameters of the base, ensuring well-behaved training. We also provide convergence rates for the empirical estimator of KALE when the variational family is sufficiently well behaved, which may be of independent interest.
The main advantage of GEBMs becomes clear when sampling from these models: the posterior over the latents of the base distribution incorporates the learned energy, putting greater mass on regions in this latent space that lead to better-quality samples. Sampling from the GEBM can thus be achieved by first sampling from the posterior distribution of the latents via MCMC in the low-dimensional latent space, then mapping those latents to the input space using the implicit map of the base. This is in contrast to standard GANs, where the latents of the base have a fixed distribution. We focus on a class of samplers that exploit gradient information, and show that these samplers enjoy fast convergence properties by leveraging the recent work of Eberle et al. (2017). While there has been recent interest in using the discriminator to improve the quality of the generator during sampling (Azadi et al., 2019; Turner et al., 2019; Neklyudov et al., 2019; Grover et al., 2019; Tanaka, 2019; Wu et al., 2019b), our approach emerges naturally from the model we consider.
We begin in Section 2 by introducing the GEBM model. In Section 3, we describe the learning procedure using KALE, then derive a method for sampling from the learned model in Section 4. In Section 5 we discuss related work. Finally, experimental results are presented in Section 6, with code available at https://github.com/MichaelArbel/GeneralizedEBM." }, { "heading": "2 GENERALIZED ENERGY-BASED MODELS", "text": "In this section, we introduce generalized energy-based models (GEBMs), which combine the strengths of both energy-based models and implicit generative models, and admit the first of these as a special case. An energy-based model (EBM) is defined by a set E of real-valued functions called energies, where each E ∈ E specifies a probability density over the data space X ⊂ R^d up to a normalizing constant,
Q(dx) = exp(−E(x) − A) dx, A = log(∫ exp(−E(x)) dx). (1)
While EBMs have recently been shown to be powerful models for representing complex high-dimensional data distributions, they still unavoidably lead to a blurred model whenever data are concentrated on a lower-dimensional manifold. This is the case in Figure 1(a), where the ground truth distribution is
The EBM in Figure 1(d) learns to give higher density to a halo surrounding the data, and thus provides a blurred representation. That is a consequence of EBM having a density defined over the whole space, and can result in blurred samples for image models.\nAn implicit generative model (IGM) is a family of probability distributions Gθ parametrized by a learnable generator function G :Z 7→X that maps latent samples z from a fixed latent distribution η to the data space X . The latent distribution η is required to have a density over the latent space Z and is often easy to sample from. Thus, Sampling from G is simply achieved by first sampling z from η then applyingG,\nx∼G ⇐⇒ x=G(z), z∼η. (2)\nGANs are popular instances of these models, and are trained adversarially (Goodfellow et al., 2014). When the latent spaceZ has a smaller dimension than the input spaceX , the IGM will be supported on a lower dimensional manifold of X , and thus will not possess a Lebesgue density on X (Bottou et al., 2017). IGMs are therefore good candidates for modelling low dimensional distributions. While GANs can accurately learn the low-dimensional support of the data, they can have limited power for representing the distribution of mass on the support. This is illustrated in Figure 1(b).\nA generalized energy-based model (GEBM) Q is defined by a combination of a base G and an energyE defined over a subsetX of Rd. The base component can typically be chosen to be an IGM as in (2). The generalized energy component can refine the mass on the support defined by the base. It belongs to a class E of real valued functions defined on the input spaceX , and represents the negative log-density of a sample from the GEBM with respect to the base G,\nQ(dx)=exp(−E(x)−AG,E)G(dx), AG,E=log Å∫ exp(−E(x))G(dx) ã , (3)\nwhere AG,E is the logarithm of the normalizing constant of the model w.r.t. G. Thus, a GEBM Q re-weights samples from the base according to the un-normalized importance weights exp(−E(x)). Using the latent structure of the base G, this importance weight can be pulled-back to the latent space to define a posterior latent distribution ν,\nν(z) :=η(z)exp(−E(G(z))−AG,E). (4)\nHence, the posterior latent ν can be used instead of the latent noise η for sampling from Q, as summarized by Proposition 1: Proposition 1. Sampling from Q requires sampling a latent z from ν (4) then applying the mapG,\nx∼Q ⇐⇒ x=G(z), z∼ν. (5)\nIn order to hold, Proposition 1 does not need the generatorG to be invertible. We provide a proof in Appendix C.1 which relies on a characterization of probability distribution using generalized moments. We will see later in Section 4 how equation (5) can be used to provide practical sampling algorithms from the GEBM. Next we discuss the advantages of GEBMs.\nAdvantages of Generalized Energy Based Models. The GEBM defined by (3) can be related to exponential tilting (re-weighting) (Siegmund, 1976; Xie et al., 2016) of the base G. The important difference over classical EBMs is that the base G is allowed to change its support and shape in space. By learning the base G, GEBMs can accurately learn the low-dimensional support of data, just like IGMs do. 
Compared to EBMs, which put mass on the whole space by construction (positive density), GEBMs have the additional flexibility to concentrate the probability mass on a low-dimensional support learned by the base G, provided that the dimension of the latent space Z is smaller than the dimension of the ambient space X: see Figure 1(c) vs Figure 1(d). In the particular case when the dimension of Z is equal to the ambient dimension and G is invertible, the base G becomes supported over the whole space X, and GEBMs recover the usual EBMs. The next proposition further shows that any EBM can be viewed as a particular case of GEBMs, as proved in Appendix C.1. Proposition 2. Any EBM with energy E (as in (1)) can be expressed as a GEBM with base G given as a normalizing flow with density exp(−r(x)) and a generalized energy Ẽ(x) = E(x) − r(x). In this particular case, the dimension of the latent is necessarily equal to the data dimension, i.e. dim(Z) = dim(X).
Compared to IGMs, which rely on a fixed pre-determined latent noise distribution η, GEBMs offer the additional flexibility of learning a richer latent noise distribution. This is particularly useful when the data is multimodal. In IGMs, such as GANs, the latent noise η is usually unimodal, thus requiring a more sophisticated generator to distort a unimodal noise distribution into a distribution with multiple modes, as shown by Cornish et al. (2020). Instead, GEBMs allow sampling from a posterior ν over the latent noise defined in (4). This posterior noise can be multimodal in latent space (by incorporating information from the energy) and thus can put more or less mass in specific regions of the manifold defined by the base G. This allows GEBMs to capture multimodality in data, provided the support of the base is broad enough to subsume the data support (Figure 1(c)). The base can be simpler, compared to GANs, as it doesn’t need to distort the input noise too much to produce multimodal samples (see Figure 8 in Appendix G.4). This additional flexibility comes at no additional training cost compared to GANs. Indeed, GANs still require another model during training, the discriminator network, but do not use it for sampling. Instead, GEBMs avoid this waste since the base and energy can be trained jointly, with no other additional model, and then both are used for sampling." }, { "heading": "3 LEARNING GEBMS", "text": "In this section we describe a general procedure for learning GEBMs. We decompose the learning procedure into two steps: an energy learning step and a base learning step. The overall learning procedure alternates between these two steps, as done in GAN training (Goodfellow et al., 2014)." }, { "heading": "3.1 ENERGY LEARNING", "text": "When the base G is fixed, varying the energy E leads to a family of models that all admit a density exp(−E−AG,E) w.r.t. G. When the base G admits a density exp(−r) defined over the whole space, it is possible to learn the energy E by maximizing the likelihood of the model, −∫ (E+r) dP − AG,E. However, in general G is supported on a lower-dimensional manifold, so that r is ill-defined and the usual notion of likelihood cannot be used. Instead, we introduce a generalized notion of likelihood which does not require a well-defined density exp(−r) for G:
Definition 1 (Generalized Likelihood). The expected G-log-likelihood under a target distribution P of a GEBM model Q with base G and energy E is defined as
LP,G(E) := −∫ E(x) dP(x) − AG,E. (6)
To provide intuitions about the generalized likelihood in Definition 1, we start by discussing the particular case where KL(P||G) < +∞. We then present the training method in the general case where P and G might not share the same support, i.e. KL(P||G) = +∞. Special case of finite KL(P||G). When the Kullback-Leibler divergence between P and G is well defined, (6) corresponds to the Donsker-Varadhan (DV) lower bound on the KL (Donsker and Varadhan, 1975), meaning that KL(P||G) ≥ LP,G(E) for all E. Moreover, the following proposition holds: Proposition 3. Assume that KL(P||G) < +∞ and 0 ∈ E. If, in addition, E* maximizes (6), then:
KL(P||Q) ≤ KL(P||G). (7)
In addition, we have that KL(P||Q) = 0 when E* is the negative log-density ratio of P w.r.t. G.
We refer to Appendix C.1 for a proof. According to (7), the GEBM systematically improves over the IGM defined by G, with no further improvement possible in the limit case when G = P. Hence, as long as there is an error in mass on the common support of P and G, the GEBM improves over the base G.
Estimating the likelihood in the general setting. Definition 1 can be used to learn a maximum likelihood energy E* by maximizing LP,G(E) w.r.t. E, even when KL(P||G) is infinite and when P and G don’t necessarily share the same support. Such an optimal solution is well defined whenever the set of energies is suitably constrained. This is the case if the energies are parametrized by a compact set Ψ with ψ ↦ Eψ continuous over Ψ. Estimating the likelihood is then achieved using i.i.d. samples (Xn)1:N, (Ym)1:M from P and G (Tsuboi et al., 2009; Sugiyama et al., 2012; Liu et al., 2017):
L̂P,G(E) = −(1/N) ∑_{n=1}^N E(Xn) − log((1/M) ∑_{m=1}^M exp(−E(Ym))). (8)
In the context of mini-batch stochastic gradient methods, however, M typically ranges from 10 to 1000, which can lead to a poor estimate for the log-partition function AG,E. Moreover, (8) doesn’t exploit estimates of AG,E from previous gradient iterations. Instead, we propose an estimator which introduces a variational parameter A ∈ R meant to estimate AG,E in an amortized fashion. The key idea is to exploit the convexity of the exponential, which directly implies −AG,E ≥ −A − exp(−A+AG,E) + 1 for any A ∈ R, with equality only when A = AG,E. Therefore, (6) admits a lower bound of the form
LP,G(E) ≥ −∫ (E+A) dP − ∫ exp(−(E+A)) dG + 1 := FP,G(E+A),
where we introduced the functional FP,G for concision. Maximizing FP,G(E+A) over A recovers the likelihood LP,G(E). Moreover, jointly maximizing over E and A yields the maximum likelihood energy E* and its corresponding log-partition function A* = AG,E*. This optimization is well-suited for stochastic gradient methods using the following estimator (Kanamori et al., 2011):
F̂P,G(E+A) = −(1/N) ∑_{n=1}^N (E(Xn)+A) − (1/M) ∑_{m=1}^M exp(−(E(Ym)+A)) + 1. (9)
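For illustration, here is a minimal PyTorch sketch of the amortized estimator (9) used as a training loss for the energy. The network, batch sizes and data below are placeholders, and the regularization I(ψ) used in the experiments (e.g. a gradient penalty) is omitted; A is a single learnable scalar tracking the log-partition.

```python
import torch

def energy_loss(E, A, x_data, x_base):
    """Negative of F_hat_{P,G}(E + A) in Eq. (9); minimizing it maximizes the bound."""
    data_term = (E(x_data).squeeze(-1) + A).mean()               # (1/N) sum_n (E(X_n) + A)
    base_term = torch.exp(-(E(x_base).squeeze(-1) + A)).mean()   # (1/M) sum_m exp(-(E(Y_m) + A))
    return data_term + base_term - 1.0

E = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
A = torch.zeros((), requires_grad=True)  # amortized log-partition estimate
opt = torch.optim.Adam(list(E.parameters()) + [A], lr=1e-4)

x_data = torch.randn(128, 32)            # stand-in for a mini-batch from P
x_base = torch.randn(128, 32)            # stand-in for a mini-batch from G
loss = energy_loss(E, A, x_data, x_base)
opt.zero_grad(); loss.backward(); opt.step()
```

At a stationary point in A, the condition (1/M) ∑ exp(−(E(Ym)+A)) = 1 gives back A ≈ AG,E, matching the equality case of the convexity bound above.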
" }, { "heading": "3.2 BASE LEARNING", "text": "Unlike in Section 3.1, varying the base G does not need to preserve the same support. Thus, it is generally not possible to use maximum likelihood methods for learning G. Instead, we propose to use the generalized likelihood (6) evaluated at the optimal energy E* as a meaningful loss for learning G, and refer to it as the KL Approximate Lower-bound Estimate (KALE),
KALE(P||G) = sup_{(E,A)∈E×R} FP,G(E+A). (10)
From Section 3.1, KALE(P||G) is always a lower bound on KL(P||G). The bound becomes tight whenever the negative log-density of P w.r.t. G is well-defined and belongs to E (Appendix A). Moreover, Proposition 4 shows that KALE is a reliable criterion for measuring convergence; it is a consequence of (Zhang et al., 2017, Theorem B.1), with a proof in Appendix C.2.1: Proposition 4. Assume all energies in E are L-Lipschitz and that any continuous function can be well approximated by linear combinations of energies in E (Assumptions (A) and (B) of Appendix C.2). Then KALE(P||G) ≥ 0, with equality only if P = G, and KALE(P||Gn) → 0 iff Gn → P in distribution.
The universal approximation assumption holds in particular when E contains feedforward networks. In fact, networks with a single neuron are enough, as shown in (Zhang et al., 2017, Theorem 2.3). The Lipschitz assumption holds when additional regularization of the energy is enforced during training, either by methods such as spectral normalization (Miyato et al., 2018) or by an additional penalty I(ψ) on the energy Eψ such as the gradient penalty (Gulrajani et al., 2017), as done in Section 6.
Estimating KALE. According to Arora et al. (2017), accurate finite-sample estimates of divergences that result from an optimization procedure (such as in (10)) depend on the richness of the class E, and richer energy classes can result in slower convergence. Unlike divergences such as the Jensen-Shannon, the KL and the Wasserstein distance, which result from optimizing over a non-parametric and rich class of functions, KALE is restricted to a class of parametric energies Eψ. Thus, (Arora et al., 2017, Theorem 3.1) applies, and guarantees good finite-sample estimates, provided the optimization is solved accurately. In Appendix B, we provide an analysis for the more general case where energies are not necessarily parametric but satisfy some further smoothness properties; we emphasize that our rates do not require the strong assumption that the density ratio is bounded above and below, as in (Nguyen et al., 2010).
Smoothness of KALE. Learning the base is achieved by minimizing K(θ) := KALE(P||Gθ) over the set of parameters Θ of the generator Gθ using first-order methods (Duchi et al., 2011; Kingma and Ba, 2014; Arbel et al., 2019). This requires K(θ) to be smooth enough so that gradient methods converge to local minima and avoid instabilities during training (Chu et al., 2020). Ensuring smoothness of losses that result from an optimization procedure, as in (10), can be challenging. Results for the regularized Wasserstein are provided by Sanjabi et al. (2018), while more general losses are considered by Chu et al. (2020), albeit under stronger conditions than for our setting. Theorem 5 shows that when E, Gθ and their gradients are all Lipschitz, then K(θ) is smooth enough. We provide a proof for Theorem 5 in Appendix C.2.1.
Theorem 5. Under Assumptions (I) to (III) of Appendix C.2, sub-gradient methods on K converge to local optima. Moreover, K is Lipschitz and differentiable for almost all θ ∈ Θ, with:
∇K(θ) = exp(−AGθ,E*) ∫ ∇x E*(Gθ(z)) ∇θ Gθ(z) exp(−E*(Gθ(z))) η(z) dz. (11)
Estimating the gradient in (11) is achieved by first optimizing over Eψ and A using (9), with additional regularization I(ψ). The resulting estimators Ê* and Â* are plugged into (12) to estimate ∇K(θ) using samples (Zm)1:M from η. Unlike for learning the energy E*, which benefits from using the amortized estimator of the log-partition function, we found that using the empirical log-partition for learning the base was more stable.
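In code, a convenient way to use (12) is through a surrogate loss whose automatic-differentiation gradient coincides with (12). The sketch below is one such construction (ours, not the paper's released code): the empirical log-partition is detached, only the generator's parameters are passed to the optimizer so the energy stays fixed, and the networks are hypothetical stand-ins.

```python
import math
import torch

def base_loss(E, G, z):
    """Surrogate whose gradient w.r.t. the generator parameters matches Eq. (12).
    Its value is constant (-1 exactly); only its gradient is informative."""
    neg_E = -E(G(z)).squeeze(-1)                                       # -E*(G_theta(Z_m))
    A_hat = torch.logsumexp(neg_E.detach(), dim=0) - math.log(len(z))  # empirical log-partition
    return -torch.exp(-A_hat) * torch.exp(neg_E).mean()

G = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 32))
E = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)  # only the base is updated here
loss = base_loss(E, G, torch.randn(256, 2))
opt_G.zero_grad(); loss.backward(); opt_G.step()
```

Differentiating −exp(−Ê*(Gθ(z))) w.r.t. θ produces exactly the integrand of (12), so a descent step on this surrogate is a descent step on K(θ).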
We summarize the training procedure in Algorithm 1, which alternates between learning the energy and the base in a similar fashion to adversarial training.
Algorithm 1 Training GEBM
1: Input: P, N, M, nb, ne
2: Output: Trained generator Gθ and energy Eψ.
3: Initialize θ, ψ and A.
4: for k = 1,...,nb do
5:   for j = 1,...,ne do
6:     Sample {Xn}1:N ∼ P and {Ym}1:M ∼ Gθ
7:     gψ ← ∇ψ(−F̂P,Gθ(Eψ+A) + I(ψ))
8:     Ã ← log((1/M) ∑_{m=1}^M exp(−Eψ(Ym)))
9:     gA ← exp(A−Ã) − 1
10:    Update ψ and A using gψ and gA.
11:  end for
12:  Set Ê* ← Eψ and Â* ← A.
13:  Update θ using ∇̂K(θ) from (12)
14: end for
∇̂K(θ) = (exp(−Â*)/M) ∑_{m=1}^M ∇x Ê*(Gθ(Zm)) ∇θ Gθ(Zm) exp(−Ê*(Gθ(Zm))). (12)" }, { "heading": "4 SAMPLING FROM GEBMS", "text": "A simple estimate of the empirical distribution of observations under the GEBM can be obtained via importance sampling (IS). This consists in first sampling multiple points from the base G, and then re-weighting the samples according to the energy E. Although straightforward, this approach can lead to highly unreliable estimates, a well-known problem in the Sequential Monte Carlo (SMC) literature, which employs IS extensively (Doucet et al., 2001; Del Moral et al., 2006). Other methods, such as rejection sampling, are known to be inefficient in high dimensions (Haugh, 2017). Instead, we propose to sample from the posterior ν using MCMC. Recall from (5) that a sample x from Q is of the form x = G(z) with z sampled from the posterior latent ν of (4) instead of the prior η. While sampling from η is often straightforward (for instance, if η is a Gaussian), sampling from ν is generally harder, due to the dependence of its density on the complex functions E and G. It is still possible to use MCMC methods to sample from ν, however, since we have access to its density up to a normalizing constant (4). In particular, we are interested in methods that exploit the gradient of ν, and consider two classes of samplers: overdamped samplers and kinetic samplers.
Overdamped samplers are obtained as a time-discretization of the overdamped Langevin dynamics:
dzt = (∇z log η(zt) − ∇z E(G(zt))) dt + √2 dwt, (13)
where wt is a standard Brownian motion. The simplest sampler arising from (13) is the Unadjusted Langevin Algorithm (ULA):
Zk+1 = Zk + λ(∇z log η(Zk) − ∇z E(G(Zk))) + √(2λ) Wk+1, Z0 ∼ η,
where (Wk)k≥0 are i.i.d. standard Gaussians and λ is the step-size. For large k, Zk is an approximate sample from ν (Raginsky et al., 2017, Proposition 3.3). Hence, setting X = G(Zk) for a large enough k provides an approximate sample from the GEBM Q, as summarized in Algorithm 2 of Appendix F.
Kinetic samplers arise from the kinetic Langevin dynamics, which introduce a momentum variable:
dzt = vt dt, dvt = −γ vt dt + u(∇ log η(zt) − ∇E(G(zt))) dt + √(2γu) dwt, (14)
with friction coefficient γ ≥ 0, inverse mass u ≥ 0, momentum vector vt and standard Brownian motion wt. When the mass u^{-1} becomes negligible compared to the friction coefficient γ, i.e. uγ^{-2} ≈ 0, standard results show that (14) recovers the overdamped dynamics (13). Discretization in time of (14) leads to kinetic samplers similar to Hamiltonian Monte Carlo (Cheng et al., 2017; Sachs et al., 2017). We consider a particular algorithm from Sachs et al. (2017), which we call the Kinetic Langevin Algorithm (KLA) (see Algorithm 3 in Appendix F). Kinetic samplers were shown to better explore the modes of the invariant distribution ν compared to overdamped ones (see (Neal, 2010; Betancourt et al., 2017) for empirical results and (Cheng et al., 2017) for theory), as also confirmed empirically in Appendix D for image generation tasks using GEBMs.
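As a minimal sketch, the latent-space ULA update can be written as follows (assuming a standard Gaussian prior η, so that ∇z log η(z) = −z; E and G again stand in for the trained energy and base):

```python
import torch

def ula_sample(E, G, latent_dim, n_chains=64, n_steps=1000, lam=1e-4):
    """Unadjusted Langevin Algorithm on the posterior latent nu of Eq. (4)."""
    z = torch.randn(n_chains, latent_dim)                  # Z_0 ~ eta
    for _ in range(n_steps):
        z = z.detach().requires_grad_(True)
        grad_E = torch.autograd.grad(E(G(z)).sum(), z)[0]  # grad_z E(G(z))
        drift = -z.detach() - grad_E                       # grad_z log eta(z) - grad_z E(G(z))
        z = z.detach() + lam * drift + (2.0 * lam) ** 0.5 * torch.randn_like(z)
    return G(z).detach()                                   # X = G(Z_k): approximate sample from Q
```

The kinetic sampler (KLA) follows the same pattern, with an additional momentum variable updated according to a discretization of (14).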
Next, we provide the following convergence result:
Proposition 6. Assume that log η(z) is strongly concave and has a Lipschitz gradient, and that E, G and their gradients are all L-Lipschitz. Set xt = G(zt), where zt is given by (14), and call Pt the probability distribution of xt. Then Pt converges to Q in the Wasserstein sense,
W2(Pt,Q) ≤ L C e^{−cγt},
where c and C are positive constants independent of t, with c = O(exp(−dim(Z))).
Proposition 6 is proved in Appendix C.1 using (Eberle et al., 2017, Corollary 2.6), and implies that (xt)t≥0 converges at the same speed as (zt)t≥0. When the dimension q of Z is orders of magnitude smaller than the input space dimension d, the process (xt)t≥0 converges faster than typical sampling methods on X, for which the exponent controlling the convergence rate is of order O(exp(−d))." }, { "heading": "5 RELATED WORK", "text": "Energy based models. Usually, energy-based models are required to have a density w.r.t. a Lebesgue measure, and do not use a learnable base measure; in other words, models are supported on the whole space. Various methods have been proposed in the literature to learn EBMs. Contrastive Divergence (Hinton, 2002) approximates the gradient of the log-likelihood by sampling from the energy model with MCMC. More recently, (Belanger and McCallum, 2016; Xie et al., 2016; 2017; 2018c; 2019; Tu and Gimpel, 2018; Du and Mordatch, 2019; Deng et al., 2020) extend the idea using more sophisticated models and MCMC sampling strategies that lead to higher-quality estimators. Score Matching (Hyvärinen, 2005) calculates an alternative objective (the score) to the log-likelihood which is independent of the partition function, and was recently used in the context of non-parametric energy functions to provide estimators of the energy that are provably consistent (Sriperumbudur et al., 2017; Sutherland et al., 2018; Arbel and Gretton, 2018; Wenliang et al., 2019). In Noise-Contrastive Estimation (Gutmann and Hyvärinen, 2012), a classifier is trained to distinguish between samples from a fixed proposal distribution and the target P. This provides an estimate for the density ratio between the optimal energy model and the proposal distribution. In a similar spirit, Cranmer et al. (2016) uses a classifier to learn likelihood ratios. Conversely, Grathwohl et al. (2020) interprets the logits of a classifier as an energy model obtained after marginalization over the classes. The resulting model is then trained using Contrastive Divergence. In more recent work, Dai et al. (2019a;b) exploit a dual formulation of the logarithm of the partition function as a supremum, over the set of all probability distributions, of some functional objective. Yu et al. (2020) explore methods for using general f-divergences, such as Jensen-Shannon, to train EBMs.
Generative Adversarial Networks. Recent work proposes using the discriminator of a trained GAN to improve the generator quality. Rejection sampling (Azadi et al., 2019) and Metropolis-Hastings correction (Turner et al., 2019; Neklyudov et al., 2019) perform sampling directly on the high-dimensional input space without using gradient information provided by the discriminator. Moreover, the data distribution is assumed to admit a density w.r.t. the generator. Ding et al. (2019) perform sampling on the feature space of some auxiliary pre-trained network, while Lawson et al. (2019) treat the sampling procedure as a model on its own, learned by maximizing the ELBO. In our case, no auxiliary model is needed.
In the present work, sampling doesn’t interfere with training, in contrast to recently considered methods that optimize over the latent space during training (Wu et al., 2019b;a). In Tanaka (2019), the discriminator is viewed as an optimal transport map between the generator and the data distribution, and is used to compute optimized samples from latent space. This is in contrast to the diffusion-based sampling that we consider. In (Xie et al., 2018b;a), two independent models, a full-support EBM and a generator network, are trained cooperatively using MCMC. By contrast, in the present work, the energy and base are part of the same model, and the model support is lower-dimensional than the target space X. While we do not address the mode collapse problem, Xu et al. (2018); Nguyen et al. (2017) showed that KL-based losses are resilient to it thanks to the zero-avoiding property of the KL, a good sign for KALE, which is derived from the KL by Fenchel duality.
The closest related approach appears in a study concurrent to the present work (Che et al., 2020), where the authors propose to use Langevin dynamics on the latent space of a GAN generator, but with a different discriminator to ours (derived from the Jensen-Shannon divergence or a Wasserstein-based divergence). Our theory results showing the existence of the loss gradient (Theorem 5), establishing weak convergence of distributions under KALE (Proposition 4), and demonstrating consistency of the KALE estimator (Appendix B) should transfer to the JS and Wasserstein criteria used in that work. Subsequent to the present work, an alternative approach has been recently proposed, based on normalising flows, to learn both the low-dimensional support of the data and the density on this support (Brehmer and Cranmer, 2020). This approach maximises the explicit likelihood of a data projection onto a learned manifold, and may be considered complementary to our approach." }, { "heading": "6 EXPERIMENTS", "text": "We train the models for 150000 generator iterations using Algorithm 1. After training is completed, we rescale the energy by β = 100 to get a colder version of the GEBM, and sample from it using either Algorithm 2 (ULA) or Algorithm 3 (KLA) with parameters (γ=100, u=1). This colder temperature leads to an improved FID score and needs relatively few MCMC iterations, as shown in Figure 6 of Appendix D. Sampler convergence to visually plausible modes at low temperatures is demonstrated in Figure 2. We perform 1000 MCMC iterations with an initial step-size of λ = 10^{-4}, divided by 10 every 200 iterations. As a baseline, we consider samples generated from the base of the GEBM only (without using information from the energy) and call this KALE-GAN. More details are given in Appendix G.
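Putting the pieces together, this sampling protocol amounts to the following sketch (again with hypothetical stand-in networks; we read "decreased by 10" as dividing the step size by 10):

```python
import torch

latent_dim = 2                                   # hypothetical, as in earlier sketches
G = torch.nn.Sequential(torch.nn.Linear(latent_dim, 64), torch.nn.ReLU(), torch.nn.Linear(64, 32))
E = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))

beta, lam = 100.0, 1e-4                          # colder GEBM: energy rescaled by beta
z = torch.randn(64, latent_dim)
for step in range(1, 1001):
    z = z.detach().requires_grad_(True)
    grad_E = torch.autograd.grad(beta * E(G(z)).sum(), z)[0]
    z = z.detach() + lam * (-z.detach() - grad_E) + (2.0 * lam) ** 0.5 * torch.randn_like(z)
    if step % 200 == 0:
        lam /= 10.0                              # step-size schedule: divide by 10 every 200 steps
samples = G(z).detach()
```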
Results: Table 1 shows that GEBM outperforms both KALE and standard GANs when using the same networks for the base/generator and energy/critic. Moreover, KALE-GAN matches the performance of a standard GAN (with Jensen-Shannon critic), showing that the improvement of GEBM cannot be explained by the switch from Jensen-Shannon to a KALE-based critic. Rather, the improvement is largely due to incorporating the energy function into the model, and sampling using Algorithm 3.
This finding experimentally validates our claim that incorporating the energy improves the model, and that, all else being equal, a GEBM outperforms a GAN with the same generator and critic architecture. Indeed, if the critic is not zero at convergence, then by definition it contains information on the remaining mismatch between the generator (base) and data mass, which the GEBM incorporates but the GAN does not. The GEBM also outperforms an EBM even when the latter was trained using a larger network (ResNet) with supervision (S) on ImageNet, which is an easier task (Chen et al., 2019). More comparisons on Cifar10 and ImageNet are provided in Table 4 of Appendix D.
Table 2 shows different sampling methods using the same trained networks (generator and critic), with KALE-GAN as a baseline. All energy-exploiting methods outperform the unmodified KALE-GAN with the same architecture. Moreover, our method (both ULA and KLA) outperforms both IHM (Turner et al., 2019) and DOT (Tanaka, 2019), which also use the energy information.
            Cifar10   LSUN    CelebA   ImageNet
KALE-GAN    32.03     21.67   6.91     19.37
IHM         30.47     20.63   6.39     18.15
DOT         26.35     20.41   5.93     16.21
GEBM (ULA)  23.02     16.23   5.21     14.00
GEBM (KLA)  24.29     15.25   5.38     13.94
Table 2: FID scores for different sampling methods using the same trained SNGAN (ConvNet), with KALE-GAN as a baseline without critic information.
In Table 2, KLA was used in the high-friction regime γ=100 and thus behaves like ULA. This makes it possible to obtain sharper samples concentrated around the modes of the GEBM, thus improving the FID score. If, instead, the goal is to encourage more exploration of the modes of the GEBM, then KLA with a smaller γ is a better alternative than ULA, as the former can explore multiple modes/images within the same MCMC chain, unlike ULA: see Figures 3 to 5 of Appendix D. Moving from one mode to another results in an increased FID score while the chain is in between modes, however; this can be avoided by decreasing λ." }, { "heading": "6.2 DENSITY ESTIMATION", "text": "Motivation. We next consider the particular setting where the likelihood of the model is well-defined, and admits a closed-form expression. This is intended principally as a sanity check that our proposed training method in Algorithm 1 succeeds in learning maximum likelihood solutions. Outside of this setting, closed-form expressions of the normalizing constant are not available for generic GEBMs. While this is not an issue (since the proposed method doesn’t require a closed-form expression for the normalizing constant), in this experiment only, we want to have access to closed-form expressions, as they enable a direct comparison with other density estimation methods.
Experimental setting. To have a closed-form likelihood, we consider the case where the dimension of the latent space is equal to the data dimension, and choose the base G of the GEBM to be a Real NVP (Dinh et al., 2016) with density exp(−r(x)) and energy E(x) = h(x) − r(x). Thus, in this particular case, the GEBM has a well-defined likelihood over the whole space, and we are precisely in the setting of Proposition 2, which shows that this GEBM is equal to an EBM with density proportional to exp(−h). We further require the EBM to be a second Real NVP so that its density has a closed-form expression. We consider 5 UCI datasets (Dheeru and Taniskidou, 2017), for which we use the same pre-processing as in (Wenliang et al., 2019). For comparison, we train the EBM by direct maximum likelihood (ML) and contrastive divergence (CD). To train the GEBM, we use Algorithm 1, which doesn’t directly exploit the closed-form expression of the likelihood (unlike direct ML). We thus use either (8) (KALE-DV) or (9) (KALE-F) to estimate the normalizing constant. More details are given in Appendix G.2.
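As a toy illustration of the difference between the two estimators (our own check, not an experiment from the paper), consider a one-dimensional case where the normalizing constant is known in closed form: base G = N(0,1) and energy E(x) = x²/2, for which AG,E = −(1/2) log 2 ≈ −0.347. The per-batch estimate from (8) fluctuates and is biased downward by Jensen's inequality, while the amortized scalar of (9) averages over many batches:

```python
import math
import torch

torch.manual_seed(0)
E = lambda x: 0.5 * x ** 2                       # toy energy; base G = N(0, 1)
true_A = -0.5 * math.log(2.0)                    # closed-form log-partition

# (8)-style: single-batch empirical log-partition from M = 100 base samples
y = torch.randn(100)
A_dv = torch.logsumexp(-E(y), dim=0) - math.log(100)

# (9)-style: amortized scalar A fitted by stochastic gradient over many batches
A = torch.zeros((), requires_grad=True)
opt = torch.optim.Adam([A], lr=1e-2)
for _ in range(2000):
    y = torch.randn(100)
    loss = A + torch.exp(-(E(y) + A)).mean()     # A-dependent part of -F_hat in (9)
    opt.zero_grad(); loss.backward(); opt.step()

print(true_A, A_dv.item(), A.item())             # A should settle near true_A
```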
Results. Table 3 reports the Negative Log-Likelihood (NLL) evaluated on the test set, corresponding to the best performance on the validation set. Training the GEBM using Algorithm 1 leads to performance comparable to (CD) and (ML). As shown in Figure 7 of Appendix E, (KALE-DV) and (KALE-F) maintain a small error gap between the training and test NLL and, as discussed in Section 3.1 and Appendix F, (KALE-F) leads to more accurate estimates of the log-partition function, with a relative error of order 0.1% compared to 10% for (KALE-DV)." }, { "heading": "7 ACKNOWLEDGMENTS", "text": "We thank Mihaela Rosca for insightful discussions and Song Liu, Bo Dai and Hanjun Dai for pointing us to important related work." }, { "heading": "A KL APPROXIMATE LOWER-BOUND ESTIMATE", "text": "We discuss the relation between KALE (10) and the Kullback-Leibler divergence via Fenchel duality. Recall that a distribution P is said to admit a density w.r.t. G if there exists a real-valued measurable function r0 that is integrable w.r.t. G and satisfies dP = r0 dG. Such a density is also called the Radon-Nikodym derivative of P w.r.t. G. In this case, we have:
KL(P||G) = ∫ r0 log(r0) dG. (15)
Nguyen et al. (2010); Nowozin et al. (2016) derived a variational formulation for the KL using Fenchel duality. By the duality theorem (Rockafellar, 1970), the convex and lower semi-continuous function ζ : u ↦ u log(u) that appears in (15) can be expressed as a supremum:
ζ(u) = sup_v (uv − ζ*(v)).
The function ζ* is called the Fenchel dual and is defined as ζ*(v) = sup_u (uv − ζ(u)). By convention, the value of the objective is set to −∞ whenever u is outside of the domain of definition of ζ. When ζ(u) = u log(u), the Fenchel dual ζ*(v) admits a closed-form expression, ζ*(v) = exp(v−1). Using the expression of ζ in terms of its Fenchel dual ζ*, it is possible to express KL(P||G) as the supremum of the variational objective (16) over all measurable functions h:
F(h) := −∫ h dP − ∫ exp(−h) dG + 1. (16)
Nguyen et al. (2010) provided the variational formulation for the reverse KL using a different choice for ζ (ζ(u) = −log(u)). We refer to (Nowozin et al., 2016) for general f-divergences. Choosing a smaller set of functions H in the variational objective (16) will lead to a lower bound on the KL. This is the KL Approximate Lower-bound Estimate (KALE):
KALE(P||G) = sup_{h∈H} F(h). (17)
In general, KL(P||G) ≥ KALE(P||G). The bound is tight whenever the negative log-density h0 = −log r0 belongs to H; however, we do not require r0 to be well-defined in general. Equation (17) has the advantage that it can be estimated using samples from P and G. Given i.i.d. samples (X1,...,XN) and (Y1,...,YM) from P and G, we denote by P̂ and Ĝ the corresponding empirical distributions. A simple approach to estimate KALE(P||G) is to use an M-estimator. This is achieved by optimizing the penalized objective
ĥ := argmax_{h∈H} F̂(h) − (λ/2) I^2(h), (18)
where F̂ is an empirical version of F and I^2(h) is a penalty term that prevents overfitting due to finite samples. The penalty I^2(h) acts as a regularizer favoring smoother solutions, while the parameter λ determines the strength of the smoothing and is chosen to decrease as the sample sizes N and M increase. The M-estimator of KALE(P||G) is obtained simply by plugging ĥ into the empirical objective F̂(h):
K̂ALE(P||G) := F̂(ĥ). (19)
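For completeness, the closed form ζ*(v) = exp(v−1) quoted above follows from a short standard computation, reproduced here for the reader:

```latex
\zeta^*(v) = \sup_{u>0}\,\bigl(uv - u\log u\bigr),
\qquad \frac{d}{du}\bigl(uv - u\log u\bigr) = v - \log u - 1 = 0
\;\Longrightarrow\; u^\star = e^{v-1},

\zeta^*(v) = u^\star v - u^\star \log u^\star
           = e^{v-1}v - e^{v-1}(v-1) = e^{v-1}.
```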
(19) We defer the consistency analysis of (19) to Appendix B where we provide convergence rates in a setting where the set of functions H is a Reproducing Kernel Hilbert Space and under weaker assumptions that were not covered by the framework of Nguyen et al. (2010)." }, { "heading": "B CONVERGENCE RATES OF KALE", "text": "In this section, we provide a convergence rate for the estimator in (19) whenH is an RKHS. The theory remains the same whetherH contains constants or not. With this choice, the Representer Theorem allows us to reduce the potentially infinite-dimensional optimization problem in (18) to a convex finite-dimensional one. We further restrict ourselves to the well-specified case where the density r0 of P w.r.t. G is well-defined and belongs toH, so that KALE matches the KL. While Nguyen et al.\n(2010) (Theorem 3) provides a convergence rate of 1/ √ N for a related M -estimator, this requires the density r0 to be lower-bounded by 0 as well as (generally) upper-bounded. This can be quite restrictive if, for instance, r0 is the density ratio of two gaussians. In Theorem 7, we provide a similar convergence rate for the estimator defined in (19) without requiring r0 to be bounded. We start by briefly introducing some notations, the working assumptions and the statement of the convergence result in Appendix B.1 and provide the proofs in Appendix B.2." }, { "heading": "B.1 STATEMENT OF THE RESULT", "text": "We recall that an RKHSH of functions defined on a domain X ⊂Rd and with kernel k is a Hilbert space with dot product 〈.,.〉, such that y 7→k(x,y) belongs toH for any x∈X , and\nk(x,y)=〈k(x,.),k(y,.)〉, ∀x,y∈X .\nAny function h inH satisfies the reproducing property f(x)=〈f,k(x,.)〉 for any x∈X . Recall that KALE(P||G) is obtained as an optimization problem\nKALE(P||G)= sup h∈H F(h) (20)\nwhereF is given by:\nF(h) :=− ∫ hdP− ∫ exp(−h)dG+1.\nSince the negative log density ratio h0 is assumed to belong to H, this directly implies that the supremum of F is achieved at h0 and F(h0) = KALE(P||G). We are interested in estimating KALE(P||G) using the empirical distributions P̂ and Ĝ,\nP̂ := 1\nN N∑ n=1 δXn , Ĝ := 1 N N∑ n=1 δYn ,\nwhere (Xn)1≤n≤N and (Yn)1≤n≤N are i.i.d. samples from P and G. For this purpose we introduce the empirical objective functional,“F(h) :=−∫ hdP̂−∫ exp(−h)dĜ+1. The proposed estimator is obtained by solving a regularized empirical problem,\nsup h∈H “F(h)− λ 2 ‖h‖2, (21)\nwith a corresponding population version,\nsup h∈H F(h)− λ 2 ‖h‖2. (22)\nFinally, we introduceD(h,δ) and Γ(h,δ):\nD(h,δ)= ∫ δexp(−h)dG− ∫ δdP,\nΓ(h,δ)=− ∫ ∫ 1\n0\n(1−t)δ2exp(−(h+tδ))dG.\nThe empirical versions of D(h,δ) and Γ(h,δ) are denoted D̂(h,δ) and Γ̂(h,δ). Later, we will show thatD(h,δ) D̂(h,δ) are in fact the gradients ofF(h) and “F(h) along the direction δ. We state now the working assumptions:\n(i) The supremum ofF overH is attained at h0.\n(ii) The following quantities are finite for some positive :∫ » k(x,x) dP(x),∫ » k(x,x)exp((‖h0‖+ ) » k(x,x)) dG(x),∫\nk(x,x)exp((‖h0‖+ ) » k(x,x)) dG(x).\n(iii) For any h∈H, ifD(h,δ)=0 for all δ then h=h0. Theorem 7. Fix any 1>η > 0. Under Assumptions (i) to (iii), and provided that λ= 1√\nN , it holds\nwith probability at least 1−2η that |“F(ĥ)−F(h0)|≤M ′(η,h0)√ N\nfor a constantM ′(η,h0) that depends only on η and h0.\nThe assumptions in Theorem 7 essentially state that the kernel associated to the RKHSH needs to satisfy some integrability requirements. 
That is to guarantee that the gradient δ 7→∇F(h)(δ) and its empirical version are well-defined and continuous. In addition, the optimality condition∇F(h)=0 is assumed to characterize the global solution h0. This will be the case if the kernel is characteristic Simon-Gabriel and Scholkopf (2018). The proof of Theorem 7, in Appendix B.2, takes advantage of the Hilbert structure of the setH, the convexity of the functionalF and the optimality condition∇“F(ĥ)=λĥ of the regularized problem, all of which turn out to be sufficient for controlling the error of (19)." }, { "heading": "B.2 PROOFS", "text": "We state now the proof of Theorem 7 with subsequent lemmas and propositions.\nProof of Theorem 7. We begin with the following inequalities:\nλ 2 (‖ĥ‖2−‖h0‖2)≤ “F(ĥ)−“F(h0)≤〈∇“F(h0),ĥ−h0〉.\nThe first inequality is by definition of ĥ while the second is obtained by concavity of “F . For simplicity we write B=‖ĥ−h0‖ and C=‖∇“F(h0)−L(h0)‖. Using Cauchy-Schwarz and triangular inequalities, it is easy to see that\n−λ 2\n( B2+2B‖h0‖ ) ≤ “F(ĥ)−“F(h0)≤CB.\nMoreover, by triangular inequality, it holds that\nB≤‖hλ−h0‖+‖ĥ−hλ‖. Lemma 11 ensures thatA(λ)=‖hλ−h0‖ converges to 0 as λ→0. Furthermore, by Proposition 12, we have ‖ĥ−hλ‖≤ 1λD whereD(λ)=‖∇“F(hλ)−∇L(hλ)‖. Now choosing λ= 1√N and applying Chebychev inequality in Lemma 8, it follows that for any 1>η>0,we have with probability greater than 1−2η that both\nD(λ)≤ C(‖h0‖η)√ N , C≤ C(‖h0‖,η)√ N ,\nwhere C(‖h0‖,η) is defined in Lemma 8. This allows to conclude that for any η > 0, it holds with probability at least 1−2η that |“F(ĥ)−“F(h0)|≤ M ′(η,h0)√N whereM ′(η,h0) depends only on η and h0. We proceed using the following lemma, which provides an expression forD(h,δ) and D̂(h,δ) along with a probabilistic bound:\nLemma 8. Under Assumptions (i) and (ii), for any h∈H such that ‖h‖≤‖h0‖+ , there existsD(h) inH satisfying\nD(h,δ)=〈δ,D(h)〉, and for any h∈H, there exists “D(h) satisfying“D(h,δ)=〈δ,“D(h)〉. Moreover, for any 0<η<1 and any h∈H such that ‖h‖≤‖h0‖+ :=M , it holds with probability greater than 1−η that\n‖D(h)−“D(h)‖≤ C(M,η)√ N ,\nwhereC(M,η) depends only onM and η.\nProof. First, we show that δ 7→D(h,δ) is a bounded linear operator. Indeed, Assumption (ii) ensures that k(x,.) and k(x,.)exp(−h(x)) are Bochner integrable w.r.t. P and G (Retherford (1978)), hence D(h,δ) is obtained as\nD(h,δ) :=〈δ,µexp(−h)G−µP〉, where µexp(−h)G = ∫ k(x,.)exp(−h(x))dG and µP = ∫ k(x,.)dP. DefiningD(h) to be =µexp(−h)G− µP leads to the desired result. “D(h) is simply obtained by taking the empirical version ofD(h). Finally, the probabilistic inequality is a simple consequence of Chebychev’s inequality.\nThe next lemma states thatF(h) and “F(h) are Frechet differentiable. Lemma 9. Under Assumptions (i) and (ii) , h 7→F(h) is Frechet differentiable on the open ball of radius ‖h0‖+ while h 7→ “F(h) is Frechet differentiable onH. Their gradients are given by D(h) and “D(h) as defined in Lemma 8,\n∇F(h)=D(h), ∇“F(h)=“D(h) Proof. The empirical functional “F(h) is differentiable since it is a finite sum of differentiable functions, and its gradient is simply given by “D(h). For the population functional, we use second order Taylor expansion of exp with integral remainder, which gives\nF(h+δ)=F(h)−D(h,δ)+Γ(h,δ).\nBy Assumption (ii) we know that Γ(h,δ)‖δ‖ converges to 0 as soon as ‖δ‖→ 0. This allows to directly conclude that F is Frechet differentiable, with differential given by δ 7→D(h,δ). By Lemma 8, we conclude the existence of a gradient∇F(h) which is in fact given by∇F(h)=D(h). 
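As a concrete numerical illustration of the estimator studied in this section, the following is a minimal numpy sketch (not the code used in the paper; the Gaussian kernel, the plain gradient ascent, and all names are illustrative assumptions) that solves the penalized problem (21), with h expanded on the pooled sample via the Representer Theorem, and returns the plug-in value (19):

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # k(a, b) = exp(-||a - b||^2 / (2 sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kale_rkhs(X, Y, lam=1e-2, sigma=1.0, lr=0.1, steps=2000):
    # X: i.i.d. samples from P (N x d); Y: i.i.d. samples from G (M x d).
    # Representer expansion over the pooled points U: h(.) = sum_a beta_a k(u_a, .).
    U = np.vstack([X, Y])
    K_XU = gaussian_kernel(X, U, sigma)
    K_YU = gaussian_kernel(Y, U, sigma)
    K_UU = gaussian_kernel(U, U, sigma)
    beta = np.zeros(len(U))
    for _ in range(steps):
        w = np.exp(-(K_YU @ beta))  # exp(-h(Y_m))
        # gradient of F_hat(h) - (lam/2) beta^T K_UU beta, which is concave in beta
        grad = -K_XU.mean(axis=0) + (K_YU.T @ w) / len(Y) - lam * (K_UU @ beta)
        beta += lr * grad
    h_X, h_Y = K_XU @ beta, K_YU @ beta
    return -h_X.mean() - np.exp(-h_Y).mean() + 1.0  # plug-in estimate, as in (19)

For instance, with X drawn from N(0, 1) and Y drawn from N(1, 1), the returned value is a lower-bound estimate of KL(P||G) = 1/2, in line with the remark above that the density ratio of two Gaussians is unbounded.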
From now on, we will only use the notation∇F(h) and∇“F(h) to refer to the gradients ofF(h) and“F(h). The following lemma states that (21) and (22) have a unique global optimum, and gives a first order optimality condition.\nLemma 10. The problems (21) and (22) admit unique global solutions ĥ and hλ inH. Moreover, the following first order optimality conditions hold: λĥ=∇“F(ĥ), λhλ=∇F(hλ). Proof. For (21), existence and uniqueness of a minimizer ĥ is a simple consequence of continuity and strong concavity of the regularized objective. We now show the existence result for (22). Let’s introduce Gλ(h)=−F(h)+ λ2 ‖h‖\n2 for simplicity. Uniqueness is a consequence of the strong convexity of Gλ. For the existence, consider a sequence of elements fk ∈H such that Gλ(fk)→ infh∈HGλ(h). If h0\nis not the global solution, then it must hold for k large enough that Gλ(fk)≤Gλ(h0). We also know that F(fk)≤F(h0), hence, it is easy to see that ‖fk‖≤‖h0‖ for k large enough. This implies that fk is a bounded sequence, therefore it admits a weakly convergent sub-sequence by weak compactness. Without loss of generality we assume that fk weakly converges to some element hλ ∈ H and that ‖fk‖≤‖h0‖. Hence, ‖hλ‖≤ liminfk‖fk‖≤‖h0‖. Recall now that by definition of weak convergence, we have fk(x)→k hλ(x) for all x∈X . By Assumption (ii), we can apply the dominated convergence theorem to ensure thatF(fk)→F(hλ). Taking the limit of Gλfk, the following inequality holds:\nsup h∈H Gλ(h)=limsup k Gλ(fk)≤Gλ(hλ).\nFinally, by Lemma 9 we know thatF is Frechet differentiable, hence we can use Ekeland and Témam (1999) (Proposition 2.1) to conclude that∇F(hλ) = λhλ. We use exactly the same arguments for (21).\nNext, we show that hλ converges towards h0 inH. Lemma 11. Under Assumptions (i) to (iii) it holds that:\nA(λ) :=‖hλ−h0‖→0.\nProof. We will first prove that hλ converges weakly towards h0, and then conclude that it must also converge strongly. We start with the following inequalities:\n0≥F(hλ)−F(h0)≥ λ\n2 (‖hλ‖2−‖h0‖2).\nThese are simple consequences of the definitions of hλ and h0 as optimal solutions to (20) and (21). This implies that ‖hλ‖ is always bounded by ‖h0‖. Consider now an arbitrary sequence (λm)m≥0 converging to 0. Since ‖hλm‖ is bounded by ‖h0‖, it follows by weak-compactness of balls inH that hλm admits a weakly convergent sub-sequence. Without loss of generality we can assume that hλm is itself weakly converging towards an element h∗. We will show now that h∗ must be equal to h0. Indeed, by optimality of hλm , it must hold that\nλmhλm =∇F(hm).\nThis implies that ∇F(hm) converges weakly to 0. On the other hand, by Assumption (ii), we can conclude that ∇F(hm) must also converge weakly towards ∇F(h∗), hence ∇F(h∗) = 0. Finally by Assumption (iii) we know that h0 is the unique solution to the equation∇F(h)=0 , hence h∗=h0. We have shown so far that any subsequence of hλm that converges weakly, must converge weakly towards h0. This allows to conclude that hλm actually converges weakly towards h0. Moreover, we also have by definition of weak convergence that:\n‖h0‖≤ lim inf m→∞ ‖hλm‖.\nRecalling now that ‖hλm‖≤ ‖h0‖ it follows that ‖hλm‖ converges towards ‖h0‖. Hence, we have the following two properties:\n• hλm converges weakly towards h0,\n• ‖hλm‖ converges towards ‖h0‖.\nThis allows to directly conclude that ‖hλm−h0‖ converges to 0.\nProposition 12. We have that:\n‖ĥ−hλ‖≤ 1\nλ ‖∇F̂(hλ)−∇F(hλ)‖\nProof. 
By definition of ĥ and hλ the following optimality conditions hold: λĥ=∇“F(ĥ), λhλ=∇F(hλ).\nWe can then simply write: λ(ĥ−hλ)−(∇“F(ĥ)−∇“F(hλ))=∇“F(hλ)−∇F(hλ). Now introducing δ := ĥ−hλ andE :=∇“F(ĥ)−∇“F(hλ) for simplicity and taking the squared norm of the above equation, it follows that\nλ2‖δ‖2+‖E‖2−2λ〈δ,E〉=‖∇“F(hλ)−∇F(hλ)‖2. By concavity of “F onHwe know that−〈ĥ−hλ,E〉≥0. Therefore:\nλ2‖ĥ−hλ‖2≤‖∇“F(hλ)−∇F(hλ)‖2." }, { "heading": "C LATENT NOISE SAMPLING AND SMOOTHNESS OF KALE", "text": "" }, { "heading": "C.1 LATENT SPACE SAMPLING", "text": "Here we prove Proposition 6 for which we make the assumptions more precise: Assumption 1. We make the following assumption:\n• logη is strongly concave and admits a Lipschitz gradient.\n• There exists a non-negative constantL such that for any x,x′∈X and z,z′∈Z:\n|E(x)−E(x′)|≤‖x−x′‖, ‖∇xE(x)−∇xE(x′)‖≤‖x−x′‖ |G(z)−G(z′)|≤‖z−z′‖, ‖∇zG(z)−∇zG(z′)‖≤‖z−z′‖\nThroughout this section, we introduceU(z) :=−log(η(z))+E(G(z)) for simplicity.\nProof of Proposition 1 . To sample fromQG,E , we first need to identify the posterior latent distribution νG,E used to produce those samples. We rely on (23) which holds by definition of QG,E for any test function h onX : ∫\nh(x)dQ(x)= ∫ h(G(z))f(G(z))η(z)dz, (23)\nHence, the posterior latent distribution is given by ν(z) = η(z)f(G(z)), and samples from GEBM are produced by first sampling from νG,E , then applying the implicit mapG,\nX∼Q ⇐⇒ X=G(Z), Z∼ν.\nProof of Proposition 2. the base distribution G admits a density on the whole space denoted by exp(−r(x)) and the energy Ẽ is of the form Ẽ(x) =E(x)−r(x) for some parametric function E, it is easy to see that Q has a density proportional to exp(−E) and is therefore equivalent to a standard EBM with energyE.\nThe converse holds as well, meaning that for any EBM with energy E, it is possible to construct a GEBM using an importance weighting strategy. This is achieved by first choosing a base G, which is required to have an explicit density exp(−r) up to a normalizing constant, then defining the energy of the GEBM to be Ẽ(x)=E(x)−r(x) so that:\ndQ(x)∝exp(−Ẽ(x))dGθ(x)∝exp(−E(x))dx (24)\nEquation (24) effectively depends only on E(x) and not on G since the factor exp(r) exactly compensates for the density of G. The requirement that the base also admits a tractable implicit map G can be met by choosing G to be a normalizing flow (Rezende and Mohamed, 2015) and does not restrict the class of possible EBMs that can be expressed as GEBMs.\nProof of Proposition 6. Let πt be the probability distribution of (zt,vt) at time t of the diffusion in (14), which we recall that\ndzt=vtdt, dvt =−(γvt+u∇U(zt))+ √ 2λudwt,\nWe call π∞ its corresponding invariant distribution given by π∞(z,v)∝exp Å −U(z)− 1 2 ‖v‖2 ã\nBy Lemma 13 we know thatU is dissipative, bounded from below, and has a Lipschitz gradient. This allows to directly apply (Eberle et al., 2017)(Corollary 2.6.) which implies that\nW2(πt,π∞)≤Cexp(−tc),\nwhere c is a positive constant and C only depends on π∞ and the initial distribution π0. Moreover, the constant c is given explicitly in (Eberle et al., 2017, Theorem 2.3) and is of order 0(e−q) where q is the dimension of the latent spaceZ . We now consider an optimal coupling Πt between πt and π0. Given joints samples ((zt,vt),(z,v)) from Πt, we consider the following samples in input space (xt,x) := (G(zt),G(z)). Since zt and z have marginals πt and π∞, it is easy to see that xt ∼ Pt and x∼Q. 
Therefore, by definition of the W2 distance, we have the following bound:\nW 22 (Pt,Q)≤E [ ‖xt−x‖2 ] ≤ ∫ ‖G(zt)−G(z)‖2dΠt(zt,z)\n≤L2 ∫ ‖zt−z‖2dΠt(zt,z)\n≤L2W 22 (πt,π∞)≤C2L2exp(−2tc).\nThe second line uses the definition of (xt,x) as joint samples obtained by mapping (zt,z). The third line uses the assumption thatB isL-Lipschitz. Finally, the last line uses that Πt is an optimal coupling between πt and π∞.\nLemma 13. Under Assumption 1, there existsA>0 and λ∈(0, 14 ] such that\n1 2 z>t∇U(z)≥λ\nÅ U(z)+ γ2 4u ‖z‖2 ã −A, ∀z∈Z, (25)\nwhere γ and u are the coefficients appearing in (14). Moreover, U is bounded bellow and has a Lipschitz gradient.\nProof. For simplicity, let’s call u(z) = − log η(z), w(z) = E? ◦ Bθ?(z), and denote by M an upper-bound on the Lipschitz constant ofw and∇w which is guaranteed to be finite by assumption. HenceU(z)=u(z)+w(z). Equation (25) is equivalent to having\nz>∇u(z)−2λu(z)− γ 2\n2u ‖z‖2≥2λw(z)−z>∇w(z)−2A. (26)\nUsing that w is Lipschitz, we have that w(z) ≤ w(0) +M‖z‖ and −z>∇w(z) ≤M‖z‖. Hence, 2λw(z)−z>∇w(z)−2A≤2λw(0)+(2λ+1)M‖z‖−2A. Therefore, a sufficient condition for (26) to hold is\nz>∇u(z)−2λu(z)− γ 2\n2u ‖z‖2≥+(2λ+1)M‖z‖−2A+2λw(0). (27)\nWe will now rely on the strong convexity of u,which holds by assumption, and implies the existence of a positive constantm>0 such that\n−u(z)≥−u(0)−z>∇u(z)+m 2 ‖z‖2,\nz>∇u(z)≥−‖z‖‖∇u(0)‖+m‖z‖2.\nThis allows to write the following inequality,\nz>∇u(z)−2λu(z)− γ 2 2u ≥(1−2λ)z>∇u(z)+λ(m+ γ 2 2u )‖z‖2−2λu(0)\n≥(1−λ(m+ γ 2\n2u ))‖z‖2−(1−2λ)‖z‖‖∇u(0)‖−2λu(0).\nCombining the previous inequality with (27) and denotingM ′=‖∇u(0)‖ , it is sufficient to findA and λ satisfyingÅ 1−λ Å m+ γ2\n2u\nãã ‖z‖2−(M+M ′+2λ(M−M ′))‖z‖−2λ(u(0)+w(0))+2A≥0.\nThe l.h.s. in the above equation is a quadratic function in ‖z‖ and admits a global minimum when λ< Ä m+ γ 2\n2u\nä−1 . The global minimum is always positive provided thatA is large enough.\nTo see that U is bounded below, it suffice to note, by Lipschitzness of w, that w(z)≥w(0)−M‖z‖ and by strong convexity of u that\nu(z)≥u(0)+M ′‖z‖+m 2 ‖z‖2.\nHence,U is lower-bounded by a quadratic function in ‖z‖with positive leading coefficient m2 , hence it must be lower-bounded by a constant. Finally, by assumption, u and w have Lipschitz gradients, which directly implies thatU has a Lipschitz gradient.\nProof of Proposition 3. By assumption KL(P||G)<+∞, this implies that P admits a density w.r.t. G which we call r(x). As a result P admits also a density w.r.t. Q given by:\nZexp(E?(x))r(x).\nWe can then compute theKL(P||Q) explicitly:\nKL(P||Q)=EP[E]+log(Z)+EP[log(r)] =−LP,G(E?)+KL(P||G).\nSince 0 belongs to E and by optimality ofE?, we know thatLP,G(E?)≥LP,G(0)=0. The result then follows directly." }, { "heading": "C.2 TOPOLOGICAL AND SMOOTHNESS PROPERTIES OF KALE", "text": "Topological properties of KALE. Denseness and smoothness of the energy class E are the key to guarantee that KALE is a reliable criterion for measuring convergence. We thus make the following assumptions on E :\n(A) For all E ∈ E , −E ∈ E and there is CE > 0 such that cE ∈ E for 0 ≤ c ≤ CE . For any continuous function g, any compact support K in X and any precision > 0, there exists a finite linear combination of energiesG= ∑r i=1aiEi such that supx∈K |f(x)−G(x)|≤ .\n(B) All energiesE in E are Lipschitz in their input with the same Lipschitz constantL>0.\nAssumption (A) holds in particular when E contains feedforward networks with a given number of parameters. 
In fact networks with a single neuron are enough, as shown in (Zhang et al., 2017, Theorem 2.3). Assumption (B) holds when additional regularization of the energy is enforced during training by methods such as spectral normalization Miyato et al. (2018) or gradient penalty Gulrajani et al. (2017) as done in Section 6. Proposition 4 states the topological properties of KALE ensuring that it can be used as a criterion for weak convergence. A proof is given in Appendix C.2.1 and is a consequence of (Zhang et al., 2017, Theorem B.1). Proposition 14. Under Assumptions (A) and (B) it holds that:\n1. KALE(P||G)≥0 with KALE(P||G)=0 if and only if P=G.\n2. KALE(P||Gn)→0 if and only if Gn→P under the weak topology." }, { "heading": "C.2.1 TOPOLOGICAL PROPERTIES OF KALE", "text": "In this section we prove Proposition 4. We first start by recalling the required assumptions and make them more precise:\nAssumption 2. Assume the following holds:\n• The setX is compact.\n• For all E ∈ E , −E ∈ E and there is CE > 0 such that cE ∈ E for 0 ≤ c ≤ CE . For any continuous function g, any compact support K in X and any precision > 0, there exists a finite linear combination of energiesG= ∑r i=1aiEi such that |f(x)−G(x)|≤ onK.\n• All energiesE in E are Lipschitz in their input with the same Lipschitz constantL>0.\nFor simplicity we consider the setH=E+R, i.e.: H is the set of functions h of the form h=E+c where E ∈E and c∈R. In all what follows P1 is the set of probability distributions with finite first order moments. We consider the notion of weak convergence on P1 as defined in (Villani, 2009, Definition 6.8) which is equivalent to convergence in the Wasserstein-1 distanceW1.\nProof of Proposition 4 . We proceed by proving the separation properties (1st statement), then the metrization of the weak topology (2nd statement).\nSeparation. We have by Assumption 2 that 0∈E , hence by definition KALE(PP ||G)≥FP,G(0)=0. On the other hand, whenever P=G, it holds that:\nFP,G(h)=− ∫ (exp(−h)+h−1)dP, ∀h∈H.\nMoreover, by convexity of the exponential, we know that exp(−x)+x−1≥0 for all x∈R. Hence, FP,G(h)≤FP,G(0) = 0 for all h∈H. This directly implies that KALE(P|G) = 0. For the converse, we will use the same argument as in the proof of (Zhang et al., 2017, Theorem B.1). Assume that KALE(P|G)=0 and let h be inH. By Assumption 2, there existsCh>0 such that ch∈H and we have:\nF(ch)≤KALE(P||G)=0. Now dividing by c and taking the limit to 0, it is easy to see that− ∫ hdP+ ∫ hdG≤0.Again, by Assump-\ntion 2, we also know that−h∈H, hence, ∫ hdP− ∫ hdG≤0. This necessarily implies that ∫ hdP−∫\nhdG=0 for allh∈H. By the density ofH in the set continuous functions on compact sets, we can conclude that the equality holds for any continuous and bounded function, which in turn implies that P=G.\nMetrization of the weak topology. We first show that for any P and G with finite first moment, it holds that KALE(P|G)≤LW1(P,G), whereW1(P,G) is the Wasserstein-1 distance between P and G. For any h∈H the following holds:\nF(h)=− ∫ hdP− ∫ exp(−h)dG+1\n= ∫ h(x)dG(x)−h(x′)dP(x′)\n− ∫\n(exp(−h)+h−1)︸ ︷︷ ︸ ≥0 dG\n≤ ∫ h(x)dG(x)−h(x′)dP(x′)≤LW1(P,G)\nThe first inequality results from the convexity of the exponential while the last one is a consequence of h being L-Lipschitz. This allows to conclude that KALE(P||G) ≤ LW1(P,G) after taking the supremum over all h∈H. 
Moreover, sinceW1 metrizes the weak convergence onP1 (Villani, 2009, Theorem 6.9), it holds that whenever a sequence Gn converges weakly towards P inP1 we also have W1(P,Gn)→0 and thus KALE(P||Gn)→0. The converse is a direct consequence of (Liu et al., 2017, Theorem 10) since by assumptionX is compact.\nWell-defined learning. Assume that for any > 0 and any h and h′ in E there exists f in 2E such that ‖h+h′−f‖∞≤ then there exists a constantC such that:\nKALE(P,Q)≤CKALE(P,G)\nThis means that the proposed learning procedure which first finds the optimal energy E? given a base G by maximum likelihood then minimizes KALE(P,G) ensures ends up minimizing the distance between the data end the generalized energy-based model Q.\nKALE(P,Q)=sup h∈E LP,QG(h)\n=−KALE(P,G)+sup h∈E LP,G(h+E?)\nLet’s choose =KALE(P,G) and let h∈ 2E such that ‖h+E?−f‖∞≤ . We have by concavity of the function (α,β) 7→LP,G(α(h+E?−f)+βf) we have that:\nLP,G(h+E?)≤2LP,G( 1\n2 f)−LP,G(h+E?−f)\nBy assumption, we have that ‖h+E?−f‖∞≤ , thus |LP,G(h+E?−f)|≤2 . Moreover, we have thatLP,G( 12f)≤KALE(P,G) since 1 2f ∈E . This ensures that:\nLP,G(h+E?)≤3KALE(P,G).\nFinally, we have shown that:\nKALE(P,Q)≤2KALE(P,G).\nHence, minimizing KALE(P,G) directly minimizes KALE(P,Q)." }, { "heading": "C.2.2 SMOOTHNESS PROPERTIES OF KALE", "text": "We will now prove Theorem 5. We begin by stating the assumptions that will be used in this section:\n(I) E is parametrized by a compact set of parameters Ψ. (II) Functions in E are jointly continuous w.r.t. (ψ,x) and are L-lipschitz and L-smooth w.r.t.\nthe input x:\n‖Eψ(x)−Eψ(x′)‖≤Le‖x−x′‖, ‖∇xEψ(x)−∇xEψ(x′)‖≤Le‖x−x′‖.\n(III) (θ,z) 7→Gθ(z) is jointly continuous in θ and z, with z 7→Gθ(z) uniformly Lipschitz w.r.t. z:\n‖Gθ(z)−Gθ(z′)‖≤Lb‖z−z′‖, ∀z,z′∈Z,θ∈Θ.\nThere exists non-negative functions a and b defined from Z to R such that θ 7→Gθ(z) are a-Lipschitz and b-smooth in the following sense:\n‖Gθ(z)−Gθ′(z)‖≤a(z)‖θ−θ′‖, ‖∇θGθ(z)−∇θGθ′(z)‖≤b(z)‖θ−θ′‖.\nMoreover, a and b are integrable in the following sense:∫ a(z)2exp(2LeLb‖z‖)dη(z)<∞, ∫ exp(LeLb‖z‖)dη(z)<∞,\n∫ b(z)exp(LeLb‖z‖)dη(z)<∞.\nTo simplify notation, we will denote byLθ(f) the expected Gθ log-likelihood under P. In other words, Lθ(E) :=LP,Gθ (E)=− ∫ EdP−log ∫ exp(−E)dGθ.\nWe also denote by pE,θ the density of the model w.r.t. Gθ,\npE,θ= exp(−E) ZGθ,E , ZGθ,E=\n∫ exp(−E)dGθ.\nWe writeK(θ) :=KALE(P||Gθ) to emphasize the dependence on θ.\nProof of Theorem 5. To show that sub-gradient methods converge to local optima, we only need to show thatK is Lipschitz continuous and weakly convex. This directly implies convergence to local optima for sub-gradient methods, according to Davis and Drusvyatskiy (2018); Thekumparampil et al. (2019). Lipschitz continuity ensures thatK is differentiable for almost all θ∈Θ, and weak convexity simply means that there exits some positive constant C ≥ 0 such that θ 7→K(θ)+C‖θ‖2 is convex. We now proceed to show these two properties.\nWe will first prove that θ 7→K(θ) is weakly convex in θ. By Lemma 15, we know that for anyE∈E , the function θ 7→ Lθ(E) is M -smooth for the same positive constant M . This directly implies that it is also weakly convex and the following inequality holds:\nLθt(E)≤ tLθ(E)+(1−t)Lθ′(E)+ M\n2 t(1−t)‖θ−θ′‖2.\nTaking the supremum w.r.t. E, it follows that\nK(θt)≤ tK(θ)+(1−t)K(θ′)+ M\n2 t(1−t)‖θ−θ′‖2.\nThis means precisely thatK is weakly convex in θ. To prove that K is Lipschitz, we will also use Lemma 15, which states that Lθ(E) is Lipschitz in θ uniformly on E . 
Hence, the following holds:\nLθ(E)≤Lθ(E)+LC‖θ−θ′‖.\nAgain, taking the supremum overE, it follows directly that\nK(θ)≤K(θ′)+LC‖θ−θ′‖.\nWe conclude that K is Lipschitz by exchanging the roles of θ and θ′ to get the other side of the inequality. Hence, by the Rademacher theorem,K is differentiable for almost all θ. We will now provide an expression for the gradient ofK. By Lemma 16 we know that ψ 7→Lθ(Eψ) is continuous and by Assumption (I) Ψ is compact. Therefore, the supremum supE∈ELθ(E) is achieved for some function E?θ . Moreover, we know by Lemma 15 that Lθ(E) is smooth uniformly on E , therefore the family (∂θLθ(E))E∈E is equi-differentiable. We are in position to apply Milgrom and Segal (2002)(Theorem 3) which ensures thatK(θ) admits left and right partial derivatives given by\n∂+e K(θ)= lim t>0 t→0 ∂θLθ(E?θ+te)>e, ∂−e K(θ)= lim t<0 t→0 ∂θLθ(E?θ+te)>e, (28)\nwhere e is a given direction in Rr. Moreover, the theorem also states that K(θ) is differentiable iff t 7→E?θ+te is continuous at t=0. Now, recalling thatK(θ) is actually differentiable for almost all θ, it must hold thatE?θ+te→t→0E?θ and ∂+e K(θ)=∂−e K(θ) for almost all θ. This implies that the two limits in (28) are actually equal to ∂θLθ(E?θ )>e. The gradient ofK, whenever defined, in therefore given by\n∇θK(θ)=Z−1Gθ,E?θ\n∫ ∇xE?θ (Gθ(z))∇θGθ(z)exp(−E?θ (Gθ(z)))η(z)dz.\nLemma 15. Under Assumptions (I) to (III), the functional Lθ(E) is Lipschitz and smooth in θ uniformly on E:\n|Lθ(E)−Lθ′(E)|≤LC‖θ−θ′‖, ‖∂θLθ(E)−∂θLθ′(E))‖≤2CL(1+L)‖θ−θ′‖.\nProof. By Lemma 16, we have thatLθ(E) is differentiable, and that ∂θLθ(E) := ∫ (∇xE◦Gθ)∇θGθ(pE,θ◦Gθ)dη.\nLemma 16 ensures that ‖∂θLθ(E)‖ is bounded by some positive constantC that is independent from E and θ. This implies in particular thatLθ(E) is Lipschitz with a constantC. We will now show that it is also smooth. For this, we need to control the difference\nD :=‖∂θLθ(E)−∂θLθ′(E)‖.\nWe have by triangular inequality: D≤ ∫ ‖∇xE◦Gθ−∇xE◦Gθ′‖‖∇θGθ‖(pE,θ◦Gθ)dη︸ ︷︷ ︸\nI\n+ ∫ ‖∇xE◦Gθ‖‖∇θGθ−∇θGθ′‖(pE,θ◦Gθ)dη︸ ︷︷ ︸\nII\n+ ∫ ‖∇xE◦Gθ∇θGθ‖|pE,θ◦Gθ−pE,θ′ ◦Gθ′ |dη︸ ︷︷ ︸\nIII\n.\nThe first term can be upper-bounded usingLe-smoothness ofE and the fact thatGθ is Lipschitz in θ: I≤Le‖θ−θ′‖ ∫ |a|2(pE,θ◦Gθ)dη\n≤LeC‖θ−θ′‖.\nThe last inequality was obtained by Lemma 17. Similarly, using that∇θGθ is Lipschitz, it follows by Lemma 17 that\nII≤Le‖θ−θ′‖ ∫ |b|(pE,θ◦Gθ)dη\n≤LeC‖θ−θ′‖.\nFinally, for the last term III , we first consider a path θt= tθ+(1−t)θ′ for t∈ [0,1], and introduce the function s(t) :=pE,θt ◦Gθt . We will now control the difference pE,θ◦Gθ−pE,θ′ ◦Gθ′ , also equal to s(1)−s(0). Using the fact that st is absolutely continuous we have that s(1)−s(0)= ∫ 1 0 s′(t)dt. The derivative s′(t) is simply given by s′(t) = (θ−θ′)>(Mt−M̄t)s(t) whereMt= (∇xE◦Bθt)∇θGθt and M̄t= ∫ MtpE,θt ◦Gθtdη. Hence,\ns(1)−s(0)=(θ−θ′)> ∫ 1\n0\n(Mt−M̄t)s(t)dt.\nWe also know thatMt is upper-bounded byLa(z),which implies III≤L2e‖θ−θ′‖ ∫ 1\n0\nÇ∫ |a(z)|2s(t)(z)dη(z)+ Å∫ a(z)s(t)(z)dη(z) ã2å ≤L2e(C+C2)‖θ−θ′‖,\nwhere the last inequality is obtained using Lemma 17. This allows us to conclude thatLθ(E) is smooth for anyE∈E and θ∈Θ.\nLemma 16. Under Assumptions (II) and (III), it holds that ψ 7→ Lθ(Eψ) is continuous, and that θ 7→Lθ(Eψ) is differentiable in θ with gradient given by\n∂θLθ(E) := ∫ (∇xE◦Gθ)∇θGθ(pE,θ◦Gθ)dη.\nMoreover, the gradient is bounded uniformly in θ andE: ‖∇θLθ(E)‖≤Le Å∫ exp(−LeLb‖z‖)dη(z) ã−1∫ a(z)exp(LeLb‖z‖)dη(z).\nProof. To show that ψ 7→Lθ(Eψ) is continuous, we will use the dominated convergence theorem. 
We fixψ0 in the interior of Ψ and consider a compact neighborhoodW ofψ0. By assumption, we have that (ψ,x) 7→Eψ(x) and (ψ,z) 7→Eψ(Gθ(z)) are jointly continuous. Hence, |Eψ(0)| and |Eψ(Gθ(0))| are bounded onW by some constantC. Moreover, by Lipschitz continuity of x 7→Eψ , we have\n|Eψ(x)|≤|Eψ(0)|+Le‖x‖≤C+Le‖x‖, exp(−E(Gθ(z)))≤exp(−E(Gθ(0)))exp(LeLb‖z‖)≤exp(C)exp(LeLb‖z‖).\nRecalling that P admits a first order moment and that by Assumption (III), exp(LeLb‖z‖) is integrable w.r.t. η, it follows by the dominated convergence theorem and by composition of continuous functions that ψ 7→Lθ(Eψ) is continuous in ψ0. To show that θ 7→ Lθ(Eψ) is differentiable in θ, we will use the differentiation lemma in (Klenke, 2008, Theorem 6.28). We first fix θ0 in the interior of Θ, and consider a compact neighborhood V of θ0. Since θ 7→ |E(Gθ(0))| is continuous on the compact neighborhood V it admits a maximum valueC; hence we have using Assumptions (II) and (III) that\nexp(−E(Gθ(z)))≤exp(−E(Gθ(0)))exp(LeLb‖z‖)≤exp(C)exp(LeLb‖z‖). Along with the integrability assumption in Assumption (III), this ensures that z 7→exp(−E(Gθ(z))) is integrable w.r.t η for all θ in V . We also have that exp(−E(Gθ(z))) is differentiable, with gradient given by\n∇θexp(−E(Gθ(z)))=∇xE(Gθ(z))∇θGθ(z)exp(−E(Gθ(z))). Using thatE is Lipschitz in its inputs andGθ(z) is Lipschitz in θ, and combining with the previous inequality, it follows that\n‖∇θexp(−E(Gθ(z)))‖≤exp(C)Lea(z)exp(LeLb‖z‖), where a(z) is the location dependent Lipschitz constant introduced in Assumption (III). The r.h.s. of the above inequality is integrable by Assumption (III) and is independent of θ on the neighborhood V . Thus (Klenke, 2008, Theorem 6.28) applies, and it follows that\n∇θ ∫ exp(−E(Gθ0(z)))dη(z)= ∫ ∇xE(Gθ0(z))∇θGθ0(z)exp(−E(Gθ0(z)))dη(z).\nWe can now directly compute the gradient ofLθ(E), ∇θLθ(E)= Å∫ exp(−E(Gθ0))dη ã−1∫ ∇xE(Gθ0)∇θGθ0exp(−E(Gθ0))dη.\nSince E and Gθ are Lipschitz in x and θ respectively, it follows that ‖∇xE(Gθ0(z))‖ ≤ Le and ‖∇θGθ0(z)‖≤a(z). Hence, we have\n‖∇θLθ(E)‖≤Le ∫ a(z)(pE,θ◦Gθ(z))dη(z).\nFinally, Lemma 17 allows us to conclude that ‖∇θLθ(E)‖ is bounded by a positive constant C independently from θ andE.\nLemma 17. Under Assumptions (II) and (III), there exists a constantC independent from θ andE such that ∫\nai(z)(pE,θ◦Gθ(z))dη(z)<C, (29)∫ b(z)(pE,θ◦Gθ(z))dη(z)<C,\nfor i∈1,2.\nProof. By Lipschitzness of E and Gθ, we have exp(−LeLb‖z‖)≤ exp(E(Gθ(0))−E(Gθ(z))≤ exp(LeLb‖z‖), thus introducing the factor exp(E(Bθ0(0)) in (29) we get∫ ai(z)(pE,θ◦Gθ(z))dη(e)≤Le Å∫ exp(−LeLb‖z‖)dη(z) ã−1∫\na(z)iexp(LeLb‖z‖)dη(z),∫ b(z)(pE,θ◦Gθ(z))dη(z)≤Le Å∫ exp(−LeLb‖z‖)dη(z) ã−1∫ b(z)exp(LeLb‖z‖)dη(z).\nThe r.h.s. of both inequalities is independent of θ andE, and finite by the integrability assumptions in Assumption (III).\nD IMAGE GENERATION\nFigures 3 and 4 show sample trajectories using Algorithm 3 with no friction γ=0 for the 4 datasets. It is clear that along the same MCMC chain, several image modes are explored. We also notice the transition from a mode to another happens almost at the same time for all chains and corresponds to the gray images. This is unlike Langevin or when the friction coefficient γ is large as in Figure 5. In that case each chain remains within the same mode.\nTable 4 shows further comparisons with other methods on Cifar10 and ImageNet 32x32." }, { "heading": "E DENSITY ESTIMATION", "text": "Figure Figure 7 (left) shows the error in the estimation of the log-partition function using both methods (KALE-DV and KALE-F). 
KALE-DV estimates the negative log-likelihood on each batch of size 100 and therefore has much more variance than KALE-F, which maintains the amortized estimator of the log-partition function.
Figure 7 (right) shows the evolution of the negative log-likelihood (NLL) on both training and test sets per epoch for the RedWine and WhiteWine datasets. The error decreases steadily in the case of KALE-DV and KALE-F, while the error gap between the training and test sets remains controlled.
Larger gaps are observed for both direct maximum likelihood estimation and contrastive divergence, although the training NLL tends to decrease faster than for KALE." }, { "heading": "F ALGORITHMS", "text": "Estimating the variational parameter. Optimizing (9) exactly over A yields (8), with the optimal A equal to Ã = log( (1/M) ∑_{m=1}^{M} exp(−E(Y_m)) ). However, to maintain an amortized estimator of the log-partition we propose to optimize (9) iteratively using second-order updates:
A_{k+1} = A_k − λ (exp(A_k − Ã_{k+1}) − 1),  A_0 = Ã_0, (30)
where λ is a learning rate and Ã_{k+1} is the empirical log-partition function estimated from a batch of new samples. By leveraging updates from previous iterations, A can yield much more accurate estimates of the log-partition function, as confirmed empirically in Figure 7 of Appendix E.
Tempered GEBM. It can be preferable to sample from a tempered version of the model by rescaling the energy E by an inverse temperature parameter β, thus effectively sampling from the tempered model Q_β. High-temperature regimes (β→0) recover the base model G, while low-temperature regimes (β→∞) essentially sample from minima of the energy E. As shown in Section 6, low temperatures tend to produce better sample quality for natural image generation tasks.
Training. In Algorithm 1, we describe the general algorithm for training a GEBM, which alternates between gradient steps on the energy and the generator. An additional regularization term, denoted by I(ψ), is used to ensure that the conditions of Proposition 4 and Theorem 5 hold. I(ψ) can include L2 regularization over the parameters ψ, a gradient penalty as in Gulrajani et al. (2017), or spectral normalization Miyato et al. (2018). The energy can be trained either using the estimator in (8) (KALE-DV) or the one in (9) (KALE-F), depending on the variable C.
Sampling. In Algorithm 3, we describe the MCMC sampler proposed in Sachs et al. (2017), which is a time discretization of (14).
Algorithm 2 Overdamped Langevin Algorithm
1: Input: λ, γ, u, η, E, G
2: Output: X_T
3: Z_0 ∼ η // Sample initial latent from η.
4: for t = 0, ..., T do
5:   Y_{t+1} ← ∇_z log η(Z_t) − ∇_z E∘G(Z_t) // Evaluate ∇_z log ν(Z_t) using (4).
6:   W_{t+1} ∼ N(0, I) // Sample standard Gaussian noise.
7:   Z_{t+1} ← Z_t + λ Y_{t+1} + √(2λ) W_{t+1}
8: end for
9: X_T ← G(Z_T)
Algorithm 3 Kinetic Langevin Algorithm
1: Input: λ, γ, u, η, E, G
2: Output: X_T
3: Z_0 ∼ η // Sample initial latent from η.
4: for t = 0, ..., T do
5:   Z_{t+1} ← Z_t + (λ/2) V_t
6:   Y_{t+1} ← ∇_z log η(Z_{t+1}) − ∇_z E∘G(Z_{t+1}) // Evaluate ∇_z log ν(Z_{t+1}) using (4).
7:   V_{t+1/2} ← V_t + (uλ/2) Y_{t+1}
8:   W_{t+1} ∼ N(0, I) // Sample standard Gaussian noise.
9:   Ṽ_{t+1} ← exp(−γλ) V_{t+1/2} + √(u(1 − exp(−2γλ))) W_{t+1}
10:  V_{t+1} ← Ṽ_{t+1} + (uλ/2) Y_{t+1}
11:  Z_{t+1} ← Z_{t+1} + (λ/2) V_{t+1}
12: end for
13: X_T ← G(Z_T)" }, { "heading": "G EXPERIMENTAL DETAILS", "text": "In all experiments, we use a regularizer that combines an L2 norm with a variant of the gradient penalty of Gulrajani et al. (2017). For the image generation tasks, we also employ spectral normalization Miyato et al. (2018). This is to ensure that the conditions in Proposition 4 and Theorem 5 hold. We pre-condition the gradient as proposed in Simsekli et al.
(2020) to stabilize training, and to avoid taking large noisy gradient steps due to the exponential terms in (8) and (9). We also use the second-order updates in (30) for the variational constant c whenever it is learned.
G.1 IMAGE GENERATION
Table 5: Base/Generator of SNGAN ConvNet.
z ∈ R^100 ∼ N(0, I)
dense → M_g × M_g × 512
4×4, stride=2 deconv. BN 256 ReLU
4×4, stride=2 deconv. BN 128 ReLU
4×4, stride=2 deconv. BN 64 ReLU
3×3, stride=1 conv. 3 Tanh
Table 6: Energy/Discriminator of SNGAN ConvNet: M = 32.
RGB image x ∈ R^{M×M×3}
3×3, stride=1 conv 64 lReLU
4×4, stride=2 conv 64 lReLU
3×3, stride=1 conv 128 lReLU
4×4, stride=2 conv 128 lReLU
3×3, stride=1 conv 256 lReLU
4×4, stride=2 conv 256 lReLU
3×3, stride=1 conv 512 lReLU
dense → 1
Table 8: Base/Generator of SNGAN ResNet.
z ∈ R^100 ∼ N(0, I)
dense, 4×4×256
ResBlock up 256
ResBlock up 256
ResBlock up 256
BN, ReLU, 3×3 conv, Tanh
Network Architecture: Tables 5 and 6 show the network architectures used for the GEBM in the case of SNGAN ConvNet. Tables 7 and 8 show the network architectures used for the GEBM in the case of SNGAN ResNet. The residual branch of each residual block consists of two convolutional layers, each preceded by batch normalization and a ReLU activation: BN+ReLU+Conv+BN+ReLU+Conv, as in (Miyato et al., 2018, Figure 8).
Training: We train both base and energy by alternating 5 gradient steps to learn the energy with 1 gradient step to learn the base. For the first two gradient iterations and after every 500 gradient iterations on the base, we train the energy for 100 gradient steps instead of 5. We then train the model for up to 150000 gradient iterations on the base, using a batch size of 128 and the Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 10^−4 and parameters (0.5, 0.999) for both energy and base.
Scheduler: We decrease the learning rate using a scheduler that monitors the FID score, in a similar way as in Bińkowski et al. (2018); Arbel et al. (2018). More precisely, every 2000 gradient iterations on the base, we evaluate the FID score on the training set using 50000 generated samples from the base and check whether the current score is larger than the score 20000 iterations before. The learning rate is decreased by a factor of 0.8 if the FID score fails to decrease 3 consecutive times.
Sampling: For (DOT) Tanaka (2019), we use the following objective:
z ↦ ‖z − z_y + ε‖ + (1/k_eff) E∘G(z) (31)
where z_y is sampled from a standard Gaussian, ε is a perturbation meant to stabilize sampling, and k_eff is the estimated Lipschitz constant of E∘G. Note that (31) uses a flipped sign for E∘G compared to Tanaka (2019). This is because E plays the role of −D, where D is the discriminator in Tanaka (2019). Introducing the minus sign in (31) leads to a degradation in performance. We perform 1000 gradient iterations with a step-size of 0.0001, which is also decreased by a factor of 10 every 200 iterations, as done for the proposed method. As suggested by the authors of Tanaka (2019), we perform the following projection on the gradient before applying it:
g ← g − ((g⊤z)/√q) z.
We set the perturbation ε to 0.001 and k_eff to 1, which was also shown in Tanaka (2019) to perform well. In fact, we found that estimating the Lipschitz constant by taking the maximum value of ‖∇E∘G(z)‖ over 1000 latent samples from η led to higher values for k_eff (Cifar10: 9.4, CelebA: 7.2, ImageNet: 4.9, LSUN: 3.8). However, those higher values did not perform as well as setting k_eff = 1.
For (IHM) Turner et al. (2019), we simply run the MCMC chain for 1000 iterations."
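To complement the pseudocode of Algorithm 3, the following is a minimal PyTorch sketch of the kinetic Langevin sampler (not the code used in the paper; the callables generator and energy, the zero initial velocity, and the function name are illustrative assumptions; β is the inverse temperature of the tempered GEBM of Appendix F, with β = 1 recovering the untempered sampler):

import math
import torch

def kinetic_langevin_sample(generator, energy, n_samples, latent_dim,
                            steps=1000, lam=1e-4, gamma=100.0, u=1.0, beta=1.0):
    z = torch.randn(n_samples, latent_dim)   # Z_0 ~ eta = N(0, I)
    v = torch.zeros(n_samples, latent_dim)   # initial velocity (an assumption)
    c = math.exp(-gamma * lam)
    for _ in range(steps):
        z = z + 0.5 * lam * v                 # half position step (line 5)
        zg = z.detach().requires_grad_(True)
        # U(z) = beta * E(G(z)) - log eta(z) up to a constant, so -grad U = grad log nu (line 6)
        U = beta * energy(generator(zg)).sum() + 0.5 * (zg ** 2).sum()
        y = -torch.autograd.grad(U, zg)[0]
        v = v + 0.5 * u * lam * y             # V_{t+1/2} (line 7)
        v = c * v + math.sqrt(u * (1.0 - c ** 2)) * torch.randn_like(v)  # O-step (line 9)
        v = v + 0.5 * u * lam * y             # V_{t+1} (line 10)
        z = z + 0.5 * lam * v                 # half position step (line 11)
    with torch.no_grad():
        return generator(z)                   # X_T = G(Z_T)

With a large friction coefficient (e.g. γ = 100, as in Table 2) this behaves like ULA and concentrates around modes; a smaller γ encourages exploration across modes, as illustrated in Figures 3 to 5.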
}, { "heading": "G.2 DENSITY ESTIMATION", "text": "Pre-processing: We use the code and pre-processing steps from Wenliang et al. (2019), which we describe here for completeness. For RedWine and WhiteWine, we added uniform noise with support equal to the median distance between two adjacent values. This is to avoid instabilities due to the quantization of the datasets. For HepMass and MiniBoone, we removed ill-conditioned dimensions, as also done in Papamakarios et al. (2017). We split all datasets except HepMass into three splits. The test split consists of 10% of the total data. For the validation set, we use 10% of the remaining data, with an upper limit of 1000 points to reduce the cost of validation at each iteration. For HepMass, we used the same splitting as done in Papamakarios et al. (2017). Finally, the data is whitened before fitting, and the whitening matrix was computed on at most 10000 data points.
Regularization: We set the regularization parameter to 0.1 and use a combination of an L2 norm and a variant of the gradient penalty of Gulrajani et al. (2017):
I(ψ)² = (1/d_ψ) ‖ψ‖² + E[ ‖∇_x f_ψ(X̃)‖² ]
Network Architecture: For both base and energy, we used a Real NVP (Dinh et al. (2016)) with 5 NVP layers, each consisting of a shifting and scaling layer with two hidden layers of 100 neurons. We do not use batch normalization.
Training: In all cases we use the Adam optimizer with a learning rate of 0.001 and momentum parameters (0.5, 0.9). For both KALE-DV and KALE-F, we used a batch size of 100 data samples vs 2000 generated samples from the base, in order to reduce the variance of the estimation of the energy. We alternate 50 gradient steps on the energy vs 1 step on the base, and further perform 50 additional steps on the energy for the first two gradient iterations and after every 500 gradient iterations on the base. For contrastive divergence (CD), each training step is performed by first producing 100 samples from the model using 100 Langevin iterations with a step-size of 10^−2, starting from a batch of 100 data samples. The resulting samples are then used to estimate the gradient of the loss.
For (CD), we used 100 Langevin iterations for each learning step to sample from the EBM. This translates into an improved performance at the expense of increased computational cost compared to the other methods. All methods are trained for 2000 epochs with a batch size of 100 (1000 on the HepMass and MiniBoone datasets) and a fixed learning rate of 0.001, which was sufficient for convergence.
G.3 ILLUSTRATIVE EXAMPLE IN FIGURE 1
We consider parametric functions G(1)_θ and G(2)_θ from R to R of the form:
G(1)_θ(x) = sin(8πWx)/(1 + 4πBx),  G(2)_θ(x) = 4πW′x + b
with θ = (W, B, W′, b). We also call θ* = (1, 1, 1, 0). In addition, we consider a sigmoid-like function h from [0,1] to [0,1] of the form:
z̃ = tan(π(z − 1/2)),  h(z) = (1/2) ( z + 1/(1 + exp(−9z̃)) ).
Data generation: To generate a data point X = (X1, X2), we consider the following simple generative model:
• Sample a uniform r.v. Z from [0,1].
• Apply the distortion function h to get a latent sample Y = h(Z).
• Generate the point X using X1 = G(1)_{θ*}(Y) and X2 = G(2)_{θ*}(Y).
Hence, the data are supported on the 1-d line defined by the equation X2 = G(2)_{θ*}(X1).
GAN: For the generator we sample Z uniformly from [0, 1], then generate a sample (X1, X2) = (G(1)_θ(Z), G(2)_θ(Z)). The goal is to learn θ.
For the discriminator, we used an MLP with 6 layers and 10 hidden units.
GEBM: For the base we use the same generator as in the GAN model.
For the energy we use the same MLP as the discriminator of the GAN model.
EBM: To ensure tractability of the likelihood, we use the following model:
X2 | X1 ∼ N(G(2)_θ(X1), σ0),  X1 ∼ MoG((µ1, σ1), (µ2, σ2))
MoG((µ1, σ1), (µ2, σ2)) refers to a mixture of two Gaussians with means µi and variances σi. We learn each of the parameters (θ, σ0, µ1, σ1, µ2, σ2) by maximizing the likelihood.
Both the GAN and the GEBM have the capacity to recover the exact support by finding the optimal parameter θ*. For the EBM, when θ = θ*, the mean G(2)_{θ*}(X1) of the conditional Gaussian X2|X1 traces a line which matches the data support exactly, i.e., X2 = G(2)_{θ*}(X1)." }, { "heading": "G.4 BASE/GENERATOR COMPLEXITY", "text": "To investigate the effect of model complexity on the performance gap between GANs and GEBMs, we performed additional experiments using the setting of Figure 1. Now we allow the generator/base network to better model the hidden transformation h that produces the first coordinate X1 given the latent noise. We choose G(1)_θ to be either a one-hidden-layer network or an MLP with 3 hidden layers, both with leaky ReLU activations, instead of a simple linear transform as previously done in Appendix G.3. The network has a universal approximation capability that depends on the number of units. This provides direct control over the complexity of the generator/base. We then varied the number of hidden units from 1 to 5×10^4 units for the one-hidden-layer network and from 10 to 5×10^3 units per layer for the MLP. Note that the MLP with 5×10^3 units per layer stores a matrix of size 2.5×10^7 and thus contains two orders of magnitude more parameters than the widest shallow network with 5×10^4 units. We then compared the performance of the GAN and the GEBM using the Sinkhorn divergence (Feydy et al., 2019) between each model and the data distribution. In all experiments, we used the same discriminator/energy network described in Appendix G.3. Results are provided in Figure 8.
Estimating the Sinkhorn divergence. The Sinkhorn divergence is computed using 6000 samples from the data and the model, with the squared Euclidean distance as ground cost and a regularization of ε = 1e−3. We then repeat the procedure 5 times and average the result to get the final estimate of the Sinkhorn distance for a given run.
Training. Each run optimizes the parameters of the model using the Adam optimizer (β1 = 0.5, β2 = 0.99), a learning rate of 1e−4 for the energy/discriminator, 1e−5 for the base/generator, and a weight decay of 1e−2 for the base/generator. Training is performed using KALE for 2000 epochs with a batch size of 5000 and 10 gradient iterations for the energy/discriminator per base/generator iteration. We use the gradient penalty for the energy/discriminator with a penalty parameter of 0.01. We then perform early stopping and retain the best-performing model on a validation set.
Observations. We make the following observations from Figure 8: the GAN generator indeed improves when we increase the number of hidden units; the performance of the GEBM remains stable as the number of hidden units increases; the performance of the GEBM is always better than that of the GAN, although we can see the GAN converging towards the GEBM; a GEBM with a simpler base already outperforms the GAN with more powerful generators; and the gap between the GEBM and the GAN reduces as the GAN becomes more expressive.
Using a deeper network further reduces the gap compared to a shallow network.
These observations support the prior discussion that the energy witnesses any remaining mismatch between the generator and the training samples, as long as it is not flat. This information allows the GEBM to perform better than a GAN, which ignores it. The performance gap between the GEBM and the GAN shrinks as the generator becomes more powerful and forces the energy to become flatter. This is consistent with the result in Proposition 3." } ]
2021
GENERALIZED ENERGY BASED MODELS
SP:3cc3f1b3e24923e2e84d0b9761e5fb30fa88bbeb
[ "The authors present a new package aimed at improving the design and validation of pipelines using medical time-series data. The pipeline covers many stages of a time-series workflow, including pre-processing, prediction, treatment-effect estimation, and calibration. The package, as depicted in the paper, appears to be very comprehensive and well motivated." ]
Time-series learning is the bread and butter of data-driven clinical decision support, and the recent explosion in ML research has demonstrated great potential in various healthcare settings. At the same time, medical time-series problems in the wild are challenging due to their highly composite nature: They entail design choices and interactions among components that preprocess data, impute missing values, select features, issue predictions, estimate uncertainty, and interpret models. Despite exponential growth in electronic patient data, there is a remarkable gap between the potential and realized utilization of ML for clinical research and decision support. In particular, orchestrating a real-world project lifecycle poses challenges in engineering (i.e. hard to build), evaluation (i.e. hard to assess), and efficiency (i.e. hard to optimize). Designed to address these issues simultaneously, Clairvoyance proposes a unified, end-to-end, autoML-friendly pipeline that serves as a (i) software toolkit, (ii) empirical standard, and (iii) interface for optimization. Our ultimate goal lies in facilitating transparent and reproducible experimentation with complex inference workflows, providing integrated pathways for (1) personalized prediction, (2) treatment-effect estimation, and (3) information acquisition. Through illustrative examples on real-world data in outpatient, general wards, and intensive-care settings, we illustrate the applicability of the pipeline paradigm on core tasks in the healthcare journey. To the best of our knowledge, Clairvoyance is the first to demonstrate viability of a comprehensive and automatable pipeline for clinical time-series ML. Python Software Repository: https://github.com/vanderschaarlab/clairvoyance
[ { "affiliations": [], "name": "Daniel Jarrett" }, { "affiliations": [], "name": "Ioana Bica" }, { "affiliations": [], "name": "Jinsung Yoon" }, { "affiliations": [], "name": "Zhaozhi Qian" }, { "affiliations": [], "name": "Mihaela van der Schaar" } ]
[ { "authors": [ "Romain Pirracchio" ], "title": "Mortality prediction in the icu based on mimic-ii results from the super icu learner algorithm (sicula) project", "venue": "Secondary Analysis of Electronic Health Records,", "year": 2016 }, { "authors": [ "Alistair EW Johnson", "Tom J Pollard", "Roger G Mark" ], "title": "Reproducibility in critical care: a mortality prediction case study", "venue": "Machine Learning for Healthcare Conference (MLHC),", "year": 2017 }, { "authors": [ "Sanjay Purushotham", "Chuizheng Meng", "Zhengping Che", "Yan Liu" ], "title": "Benchmarking deep learning models on large healthcare datasets", "venue": "Journal of Biomedical Informatics,", "year": 2018 }, { "authors": [ "Alvin Rajkomar", "Eyal Oren", "Kai Chen", "Andrew M Dai", "Nissan Hajaj", "Michaela Hardt", "Peter J Liu", "Xiaobing Liu", "Jake Marcus", "Mimi Sun" ], "title": "Scalable and accurate deep learning with electronic health records", "venue": "Nature Digital Medicine,", "year": 2018 }, { "authors": [ "Hrayr Harutyunyan", "Hrant Khachatrian", "David C Kale", "Greg Ver Steeg", "Aram Galstyan" ], "title": "Multitask learning and benchmarking with clinical time series data", "venue": "Nature Scientific Data,", "year": 2019 }, { "authors": [ "Beau Norgeot", "Benjamin S Glicksberg", "Laura Trupin", "Dmytro Lituiev", "Milena Gianfrancesco", "Boris Oskotsky", "Gabriela Schmajuk", "Jinoos Yazdany", "Atul J Butte" ], "title": "Assessment of a deep learning model based on electronic health record data to forecast clinical outcomes in patients with rheumatoid arthritis", "venue": "Journal of the American Medical Association (JAMA),", "year": 2019 }, { "authors": [ "Carl Waldmann", "Neil Soni", "Andrew Rhodes" ], "title": "Critical Care: Oxford Desk Reference", "venue": null, "year": 2008 }, { "authors": [ "Eren Gultepe", "Jeffrey P Green", "Hien Nguyen", "Jason Adams", "Timothy Albertson", "Ilias Tagkopoulos" ], "title": "From vital signs to clinical outcomes for patients with sepsis: a machine learning basis for a clinical decision support system", "venue": "Journal of the American Medical Informatics Association (AMIA),", "year": 2014 }, { "authors": [ "Steven Horng", "David A Sontag", "Yoni Halpern", "Yacine Jernite", "Nathan I Shapiro", "Larry A Nathanson" ], "title": "Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning", "venue": "PloS one,", "year": 2017 }, { "authors": [ "Andreas Philipp Hassler", "Ernestina Menasalvas", "Francisco José García-García", "Leocadio Rodríguez-Mañas", "Andreas Holzinger" ], "title": "Importance of medical data preprocessing in predictive modeling and risk factor discovery for the frailty syndrome", "venue": "BMC Medical Informatics and Decision Making,", "year": 2019 }, { "authors": [ "Daniel Trujillo Viedma", "Antonio Jesús Rivera Rivas", "Francisco Charte Ojeda", "María José del Jesus Díaz" ], "title": "A first approximation to the effects of classical time series preprocessing methods on lstm accuracy", "venue": "International Work-Conference on Artificial Neural Networks,", "year": 2019 }, { "authors": [ "Dimitris Bertsimas", "Agni Orfanoudaki", "Colin Pawlowski" ], "title": "Imputation of clinical covariates in time series", "venue": "NeurIPS 2018 Workshop on Machine Learning for Health", "year": 2018 }, { "authors": [ "Wei Cao", "Dong Wang", "Jian Li", "Hao Zhou", "Lei Li", "Yitan Li" ], "title": "Brits: Bidirectional recurrent imputation for time series", "venue": "Advances in Neural Information 
Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Yonghong Luo", "Xiangrui Cai", "Ying Zhang", "Jun Xu" ], "title": "Multivariate time series imputation with generative adversarial networks", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Jinsung Yoon", "William R Zame", "Mihaela van der Schaar" ], "title": "Estimating missing data in temporal data streams using multi-directional recurrent neural networks", "venue": "IEEE Transactions on Biomedical Engineering (TBME),", "year": 2021 }, { "authors": [ "Paul Nickerson", "Raheleh Baharloo", "Anis Davoudi", "Azra Bihorac", "Parisa Rashidi" ], "title": "Comparison of gaussian processes methods to linear methods for imputation of sparse physiological time series", "venue": "International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC),", "year": 2018 }, { "authors": [ "Edward Choi", "Mohammad Taha Bahadori", "Andy Schuetz", "Walter F Stewart", "Jimeng Sun" ], "title": "Doctor ai: Predicting clinical events via recurrent neural networks. Machine Learning for Healthcare Conference (MLHC), 2016", "venue": null, "year": 2016 }, { "authors": [ "Joseph Futoma", "Mark Sendak", "Blake Cameron", "Katherine A Heller" ], "title": "Scalable joint modeling of longitudinal and point process data for disease trajectory prediction and improving management of chronic kidney disease", "venue": "Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2016 }, { "authors": [ "Bryan Lim", "Mihaela van der Schaar" ], "title": "Disease-atlas: Navigating disease trajectories with deep learning", "venue": "Machine Learning for Healthcare Conference (MLHC),", "year": 2018 }, { "authors": [ "Ahmed M Alaa", "Mihaela van der Schaar" ], "title": "Attentive state-space modeling of disease progression", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Daniel Jarrett", "Mihaela van der Schaar" ], "title": "Target-embedding autoencoders for supervised representation learning", "venue": "International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Zachary C Lipton", "David C Kale", "Charles Elkan", "Randall Wetzel" ], "title": "Learning to diagnose with lstm recurrent neural networks", "venue": "International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Inci M Baytas", "Cao Xiao", "Xi Zhang", "Fei Wang", "Anil K Jain", "Jiayu Zhou" ], "title": "Patient subtyping via time-aware lstm networks", "venue": "ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD),", "year": 2017 }, { "authors": [ "Huan Song", "Deepta Rajan", "Jayaraman J Thiagarajan", "Andreas Spanias" ], "title": "Attend and diagnose: Clinical time series analysis using attention models", "venue": "AAAI Conference on Artificial Intelligence (AAAI),", "year": 2018 }, { "authors": [ "Ahmed M Alaa", "Mihaela Van Der Schaar" ], "title": "A hidden absorbing semi-markov model for informatively censored temporal data: Learning and inference", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2018 }, { "authors": [ "Jason Roy", "Kirsten J Lum", "Michael J Daniels" ], "title": "A bayesian nonparametric approach to marginal structural models for point treatments and a continuous or survival outcome", "venue": null, "year": 2016 }, { "authors": [ "Yanbo Xu", "Yanxun Xu", "Suchi Saria" ], "title": "A bayesian nonparametric approach for estimating 
individualized treatment-response curves. Machine Learning for Healthcare Conference (MLHC), 2016", "venue": null, "year": 2016 }, { "authors": [ "Peter Schulam", "Suchi Saria" ], "title": "Reliable decision support using counterfactual models", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Hossein Soleimani", "Adarsh Subbaswamy", "Suchi Saria" ], "title": "Treatment-response models for counterfactual reasoning with continuous-time, continuous-valued interventions", "venue": "arXiv preprint,", "year": 2017 }, { "authors": [ "Bryan Lim" ], "title": "Forecasting treatment responses over time using recurrent marginal structural networks", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Ioana Bica", "Ahmed M Alaa", "James Jordon", "Mihaela van der Schaar" ], "title": "Estimating counterfactual treatment outcomes over time through adversarially balanced representations", "venue": "International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Kartik Ahuja", "William Zame", "Mihaela van der Schaar" ], "title": "Dpscreen: Dynamic personalized screening", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Jinsung Yoon", "William R Zame", "Mihaela van der Schaar" ], "title": "Deep sensing: Active sensing using multi-directional recurrent neural networks", "venue": "International Conference on Learning Representations (ICLR),", "year": 2021 }, { "authors": [ "Jaromír Janisch", "Tomáš Pevnỳ", "Viliam Lisỳ" ], "title": "Classification with costly features using deep reinforcement learning", "venue": "AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Jinsung Yoon", "James Jordon", "Mihaela van der Schaar" ], "title": "Asac: Active sensing using actor-critic models. Machine Learning for Healthcare Conference (MLHC), 2019", "venue": null, "year": 2019 }, { "authors": [ "Joseph Futoma", "Sanjay Hariharan", "Katherine Heller" ], "title": "Learning to detect sepsis with a multitask gaussian process rnn classifier", "venue": "International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Li-Fang Cheng", "Gregory Darnell", "Bianca Dumitrascu", "Corey Chivers", "Michael E Draugelis", "Kai Li", "Barbara E Engelhardt" ], "title": "Sparse multi-output gaussian processes for medical time series prediction", "venue": null, "year": 2017 }, { "authors": [ "Edmon Begoli", "Tanmoy Bhattacharya", "Dimitri Kusnezov" ], "title": "The need for uncertainty quantification in machine-assisted medical decision making", "venue": "Nature Machine Intelligence,", "year": 2019 }, { "authors": [ "Marco Lorenzi", "Maurizio Filippone", "Giovanni B Frisoni", "Daniel C Alexander", "Sébastien Ourselin" ], "title": "Alzheimer’s Disease Neuroimaging Initiative, et al. 
Probabilistic disease progression modeling to characterize diagnostic uncertainty: application to staging and prediction in alzheimer’s disease", "venue": null, "year": 2019 }, { "authors": [ "Aven Samareh", "Shuai Huang" ], "title": "Uq-chi: An uncertainty quantification-based contemporaneous health index for degenerative disease monitoring", "venue": "IEEE Journal of Biomedical and Health Informatics (JBHI),", "year": 2019 }, { "authors": [ "Edward Choi", "Mohammad Taha Bahadori", "Jimeng Sun", "Joshua Kulas", "Andy Schuetz", "Walter Stewart" ], "title": "Retain: An interpretable predictive model for healthcare using reverse time attention mechanism", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Tian Bai", "Shanshan Zhang", "Brian L Egleston", "Slobodan Vucetic" ], "title": "Interpretable representation learning for healthcare via capturing disease progression through time", "venue": "ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD),", "year": 2018 }, { "authors": [ "Jinsung Yoon", "James Jordon", "Mihaela van der Schaar" ], "title": "Invase: Instance-wise variable selection using neural networks", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Camburu Oana-Maria", "Giunchiglia Eleonora", "Foerster Jakob", "Lukasiewicz Thomas", "Blunsom Phil" ], "title": "Can i trust the explainer? verifying post-hoc explanatory methods", "venue": "NeurIPS", "year": 2019 }, { "authors": [ "Sana Tonekaboni", "Shalmali Joshi", "David Duvenaud", "Anna Goldenberg" ], "title": "What went wrong and when? instance-wise feature importance for time-series models", "venue": "arXiv preprint,", "year": 2020 }, { "authors": [ "Beau Norgeot", "Benjamin S Glicksberg", "Atul J Butte" ], "title": "A call for deep-learning healthcare", "venue": "Nature Medicine,", "year": 2019 }, { "authors": [ "Brett Beaulieu-Jones", "Samuel G Finlayson", "Corey Chivers", "Irene Chen", "Matthew McDermott", "Jaz Kandola", "Adrian V Dalca", "Andrew Beam", "Madalina Fiterau", "Tristan Naumann" ], "title": "Trends and focus of machine learning applications for health research", "venue": "Journal of the American Medical Association (JAMA),", "year": 2019 }, { "authors": [ "Raag Agrawal", "Sudhakaran Prabakaran" ], "title": "Big data in digital healthcare: lessons learnt and recommendations for general practice", "venue": "Nature Heredity,", "year": 2020 }, { "authors": [ "Gang Luo", "Bryan L Stone", "Michael D Johnson", "Peter Tarczy-Hornoch", "Adam B Wilcox", "Sean D Mooney", "Xiaoming Sheng", "Peter J Haug", "Flory L Nkoy" ], "title": "Automating construction of machine learning models with clinical big data: proposal rationale and methods", "venue": "JMIR research protocols,", "year": 2017 }, { "authors": [ "Duncan Shillan", "Jonathan AC Sterne", "Alan Champneys", "Ben Gibbison" ], "title": "Use of machine learning to analyse routinely collected intensive care unit data: a systematic review", "venue": "Critical Care,", "year": 2019 }, { "authors": [ "D Sculley", "Gary Holt", "Daniel Golovin", "Eugene Davydov", "Todd Phillips", "Dietmar Ebner", "Vinay Chaudhary", "Michael Young" ], "title": "Hidden technical debt in machine learning systems", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2014 }, { "authors": [ "Jeeheh Oh", "Jiaxuan Wang", "Shengpu Tang", "Michael Sjoding", "Jenna Wiens" ], "title": "Relaxed weight sharing: Effectively modeling time-varying 
relationships in clinical time-series. Machine Learning for Healthcare Conference (MLHC), 2019", "venue": null, "year": 2019 }, { "authors": [ "Fei Wang", "Lawrence Peter Casalino", "Dhruv Khullar" ], "title": "Deep learning in medicine—promise, progress, and challenges", "venue": "Journal of the American Medical Association (JAMA),", "year": 2019 }, { "authors": [ "Maarten van Smeden", "Ben Van Calster", "Rolf HH Groenwold" ], "title": "Machine learning compared with pathologist assessment", "venue": "Journal of the American Medical Association (JAMA),", "year": 2018 }, { "authors": [ "Nilay D Shah", "Ewout W Steyerberg", "David M Kent" ], "title": "Big data and predictive analytics: recalibrating expectations", "venue": "Journal of the American Medical Association (JAMA),", "year": 2018 }, { "authors": [ "Eric J Topol" ], "title": "High-performance medicine: the convergence of human and artificial intelligence", "venue": "Nature Medicine,", "year": 2019 }, { "authors": [ "Cao Xiao", "Edward Choi", "Jimeng Sun" ], "title": "Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review", "venue": "Journal of the American Medical Informatics Association (AMIA),", "year": 2018 }, { "authors": [ "Ben Van Calster", "Ewout W Steyerberg", "Gary S Collins" ], "title": "Artificial intelligence algorithms for medical prediction should be nonproprietary and readily available", "venue": "Journal of the American Medical Association (JAMA),", "year": 2019 }, { "authors": [ "Zhengping Che", "Sanjay Purushotham", "Kyunghyun Cho", "David Sontag", "Yan Liu" ], "title": "Recurrent neural networks for multivariate time series with missing values", "venue": "Nature Scientific Reports,", "year": 2018 }, { "authors": [ "Richard D Riley", "Joie Ensor", "Kym IE Snell", "Thomas PA Debray", "Doug G Altman", "Karel GM Moons", "Gary S Collins" ], "title": "External validation of clinical prediction models using big datasets from e-health records or ipd meta-analysis: opportunities and challenges", "venue": null, "year": 2016 }, { "authors": [ "Frank Hutter", "Lars Kotthoff", "Joaquin Vanschoren" ], "title": "Hyperparameter optimization. Chapter 1, Automated machine learning, 2019", "venue": null, "year": 2019 }, { "authors": [ "Matthias Feurer", "Aaron Klein", "Katharina Eggensperger", "Jost Springenberg", "Manuel Blum", "Frank Hutter" ], "title": "Efficient and robust automated machine learning.
Advances in neural information processing systems (NeurIPS)", "venue": null, "year": 2015 }, { "authors": [ "Lars Kotthoff", "Chris Thornton", "Holger H Hoos", "Frank Hutter", "Kevin Leyton-Brown" ], "title": "Autoweka 2.0: Automatic model selection and hyperparameter optimization in weka", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2017 }, { "authors": [ "Randal S Olson", "Jason H Moore" ], "title": "Tpot: A tree-based pipeline optimization tool for automating machine learning", "venue": "ICML Workshop on Automatic Machine Learning,", "year": 2016 }, { "authors": [ "Xueqiang Zeng", "Gang Luo" ], "title": "Progressive sampling-based bayesian optimization for efficient and automatic machine learning model selection", "venue": "Health information science and systems,", "year": 2017 }, { "authors": [ "Ahmed M Alaa", "Mihaela van der Schaar" ], "title": "Autoprognosis: Automated clinical prognostic modeling via bayesian optimization with structured kernel learning", "venue": "International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Yuyu Zhang", "Mohammad Taha Bahadori", "Hang Su", "Jimeng Sun" ], "title": "Flash: fast bayesian optimization for data analytic pipelines", "venue": "ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD),", "year": 2016 }, { "authors": [ "Zi Wang", "Chengtao Li", "Stefanie Jegelka", "Pushmeet Kohli" ], "title": "Batched high-dimensional bayesian optimization via structural kernel learning", "venue": "International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Jenna Wiens", "John Guttag", "Eric Horvitz" ], "title": "Patient risk stratification with time-varying parameters: a multitask learning approach", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2016 }, { "authors": [ "Yao Zhang", "Daniel Jarrett", "Mihaela van der Schaar" ], "title": "Stepwise model selection for sequence prediction via deep kernel learning", "venue": "International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2020 }, { "authors": [ "Andrew Gordon Wilson", "Zhiting Hu", "Ruslan Salakhutdinov", "Eric P Xing" ], "title": "Deep kernel learning", "venue": "International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2016 }, { "authors": [ "Brett Naul", "Stéfan van der Walt", "Arien Crellin-Quick", "Joshua S Bloom", "Fernando Pérez" ], "title": "Cesium: open-source platform for time-series inference", "venue": "Annual Scientific Computing with Python Conference (SciPy),", "year": 2016 }, { "authors": [ "Romain Tavenard", "Johann Faouzi", "Gilles Vandewiele", "Felix Divo", "Guillaume Androz", "Chester Holtz", "Marie Payne", "Roman Yurchak", "Marc Rußwurm", "Kushal Kolar", "Eli Woods" ], "title": "tslearn: A machine learning toolkit dedicated to time-series data", "venue": null, "year": 2017 }, { "authors": [ "David M Burns", "Cari M Whyne" ], "title": "Seglearn: A python package for learning sequences and time series", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2018 }, { "authors": [ "Maximilian Christ", "Nils Braun", "Julius Neuffer", "Andreas W Kempa-Liehr" ], "title": "
tsfresh: time series feature extraction on basis of scalable hypothesis tests", "venue": null, "year": 2018 }, { "authors": [ "Ahmed Guecioueur" ], "title": "pysf: supervised forecasting of sequential data in python", "venue": null, "year": 2018 }, { "authors": [ "Markus Löning", "Anthony Bagnall", "Sajaysurya Ganesh", "Viktor Kazakov", "Jason Lines", "Franz J Király" ], "title": "sktime: A unified interface for machine learning with time series", "venue": "NeurIPS", "year": 2019 }, { "authors": [ "Johann Faouzi", "Hicham Janati" ], "title": "pyts: A python package for time series classification", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2020 }, { "authors": [ "David Taylor-Robinson", "Olia Archangelidi", "Siobhán B Carr", "Rebecca Cosgriff", "Elaine Gunn", "Ruth H Keogh", "Amy MacDougall", "Simon Newsome", "Daniela K Schlüter", "Sanja Stanojevic" ], "title": "Data resource profile: the uk cystic fibrosis registry", "venue": "International Journal of Epidemiology,", "year": 2018 }, { "authors": [ "Ahmed M Alaa", "Jinsung Yoon", "Scott Hu", "Mihaela Van der Schaar" ], "title": "Personalized risk scoring for critical care prognosis using mixtures of gaussian processes", "venue": "IEEE Transactions on Biomedical Engineering,", "year": 2017 }, { "authors": [ "Alistair EW Johnson", "Tom J Pollard", "Lu Shen", "H Lehman Li-Wei", "Mengling Feng", "Mohammad Ghassemi", "Benjamin Moody", "Peter Szolovits", "Leo Anthony Celi", "Roger G Mark" ], "title": "Mimic-iii, a freely accessible critical care database", "venue": "Nature Scientific Data,", "year": 2016 }, { "authors": [ "Shawn D Aaron", "Anne L Stephenson", "Donald W Cameron", "George A Whitmore" ], "title": "A statistical model to predict one-year risk of death in patients with cystic fibrosis", "venue": "Journal of Clinical Epidemiology,", "year": 2015 }, { "authors": [ "Theodore G Liou", "Frederick R Adler", "David Huang" ], "title": "Use of lung transplantation survival models to refine patient selection in cystic fibrosis", "venue": "American Journal of Respiratory and Critical Care Medicine,", "year": 2005 }, { "authors": [ "Lionelle Nkam", "Jérôme Lambert", "Aurélien Latouche", "Gil Bellis", "Pierre-Régis Burgel", "MN Hocine" ], "title": "A 3-year prognostic score for adults with cystic fibrosis", "venue": "Journal of Cystic Fibrosis,", "year": 2017 }, { "authors": [ "Changhee Lee", "Jinsung Yoon", "Mihaela Van Der Schaar" ], "title": "Dynamic-deephit: A deep learning approach for dynamic survival analysis with competing risks based on longitudinal data", "venue": "IEEE Transactions on Biomedical Engineering,", "year": 2019 }, { "authors": [ "Dan Li", "Ruth Keogh", "John P Clancy", "Rhonda D Szczesniak" ], "title": "Flexible semiparametric joint modeling: an application to estimate individual lung function decline and risk of pulmonary exacerbations in cystic fibrosis", "venue": "Emerging Themes in Epidemiology,", "year": 2017 }, { "authors": [ "Joëlle Texereau", "Dany Jamal", "Gérald Choukroun", "Pierre-Régis Burgel", "Jean-Luc Diehl", "Antoine Rabbat", "Philippe Loirat", "Antoine Parrot", "Alexandre Duguet", "Joël Coste" ], "title": "Determinants of mortality for adults with cystic fibrosis admitted in intensive care unit: a multicenter study", "venue": "Respiratory Research,", "year": 2006 }, { "authors": [ "Joseph F Dasta", "Trent P McLaughlin", "Samir H Mody", "Catherine Tak Piech" ], "title": "Daily cost of an intensive care unit day: the contribution of mechanical ventilation", "venue": "Critical Care
Medicine,", "year": 2005 }, { "authors": [ "Jason Phua", "Wang Jee Ngerng", "Tow Keang Lim" ], "title": "The impact of a delay in intensive care unit admission for community-acquired pneumonia", "venue": "European Respiratory Journal,", "year": 2010 }, { "authors": [ "Vincent Liu", "Patricia Kipnis", "Norman W Rizk", "Gabriel J Escobar" ], "title": "Adverse outcomes associated with delayed intensive care unit transfers in an integrated healthcare system", "venue": "Journal of Hospital Medicine,", "year": 2012 }, { "authors": [ "Michael P Young", "Valerie J Gooder", "Karen McBride", "Brent James", "Elliott S Fisher" ], "title": "Inpatient transfers to the intensive care unit", "venue": "Journal of General Internal Medicine,", "year": 2003 }, { "authors": [ "Jinsung Yoon", "Ahmed Alaa", "Scott Hu", "Mihaela Schaar" ], "title": "Forecasticu: a prognostic decision support system for timely prediction of intensive care unit admission", "venue": "International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Ahmed M Alaa", "Scott Hu", "Mihaela van der Schaar" ], "title": "Learning from clinical judgments: Semi-markov-modulated marked hawkes processes for risk prognosis", "venue": "International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Ioana Bica", "Ahmed M Alaa", "Mihaela van der Schaar" ], "title": "Time series deconfounder: Estimating treatment effects over time in the presence of hidden confounders", "venue": "International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Jinsung Yoon", "James Jordon", "Mihaela Van Der Schaar" ], "title": "Gain: Missing data imputation using generative adversarial nets", "venue": "arXiv preprint arXiv:1806.02920,", "year": 2018 } ]
[ { "heading": null, "text": "Python Software Repository: https://github.com/vanderschaarlab/clairvoyance" }, { "heading": "1 INTRODUCTION", "text": "Inference over time series is ubiquitous in medical problems [1–7]. With the increasing availability and accessibility of electronic patient records, machine learning for clinical decision support has made great strides in offering actionable predictive models for real-world questions [8, 9]. In particular, a plethora of methods-based research has focused on addressing specific problems along different stages of the clinical data science pipeline, including preprocessing patient data [10, 11], imputing missing measurements [12–16], issuing diagnoses and prognoses of diseases and biomarkers [17–25], estimating the effects of different treatments [26–31], optimizing measurements [32–36], capturing\n∗Authors contributed equally\nuncertainty [37–41], and interpreting learned models [42–46]. On the other hand, these component tasks are often formulated, solved, and implemented as mathematical problems (on their own), resulting in a stylized range of methods that may not acknowledge the complexities and interdependencies within the real-world clinical ML project lifecycle (as a composite). This leads to an often punishing translational barrier between state-of-the-art ML techniques and any actual patient benefit that could be realized from their intended application towards clinical research and decision support [47–51].\nThree Challenges To bridge this gap, we argue for a more comprehensive, systematic approach to development, validation, and clinical utilization. Specifically, due to the number of moving pieces, managing real-world clinical time-series inference workflows is challenging due the following concerns:\n• First and foremost, the engineering problem is that building complex inference procedures involves significant investment: Over 95% of work in a typical mature project is consumed by software technicals, and <5% addressing real scientific questions [52]. As a clinician or healthcare practitioner, however, few resources are available for easily developing and validating complete workflows. What is desired is a simple, consistent development and validation workflow that encapsulates all major aspects of clinical time-series ML—from initial data preprocessing all the way to the end.\n• Second, the evaluation problem is that the performance of any component depends on its context; for instance, the accuracy of a prediction model is intimately tied to the data imputation method that precedes it [13, 14]. As an ML researcher, however, current empirical practices typically examine the merits of each component individually, with surrounding steps configured as convenient for ensuring “all else equal” conditions for assessing performance. What is desired is a structured, realistic, and reproducible method of comparing techniques that honestly reflects interdependencies in the gestalt.\n• Lastly, the efficiency problem is that sophisticated designs tend to be resource-intensive to optimize, and state-of-the-art deep learning approaches require many knobs to be tuned. As a clinical or ML practitioner alike, this computational difficulty may be compounded by pipeline combinations and the potential presence of temporal distribution shifts in time-series datasets [53]. 
What is desired is a platform on which the process of pipeline configuration and hyperparameter optimization can be automated—and through which new optimization algorithms to that effect may be built and tested.
Contributions We tackle all three issues simultaneously. The Clairvoyance package is a unified, end-to-end, autoML-friendly pipeline for medical time series. (i) As a software toolkit, it enables development through a single unified interface: Modular and composable structures facilitate rapid experimentation and deployment by clinical practitioners, as well as simplifying collaboration and code-sharing. (ii) As an empirical standard, it serves as a complete experimental benchmarking environment: Standardized, end-to-end pipelines provide realistic and systematic context for evaluating novelties within individual component designs, ensuring that comparisons are fair, transparent, and reproducible. (iii) Finally, as an interface for optimization over the pipeline abstraction, Clairvoyance enables leveraging and developing algorithms for automatic pipeline configuration and stepwise selection, accounting for interdependencies among components, hyperparameters, and time steps. Through illustrative examples on real-world medical datasets, we highlight the applicability of the proposed paradigm within personalized prediction, personalized treatment planning, and personalized monitoring. To the best of our knowledge, Clairvoyance is the first coherent effort to demonstrate viability of a comprehensive, structured, and automatable pipeline for clinical time-series learning." }, { "heading": "2 THE CLAIRVOYANCE PIPELINE", "text": "The Patient Journey Consider the typical patient’s interactions with the healthcare system. Their healthcare lifecycle revolves tightly around (1) forecasting outcomes of interest (i.e. the prediction problem), (2) selecting appropriate interventions (i.e. the treatment effects problem), and (3) arranging follow-up monitoring (i.e. the active sensing problem). Each of these undertakings involves the full complexity of preparing, modeling, optimizing, and drawing conclusions from clinical time series. Clairvoyance provides model pathways for these core tasks in the patient journey (see Figure 1)—integrated into a single pipeline from start to finish (see Figure 2). Formally, these pathways include:
• Predictions Path. Let $\{(s_n, \mathbf{x}_{n,1:T_n})\}_{n=1}^{N}$ denote any medical time-series dataset, where $s_n$ is the vector of static features for the $n$-th patient, and $\mathbf{x}_{n,1:T_n} \doteq \{\mathbf{x}_{n,t}\}_{t=1}^{T_n}$ is the vector sequence of temporal features. One-shot problems seek to predict a vector of labels $\mathbf{y}_n$ from $(s_n, \mathbf{x}_{n,1:T_n})$: e.g. prediction of mortality or discharge, where $y_n \in \{0, 1\}$. Online problems predict some target vector $\mathbf{y}_{n,t}$ from $(s_n, \mathbf{x}_{n,1:t})$ at every time step: e.g. $\tau$-step-ahead prediction of biomarkers $\mathbf{y}_{n,t} \subseteq \mathbf{x}_{n,t+\tau}$.
• Treatment Effects Path. For individualized treatment-effect estimation [26–31], we additionally identify interventional actions $\mathbf{a}_{n,t} \subseteq \mathbf{x}_{n,t}$ at each time step (e.g. the choices and dosages of prescribed medication), as well as corresponding measurable outcomes $\mathbf{y}_{n,t} \subseteq \mathbf{x}_{n,t+\tau}$. The learning problem now consists in quantifying the (factual or counterfactual) potential outcomes $\mathbf{y}_{n,t+\tau}$ that would result from any specific sequence of interventions and patient covariates $(s_n, \mathbf{x}_{n,1:t}, \mathbf{a}_{n,1:t})$.
• Active Sensing Path. In addition to mapping (already-measured) covariates to targets, the very decision of what (and when) to measure is also important under resource constraints.
In medical settings, active sensing deals with balancing this trade-off between information gain and acquisition costs [32–36]. With reference to some downstream task (e.g. predicting $\mathbf{y}_{n,t+1}$), the aim is to select a subset of covariates $\mathcal{K}_{n,t}$ at each $t$ to maximize the (net) benefit of observing $\{\mathbf{x}_{n,t,k}\}_{k \in \mathcal{K}_{n,t}}$.
As a Software Toolkit Engineering complete medical time-series workflows is hard. The primary barrier to collaborative research between ML and medicine seldom lies in any particular algorithm. Instead, the difficulty is operational [6, 48, 54]—i.e. in coordinating the entire data science process, from handling missing/irregularly sampled patient data all the way to validation on different populations [4, 55–60]. Clairvoyance gives a single unified roof under which clinicians and researchers alike can readily address such common issues—with the only requirement that the data conform to the standard EAV open schema for clinical records (i.e. patient key, timestamp, parameter, and value).

[Figure 3 panels: Predictions | Treatment Eff. | Active Sensing]

    "Configure Data Preprocessing"
    preprocessing = PipelineComposer(
        FilterNegative(...), OneHotEncoder(...), Normalizer(...), ...)

    "Configure Problem Specification"
    specification = ProblemMaker(
        problem_class='online', max_seq_len=24,
        label=['ventilator'], treatment=None, window=4, ...)

    "Configure Data Imputation"
    imputation = PipelineComposer(
        Imputation(type='static', model_name='...', ...),
        Imputation(type='temporal', model_name='...', ...))

    "Configure Feature Selection"
    feature_selection = PipelineComposer(
        FeatureSelection(type='static', model_name='...', ...),
        FeatureSelection(type='temporal', model_name='...', ...))

    "Configure Pathway Model"
    prediction_model = Prediction(model_name='...', parameter_dict={...}, ...)

    "Load Datasets"
    data_train = DataLoader.load(static_dir='...', temporal_dir='...', ...)
    data_test = DataLoader.load(static_dir='...', temporal_dir='...')

    "Execute Pipeline"
    for component in [preprocessing, specification,
                      imputation, feature_selection]:
        data_train = component.fit_transform(data_train)
        data_test = component.transform(data_test)

    prediction_model.fit(data_train, ...)
    test_output = prediction_model.predict(data_test, ...)

Figure 3: Illustrative Usage. A prototypical structure of API calls for constructing a prediction pathway model. Clairvoyance is modularized to abide by established fit/transform/predict design patterns. (Green) ellipses denote additional configuration; further modules (treatments, sensing, uncertainty, etc.) expose similar interfaces.

Under a simple, consistent API, Clairvoyance encapsulates all major steps of time-series modeling, including (a.) loading and (b.) preprocessing patient records, (c.) defining the learning problem, handling missing or irregular samples in both (d.) static and (e.) temporal contexts, (f.) conducting feature selection, (g.) fitting prediction models, performing (h.) calibration and (i.) uncertainty estimation of model outputs, (j.) applying global or instance-wise methods for interpreting learned models, (k.) computing evaluation metrics, and (l.) visualizing results.
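To make the EAV input requirement mentioned above concrete, the following is a minimal sketch of a few temporal records in that schema; the column headers, patient ids, and feature names here are hypothetical illustrations, and only the four-field layout (patient key, timestamp, parameter, value) is prescribed:

    import io
    import pandas as pd

    # Each row is one measurement: (patient key, timestamp, parameter, value).
    # The ids, variables, and values below are purely illustrative.
    eav_csv = io.StringIO(
        'id,time,variable,value\n'
        'patient_001,0,heart_rate,82\n'
        'patient_001,0,creatinine,1.1\n'
        'patient_001,4,heart_rate,90\n'
        'patient_002,0,heart_rate,75\n')
    temporal_records = pd.read_csv(eav_csv)
    print(temporal_records)

Such long-format records can accommodate heterogeneous sources (e.g. vitals charted every few hours alongside labs drawn daily) without forcing a fixed-width table up front.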
Figure 2 shows a high-level overview of major components in the pipeline, and Figure 3 shows an illustrative example of usage.
All component modules are designed around the established fit-transform-predict paradigms, and the modeling workflow is based around a single chain of API calls. In this manner, each stage in the pipeline is extensible with little effort: Novel techniques developed for specific purposes (e.g. a new state-of-the-art imputation method) can be seamlessly integrated via simple wrappers (see Appendix G for an example of how this can be done for any existing method, e.g. from sklearn). This stepwise composability aims to facilitate rapid experimentation and deployment for research, as well as simplifying collaboration and code-sharing. Package documentation/tutorials give further software details.
As an Empirical Standard Evaluating any algorithm depends on its context. For instance, how well a proposed classifier ultimately performs is invariably coupled with the upstream feature-selection method it is paired with [44]. Likewise, the accuracy of a state-of-the-art imputation method cannot be assessed on its own: With respect to different downstream prediction models, more sophisticated imputation may actually yield inferior performance relative to simpler techniques [13, 14]—especially if components are not jointly optimized [15]. While current research practices typically seek to isolate individual gains through “all-else-equal” configurations in benchmarking experiments, the degree of actual overlap in pipeline configurations across studies is lacking: There is often little commonality in the datasets used, preprocessing done, problem types, model classes, and prediction endpoints. This dearth of empirical standardization may not optimally promote practical assessment/reproducibility, and may obscure/entangle true progress. (Tables 6–7 in Appendix A give a more detailed illustration.)
Clairvoyance aims to serve as a structured evaluation framework to provide such an empirical standard. After all, in order to be relevant from a real-world medical standpoint, assessment of any single proposed component (e.g. a novel ICU mortality predictor) can—and should—be contextualized in the entire end-to-end workflow as a whole. Together, the ‘problem-maker’, ‘pipeline-composer’, and all the pipeline component modules aim to simplify the process of specifying, benchmarking, and (self-)documenting full-fledged experimental setups for each use case. At the end of the day, while results from external validation are often heterogeneous [2, 59, 61], improving transparency and reproducibility greatly facilitates code re-use and independent verification [54, 56, 57]. Just as the “environment” abstraction in OpenAI Gym does for reinforcement learning, the “pipeline” abstraction in Clairvoyance seeks to promote accessibility and fair comparison as pertains to medical time-series.
Figure 4: (a) Example: SASH ‘decomposed’ as SMS (fulfilled here by DKL) followed by combiner (stacking ensemble); (b) Example: SPSC ‘decomposed’ as PSC (fulfilled here by SKL) followed by SMS (fulfilled here by DKL).
As an Optimization Interface Especially in cross-disciplinary clinical research—and during initial stages of experimentation—automated optimization may alleviate potential scarcity of expertise in the specifics of design and tuning.
The Clairvoyance pipeline abstraction serves as a software interface for optimization algorithms—through which new/existing techniques can be applied, developed, and tested in a more systematic, realistic setting. In particular, by focusing on the temporal aspect of medical time series, this adds a new dimension to classes of autoML problems.
Briefly (see Figure 5), consider the standard task of hyperparameter optimization (for a given model) [62]. By optimizing over classes of algorithms, the combined algorithm selection and hyperparameter optimization (“CASH”) problem [63–65] has been approached in healthcare settings by methods such as progressive sampling, filtering, and fine-tuning [50, 66]. By further optimizing over combinations of pipeline components, the pipeline selection and configuration (“PSC”) problem [67] has also been tackled in clinical modeling via such techniques as fast linear search (“FLASH”) [68] and structured kernel learning (“SKL”) [67, 69]. Now, what bears further emphasis is that for clinical time series, the temporal dimension is critical due to the potential for temporal distribution shifts within time-series data—a common phenomenon in the medical setting (we refer to [53, 70, 71] for additional background). Precisely to account for such temporal settings, the stepwise model selection (“SMS”) problem [71] has recently been approached by such methods as relaxed parameter sharing (“RPS”) [53] as well as deep kernel learning (“DKL”) [71, 72]. Further, what the pipeline interface also does is to naturally allow extending this to define the stepwise algorithm selection and hyperparameter optimization (“SASH”) problem, or even—in the most general case—the stepwise pipeline selection and configuration (“SPSC”) problem. Although these latter two are new—and clearly hard—problems (with no existing solutions), Figure 4 shows simple examples of how the interface allows minimally adapting the SMS and PSC sub-problems (which do have existing solutions) to form feasible (approximate) solutions.
Two distinctions are due: First, Clairvoyance is a pipeline toolkit, not an autoML toolkit. It is not our goal to (re-)implement new/existing optimization algorithms—which abound in the literature. Rather, the standardized interface is precisely what enables existing implementations to be plugged in, as well as allowing new autoML techniques to be developed and validated within a realistic medical pipeline. All that is required is for the optimizing agent to expose an appropriate ‘optimize’ method given candidate components, and for such candidates to expose a ‘get-hyperparameter-space’ method. Second—but no less importantly—we must emphasize that we are not advocating removing human oversight from the healthcare loop. Rather, the pipeline simply encourages systematizing the initial development stages in clinical ML, which stands to benefit from existing literature on efficient autoML techniques.
Design Principles
Our philosophy is based on the authors’ experience in prototyping and developing real-world collaborative projects in clinical time-series.
• Pipeline First, Models Second: Our first emphasis is on reproducibility: The process of engineering and evaluating complete medical time-series workflows needs to be clear and transparent. Concretely, this manifests in the strict “separation of concerns” enforced by the high-level API of each component module along the pipeline (see e.g. Figure 3).
With the ‘problem-maker’ and ‘pipeline-composer’ as first-class objects, the central abstraction here is the pipeline itself, while the intricacies and configurations of individual model choices (e.g. a specific deep learning temporal imputation method) are limited to within each component module.
• Be Minimal and Unintrusive: Our second emphasis is on standardization: While workflow development needs to be unified and systematic, learning to use the framework should be intuitive as well. Concretely, this manifests in the API’s adherence to the existing and popular ‘fit-transform-predict’ paradigm (see e.g. sklearn) in all component modules—both ‘along’ the pipeline steps, as well as ‘across’ the pathways that define the patient’s healthcare lifecycle (see Figure 2). This enables easy adoption and rapid prototyping—qualities that are paramount given the degree of collaborative research and cross-disciplinary code-sharing required in healthcare-related research.
• Encourage Extension: Our third emphasis is on extensibility: Given that novel methods are proposed in the ML community every day, the pipeline components should be easily extensible to incorporate new algorithms. Concretely, this manifests in the encapsulated design for models within each component module: Specifically, in order to integrate a new component method (e.g. from another researcher’s code, or from an external package) into the framework, all that is required is a simple wrapper class that implements the ‘fit’, ‘predict’, and ‘get-hyperparameter-space’ methods; likewise, for an optimization agent (see the subsection on the optimization interface above), all that is required is to expose an ‘optimize’ method.
Worked Examples
For a discussion of the choice of built-in techniques to include with the initial release, see Appendix C. Appendix E gives a worked example of using the Clairvoyance pipeline to train and use a model in a standard setting (for this, we use the predictions pathway). Appendix F gives a worked example of using the optimization interface to perform stepwise model selection (for this, we use the treatment effects pathway for variety). Appendix G gives an example of how a generic wrapper can be written for integrating an external model/algorithm that is not already implemented in the current version of Clairvoyance. Finally, the software repository contains Jupyter notebooks and top-level API code with examples of pathways and optimization." }, { "heading": "3 RELATED WORK", "text": "Clairvoyance is a pipeline toolkit for medical time-series machine learning research and clinical decision support. As such, this broad undertaking lies at the intersection of three concurrent domains of work: Time-series software development, healthcare journey modeling, and automated learning.
Time-series Software First and foremost, Clairvoyance is a software toolkit. Focusing on challenges common to clinical time-series modeling, it is primarily differentiated by the breadth and flexibility of the pipeline. While there exists a variety of sophisticated time-series packages for different purposes, they typically concentrate on implementing collections of algorithms and estimators for specific types of problems, such as classification [79], forecasting [77], feature extraction [76], reductions between tasks [78], or integrating segmentation and transforms with estimators [75].
By contrast, our focus is orthogonal: Clairvoyance aims at end-to-end development along the entire inference workflow, including pathways and pipeline components important to medical problems (see Table 1). Again indeed—if so desired, and as mentioned above—specific algorithms from [73–79] can be integrated into Clairvoyance workflows through the usual ‘fit-transform-predict’ interface, with little hassle.
Healthcare Lifecycle For specific use cases, clearly a plethora of research exists in support of issuing diagnoses [22–24], prognostic modeling [17–21], treatment-effect estimation [26–31], optimizing measurements [32–36], among much more. The key proposition that Clairvoyance advances is the underlying commonality across these seemingly disparate problems: It abstracts and integrates along the time-series inference workflow, across outpatient, general wards, and intensive-care environments, and—above all—amongst a patient’s journey of interactions through the healthcare system that call for decision support in predictions, treatments, and monitoring (Figure 1). Now, it is also important to state what Clairvoyance is not: It is not an exhaustive list of algorithms; the pipeline includes a collection of popular components, and provides a standardized interface for extension. It is also not a solution to preference-/application-specific considerations: While issues such as data cleaning, algorithmic fairness, and privacy and heterogeneity are important, they are beyond the scope of our software.
Automated Learning Finally, tangentially related is the rich body of work on autoML for hyperparameter optimization [62], algorithm/pipeline configuration [63–65, 67], and stepwise selection [71], as well as specific work for healthcare data [50, 53, 66–68, 70, 71]. In complement to these threads of research, the Clairvoyance pipeline interface enables—if so desired—leveraging existing implementations, or validating novel ones—esp. in efficiently accounting for the temporal dimension." }, { "heading": "4 ILLUSTRATIVE EXAMPLES", "text": "Recall the patient’s journey of interactions within the healthcare system (Figure 1). In this section, our goal is to illustrate key usage scenarios for Clairvoyance in this journey—for personalized (1) prediction, (2) treatment, and (3) monitoring—in outpatient, general wards, and intensive-care environments.
Specifically, implicit in all examples is our proposition that: (i) as a software toolkit, constructing an end-to-end solution to each problem is easy, systematic, and self-documenting; (ii) as an empirical standard, evaluating collections of models by varying a single component ensures that comparisons are standardized, explicit, and reproducible; and (iii) as an optimization interface, the flexibility of selecting over the temporal dimension—in and of itself—abstracts out an interesting research avenue.
Medical Environments Our choices of time-series environments are made to reflect the heterogeneity of realistic use cases envisioned for Clairvoyance. For the outpatient setting, we consider a cohort of patients enrolled in the UK Cystic Fibrosis Registry (CYSTIC) [80], which records longitudinal follow-up data for ∼5,800 individuals with the disease. On the registry, individuals are chronic patients monitored over infrequent visits, for whom long-term decline is generally expected. For the general wards setting, we consider a cohort of ∼6,300 patients hospitalized in the general medicine floor in the Ronald Reagan Medical Center (WARDS) [81].
In contrast, here the population of patients presents with a wide variety of conditions and diagnoses (1,600+ ICD-9 codes), and patients are monitored more frequently. The data is highly non-stationary: on the hospital floor, deterioration is an unexpected event. For the intensive-care setting, we consider ∼23,100 individuals from the Medical Information Mart for Intensive Care (MIMIC) [82]. Here, the setting is virtually one where more or less “anything-can-happen”, and physiological data streams for each patient are recorded extremely frequently. Varying across the set of environments are such characteristics as the average durations of patient trajectories, the types of static and longitudinal features recorded, their frequencies of measurement, and their patterns and rates of missingness (Table ?? presents some brief statistics).
Example 1 (Lung Function in Cystic Fibrosis Patients) The most common genetic disease in Caucasian populations is cystic fibrosis [83], which entails various forms of dysfunction in respiratory and gastrointestinal systems, chiefly resulting in progressive lung damage and recurrent respiratory infections requiring antibiotics—and in severe cases may require hospitalization and even mechanical ventilation in an ICU (see Example 2) [84, 85]. While classical risk scores and survival models utilize only a fraction of up-to-date measurements, recent work has leveraged deep learning to incorporate greater extents of longitudinal biomarkers, comorbidities, and other risk factors [86]. An essential barometer for anticipating the occurrence of respiratory failures is the gauge of lung function by forced expiratory volume (FEV1): Accurate prediction yields an important tool for assessing severity of a patient’s disease, describing its onset/progression, and as an input to treatment decisions [85, 87].
This is an archetypical rolling-window time-series problem for Clairvoyance’s predictions pathway. Consider the models in Table 3: (i) As a clinical professional, it goes without saying that building the pipeline for each—or extending additional models through wrappers—has a low barrier to entry (see Figure 3/tutorials/documentation). (ii) As an ML researcher, one can rest assured that such comparisons are expressly standardized: Here, all results are explicitly from the same pipeline using min-max normalized features, GAIN for static missing values, M-RNN for temporal imputation, no feature selection, and each model class shown. (iii) Lastly, to highlight the utility of the interface for selection over time, the final row presents results of approaching SASH using the example method of Figure 4(a), and—for fair comparison—with the pipeline kept constant. This simple approach already yields some gains in performance, laying a precedent—and the pipeline infrastructure—for further research.
Example 2 (Mechanical Ventilation on Intensive Care) Mechanical ventilation is an invasive, painful, and extremely unpleasant therapy that requires induction of artificial coma, and carries a high risk of mortality [88]. It is also expensive, with a typical ICU ventilator admission >$30,000 [89]. To the patient, the need for mechanical ventilation—due to evidence of respiratory/ventilatory failure—is by itself an adverse outcome, and is unacceptable to some, even if it means they will not survive.
It is possible that alternative strategies employed earlier may alleviate the need for ventilation, such as high flow oxygen, non-invasive ventilation, or—in this example—appropriate use of antibiotics [88]. Now, little is known about optimal timing of courses of antibiotics; in most cases a routine number of days is simply chosen, when blood is typically sterile after the first dose. On the one hand, there is a clear biologically plausible mechanism for incompletely treated infection to lead to longer periods of critical care, esp. requiring ventilation. On the other hand, antibiotic stewardship is crucial: Overuse of broad spectrum antibiotics leads to resistance, and is by itself a global health emergency [90].
This is an archetypical problem for the treatment effects pathway. Table 4 shows the performance of the two state-of-the-art models for estimating effects of treatment decisions over time while adjusting for time-dependent confounding—that is, adjusting for the fact that actions taken in the data may depend on time-varying variables related to the outcome of interest [30, 31]. We refrain from belaboring points (i), (ii), (iii) above but their merits should be clear. From the patient’s perspective, accurate estimation of the effect of treatment decisions on the risk of ventilation may assist them and their carers in achieving optimal shared decision-making about the care that they would like to receive. From the hospital’s perspective, many ICUs around the world operate at ∼100% bed occupancy, and delayed admission is typically an independent predictor of mortality [91–94]; therefore accurate estimation of the need for escalation or continued ICU ventilation is logistically important for resource planning and minimization of delays.
Example 3 (Clinical Deterioration of Ward Patients) Given the delay-critical nature of ICU admission w.r.t. morbidity/mortality, what is often desired is an automated prognostic decision support system to monitor ward patients and raise (early) alarms for impending admission to ICU (as a result of clinical deterioration) [25, 94, 95]. However, observations are costly, and the question of what (and when) to measure is by itself an active choice under resource constraints [32–36]: For instance, there is less reason to measure a feature whose value can already be confidently estimated on the basis of known quantities, or if its value is not expected to contribute greatly to the prognostic task at hand.
This is an archetypical problem for Clairvoyance’s active sensing pathway. Table 5 indicates the performance of different models for balancing this trade-off between information gain and acquisition rate with respect to admissions to ICU of ward patients. At various budget constraints (i.e. amounts of measurements permitted), each active sensing model learns from the training data to identify the most informative features to measure at test-time, so as to maximize the performance of admission predictions. (To allow some measurements to be costlier than others, they can simply be up-weighted when computing the budget constraint). As before, our propositions (i), (ii), and (iii) are implicit here." }, { "heading": "5 CONCLUSION", "text": "Machines will never replace a doctor’s medical judgment, nor an ML researcher’s technical innovation.
But as a matter of data-driven clinical decision support, Clairvoyance enables rapid prototyping, benchmarking, and validation of complex time-series pipelines—so doctors can spend more time on the real scientific problems, and ML researchers can focus on the real technical questions. Moreover, collaborative research between medical practitioners and ML researchers is increasingly common [48]. To help grease the wheels, we developed and presented Clairvoyance, and illustrated its flexibility and capability in answering important and interesting medical questions in real-world environments." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank the reviewers for their generous and invaluable comments and suggestions. This work was supported by Alzheimer’s Research UK (ARUK), The Alan Turing Institute (ATI) under the EPSRC grant EP/N510129/1, The US Office of Naval Research (ONR), and the National Science Foundation (NSF) under grant numbers 1407712, 1462245, 1524417, 1533983, and 1722516." }, { "heading": "A NEED FOR EMPIRICAL STANDARDIZATION", "text": "‘All else’ is seldom ‘equal’. As examples, a review of recent, state-of-the-art research on medical time-series imputation and prediction models demonstrates the following: While the benchmarking performed within each individual study strives to isolate sources of gain through “all-else-equal” experiments, the degree of overlap in pipeline settings across studies is lacking. Such a dearth of empirical standardization may not optimally promote effective assessment of true research progress:" }, { "heading": "B BACKGROUND REFERENCES FOR EXPERIMENTS", "text": "" }, { "heading": "C NOTE ON CHOICE OF BUILT-IN TECHNIQUES", "text": "Given our “Pipeline First” focus (see “Key Design Principles” in Section 2), especially in the context of medical applications, rather than (re-)implementing every time-series model in existence, our primary contribution is in unifying all three key pathways in a patient’s healthcare lifecycle (i.e. predictions, treatments, and monitoring tasks; see Section 2: “The Patient Journey”) through a single end-to-end pipeline abstraction—for which Clairvoyance is the first (see Table 1: “Clairvoyance and Comparable Software”). For the predictions pathway, while there is a virtually infinite variety of time-series models in the wild, we choose to include standard and popular classes of deep learning models, given their ability to handle large amounts and dimensions of data, as well as the explosion of their usage in medical time-series studies (see e.g. virtually any of the paper references in Section 1). For both the treatment effects and active sensing pathways, there is much less existing work available; for these, we provide state-of-the-art models (e.g. CRN, R-MSN, ASAC, DeepSensing) implemented exactly as given in their original research papers. With that said, as noted throughout, recall that all component modules (including the various other pipeline components) are easily extensible: For instance, if more traditional time-series baselines from classical literature were desired for comparison purposes, existing algorithms from packages such as [73–79] can be integrated into Clairvoyance by using simple wrapper classes with little hassle (for an explicit demonstration of this, see Appendix G).
Note on Time to Train: Our computations for the examples included in Section 4 were performed using a single NVIDIA GeForce GTX 1080 Ti GPU, and each experiment took approximately ∼24–72 hours.
Of course, this duration may be shortened through the use of multiple GPUs in parallel." }, { "heading": "D ADDITIONAL DETAIL ON EXPERIMENTS", "text": "In our experiments for UKCF (used in Example 1), out of the total of 10,995 entries in the registry data, we focused on the 5,883 adult patients with follow-up data available from January 2009 through December 2015, which excludes pediatric patients and patients with no follow-up data from January 2009. This includes a total of 90 features, with 11 static covariates and 79 time-varying covariates, which includes basic demographic features, genetic mutations, lung function scores, hospitalizations, bacterial lung infections, comorbidities, and therapeutic management. Within the 5,883 patients, 605 were followed until death (the most common causes of which were complications due to transplantation and CF-associated liver disease); the remaining 5,278 patients were right-censored.
In our experiments for WARDS (used in Examples 1 and 3), the data comes from 6,321 patients who were hospitalized in the general medicine floor during the period March 2013 through February 2016, and excludes patients who were reverse transfers from the ICU (i.e. initially admitted from the ICU, and then returned to the ward subsequent to stabilization in condition). The heterogeneity in patient conditions mentioned in the main text includes such conditions as shortness of breath, hypertension, septicemia, sepsis, fever, pneumonia, and renal failure. Many patients had diagnoses of leukemia or lymphoma, and had received chemotherapy, allogeneic or autologous stem cell transplantation, and treatments causing severe immunosuppression, which places them at risk of developing further complications that may require ICU admission. Here, the recorded features include 8 static variables (admission-time statistics) and 37 temporal physiological data streams (vital signs and laboratory tests); vital signs were taken approximately every 4 hours, and lab tests approximately every 24 hours.
In our experiments for MIMIC (used in Example 1 for predictions, and Example 2 for estimating treatment effects), for the predictions example we focus on 22,803 patients who were admitted to ICU after 2008, and consider 11 static variables (demographics information) and 40 physiological data streams in total, which include the 20 vital signs that were most frequently measured and for which missing rates were lowest (e.g. heart rate, respiratory rate), as well as 20 laboratory tests (e.g. creatinine, chloride); vital signs were taken approximately every 1 hour, and laboratory tests approximately every 24 hours. For the treatment effects pathway (used in Example 2), we focus on the 6,033 patients who had received antibiotics at any point in time, based on daily decisions on antibiotic treatment, with a maximum sequence length of 20 days. Note that the class-label imbalance between the pure prediction task (Example 1) and treatment effects task (Example 2) is slightly different per the different populations included, and the numerical results should not be compared directly. The code for extracting this data is included under ‘mimic_data_extraction’ in the repository.
In all experiments, the entire dataset is first randomly partitioned into training sets (64%), validation sets (16%), and testing sets (20%). The training set is used for model training, the validation set is used for hyperparameter tuning, and the testing set is used for the final evaluation—which generates the performance metrics (a minimal sketch of this split is given below).
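The following sketch illustrates the partitioning protocol with a generic index-shuffling helper; it is an illustrative snippet, not the package's own splitting API:

    import numpy as np

    def split_patients(n_patients, seed):
        """Randomly partition patients into 64% train / 16% validation / 20% test."""
        rng = np.random.RandomState(seed)
        idx = rng.permutation(n_patients)
        n_train = int(0.64 * n_patients)
        n_val = int(0.16 * n_patients)
        # Train for model fitting, validation for tuning, test for final evaluation
        return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

    # e.g. for the 22,803 MIMIC patients used in the predictions example
    train_idx, val_idx, test_idx = split_patients(n_patients=22803, seed=0)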
This process itself is then repeated randomly for a total of 10 times, with the means and spreads of each result used in generating results Tables 3–5. As usual, the entire pipeline (with the exception of the pathway model corresponding to each row) is fixed across all rows, which in this case uses min-max normalized features, GAIN for static missing values, M-RNN for temporal imputation, and no prior feature selection; where hyperparameters for such pipeline components are involved (i.e. GAIN and M-RNN here), these are also—as they should be—constant across all rows.\nIn order to highlight our emphasis on the temporal dimension of autoML in Clairvoyance, the results for SASH isolate precisely this effect alone: Each result for SASH is generated using the simple approach of Figure 4(a)—that is, by ‘naively’ decomposing SASH into a collection of SMS problems (for each model class considered), subsequent to which the stepwise models for each class are further ensembled through stacking. Note that the point here is not to argue for this specific technique, but merely to show that even this (simplistic) approach already yields some gains, thereby illustrating the potential for further autoML research (which can be conveniently performed over Clairvoyance’s pipeline abstraction) to investigate perhaps more efficient solutions with respect to this temporal dimension. Briefly, in DKL the validation performance for each time step is treated as a noisy version of a black box function, which leads to a multiple black-box function optimization problem (which DKL solves jointly and efficiently); we refer to [71] for their original exposition. In our experiments we complete 100 iterations of Bayesian optimization in DKL for each model class. For reproducibility, the code for our implementation of DKL used for experiments is included in the repository." }, { "heading": "E WORKED EXAMPLE: USING THE FULL PIPELINE", "text": "This section gives a fully worked example of using the Clairvoyance pipeline (via the predictions pathway). To follow along, the user should have their own static and temporal datasets for training and testing, named as follows—where ‘data_name’ is replaced by the appropriate name of the dataset:\n• data_name_temporal_train_data_eav.csv.gz • data_name_static_train_data.csv.gz • data_name_temporal_test_data_eav.csv.gz • data_name_static_test_data.csv.gz\nand placed within the directory ‘../datasets/data/data_name/’. As described in Section 2 (“As a Software Toolkit”), the requirement is that the data conform to the standard EAV open schema for clinical records (i.e. patient key, timestamp, parameter, and value). See Figure 6 for a summary of the pipeline workflow that we shall be walking through and executing in the following subsections:\n1. Load Dataset: Extract csv files from the original raw datasets located in the data directory. 2. Preprocess Dataset: Preprocess the raw data using various filters, such as replacing negative\nvalues to NaN, doing one-hot encoding for certain features, and normalizing feature values. 3. Define Problem: Set the prediction problem (one-shot or online), the label (the target of\npredictions), the maximum sequence length, and (optionally) the treatment features (not used here). Also define the metric for evaluation and the task itself (classification or regression).\n4. Impute Dataset: Impute missing values in the preprocessed static and temporal datasets—for each selecting among data imputation methods of choice, and return the complete datasets.\n5. 
Feature Selection: Select the relevant static and temporal features for the labels (e.g. recursive or greedy addition/deletion), or simply skip this step by setting the method to be None.
6. Model Training and Prediction: After finishing the data preparation steps, we define the model used for time-series prediction, and train the model using the training dataset. After training is finished, we use the trained model to predict the labels using the testing dataset.
7. Estimate Uncertainty: Estimate uncertainty of the predictions made by the predictor model.
8. Interpret Predictions: Compute the (instance-wise) feature and temporal importance weights.
9. Visualize Results: Output predictions, performance metrics, uncertainties, and importances.
Import necessary packages for this example:

# Necessary packages
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import warnings; warnings.filterwarnings('ignore')
import sys; sys.path.append('../')

from utils import PipelineComposer" }, { "heading": "E.1 LOAD DATASET", "text": "Extract csv files from the original raw datasets located in the data directory. The CSVLoader is responsible for loading csv files from the original raw datasets in the ‘../datasets/data/data_name/’ directory. In this example we use data from MIMIC, so here the ‘data_name’ is ‘mimic’ throughout:

Load Dataset

from datasets import CSVLoader

# Define data name
data_name = 'mimic'

# Define data directory
data_directory = '../datasets/data/' + data_name + '/' + data_name + '_'

# Load train and test datasets
data_loader_training = \
    CSVLoader(static_file=data_directory + 'static_train_data.csv.gz',
              temporal_file=data_directory + 'temporal_train_data_eav.csv.gz')

data_loader_testing = \
    CSVLoader(static_file=data_directory + 'static_test_data.csv.gz',
              temporal_file=data_directory + 'temporal_test_data_eav.csv.gz')

dataset_training = data_loader_training.load()
dataset_testing = data_loader_testing.load()

print('Finish data loading.')" }, { "heading": "E.2 PREPROCESS DATASET", "text": "Preprocess the raw data using multiple filters. In this example, we replace all the negative values with NaN (using FilterNegative), do one-hot encoding on the ‘admission_type’ feature (using OneHotEncoder), and do MinMax normalization (using Normalizer). Preprocessing is done for both training and testing datasets; note that—as should be the case—the ‘fit_transform’ method is called on the training dataset, and only the ‘transform’ method is executed on the testing dataset:

Preprocess Dataset

from preprocessing import FilterNegative, OneHotEncoder, Normalizer

# (1) Filter out negative values
negative_filter = FilterNegative()

# (2) One-hot encode categorical features
one_hot_encoding = 'admission_type'
onehot_encoder = OneHotEncoder(one_hot_encoding_features=[one_hot_encoding])

# (3) Normalize features: 3 options (minmax, standard, none)
normalization = 'minmax'
normalizer = Normalizer(normalization)

# Data preprocessing
filter_pipeline = PipelineComposer(negative_filter, onehot_encoder, normalizer)

dataset_training = filter_pipeline.fit_transform(dataset_training)
dataset_testing = filter_pipeline.transform(dataset_testing)

print('Finish preprocessing.')" }, { "heading": "E.3 DEFINE PROBLEM", "text": "The prediction problem can be defined as: ‘one-shot’ (one-time prediction) or ‘online’ (rolling-window prediction). The ‘max_seq_len’ is the maximum sequence length of the time-series sequence.
The ‘label_name’ is the column name for the label(s) selected as the prediction target. The ‘treatment’ is the column name for the actions selected as treatments (not used here in this example, in the predictions pathway). The ‘window’ specifies the prediction window (i.e. how many hours ahead to predict). The ‘metric_name’ specifies the performance metric of interest, e.g. ‘auc’, ‘apr’, ‘mse’, ‘mae’, and the ‘task’ is classification or regression. In this example, we are interested in issuing online predictions for whether the patient will require mechanical ventilation after 4 hours:\nDefine Problem\nfrom preprocessing import ProblemMaker\n# Define parameters problem = ’online’ max_seq_len = 24 label_name = ’ventilator’ treatment = None window = 4\n# Define problem problem_maker = \\ ProblemMaker(problem=problem, label=[label_name],\nmax_seq_len=max_seq_len, treatment=treatment, window = window)\ndataset_training = problem_maker.fit_transform(dataset_training) dataset_testing = problem_maker.fit_transform(dataset_testing)\n# Set other parameters metric_name = ’auc’ task = ’classification’\nmetric_sets = [metric_name] metric_parameters = {’problem’: problem, ’label_name’: [label_name]}\nprint(’Finish defining problem.’)\nE.4 IMPUTE DATASET\nFor static imputation there are options such as mean, median, mice, missforest, knn, gain. For temporal imputation there are options such as mean, median, linear, quadratic, cubic, spline, mrnn, tgain. In this example we simply select median imputation for both static and temporal data:\nImpute Dataset\nfrom imputation import Imputation\n# Set imputation models static_imputation_model = ’median’ temporal_imputation_model = ’median’\n# Impute the missing data static_imputation = Imputation(imputation_model_name = static_imputation_model, data_type = ’static’) temporal_imputation = Imputation(imputation_model_name = temporal_imputation_model,\ndata_type = ’temporal’)\nimputation_pipeline = PipelineComposer(static_imputation, temporal_imputation)\ndataset_training = imputation_pipeline.fit_transform(dataset_training) dataset_testing = imputation_pipeline.transform(dataset_testing)\nprint(’Finish imputation.’)" }, { "heading": "E.5 FEATURE SELECTION", "text": "In this step, we can perform feature selection for the most relevant static and temporal features to the labels. In the simplest case, we can skip the feature selection step entirely (as we do here). The user can select from among greedy-addtion, greedy-deletion, recursive-addition, recursive-deletion, and None. 
The feature_number specifies the number of selected features:\nFeature Selection\nfrom feature_selection import FeatureSelection\n# Set feature selection parameters static_feature_selection_model = None temporal_feature_selection_model = None static_feature_selection_number = None temporal_feature_selection_number = None\n# Select relevant features static_feature_selection = \\ FeatureSelection(feature_selection_model_name = static_feature_selection_model,\nfeature_type = ’static’, feature_number = static_feature_selection_number, task = task, metric_name = metric_name, metric_parameters = metric_parameters)\ntemporal_feature_selection = \\ FeatureSelection(feature_selection_model_name = temporal_feature_selection_model,\nfeature_type = ’temporal’, feature_number = temporal_feature_selection_number, task = task, metric_name = metric_name, metric_parameters = metric_parameters)\nfeature_selection_pipeline = \\ PipelineComposer(static_feature_selection, temporal_feature_selection)\ndataset_training = feature_selection_pipeline.fit_transform(dataset_training) dataset_testing = feature_selection_pipeline.transform(dataset_testing)\nprint(’Finish feature selection.’)" }, { "heading": "E.6 TRAINING AND PREDICTION", "text": "After finishing the data preparation, we define the predictive models. Existing options include RNN, GRU, LSTM, Attention, Temporal CNN, and Transformer, and—as is the case for the other pipeline modules, and as discussed in Section 2—is easily extensible through the standard fit-transformpredict paradigm. We now train the model using the training dataset. We set the validation set as the 20% of the training set for early stopping and for saving the best model. After training, we use the trained model to predict the labels of the testing dataset. Here the parameters include model_name: rnn, gru, lstm, attention, tcn, transformer; model_parameters: network parameters,\nsuch as hdim: hidden dimensions, n_layer: number of layers, n_head: number of heads (for transformer model), batch_size: number of samples in mini-batch, epochs: number of epochs, learning_rate: learning rate, static_mode: method of incorporating static features (e.g. by concatenation), time_mode: method of incorporating temporal information (e.g. concatenate), etc.:\nTraining and Prediction\nfrom prediction import prediction\n# Set predictive model model_name = ’gru’\n# Set model parameters model_parameters = {’h_dim’: 100,\n’n_layer’: 2, ’n_head’: 2, ’batch_size’: 128, ’epoch’: 20, ’model_type’: model_name, ’learning_rate’: 0.001, ’static_mode’: ’Concatenate’, ’time_mode’: ’Concatenate’, ’verbose’: True}\n# Set up validation for early stopping and best model saving dataset_training.train_val_test_split(prob_val=0.2, prob_test = 0.0)\n# Train the predictive model pred_class = prediction(model_name, model_parameters, task) pred_class.fit(dataset_training)\n# Return the predictions on the testing set test_y_hat = pred_class.predict(dataset_testing)\nprint(’Finish predictor model training and testing.’)" }, { "heading": "E.7 ESTIMATE UNCERTAINTY", "text": "Estimate uncertainty of the predictions (which we name ‘test_ci_hat’ below) made by the predictor model. 
In this example, we use the method of ensembling to model uncertainty in prediction output:\nEstimate Uncertainty\nfrom uncertainty import uncertainty\n# Set uncertainty model uncertainty_model_name = ’ensemble’\n# Train uncertainty model uncertainty_model = uncertainty(uncertainty_model_name, model_parameters, pred_class, task) uncertainty_model.fit(dataset_training)\n# Return uncertainty of the trained predictive model test_ci_hat = uncertainty_model.predict(dataset_testing)\nprint(’Finish uncertainty estimation’)\nE.8 INTERPRET PREDICTIONS\nCompute feature importance weights (which we name ‘test_s_hat’ below). In this example, we use the method of (temporal) INVASE to model instance-wise feature/temporal importance weights:\nInterpret Predictions\nfrom interpretation import interpretation\n# Set interpretation model interpretation_model_name = ’tinvase’\n# Train interpretation model interpretor = interpretation(interpretation_model_name, model_parameters, pred_class, task) interpretor.fit(dataset_training)\n# Return instance-wise temporal and static feature importance test_s_hat = interpretor.predict(dataset_testing)\nprint(’Finish model interpretation’)\nE.9 VISUALIZE RESULTS\nHere we visualize the performance of the trained model (using the print_performance method): Visualize Performance\nfrom evaluation import Metrics from evaluation import print_performance\n# Evaluate predictor model result = Metrics(metric_sets, metric_parameters).evaluate( dataset_testing.label, test_y_hat) print(’Finish predictor model evaluation.’)\nprint(’Overall performance’) print_performance(result, metric_sets, metric_parameters)\nSimilar methods can be used to visualize model predictions, uncertainties, and importances (by importing print_prediction, print_uncertainty, and print_interpretation methods). See Jupyter notebook tutorial for complete sample code, including inputs, outputs, and visualizations:\nOther Visualizations\nfrom evaluation import print_prediction, print_uncertainty, print_interpretation\n# Set the patient index for visualization index = [1]\nprint(’Each prediction’) print_prediction(test_y_hat[index], metric_parameters)\nprint(’Uncertainty estimations’) print_uncertainty (test_y_hat[index], test_ci_hat[index], metric_parameters)\nprint(’Model interpretation’) print_interpretation (test_s_hat[index], dataset_training.feature_name,\nmetric_parameters, model_parameters)" }, { "heading": "F WORKED EXAMPLE: USING THE AUTOML INTERFACE", "text": "This section gives a fully worked example of using the Clairvoyance optimization interface (via the treatment effects pathway). Here the basic structure remains the same as in Section E, but in the model training step (here we use CRN for the treatment effects model) we show an example of performing stepwise model selection (SMS) as well. We assume the reader is familiar with the details as in Section E, and do not repeat similar descriptions. 
Instead, we organize the code as in a standard experiment—using a ‘main’ function wrapper with top-level arguments to enable ease of inspection.\nImport necessary packages, begin main function, and set basic parameters:\nfrom __future__ import absolute_import from __future__ import division from __future__ import print_function\nimport argparse import numpy as np import warnings; warnings.filterwarnings(’ignore’) import sys; sys.path.append(’../’)\nfrom datasets import CSVLoader from preprocessing import FilterNegative, OneHotEncoder, Normalizer, ProblemMaker from imputation import Imputation from feature_selection import FeatureSelection from treatments.CRN.CRN_Model import CRN_Model from prediction import AutoEnsemble from automl.model import AutoTS from evaluation import Metrics, BOMetric from evaluation import print_performance, print_prediction from utils import PipelineComposer\nBegin Main Function\ndef main (args): ’’’Args:\n- data loading parameters: - data_names: mimic, ward, cf, mimic_antibiotics - preprocess parameters: - normalization: minmax, standard, None - one_hot_encoding: input features that need to be one-hot encoded - problem: ’one-shot’ or ’online’\n- ’one-shot’: one time prediction at the end of the time-series - ’online’: preditcion at every time stamps of the time-series\n- max_seq_len: maximum sequence length after padding - label_name: the column name for the label(s) - treatment: the column name for treatments\n- imputation parameters: - static_imputation_model: mean, median, mice, missforest, knn, gain, etc. - temporal_imputation_model: mean, median, linear, quadratic, cubic, spline, etc. - feature selection parameters: - feature_selection_model: greedy-addtion, recursive-addition, etc. - feature_number: selected feature number - predictor_parameters: - epochs: number of epochs - bo_itr: bayesian optimization iterations - static_mode: how to utilize static features (concatenate or None) - time_mode: how to utilize time information (concatenate or None) - task: classification or regression\n- metric_name: auc, apr, mae, mse ’’’ # Set basic parameters metric_sets = [args.metric_name] metric_parameters = {’problem’: args.problem, ’label_name’: [args.label_name]}\nF.1 LOAD DATASET Load Dataset\n# (continued within ’def main’)\n# File names data_directory = ’../datasets/data/’ + args.data_name + ’/’ + args.data_name + ’_’\ndata_loader_training = \\ CSVLoader(static_file=data_directory + ’static_train_data.csv.gz’,\ntemporal_file=data_directory + ’temporal_train_data_eav.csv.gz’)\ndata_loader_testing = \\ CSVLoader(static_file=data_directory + ’static_test_data.csv.gz’,\ntemporal_file=data_directory + ’temporal_test_data_eav.csv.gz’)\ndataset_training = data_loader_training.load() dataset_testing = data_loader_testing.load()\nprint(’Finish data loading.’)" }, { "heading": "F.2 PREPROCESS DATASET", "text": "Preprocess Dataset\n# (continued within ’def main’)\n# (0) filter out negative values (Automatically) negative_filter = FilterNegative()\n# (1) one-hot encode categorical features onehot_encoder = OneHotEncoder(one_hot_encoding_features=[args.one_hot_encoding])\n# (2) Normalize features: 3 options (minmax, standard, none) normalizer = Normalizer(args.normalization)\nfilter_pipeline = PipelineComposer(negative_filter, onehot_encoder, normalizer)\ndataset_training = filter_pipeline.fit_transform(dataset_training) dataset_testing = filter_pipeline.transform(dataset_testing)\nprint(’Finish preprocessing.’)\nF.3 DEFINE PROBLEM Define Problem\n# (continued within ’def 
main’)\nproblem_maker = \\ ProblemMaker(problem=args.problem, label=[args.label_name],\nmax_seq_len=args.max_seq_len, treatment=[args.treatment])\ndataset_training = problem_maker.fit_transform(dataset_training) dataset_testing = problem_maker.fit_transform(dataset_testing)\nprint(’Finish defining problem.’)\nF.4 IMPUTE DATASET Impute Dataset\n# (continued within ’def main’)\nstatic_imputation = Imputation( imputation_model_name=args.static_imputation_model, data_type=’static’)\ntemporal_imputation = Imputation( imputation_model_name=args.temporal_imputation_model, data_type=’temporal’)\nimputation_pipeline = PipelineComposer(static_imputation, temporal_imputation)\ndataset_training = imputation_pipeline.fit_transform(dataset_training) dataset_testing = imputation_pipeline.transform(dataset_testing)\nprint(’Finish imputation.’)" }, { "heading": "F.5 FEATURE SELECTION", "text": "Feature Selection\n# (continued within ’def main’)\nstatic_feature_selection = FeatureSelection( feature_selection_model_name=args.static_feature_selection_model, feature_type=’static’, feature_number=args.static_feature_selection_number, task=args.task, metric_name=args.metric_name, metric_parameters=metric_parameters)\ntemporal_feature_selection = FeatureSelection( feature_selection_model_name=args.temporal_feature_selection_model, feature_type=’temporal’, feature_number=args.temporal_feature_selection_number, task=args.task, metric_name=args.metric_name, metric_parameters=metric_parameters)\nfeature_selection_pipeline = PipelineComposer(static_feature_selection, temporal_feature_selection)\ndataset_training = feature_selection_pipeline.fit_transform(dataset_training) dataset_testing = feature_selection_pipeline.transform(dataset_testing)\nprint(’Finish feature selection.’)" }, { "heading": "F.6 OPTIMIZATION AND PREDICTION", "text": "Since we want to do stepwise model selection, this step differs from that in Section E. 
In particular, here we are not just relying on a single pathway model (CRN); we are calling the ‘AutoTS’ module to perform Bayesian optimization—which implements SMS by DKL, exactly as described in [71]:\nOptimization and Prediction\n# (continued within ’def main’)\n# CRN model model_parameters = {’projection_horizon’: 5,\n’encoder_max_alpha’:1, ’decoder_max_alpha’: 1, ’static_mode’: ’concatenate’, ’time_mode’: ’concatenate’}\ncrn_model = CRN_Model(task=’classification’) crn_model.set_params(**model_parameters)\nmodel_class= crn_model\n# train_validate split dataset_training.train_val_test_split(prob_val=0.2, prob_test=0.2)\n# Bayesian Optimization Start metric = BOMetric(metric=’auc’, fold=0, split=’test’)\n# Run BO for selected model class BO_model = AutoTS(dataset_training, model_class, metric) models, bo_score = BO_model.training_loop(num_iter=20) auto_ens_model = AutoEnsemble(models, bo_score)\n# Prediction of treatment effects test_y_hat = auto_ens_model.predict(dataset_testing, test_split=’test’)\nprint(’Finish AutoML model training and testing.’)" }, { "heading": "F.7 MODEL EVALUATION", "text": "Output performance evaluation for the final trained model, using metric parameters as defined above: Model Evaluation\n# (continued within ’def main’)\nresult = Metrics(metric_sets, metric_parameters).evaluate(test_y, test_y_hat) print(’Finish ITE model evaluation.’)\nprint(’Overall performance’) print_performance(result, metric_sets, metric_parameters)" }, { "heading": "F.8 TOP-LEVEL PARSER", "text": "Define and Parse Arguments\nif __name__ == ’__main__’: parser = argparse.ArgumentParser() parser.add_argument(\n’--data_name’, choices=[’mimic’, ’ward’, ’cf’, ’mimic_antibiotics’], default=’mimic_antibiotics’, type=str)\nparser.add_argument( ’--normalization’, choices=[’minmax’, ’standard’, None], default=’minmax’, type=str) parser.add_argument( ’--one_hot_encoding’, default=’admission_type’, type=str) parser.add_argument( ’--problem’, choices=[’online’, ’one-shot’], default=’online’, type=str) parser.add_argument( ’--max_seq_len’, help=’maximum sequence length’, default=20, type=int) parser.add_argument( ’--label_name’, default=’ventilator’, type=str) parser.add_argument( ’--treatment’, default=’antibiotics’, type=str)\nparser.add_argument( ’--static_imputation_model’, choices=[’mean’, ’median’, ’mice’, ’missforest’, ’knn’, ’gain’], default=’median’, type=str) parser.add_argument( ’--temporal_imputation_model’, choices=[’mean’, ’median’, ’linear’, ’quadratic’, ’cubic’, ’spline’,\n’mrnn’, ’tgain’], default=’median’, type=str)\nparser.add_argument( ’--static_feature_selection_model’, choices=[’greedy-addition’, ’greedy-deletion’, ’recursive-addition’,\n’recursive-deletion’, None], default=None, type=str)\nparser.add_argument( ’--static_feature_selection_number’, default=10, type=int) parser.add_argument( ’--temporal_feature_selection_model’, choices=[’greedy-addition’, ’greedy-deletion’, ’recursive-addition’,\n’recursive-deletion’, None], default=None, type=str)\nparser.add_argument( ’--temporal_feature_selection_number’, default=10, type=int) parser.add_argument( ’--epochs’, default=20, type=int) parser.add_argument( ’--bo_itr’, default=20, type=int) parser.add_argument( ’--static_mode’, choices=[’concatenate’,None], default=’concatenate’, type=str) parser.add_argument( ’--time_mode’, choices=[’concatenate’,None], default=’concatenate’, type=str) parser.add_argument( ’--task’, choices=[’classification’,’regression’], default=’classification’, type=str) parser.add_argument( ’--metric_name’, 
choices=[’auc’,’apr’,’mse’,’mae’], default=’auc’, type=str)\n# Call main function args = parser.parse_args() main(args)" }, { "heading": "G EXTENSIBILITY: EXAMPLE WRAPPER CLASS", "text": "Since novel methods are proposed in the ML community every day, the pipeline components should be easily extensible to incorporate new algorithms. To integrate a new component method (e.g. from another researcher’s code, or from an external package) into the framework, all that is required is a simple wrapper class that implements the ‘fit’, ‘predict’, and ‘get-hyperparameter-space’ methods. Here we show an example of how a classical time-series prediction model (ARIMA) can be integrated.\nARIMA Wrapper Class\n# Necessary packages import os import pmdarima as pm from datetime import datetime from base import BaseEstimator, PredictorMixin import numpy as np\nclass ARIMA(BaseEstimator, PredictorMixin): \"\"\"Attributes:\n- task: classification or regression - p: MA degree - d: Differencing degree - q: AR degree - time_mode: ’concatenate’ or None - model_id: the name of model - model_path: model path for saving - verbose: print intermediate process\n\"\"\" def __init__(self,\ntask=None, p=None, d=None, q=None, time_mode=None, model_id=’auto_arima’, model_path=’tmp’, verbose=False):\nsuper().__init__(task)\nself.task = task self.p = p self.d = d self.q = q self.time_mode = time_mode self.model_path = model_path self.model_id = model_id self.verbose = verbose\n# Predictor model & optimizer define self.predictor_model = None\nif self.task == ’classification’: raise ValueError(’Arima model cannot be used for classification’)\n# Set path for model saving if not os.path.exists(model_path): os.makedirs(model_path) self.save_file_name = ’{}/{}’.format(model_path, model_id) + \\ datetime.now().strftime(’%H%M%S’) + ’.hdf5’\ndef new(self, model_id): \"\"\"Create a new model with the same parameter as the existing one. Args: - model_id: an unique identifier for the new model\nReturns: - a new ARIMA \"\"\" return ARIMA(self.task,\nself.p, self.d, self.q, self.time_mode, model_id, self.model_path, self.verbose)\nfit\ndef fit(self, dataset, fold=0, train_split=’train’, valid_split=’val’): \"\"\" Arima model fitting does not require an independent training set. \"\"\" pass\npredict\ndef predict(self, dataset, fold=0, test_split=’test’): \"\"\"Return the predictions based on the trained model. 
Args: - dataset: temporal, static, label, time, treatment information - fold: Cross validation fold - test_split: testing set splitting parameter\nReturns: - test_y_hat: predictions on testing set \"\"\" test_x, test_y = self._data_preprocess(dataset, fold, test_split) shape0 = test_y.shape[0] shape1 = test_y.shape[1] shape2 = test_y.shape[2]\nprint(test_y.shape)\nassert shape2 == 1\n# y: N_sample, max_seq_len, dim fited_list = []\nfor i in range(shape0): y0 = test_y[i, :, 0] model = pm.arima.ARIMA(order=(self.p, self.d, self.q), suppress_warnings=True) try:\nmodel.fit(y0) y_hat = model.predict_in_sample(dynamic=True)\nexcept Exception: y_hat = np.zeros_like(y0) fited_list.append(y_hat)\ny_hat = np.stack(fited_list, axis=0)[:, :, None] return y_hat\n@staticmethod\nget_hyperparameter_space\ndef get_hyperparameter_space(): hyp_ = [{’name’: ’p’, ’type’: ’discrete’, ’domain’: list(range(1, 6)), ’dimensionality’: 1},\n{’name’: ’d’, ’type’: ’discrete’, ’domain’: list(range(1, 6)), ’dimensionality’: 1}, {’name’: ’q’, ’type’: ’discrete’, ’domain’: list(range(1, 6)), ’dimensionality’: 1}]\nreturn hyp_" }, { "heading": "H SOME FREQUENTLY ASKED QUESTIONS", "text": "Q1. Does Clairvoyance include every time-series model under the sun?\nA1. That is not our purpose in providing the pipeline abstraction (see Section 2: “As a Software Toolkit”), not to mention generally impossible. We do include standard classes of models (e.g. popular deep learning models for prediction), and an important contribution is in unifying all three key tasks involved in a patient’s healthcare lifecycle under a single roof, including the treatment effects pathway and active sensing pathway (both for which we provide state-of-the-art time-series models) in addition to the predictions pathway (see Section 2: “The Patient Journey”, Figures 1–2, as well as Table 1). Moreover, as noted throughout, modules are easily extensible: For instance, if more traditional time-series baselines from classical literature are desired for comparison purposes, existing algorithms from [73–79] can be integrated into Clairvoyance by using wrappers, with little hassle.\nQ2. Isn’t preprocessing, imputation, selection, etc. already always performed?\nA2. Yes, and we are not claiming that there is anything wrong with individual studies per se. However (per Section 2: “As an Empirical Standard”, and Appendix A: Tables 6–7), while current research practices typically seek to isolate individual gains, the degree of clarity and/or overlap in pipeline configurations across studies is lacking. This dearth of empirical standardization may not optimally promote practical assessment/reproducibility, and may obscure/entangle true progress. By providing a software toolkit and empirical standard, constructing an end-to-end solution to each problem is easy, systematic, and self-documenting (see Figures 2–3), and evaluating collections of models by varying a single component ensures that comparisons are standardized, explicit, and reproducible.\nQ3. How about other issues like regulations, privacy, and federated learning?\nA3. Per the discussion in Section 3, Clairvoyance is not a solution for preference-/application-specific considerations such as cohort construction, data cleaning and heterogeneity, patient privacy, algorithmic fairness, federated learning, or compliance with government regulations. 
While such issues are real/important concerns (with plenty of research), they are firmly beyond the scope of our software; it is designed to operate in service to clinical decision support—not at all to replace humans in the loop.\nQ4. What are these interdependencies among components and time steps?\nA4. Componentwise interdependencies occur for any number of reasons. We have discussed several examples (see Section 2: “As an Empirical Standard”), but it is not our mission to convince the reader from scratch: For that, there exists a plethora of existing autoML/medical literature (see e.g. Section 3). However, the pipeline abstraction serves as a succinct and standardized interface to anyone’s favorite autoML algorithm (see Section 2: “As an Optimization Interface”). Moreover, here we do specifically highlight the temporal dimension of model selection opened up by the time-series nature of the pipeline (see Figure 5). In particular, each example in Section 4 specifically illustrates the gains in performance that already occur—ceteris paribus—using a simple approach to SASH as in Figure 4(a).\nQ5. Where is all the background and theory on each module?\nA5. The scope of the software toolkit is purposefully broad, but it is not our intention to provide a technical introduction to each of the topics involved (which would—in any case—be impossible in the scope of a paper). While Clairvoyance lowers the barrier to entry in terms of engineering/evaluation, it is not intended to be used as a black-box solution. For instance, we expect that a user desiring to conduct treatment effects estimation using the CRN component to be familiar with its basic theory and limitations. That said, in addition to the various references provided throughout the description of each aspect of Clairvoyance, the following may serve as more concise background information on original problem formulations and solutions: For treatment effects estimation over time we refer to [31]; for active sensing we refer to [34]; for time-series data imputation we refer to [15]; for interpretation by individualized variable selection we refer to [44]; for autoML in general we refer to [63]; for the pipeline configuration and selection (PSC) problem we refer to Section 3.1 in [67]; and for the stepwise model selection (SMS) problem we refer to Sections 2–3 in [71]; moreover, Figure 5 shows how new problems (e.g. SASH) directly result from combining their optimization domains.\nQ6. How do you know what clinicians want?\nA6. With clinicians as developers/authors, it is our central goal to understand realistic usage scenarios." 
}, { "heading": "I GLOSSARY OF ACRONYMS", "text": "ASAC: Active sensing by actor critic, first defined in [36].\nAPR: Area under the precision-recall curve.\nAUC: Area under the receiver-operating characteristic curve.\nCASH: Combined algorithm selection and hyperparameter optimization, first defined in [63].\nCRN: Counterfactual recurrent network, first defined in [31].\nDKL: Deep kernel learning, first defined in [72].\nFLASH: Fast linear search, first defined in [68].\nGAIN: Generative adversarial imputation network, first defined in [97].\nGANITE: Generative adversarial network for individualized treatment effects, first defined in [98].\nGRU: Gated Recurrent Units, a type of recurrent neural network.\nINVASE: Instance-wise variable selection, first defined in [44].\nLSTM: Long-short term memory, a type of recurrent neural network.\nMAE: Mean absolute error.\nMICE: Multiple imputation by chained equations.\nMissForest: Missing value imputation using random forest.\nMRNN: Multi-directional recurrent neural networks, first defined in [15].\nPSC: Pipeline selection and configuration, first defined in [67].\nRMSE: Root mean squared error.\nRMSN: Recurrent marginal structural network, first defined in [30].\nRPS: Relaxed parameter sharing, first defined in [53].\nSASH: Stepwise algorithm selection and hyperparameter optimization, first defined in Section 2.\nSKL: Structured kernel learning, first defined in [69].\nSMS: Stepwise model selection, first defined in [71].\nTCN: Temporal convolutional network.\nNote: A prefix of “T” to certain techniques simply indicates its temporal counterpart (e.g. “T-GAIN” refers to the method of GAIN using a recurrent neural network for handling the temporal dimension)." } ]
2021
CLAIRVOYANCE: A PIPELINE TOOLKIT FOR MEDICAL TIME SERIES
SP:415de370d5dea4aa3136d79bef9bf04f733d8285
[ "The paper aims to model the posterior utility of showing an item given a static display policy, where the utility function captures both: (a) uncertainty over item dimensions from the user perspective, and (b) influence of the policy on the user. It is motivated by the fact that most recommender systems don't take into account that user may be highly uncertain about value/utility in certain dimensions (e.g., color of a product) while more certain about others. The platform can use this information explicitly while optimizing for what to show." ]
Commercial recommendation can be regarded as an interactive process between the recommendation platform and its target users. One crucial problem for the platform is how to make full use of its advantages so as to maximize its utility, i.e., the commercial benefits from recommendation. In this paper, we propose a novel recommendation framework which effectively utilizes the information of user uncertainty over different item dimensions1 and explicitly takes into consideration the impact of the display policy on the user in order to achieve maximal expected posterior utility for the platform. We formulate the problem of deriving the optimal policy to achieve maximal expected posterior utility as a constrained non-convex optimization problem and further propose an ADMM-based solution to derive an approximately optimal policy. Extensive experiments are conducted over data collected from a real-world recommendation platform and demonstrate the effectiveness of the proposed framework. Besides, we also adopt the proposed framework to conduct experiments intended to reveal how the platform achieves its commercial benefits. The results suggest that the platform should cater to the user's preference on the item dimensions that the user prefers, while for item dimensions where the user has high uncertainty, the platform can achieve more commercial benefits by recommending items with high utilities.
[ { "affiliations": [], "name": "RIOR UTILITY" } ]
[ { "authors": [ "Himan Abdollahpouri", "Masoud Mansoury" ], "title": "Multi-sided exposure bias in recommendation", "venue": "arXiv preprint arXiv:2006.15772,", "year": 2020 }, { "authors": [ "John Barnard", "Robert McCulloch", "Xiao-Li Meng" ], "title": "Modeling covariance matrices in terms of standard deviations and correlations, with application to shrinkage", "venue": "Statistica Sinica,", "year": 2000 }, { "authors": [ "Stephen Boyd", "Neal Parikh", "Eric Chu" ], "title": "Distributed optimization and statistical learning via the alternating direction method of multipliers", "venue": "Now Publishers Inc,", "year": 2011 }, { "authors": [ "Haokun Chen", "Xinyi Dai", "Han Cai", "Weinan Zhang", "Xuejian Wang", "Ruiming Tang", "Yuzhou Zhang", "Yong Yu" ], "title": "Large-scale interactive recommendation with tree-structured policy gradient", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Paul Covington", "Jay Adams", "Emre Sargin" ], "title": "Deep neural networks for youtube recommendations", "venue": "In Proceedings of the 10th ACM conference on recommender systems,", "year": 2016 }, { "authors": [ "Gabriel Dulac-Arnold", "Richard Evans", "Hado van Hasselt", "Peter Sunehag", "Timothy Lillicrap", "Jonathan Hunt", "Timothy Mann", "Theophane Weber", "Thomas Degris", "Ben Coppin" ], "title": "Deep reinforcement learning in large discrete action spaces", "venue": "arXiv preprint arXiv:1512.07679,", "year": 2015 }, { "authors": [ "Huifeng Guo", "Ruiming Tang", "Yunming Ye", "Zhenguo Li", "Xiuqiang He" ], "title": "Deepfm: a factorizationmachine based neural network for ctr prediction", "venue": "arXiv preprint arXiv:1703.04247,", "year": 2017 }, { "authors": [ "Xiangnan He", "Lizi Liao", "Hanwang Zhang", "Liqiang Nie", "Xia Hu", "Tat-Seng Chua" ], "title": "Neural collaborative filtering", "venue": "In Proceedings of the 26th international conference on world wide web,", "year": 2017 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Nicole Immorlica", "Jieming Mao", "Aleksandrs Slivkins", "Zhiwei Steven Wu" ], "title": "Bayesian exploration with heterogeneous agents", "venue": "In The World Wide Web Conference,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Cong Leng", "Hao Li", "Shenghuo Zhu", "Rong Jin" ], "title": "Extremely low bit neural network: Squeeze the last bit out with admm", "venue": "arXiv preprint arXiv:1707.09870,", "year": 2017 }, { "authors": [ "Chao Li", "Zhiyuan Liu", "Mengmeng Wu", "Yuchi Xu", "Huan Zhao", "Pipei Huang", "Guoliang Kang", "Qiwei Chen", "Wei Li", "Dik Lun Lee" ], "title": "Multi-interest network with dynamic routing for recommendation at tmall", "venue": "In Proceedings of the 28th ACM International Conference on Information and Knowledge Management,", "year": 2019 }, { "authors": [ "Yishay Mansour", "Aleksandrs Slivkins", "Vasilis Syrgkanis", "Zhiwei Steven Wu" ], "title": "Bayesian exploration: Incentivizing exploration in bayesian games", "venue": "arXiv preprint arXiv:1602.07570,", "year": 2016 }, { "authors": [ "Qi Pi", "Weijie Bian", "Guorui Zhou", "Xiaoqiang Zhu", "Kun Gai" ], "title": "Practice on long sequential user behavior modeling for click-through rate prediction", "venue": "In 
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Yanru Qu", "Han Cai", "Kan Ren", "Weinan Zhang", "Yong Yu", "Ying Wen", "Jun Wang" ], "title": "Product-based neural networks for user response prediction", "venue": "IEEE 16th International Conference on Data Mining (ICDM),", "year": 2016 }, { "authors": [ "Kan Ren", "Jiarui Qin", "Yuchen Fang", "Weinan Zhang", "Lei Zheng", "Weijie Bian", "Guorui Zhou", "Jian Xu", "Yong Yu", "Xiaoqiang Zhu" ], "title": "Lifelong sequential modeling with personalized memorization for user response prediction", "venue": "In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2019 }, { "authors": [ "Gary R Waissi", "Donald F Rossin" ], "title": "A sigmoid approximation of the standard normal integral", "venue": "Applied Mathematics and Computation,", "year": 1996 }, { "authors": [ "Elizabeth Lai Sum Wong" ], "title": "Active-set methods for quadratic programming", "venue": "PhD thesis, UC San Diego,", "year": 2011 }, { "authors": [ "Chen Xu", "Quan Li", "Junfeng Ge", "Jinyang Gao", "Xiaoyong Yang", "Changhua Pei", "Fei Sun", "Jian Wu", "Hanxiao Sun", "Wenwu Ou" ], "title": "Privileged features distillation at taobao recommendations", "venue": null, "year": 1907 }, { "authors": [ "Xiangyu Zhao", "Long Xia", "Liang Zhang", "Zhuoye Ding", "Dawei Yin", "Jiliang Tang" ], "title": "Deep reinforcement learning for page-wise recommendations", "venue": "In Proceedings of the 12th ACM Conference on Recommender Systems,", "year": 2018 }, { "authors": [ "Guorui Zhou", "Xiaoqiang Zhu", "Chenru Song", "Ying Fan", "Han Zhu", "Xiao Ma", "Yanghui Yan", "Junqi Jin", "Han Li", "Kun Gai" ], "title": "Deep interest network for click-through rate prediction", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Guorui Zhou", "Na Mou", "Ying Fan", "Qi Pi", "Weijie Bian", "Chang Zhou", "Xiaoqiang Zhu", "Kun Gai" ], "title": "Deep interest evolution network for click-through rate prediction", "venue": "In Proceedings of the AAAI conference on artificial intelligence,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Commercial recommendation systems have been widely applied among prevalent content distribution platforms such as YouTube, TikTok, Amazon and Taobao. During the interactive process on the recommendation platform, the users may find contents of their interests and avoid the information overload problem with the help of recommendation services. Meanwhile, the platform may gain commercial benefits from user behaviors on the platform such as clicks and purchases. As the platform may serve millions of users and can determine which contents to be recommended, it naturally has some advantages over individual user. Therefore, it would be crucial for the platform to make full use of its advantages in order to maximize the commercial benefits.\nOne typical advantage of the platform is its information advantage, i.e., they may collect plenty of information over users and items for conducting better recommendation. Typical state-of-the-art recommendation systems (Covington et al., 2016; Guo et al., 2017; Ren et al., 2019; Zhou et al., 2019) always take these information into consideration including user profiles, item features and historical interactions between users and recommended items. It is worth noting that information over item features is always directly incorporated into the recommendation models without considering that the user may be with different levels of uncertainty over different item dimensions (which can be regarded as different hidden attributes describing different high-order features of the item). For instance, when buying a new coat on the platform, a user may be sure that the logistics is very fast as she (he) has bought clothes from the same online store before (i.e., the user is with low uncertainty over the logistics). But she (he) may be uncertain about the quality of the coat since it is of the brand that she (he) does not know much about (i.e., the user is with high uncertainty over the quality). Thus, it would be crucial for the platform to figure out whether it is possible to leverage the user uncertainty over different item dimensions to maximize the platform utility, and if yes, how?\n1Item dimensions: Typical state-of-the-art solutions for recommendation systems always encode each item as an embedding. The item dimensions refer to different dimensions of the item embedding, which can be explained as different high-order features.\nActually, with consideration of the user uncertainty over different item dimensions, we would show that more commercial benefits can be gained from the item dimensions with higher uncertainty.\nAnother advantage of the platform is that it owns the capacity of determining which items to display for the users and thus may affect the users’ behaviors. It has been proved by lots of works (Kamenica & Gentzkow, 2011; Immorlica et al., 2019; Abdollahpouri & Mansoury, 2020) that the display signal itself would highly affect users’ behaviors, and affected behaviors would apparently result in different benefits for the platform. Regarding the recommendation as a game between the platform and the users, it is possible for the platform to achieve more commercial benefits from the game by taking a proper display (recommendation) policy. 
However, though there are works to explore the impact of recommendation policies, it is still not well-studied in recommendation area how to explicitly model and exploit the impact of the display policy over users.\nIn this paper, we propose an uncertainty-aware expected Posterior Utility maximization framework for REcommendation platforms (denoted as PURE in short). We take both the two previously mentioned factors, i.e., user uncertainty over different item dimensions and influence of display policy over the user, into account and introduce a generic utility function which can be flexibly adjusted for different real-world scenarios. Then, we formulate the problem of maximizing expected posterior utility for the platform as a constrained non-convex optimization problem, and correspondingly propose a solution based on Alternating Direction Method of Multipliers (ADMM, Boyd et al. (2011)) to derive the approximately optimal policy. To verify the effectiveness of the proposed framework, extensive experiments are conducted over data collected from a real-world recommendation platform. Furthermore, we also provide practical insights derived from carefully designed experiments and empirically reveal how the platform utilizes its information advantage to achieve more commercial benefits, which may help to better understand and conduct commercial recommendation." }, { "heading": "2 RELATED WORK", "text": "Existing state-of-the-art recommendation systems (Zhou et al., 2018; Pi et al., 2019; Qu et al., 2016) mainly try to make full use of the information advantage of the platform. These works take these information into consideration including user profiles, item features, contextual information and historical interactions between users and recommended items. Typically, some works (Qu et al., 2016; Zhou et al., 2018; Li et al., 2019) focus on how to achieve better feature interactions or conduct better user interest modeling, while some works (Ren et al., 2019; Pi et al., 2019) may pay more attention to utilizing extremely long sequential interactive information. However, most of them ignore the existence of user uncertainty over different item dimensions, which might be crucial to conduct better commercial recommendation.\nIn the research area to explore the display influence to the information receiver, Bayesian Persuasion (Kamenica & Gentzkow, 2011) is one of the most crucial works, which theoretically proves that the information sender may benefit from displaying proper information to the receiver. Some works (Immorlica et al., 2019; Mansour et al., 2016) follow this idea and strive to incentivize exploration via information asymmetry in scenarios such as recommendation. In another research direction that try to develop Reinforcement Learning (RL) based solutions for recommendation scenarios, a series of works (Dulac-Arnold et al., 2015; Zhao et al., 2018; Chen et al., 2019) model the recommendation process as a Markov Decision Process (MDP) and maximize the long-term reward via utilizing learned sequential patterns, which can also be regarded as taking the display (recommendation) influence into consideration to some extent." }, { "heading": "3 METHODOLOGY", "text": "" }, { "heading": "3.1 OPTIMAL POLICY FOR MAXIMIZING PLATFORM’S EXPECTED POSTERIOR UTILITY", "text": "From the perspective of the platform, the optimal recommendation policy is the one with maximal expected utility (i.e., maximal expected commercial benefits). 
As mentioned before, the influence of the display policy on users cannot be ignored, as it strongly affects the commercial benefits of the platform. In this paper, taking the impact of the display policy on users into consideration, we formulate the platform's optimal policy $\pi_u$ for user $u$ over a given item set $\mathcal{I}$ as follows.\n$$\pi_u = \arg\max_{\pi} \sum_{i \in \mathcal{I}} \pi_i U_u(i \mid \text{display}; \pi), \quad \text{s.t.}\ \forall i \in \mathcal{I},\ \pi_i \geq 0 \ \text{and}\ \sum_{i \in \mathcal{I}} \pi_i = 1, \tag{1}$$\nwhere $U_u(i \mid \text{display}; \pi)$ is the posterior utility of recommending item $i$ to user $u$ with consideration of the influence of the display policy $\pi$. With this formulation, the remaining problem is how to model the posterior utility properly. In the following, we illustrate two reasonable assumptions in detail, which make it possible to model the posterior utility with consideration of the user uncertainty over different item dimensions as well as the influence of the display policy on the user.\nAs discussed before, it is crucial to explicitly consider the user uncertainty over different item dimensions when conducting recommendation. For a given user, we assume that the representation of an item is sampled from a multivariate Gaussian distribution and adopt the variances to describe the user uncertainty over different item dimensions, which is formulated as the following assumption.\nAssumption 1 (Assumption of uncertainty (correlation) over different item dimensions). For a user $u$, the representation of item $i$ is sampled from an $n$-dimensional multivariate Gaussian distribution $\mathcal{N}(\mu_{u,i}, \Sigma_{u,i})$, i.e., the probability density function of the representation is:\n$$p_{u,i}(x) = \frac{1}{\sqrt{(2\pi)^n |\Sigma_{u,i}|}}\, e^{-\frac{1}{2}(x - \mu_{u,i})^T \Sigma_{u,i}^{-1}(x - \mu_{u,i})}, \tag{2}$$\nwhere $x \in \mathbb{R}^n$, and $\mu_{u,i}$ and $\Sigma_{u,i}$ denote the mean vector and the covariance matrix respectively.\nThe covariance matrix can be decomposed as $\Sigma_{u,i} = D_{u,i} C_{u,i} D_{u,i}$, where $D_{u,i}$ is the diagonal standard deviation matrix and $C_{u,i}$ is the correlation matrix (see Barnard et al. (2000) for more information). Thus, the covariance matrix can depict the user uncertainty over different item dimensions (with the diagonal standard deviation matrix $D_{u,i}$) as well as the correlation between different item dimensions (with the correlation matrix $C_{u,i}$). Note that we provide a practical method to obtain $\mu_{u,i}$ and $\Sigma_{u,i}$ in Section 4.1, while any other reasonable approach to get $\mu_{u,i}$ and $\Sigma_{u,i}$ can be applied.
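To make Assumption 1 concrete, the following minimal sketch (our illustration; the arrays mu_ui and Sigma_ui are hypothetical stand-ins for the quantities estimated in Section 4.1) samples item representations according to Equation 2 and recovers the per-dimension uncertainty and the correlation matrix from the covariance matrix:

import numpy as np

rng = np.random.default_rng(0)
n = 4                                    # embedding dimension (illustrative)
mu_ui = rng.normal(size=n)               # hypothetical mean vector mu_{u,i}
A = rng.normal(size=(n, n))
Sigma_ui = A @ A.T + n * np.eye(n)       # hypothetical positive-definite Sigma_{u,i}

# Sample item representations x ~ N(mu_{u,i}, Sigma_{u,i}) as in Equation 2
x = rng.multivariate_normal(mu_ui, Sigma_ui, size=1000)

# Decompose Sigma = D C D: D carries per-dimension uncertainty, C the correlations
D = np.diag(np.sqrt(np.diag(Sigma_ui)))
C = np.linalg.inv(D) @ Sigma_ui @ np.linalg.inv(D)
assert np.allclose(D @ C @ D, Sigma_ui)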
From the perspective of users, they may try to understand the display policy of the platform through the interactive process. When an item $i$ is recommended to a user $u$, the user may consider the corresponding display probability, which could influence his or her behavior. One reasonable assumption is that the probability of displaying item $i$, from the perspective of user $u$, is linear in the similarity between the item representation $x$ and the representations of historically recommended items. Without loss of generality, we formulate this assumption as follows.\nAssumption 2 (Assumption of the influence of the display policy over the user). Given display policy $\pi$ and item representation $x$, the probability of recommending item $i$ to user $u$ from the user's perspective is:\n$$p_{u,i}(\text{display} \mid x; \pi) = \Phi(a v_u^T x + b), \tag{3}$$\nwhere display denotes the event of displaying (recommending) the corresponding item to the target user, $a$ and $b$ are scale hyper-parameters with $a > 0$, $\Phi(z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}\, dx$, and $v_u = \mathbb{E}_{i \sim \pi}[\mu_{u,i}] = \sum_{i \in \mathcal{I}} \pi_i \mu_{u,i}$.\nNote that $\Phi$ is the cumulative distribution function of the standard normal distribution, which is widely adopted to map the input into the range $[0, 1]$ in many models such as the probit model, and $v_u$ is the expected value of the item representations w.r.t. the display policy $\pi$ over the item set $\mathcal{I}$. Thus, with $a > 0$, a higher similarity (calculated by the inner product) between the current item representation $x$ and the expected representation of displayed items leads to a higher likelihood, which is reasonable as discussed before.\nSo far, we have presented the two crucial assumptions adopted in our framework. In the following, we introduce the utility function and derive the formula of the posterior utility. To ease the illustration, we denote the utility function of recommending item $i$ to user $u$ given sampled item representation $x$ as $f(w_u, x)$, where $w_u$ is typically the embedding of the user side, and $f(w_u, x)$ depicts the utility (benefits) that the platform can gain from recommending item $i$ to user $u$ with item representation $x$. For instance, when we regard the click-through rate (CTR) as the platform's utility, one simple way is to model $f$ as the inner product of $w_u$ and $x$, where $w_u$ can be regarded as the user preference vector of user $u$. Note that the function $f$ can be flexibly adjusted to fit the requirements of different scenarios. For example, when we try to maximize the gross merchandise volume (GMV) in a recommendation scenario, $f$ can be defined as the product of the corresponding CTR, conversion rate (CVR) and the item price.\nNow we can combine the two assumptions and the utility function $f$ to derive the formula of the posterior utility. By adopting Bayes' theorem and the law of total probability, we present the posterior utility of recommending item $i$ to user $u$ as follows.\n$$U_u(i \mid \text{display}; \pi) = \int_{\mathbb{R}^n} f(w_u, x)\, p_{u,i}(x \mid \text{display}; \pi)\, dx \tag{4}$$\n$$= \int_{\mathbb{R}^n} f(w_u, x)\, p_{u,i}(x)\, \frac{p_{u,i}(\text{display} \mid x; \pi)}{p_{u,i}(\text{display}; \pi)}\, dx \tag{5}$$\nEquation 5 provides a way to model the posterior utility of the platform taking into account both the user uncertainty over different item dimensions and the influence of the display policy on the user. However, it is still challenging to derive the optimal policy $\pi_u$ for a given user $u$, as the right-hand side of Equation 5 is not a closed-form expression (which makes it intractable to calculate the exact value of the posterior utility)." }, { "heading": "3.2 POSTERIOR UTILITY DERIVATION FOR LINEAR AND NON-LINEAR UTILITY FUNCTIONS", "text": "In this section, we present how to compute (estimate) the value of the posterior utility for both linear and non-linear utility functions. To derive an effective calculation formula of $U_u(i \mid \text{display}; \pi)$ for the case with a linear utility function, we first introduce the following lemma.\nLemma 1. $\int_{-\infty}^{+\infty} (c'x + d')\, \Phi(a'x + b')\, \mathcal{N}(x \mid 0, 1)\, dx = \frac{a'c'}{\sqrt{2\pi(a'^2 + 1)}}\, e^{-\frac{b'^2}{2a'^2 + 2}} + d'\, \Phi\big(\frac{b'}{\sqrt{1 + a'^2}}\big),\ \forall a', b', c', d' \in \mathbb{R}$.\nThe detailed proof of Lemma 1 is provided in Appendix A.1 due to page limits.
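As a quick sanity check, Lemma 1 can also be verified numerically; the sketch below (ours, with arbitrary test constants) compares direct quadrature against the stated closed form:

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

ap, bp, cp, dp = 1.3, -0.7, 2.0, 0.5    # arbitrary values of a', b', c', d'

# Left-hand side: integrate (c'x + d') * Phi(a'x + b') * N(x|0,1) numerically
lhs, _ = quad(lambda x: (cp * x + dp) * norm.cdf(ap * x + bp) * norm.pdf(x),
              -np.inf, np.inf)

# Right-hand side: the closed form of Lemma 1
rhs = (ap * cp / np.sqrt(2 * np.pi * (ap ** 2 + 1)) * np.exp(-bp ** 2 / (2 * ap ** 2 + 2))
       + dp * norm.cdf(bp / np.sqrt(1 + ap ** 2)))

assert np.isclose(lhs, rhs)             # agreement up to numerical tolerance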
Lemma 1 reveals that the definite integral $\int_{-\infty}^{+\infty} (c'x + d')\, \Phi(a'x + b')\, \mathcal{N}(x \mid 0, 1)\, dx$ can be computed directly, which is a crucial property for deriving an effective calculation formula of $U_u(i \mid \text{display}; \pi)$ for the case with a linear utility function.\nBy utilizing Lemma 1, the following corollary shows that $p_{u,i}(\text{display}; \pi)$ can be straightforwardly calculated if the covariance matrix is positive definite.\nCorollary 1. $p_{u,i}(\text{display}; \pi) = \Phi\big(\frac{a v_u^T \mu_{u,i} + b}{\sqrt{1 + a^2 v_u^T \Sigma_{u,i} v_u}}\big)$, if $\Sigma_{u,i}$ is positive definite.\nDue to page limits, the detailed proof of Corollary 1 is also presented in Appendix A.2. With Lemma 1 and Corollary 1, we can derive the calculation formula of $U_u(i \mid \text{display}; \pi)$ when the utility function is linear, as illustrated in detail below.\nCorollary 2. $U_u(i \mid \text{display}; \pi) = w_u^T \mu_{u,i} + \frac{a v_u^T \Sigma_{u,i} w_u}{\sqrt{2\pi(a^2 v_u^T \Sigma_{u,i} v_u + 1)}\, \Phi\big(\frac{a v_u^T \mu_{u,i} + b}{\sqrt{a^2 v_u^T \Sigma_{u,i} v_u + 1}}\big)}\, e^{-\frac{(a v_u^T \mu_{u,i} + b)^2}{2(a^2 v_u^T \Sigma_{u,i} v_u + 1)}}$, if $f(w_u, x) = w_u^T x$ and $\Sigma_{u,i}$ is positive definite.\nProof. If $\Sigma_{u,i}$ is positive definite, $\Sigma_{u,i}^{-1}$ is also positive definite. Then we can conduct the Cholesky decomposition $\Sigma_{u,i}^{-1} = A_{u,i}^T A_{u,i}$, where $A_{u,i}$ is an upper triangular matrix with real and positive diagonal entries. Thus, we have:\n$$\int_{\mathbb{R}^n} f(w_u, x)\, p_{u,i}(x)\, p_{u,i}(\text{display} \mid x; \pi)\, dx \tag{6}$$\n$$= \int_{\mathbb{R}^n} w_u^T x\, \Phi(a v_u^T x + b)\, \mathcal{N}(x \mid \mu_{u,i}, \Sigma_{u,i})\, dx \tag{7}$$\n$$= \int_{\mathbb{R}^n} w_u^T (A_{u,i}^{-1} z + \mu_{u,i})\, \Phi(a v_u^T (A_{u,i}^{-1} z + \mu_{u,i}) + b)\, \mathcal{N}(z \mid 0, I)\, dz \tag{8}$$\n$$= w_u^T \mu_{u,i}\, \Phi\Big(\frac{a v_u^T \mu_{u,i} + b}{\sqrt{a^2 v_u^T \Sigma_{u,i} v_u + 1}}\Big) + \frac{a v_u^T \Sigma_{u,i} w_u}{\sqrt{2\pi(a^2 v_u^T \Sigma_{u,i} v_u + 1)}}\, e^{-\frac{(a v_u^T \mu_{u,i} + b)^2}{2(a^2 v_u^T \Sigma_{u,i} v_u + 1)}} \tag{9}$$\nEquation 8 and Equation 9 are derived from Proposition 3 and Proposition 5 respectively. Due to page limits, Proposition 3 and Proposition 5 and their detailed proofs are presented in the Appendix. Combining Corollary 1 and Equation 9, we have:\n$$U_u(i \mid \text{display}; \pi) = \frac{1}{p_{u,i}(\text{display}; \pi)} \int_{\mathbb{R}^n} f(w_u, x)\, p_{u,i}(x)\, p_{u,i}(\text{display} \mid x; \pi)\, dx \tag{10}$$\n$$= w_u^T \mu_{u,i} + \frac{a v_u^T \Sigma_{u,i} w_u}{\sqrt{2\pi(a^2 v_u^T \Sigma_{u,i} v_u + 1)}\, \Phi\Big(\frac{a v_u^T \mu_{u,i} + b}{\sqrt{a^2 v_u^T \Sigma_{u,i} v_u + 1}}\Big)}\, e^{-\frac{(a v_u^T \mu_{u,i} + b)^2}{2(a^2 v_u^T \Sigma_{u,i} v_u + 1)}} \tag{11}$$\nCorollary 2 reveals that the posterior utility can be effectively calculated when $\Sigma_{u,i}$ is positive definite and the utility function $f$ is linear (since it then has a closed-form expression, avoiding the intractable calculation in Equation 5), which makes the result usable in real-world scenarios.\nHowever, when the utility function $f$ is non-linear, it might be challenging or even impossible to calculate the exact value of the posterior utility. To estimate the posterior utility when $f$ is non-linear, we leverage the importance sampling technique. Combining Assumption 2, Equation 5 and Corollary 1, we have:\n$$U_u(i \mid \text{display}; \pi) = \int_{\mathbb{R}^n} f(w_u, x)\, p_{u,i}(x)\, l(x; \pi)\, dx, \tag{12}$$\nwhere $l(x; \pi) = \Phi(a v_u^T x + b) \big/ \Phi\big(\frac{a v_u^T \mu_{u,i} + b}{\sqrt{a^2 v_u^T \Sigma_{u,i} v_u + 1}}\big)$. Thus, computing $U_u(i \mid \text{display}; \pi)$ can be regarded as calculating the expected value of $f(w_u, X)$ w.r.t. the target probability density function $q(X = x) = p_{u,i}(x)\, l(x; \pi)$. Since $p_{u,i}(x)$ is the probability density function of a multivariate Gaussian distribution from which one can easily sample, we can adopt $p_{u,i}(x)$ as the proposal density and, as a result, given a sample $x$, the corresponding importance weight is $l(x; \pi)$. Thus, given a sample set $\mathcal{X}$ consisting of $m$ samples drawn i.i.d. from the distribution $\mathcal{N}(\mu_{u,i}, \Sigma_{u,i})$, we can approximate the posterior utility as:\n$$\hat{U}_u(i \mid \text{display}; \pi) = \frac{1}{\sum_{x \in \mathcal{X}} l(x; \pi)} \sum_{x \in \mathcal{X}} f(w_u, x)\, l(x; \pi). \tag{13}$$
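For illustration, the sketch below (our own; all variable names are hypothetical) computes the closed-form posterior utility of Corollary 2 for a linear utility function and compares it against the self-normalized importance-sampling estimate of Equation 13, which also works for non-linear f:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, a, b = 4, 1.0, -2.0                        # dimension and scale hyper-parameters
w_u, v_u, mu = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
B = rng.normal(size=(n, n))
Sigma = B @ B.T + n * np.eye(n)               # hypothetical positive-definite Sigma_{u,i}

# Closed form (Corollary 2 / Equation 11) for f(w_u, x) = w_u^T x
s = a ** 2 * v_u @ Sigma @ v_u + 1.0
kappa = (a * v_u @ mu + b) / np.sqrt(s)
exact = w_u @ mu + (a * v_u @ Sigma @ w_u) \
        / (np.sqrt(2 * np.pi * s) * norm.cdf(kappa)) * np.exp(-kappa ** 2 / 2)

# Self-normalized importance sampling (Equation 13)
x = rng.multivariate_normal(mu, Sigma, size=100_000)
l = norm.cdf(a * x @ v_u + b) / norm.cdf(kappa)   # importance weights l(x; pi)
estimate = (x @ w_u * l).sum() / l.sum()

print(exact, estimate)                        # the two values should roughly agree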
In this section, we have derived an effective calculation formula for the posterior utility in the case with a linear utility function and provided a sampling-based method to estimate the posterior utility in the case with a non-linear utility function. Note that the sampling-based method can also be applied to estimate the posterior utility for the case with a linear utility function. Combining the results in this section (i.e., Equation 11 and Equation 13) with the formulation of the optimal policy (as shown in Equation 1), we note that the problem of deriving the optimal policy for maximizing the platform's expected posterior utility, formulated in Equation 1, turns into a constrained non-convex optimization problem even for the case with a linear utility function. To solve this problem, an ADMM-based solution is proposed in the next section." }, { "heading": "3.3 AN ADMM-BASED SOLUTION FOR DERIVING APPROXIMATELY OPTIMAL POLICY", "text": "As presented in the previous section, finding the optimal policy to maximize the expected posterior utility of the platform is formulated as a constrained non-convex optimization problem. Noticing that the ADMM technique (Boyd et al., 2011) has been successfully adopted as an effective method to find approximate solutions for constrained non-convex optimization problems (see Leng et al. (2017) for more details), we develop an ADMM-based solution to solve the aforementioned constrained non-convex optimization problem.\nDenoting $g_{u,\mathcal{I}}(\pi) = -\sum_{i \in \mathcal{I}} \pi_i U_u(i \mid \text{display}; \pi)$, we rewrite the constrained non-convex optimization problem (formulated in Equation 1) as follows.\n$$\arg\min_{\pi} g_{u,\mathcal{I}}(\pi), \quad \text{s.t.}\ \forall i \in \mathcal{I},\ \pi_i \geq 0 \ \text{and}\ \sum_{i \in \mathcal{I}} \pi_i = 1. \tag{14}$$\nBy introducing an auxiliary parameter vector $\pi'$, the problem can be reformulated as:\n$$\arg\min_{\pi} g_{u,\mathcal{I}}(\pi'), \quad \text{s.t.}\ \pi' = \pi,\ \forall i \in \mathcal{I},\ \pi_i \geq 0 \ \text{and}\ \sum_{i \in \mathcal{I}} \pi_i = 1. \tag{15}$$\nNote that the target function $g_{u,\mathcal{I}}(\pi')$ and the target policy $\pi$ are linked via the first constraint, i.e., $\pi' = \pi$. We regard the last two constraints as hard constraints which are required to be satisfied at every optimization step. The augmented Lagrangian w.r.t. the first constraint is:\n$$L_\rho(\pi, \pi', \lambda) = g_{u,\mathcal{I}}(\pi') + \lambda^T(\pi' - \pi) + \frac{\rho}{2}\|\pi' - \pi\|_2^2, \tag{16}$$\nwhere $\lambda$ denotes the Lagrangian multipliers and $\rho > 0$ is a hyper-parameter. To achieve an approximate solution, the ADMM method consists of the following three steps in each iteration:\n$$\pi^{(k+1)} := \arg\min_{\pi} L_\rho(\pi, \pi'^{(k)}, \lambda^{(k)}) \tag{17}$$\n$$\pi'^{(k+1)} := \arg\min_{\pi'} L_\rho(\pi^{(k+1)}, \pi', \lambda^{(k)}) \tag{18}$$\n$$\lambda^{(k+1)} := \lambda^{(k)} + \rho(\pi'^{(k+1)} - \pi^{(k+1)}) \tag{19}$$\nWe can observe that the last two steps do not involve optimization w.r.t. $\pi$. Thus, the third step can be achieved by a simple value update, while the second step can be approximately solved by gradient-based methods such as stochastic gradient descent. The optimization problem w.r.t. the first step can be reformulated as:\n$$\arg\min_{\pi} L_\rho(\pi, \pi'^{(k)}, \lambda^{(k)}) = \arg\min_{\pi} \lambda^{(k)T}(\pi'^{(k)} - \pi) + \frac{\rho}{2}\|\pi'^{(k)} - \pi\|_2^2 \tag{20}$$\n$$= \arg\min_{\pi} \frac{\rho}{2}\Big\|\pi'^{(k)} + \frac{\lambda^{(k)}}{\rho} - \pi\Big\|_2^2, \tag{21}$$\n$$\text{s.t.}\ \forall i \in \mathcal{I},\ \pi_i \geq 0 \ \text{and}\ \sum_{i \in \mathcal{I}} \pi_i = 1, \tag{22}$$\nwhich is a convex quadratic programming problem and can be solved by convex quadratic programming methods such as active set methods (Wong, 2011). It is worth noting that the optimization problem w.r.t. the first step can be regarded as finding the nearest point in a given convex set (the set satisfying the two constraints) to a given point (i.e., $\pi'^{(k)} + \frac{\lambda^{(k)}}{\rho}$), i.e., a Euclidean projection onto the probability simplex.
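A minimal sketch of the resulting ADMM loop is given below (our illustration: grad_g is a hypothetical callable returning the gradient of g_{u,I}, plain gradient descent stands in for the Adam optimizer used in Section 4.1, and the projection implements the nearest-point problem of Equations 21-22 via the standard sort-based simplex projection):

import numpy as np

def project_to_simplex(y):
    """Euclidean projection onto {pi : pi_i >= 0, sum_i pi_i = 1} (Equations 21-22)."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u) - 1.0
    j = np.nonzero(u - css / np.arange(1, len(y) + 1) > 0)[0][-1]
    return np.maximum(y - css[j] / (j + 1), 0.0)

def admm_policy(grad_g, n_items, rho=1.0, lr=1e-2, outer_iters=200, inner_iters=50):
    """Approximately solve Equation 14; grad_g(pi) returns the gradient of g_{u,I}."""
    pi = np.full(n_items, 1.0 / n_items)
    pi_p, lam = pi.copy(), np.zeros(n_items)
    for _ in range(outer_iters):
        # Step 1 (Equation 17): projection onto the probability simplex (Equation 21)
        pi = project_to_simplex(pi_p + lam / rho)
        # Step 2 (Equation 18): a few gradient steps on the augmented Lagrangian
        for _ in range(inner_iters):
            pi_p = pi_p - lr * (grad_g(pi_p) + lam + rho * (pi_p - pi))
        # Step 3 (Equation 19): dual variable update
        lam = lam + rho * (pi_p - pi)
    return pi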
By iteratively conducting the aforementioned three optimization steps until convergence (say, after $k'$ steps), we acquire an approximately optimal policy $\pi^{(k')}$ for maximizing the expected posterior utility of the platform, which is demonstrated to be effective in the experiment section." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we present the experiment setup and the experiment results in detail. First, we present the detailed setup of the experiments, including a practical way to obtain the mean vector and covariance matrix for a given pair of user and item, which makes it possible to take the user uncertainty over different item dimensions into consideration. Second, we conduct experiments to verify the effectiveness of the proposed technical solutions involved in our framework, i.e., the effectiveness of the ADMM-based solution for maximizing the platform's expected posterior utility and the sampling-based technique for posterior utility estimation. Third, to verify the superiority of the policy derived from the proposed framework, we conduct experiments comparing our policy with two other heuristic policies. Fourth, to answer the crucial question of how the platform maximizes its commercial benefits, we analyze the relation between the learned policy and two significant factors, the user preference and the user uncertainty over a given item dimension, and provide practical insights to better understand and conduct commercial recommendation." }, { "heading": "4.1 EXPERIMENT SETUP", "text": "To conduct experiments, we first collect click log data of about 1 million users and 5 million items from the GUESS U LIKE scenario of the Taobao Application. To encode the users and the items, we adopt the prevalent Embedding&MLP paradigm (He et al., 2017; Guo et al., 2017) and encode the information of the user and the item sides respectively (Xu et al., 2019). By training on a specific task (e.g., click-through rate prediction), we can obtain the embedding of the user side as well as that of the item side. Without loss of generality, we consider the following two model architectures: i) the model output is the inner product of the user and the item embeddings; ii) the model output is obtained by applying a non-linear function over the user and the item embeddings. Note that these two architectures correspond to the cases with linear and non-linear utility functions respectively. For each pair of user group (divided according to age and gender) and item category, the clicked item set is extracted and the mean vector and the covariance matrix of the corresponding item embeddings are calculated. For a given pair of user and item, the mean vector and the covariance matrix can then be obtained by adopting those of the corresponding pair of user group and item category.\nWithout loss of generality, we empirically set the values of the hyper-parameters $a$ and $b$ to ensure that most of the values of $p_{u,i}(\text{display} \mid x; \pi)$ lie between 0.001 and 0.1 (noting that other reasonable ranges also lead to similar experimental conclusions). Since $\Phi$ is defined as an integral, which might be intractable to calculate, we adopt an effective approximation (Waissi & Rossin, 1996) to ease the calculation.
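For reference, a common sigmoid-type approximation is sketched below; we show the well-known logistic approximation Phi(z) ~ sigmoid(1.702 z) purely for illustration, since the exact polynomial form of Waissi & Rossin (1996) differs slightly:

import numpy as np
from scipy.stats import norm

def phi_approx(z):
    """Logistic (sigmoid) approximation of the standard normal CDF."""
    return 1.0 / (1.0 + np.exp(-1.702 * z))

z = np.linspace(-4.0, 4.0, 81)
print(np.max(np.abs(phi_approx(z) - norm.cdf(z))))   # max error is below 0.01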
To avoid learning a highly unbalanced policy (with probability 1 for one item and 0 for the others) due to unbalanced utilities, we also incorporate an entropy-based regularization term into our model. Specifically, instead of directly adopting the optimization step shown in Equation 18, we add an extra term to smooth the learned policy, and the corresponding optimization step becomes:

π′^(k+1) := argmin_{π′} L_ρ(π^(k+1), π′, λ^(k)) + η ∑_{i∈I} (π′_i − 1/|I|)², (23)

where |I| denotes the number of items in the item set I and η is the parameter controlling the smoothness of the learned policy; it is set to 10.0 empirically in the experiments. Besides, the optimizer we adopt for gradient descent is the Adam optimizer (Kingma & Ba, 2014), and the value of ρ is set to 1.0 empirically, which proves effective in the following experiments." }, { "heading": "4.2 EFFECTIVENESS OF THE PROPOSED TECHNICAL SOLUTIONS", "text": "In this section, we present experiment results that demonstrate the effectiveness of the two technical solutions incorporated into our framework, i.e., the ADMM-based solution for maximizing the expected posterior utility and the importance sampling technique for posterior utility estimation.

To demonstrate the effectiveness of the proposed ADMM-based solution for maximizing the expected posterior utility, we record the optimization curves (i.e., expected posterior utility w.r.t. the number of optimization iterations) and present the results in Figure 1. According to Section 3.2, there are three cases to be considered: i) a linear utility function with the exact posterior utility (calculated using Equation 11); ii) a linear utility function with the estimated posterior utility; iii) a non-linear utility function with the estimated posterior utility. We randomly sample 3 users and 100 items and utilize the proposed ADMM-based optimization method to learn the approximately optimal policy that maximizes the expected posterior utility of the platform. As shown in Figure 1, the expected posterior utility first increases and then converges to its maximum as the number of optimization iterations grows. These results demonstrate that the proposed ADMM-based solution can effectively learn a policy that maximizes the expected posterior utility. Besides, comparing the optimization curves of the three cases, we observe that the variances of the expected posterior utility in the cases with the estimated posterior utility Û_u(i|display; π) (the last two cases) are higher than in the case with the exactly calculated posterior utility U_u(i|display; π) (the first case). The reason is that extra random noise is induced by importance sampling during posterior utility estimation; as a result, the expected posterior utility has higher variance in the cases with estimated posterior utilities.

In order to verify the effectiveness of the adopted importance sampling technique for posterior utility estimation, we record the optimization curves for different numbers of samples per posterior utility estimation for three randomly sampled users and present the results in Figure 2. The linear utility function is adopted in these experiments, which makes it possible to compare the results of using the estimated posterior utility (estimated according to Equation 13) with those of using the exact posterior utility (calculated according to Equation 11).
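For concreteness, below is a minimal sketch of the sampling-based estimator. Equation 13 appears earlier in the paper; the sketch assumes it is the standard self-normalized importance-sampling estimator obtained from Bayes' rule for the posterior p_{u,i}(x|display; π) ∝ p_{u,i}(display|x; π) p_{u,i}(x): draw samples from the Gaussian prior and weight them by the display probability. The function names are our own.

```python
import numpy as np

def estimate_posterior_utility(f, w_u, mu, cov, display_prob,
                               n_samples=1000, seed=0):
    """Assumed form of Eq. 13 (self-normalized importance sampling):
    x_k ~ N(mu, cov) is the prior p_{u,i}(x); each sample is weighted by
    p_{u,i}(display | x_k; pi), here `display_prob(x)` = Phi(a*v_u^T x + b).
    `f(w_u, x)` is the utility function."""
    rng = np.random.default_rng(seed)
    xs = rng.multivariate_normal(mu, cov, size=n_samples)
    w = np.array([display_prob(x) for x in xs])   # importance weights
    u = np.array([f(w_u, x) for x in xs])         # per-sample utilities
    return np.sum(w * u) / np.sum(w)              # self-normalized estimate
```

Under this form, increasing n_samples reduces the variance of the estimate, consistent with the behavior reported for Figure 2.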
From Figure 2 we can observe that even with 1 sample per estimation, the tendency of the optimization curve is similar to that of the case with the exact posterior utility, which verifies that the adopted sampling-based technique for posterior utility estimation is effective. Besides, when varying the number of samples per estimation from 1 to 1000, we observe that the optimization curves become more and more stable. This is because a larger number of samples per estimation leads to a more accurate approximation of the posterior utility, and thus results in a more stable optimization curve. More details, including the Root Mean Square Error (RMSE) between the estimated and the exact values of the posterior utilities for each sampling setting, are provided in Appendix A.4 due to page limits." }, { "heading": "4.3 EXPERIMENTAL COMPARISON WITH HEURISTIC POLICIES", "text": "In this section, we present the experiment details and the corresponding results to verify the effectiveness of the policy derived from the proposed framework. To the best of our knowledge, the presented framework is the first complete work that proposes to maximize the expected posterior utility (which explicitly takes the impact of the display policy into consideration) and develops a corresponding solution to derive the approximately optimal policy. Thus, there is a lack of existing solutions for a straightforward comparison, and we instead verify the effectiveness of the learned policy by comparing it with two heuristic policies. One simple policy for comparison is the random policy, which recommends each item with the same probability. Thus, the random policy can be regarded as a uniform probability distribution over the candidate items. We also incorporate a heuristic prior policy for comparison, where the recommendation probability is higher for items with higher prior utility2. Specifically, the recommendation probability of item i is given by

π^pop_i = exp(U_u(i)/τ) / ∑_{j∈I} exp(U_u(j)/τ), (24)

where τ is the temperature parameter (Hinton et al., 2015) controlling the smoothness of the heuristic prior policy. For a fair comparison, we adjust the value of τ to ensure that the smoothness levels of the heuristic prior policy and our learned policy are comparable.

We randomly sample 1000 users, calculate the mean of the expected posterior utilities for each policy and present the results in Table 1. As shown in Table 1, the policy derived from the proposed framework achieves the highest expected posterior utility compared to the random and the heuristic prior policies in both the linear and the non-linear cases, which demonstrates the superiority of the proposed framework." }, { "heading": "4.4 HOW DOES THE PLATFORM ACHIEVE ITS COMMERCIAL BENEFITS", "text": "In this section, we use the proposed framework to reveal how the platform achieves its commercial benefits, taking both the user preference and the user uncertainty over different item dimensions into account. In the experiments, the utility function is realized by applying the sigmoid activation function to the inner product of the item representation and the user representation (i.e., w_u).

2Similar to the derivation of the posterior utility, we can derive that i) for the linear case, U_u(i) = w_u^T µ_{u,i}; ii) for the non-linear case, U_u(i) = ∫_{R^n} f(w_u, x) p_{u,i}(x) dx.

Each element of w_u can be interpreted as the user's preference over the corresponding item dimension.
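As a concrete reference for the baseline of Section 4.3 and the prior utilities of footnote 2, here is a minimal Python sketch (our own illustration; the Monte Carlo estimate for the non-linear case is an assumption about how the integral is evaluated, and the max-subtraction in the softmax is a standard numerical-stability trick, not part of the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def prior_utility(w_u, mu, cov=None, linear=True, n_samples=1000, rng=None):
    """Footnote 2: U_u(i) = w_u^T mu in the linear case; in the sigmoid
    (non-linear) case, a Monte Carlo estimate over x ~ N(mu, cov)."""
    if linear:
        return float(w_u @ mu)
    rng = rng or np.random.default_rng(0)
    xs = rng.multivariate_normal(mu, cov, size=n_samples)
    return float(sigmoid(xs @ w_u).mean())

def heuristic_prior_policy(utilities, tau=1.0):
    """Eq. 24: temperature-smoothed softmax over prior utilities.
    Larger tau yields a smoother (closer to uniform) policy."""
    z = np.asarray(utilities) / tau
    z = z - z.max()          # shift-invariant; avoids overflow in exp
    p = np.exp(z)
    return p / p.sum()
```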
For a randomly sampled user, we denote the mean of the user preference vectors over items as w̄ and denote its first element as w̄_0, whose value indicates the user's preference over item dimension 0. We vary the value of w̄_0 and record the expected value (w.r.t. the learned policy) of the dimension-0 elements of the item mean vectors, which we denote as v_0. As shown in the left part of Figure 3, as w̄_0 increases, the value of v_0 increases approximately linearly (the Pearson correlation coefficient is 0.9995, which indicates strong linear correlation). By its definition, v_0 reflects to what extent the policy has adjusted to fit item dimension 0 (more details about how the policy adjusts to different levels of user preference or uncertainty are provided in Appendix A.5). Thus, the result suggests that the policy tends to adjust more for dimensions with higher preference values. In other words, the platform can achieve more commercial benefits from item dimensions with stronger user preference. We also vary the mean of the item variance along dimension 0, which indicates the user uncertainty over item dimension 0, and record v_0, as presented in the right part of Figure 3. By a similar analysis, the result indicates that the platform can achieve more commercial benefits from item dimensions over which the user has higher uncertainty.

Therefore, the results in Figure 3 indicate that commercial benefits mainly come from item dimensions with either strong user preference or high user uncertainty, once we take the impact of the display policy on the user into consideration. Note that this result may contribute to a better understanding of how commercial benefits are achieved and to better conducting commercial recommendation. For instance, from the perspective of the platform, one effective way to improve its commercial benefits is to better balance the user experience and its commercial actions, i.e., for item dimensions where the user has strong preferences, the platform should cater to those preferences, while for flexible item dimensions where the user has high uncertainty, the platform may achieve more benefits by recommending items with high platform utilities." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose a novel recommendation framework that takes into account the user uncertainty over different item dimensions and the influence of the display policy on the user, both of which can highly affect the commercial benefits of the platform. We derive the calculation formula of the posterior utility in the case of a linear utility function and provide a sampling-based method to estimate the posterior utility in the case of a non-linear utility function. Based on these results, we formulate the problem of deriving the optimal policy for maximizing the expected posterior utility of the platform as a constrained non-convex optimization problem and further propose an ADMM-based solution to derive an approximately optimal policy. To demonstrate the effectiveness of the technical solutions incorporated into our framework, extensive experiments are conducted on data collected from a real-world recommendation scenario. Furthermore, we provide practical insights about how the platform achieves commercial benefits when both the user preference and the user uncertainty over different item dimensions are taken into consideration, which might contribute to a better understanding and practice of commercial recommendation."
}, { "heading": "A APPENDIX", "text": "A.1 PROOF OF LEMMA 1\nTo prove Lemma 1, we first prove the following two propositions. Proposition 1. ∫ +∞ −∞ Φ(a ′x+ b′)N (x|µ0, σ20) dx = Φ( a′µ0+b\n′√ 1+a′2σ20 ),∀a′, b′, µ0, σ0 ∈ R.\nProof. By introducing two independent variables X and Y which satisfy X ∼ N (µ0, σ0) and Y ∼ N (0, 1), we have∫ +∞\n−∞ Φ(a′x+ b′)N (x|µ0, σ20) dx = EX∼N (µ0,σ0)(Φ(a ′X + b′)) (25)\n= P(Y < a′X + b′) (26) = P(Y − a′X < b′) (27) = P( Y − a′X + a′µ0√\n1 + a′2σ20 < b′ + a′µ0√ 1 + a′2σ20 ) (28)\n= Φ( a′µ0 + b ′√ 1 + a′2σ20 ) (29)\nNote that Equation 26 is gained by the definition of function Φ, i.e., Φ(z) is the probability of sampling from the standard normal distribution with the sampled value less than z. Since X ∼ N (µ0, σ0) and Y ∼ N (0, 1), we have Y−a\n′X+a′µ0√ 1+a′2σ20\n∼ N (0, 1). Thus, Equation 29 can be straightforwardly derived by definition of Φ, which concludes the proof.\nProposition 2. Denoting the error function by erf which satisfies erf(z) = 2√ π ∫ z 0 e−t 2 dt, we\nhave: ∫∞ −∞ xe −x2 2 erf(a′x+ b′) dx = 2a\n′√ a′2+ 12\ne − b′2 2a′2+1 ,∀a′, b′ ∈ R.\nProof. For case with a′ = 0, we have:∫ ∞ −∞ xe −x2 2 erf(a′x+ b′) dx = ∫ 0 −∞ xe −x2 2 erf(b′) dx+ ∫ ∞ 0 xe −x2 2 erf(b′) dx (30)\n= − ∫ ∞\n0\nxe −x2 2 erf(b′) dx+ ∫ ∞ 0 xe −x2 2 erf(b′) dx (31)\n= 0 (32) = 2a′√ a′2 + 12 e − b′2 2a′2+1 (33)\nFor case with a′ 6= 0, we have:∫ ∞ −∞ xe −x2 2 erf(a′x+ b′) dx = −e −x2 2 erf(a′x+ b′) ∣∣∣∞ −∞ − ∫ ∞ −∞ −e −x2 2 2a′√ π e−(a ′x+b′)2 dx\n(34)\n= ∫ ∞ −∞ 2a′√ π e−(a ′2+ 12 )x 2−2a′b′x−b′2 dx (35)\n= ∫ ∞ −∞ 2a′√ π e −(a′2+ 12 )(x+ a′b′ a′2+ 1 2 )2+ a ′2b′2 a′2+ 1 2 −b′2 dx (36)\n= ∫ ∞ −∞ 2a′√ π e a′2b′2 a′2+ 1 2 −b′2 e−(a ′2+ 12 )z 2 dz (37) = 2a′√ π e a′2b′2 a′2+ 1 2 −b′2 √ π\na′2 + 12 (38)\n= 2a′√ a′2 + 12 e − b′2 2a′2+1 (39)\nNote that Equation 34 is derived by applying Newton-Leibniz formula. Thus, combining the above two cases, we have: ∫∞ −∞ xe −x2 2 erf(a′x + b′) dx =\n2a′√ a′2+ 12\ne − b′2 2a′2+1 ,∀a′, b′ ∈ R.\nWith Proposition 1 and Proposition 2, now we prove Lemma 1 as follows.\nProof of Lemma 1. By definition, Φ(z) = 1+erf( z√ 2 )\n2 . Thus, we have:∫ +∞ −∞ (c′x+ d′)Φ(a′x+ b′)N (x|0, 1) dx (40)\n= ∫ +∞ −∞ 1 2 √ 2π (c′x)(1 + erf( a′x+ b′√ 2 ))e− x2 2 dx+ ∫ +∞ −∞ d′Φ(a′x+ b′)N (x|0, 1) dx (41) = a′c′√\n2π(a′2 + 1) e − b′2 2a′2+2 + d′Φ( b′√ 1 + a′2 ) (42)\nThe last equation is derived by utilizing Proposition 1 and Proposition 2, which completes the proof.\nA.2 PROOF OF COROLLARY 1\nIn order to prove Corollary 1, we first prove the following two propositions. Proposition 3. If Σu,i is positive definite, then ∫ Rn h(x)N (x|µu,i,Σu,i) dx = ∫ Rn h(A −1 u,iz + µu,i)N (z|0, I) dz, where Au,i is the upper triangular matrix gained by Cholesky decomposition over Σ−1u,i , i.e., Σ −1 u,i = A T u,iAu,i\nProof. If Σu,i is positive definite, Σ−1u,i is also positive definite. 
Then we can conduct Cholesky decomposition:\nΣ−1u,i = A T u,iAu,i, (43)\nwhereAu,i is an upper triangular matrix with real and positive diagonal entries.\nThus, by adopting integration by substitution, we have:∫ Rn h(x)N (x|µu,i,Σu,i) dx (44)\n= ∫ Rn h(x) 1√ (2π)n|Σu,i| e− 1 2 (x−µu,i) TΣ−1u,i(x−µu,i) dx (45)\n= ∫ Rn h(A−1u,iAu,i(x− µu,i) + µu,i) 1√ (2π)n e− 1 2 (Au,i(x−µu,i)) TAu,i(x−µu,i)|Au,i| dx (46)\n= ∫ Rn h(A−1u,iz + µu,i) 1√ (2π)n e− 1 2z T z dz (47)\n= ∫ Rn h(A−1u,iz + µu,i)N (z|0, I) dz (48)\nThus, if Σu,i is positive definite and Au,i is the upper triangular matrix gained by Cholesky decomposition over Σ−1u,i , then we have ∫ Rn h(x)N (x|µu,i,Σu,i) dx = ∫ Rn h(A −1 u,iz + µu,i)N (z|0, I) dz, which concludes the proof.\nProposition 4. ∫ Rn Φ(v Tx+ a′)N (x|0, I) dx = Φ( a ′√ 1+vT v ), ∀v ∈ Rn and a′ ∈ R. Proof. Denoting ṽi = a′ + ∑n j=i+1 vjxj , we have:∫\nRn Φ(vTx+ a′)N (x|0, I) dx (49)\n= ∫ +∞ −∞ · · · ∫ +∞ −∞ Φ(a′ + n∑ j=1 vjxj)N (x1|0, 1) dx1 · · · N (xn|0, 1) dxn (50)\n= ∫ +∞ −∞ · · · ∫ +∞ −∞ Φ(v1x1 + ṽ1)N (x1|0, 1) dx1 · · · N (xn|0, 1) dxn (51)\n= ∫ +∞ −∞ · · · ∫ +∞ −∞ Φ( ṽ1√ 1 + v21 )N (x2|0, 1) dx2 · · · N (xn|0, 1) dxn (52)\n= ∫ +∞ −∞ · · · ∫ +∞ −∞ Φ( v2x2 + ṽ2√ 1 + v21 )N (x2|0, 1) dx2 · · · N (xn|0, 1) dxn (53)\n= ∫ +∞ −∞ · · · ∫ +∞ −∞ Φ( ṽ2√ 1+v21√\n1 + v22\n1+v21\n)N (x3|0, 1) dx3 · · · N (xn|0, 1) dxn (54)\n= ∫ +∞ −∞ · · · ∫ +∞ −∞ Φ( ṽ2√\n1 + v21 + v 2 2\n)N (x3|0, 1) dx3 · · · N (xn|0, 1) dxn (55)\n=Φ( a′√\n1 + vTv ) (56)\nEquation 52 and Equation 54 are derived from Lemma 1 (or Proposition 1). Similarly, Equation 56 is achieved by applying Lemma 1 (or Proposition 1) iteratively, which completes the proof.\nWith Proposition 3 and Proposition 4, now we can prove Corollary 1 as follows.\nProof of Corollary 1. If Σu,i is positive definite, Σ−1u,i is also positive definite. Then we can conduct Cholesky decomposition: Σ−1u,i = A T u,iAu,i, whereAu,i is an upper triangular matrix with real and positive diagonal entries. Thus, we have:\npu,i(display;π) = ∫ Rn pu,i(display|x;π)pu,i(x) dx (57)\n= ∫ Rn Φ(a(vTux) + b)N (x|µu,i,Σu,i) dx (58)\n= ∫ Rn Φ(avTuA −1 u,iz + av T uµu,i + b)N (z|0, I) dz (59)\n= Φ( avTuµu,i + b√\n1 + a2vTuΣu,ivu ) (60)\nEquation 59 and Equation 60 are derived from Proposition 3 and Proposition 4 respectively, which concludes the proof.\nA.3 PROPOSITION ADOPTED FOR PROVING COROLLARY 2 Proposition 5. ∫ Rn(u Tx + b′)Φ(vTx + a′)N (x|0, I) dx = v Tu√ 2π(vT v+1) e − a′2 2(vT v+1) + b′Φ( a ′√\nvT v+1 ), ∀u,v ∈ Rn and a′, b′ ∈ R.\nProof. 
Denoting ũi = b′ + ∑n j=i+1 ujxj and ṽi = a ′ + ∑n j=i+1 vjxj , we have:∫\nRn (uTx+ b′)Φ(vTx+ a′)N (x|0, I) dx (61)\n= ∫ +∞ −∞ · · · ∫ +∞ −∞ (b′ + n∑ j=1 ujxj)Φ(a ′ + n∑ j=1 vjxj)N (x1|0, 1) dx1 · · · N (xn|0, 1) dxn (62)\n= ∫ +∞ −∞ · · · ∫ +∞ −∞ (u1x1 + ũ1)Φ(v1x1 + ṽ1)N (x1|0, 1) dx1 · · · N (xn|0, 1) dxn (63)\n= ∫ +∞ −∞ · · · ∫ +∞ −∞ ( v1u1√ 2π(v21 + 1) e − ṽ 2 1 2(v21+1) + ũ1Φ( ṽ1√ 1 + v21 ))N (x2|0, 1) dx2 · · · N (xn|0, 1) dxn\n(64)\n= ∫ +∞ −∞ · · · ∫ +∞ −∞ ( v1u1√ 2π(v21 + 1) e − (v2x2+ṽ2) 2 2(v21+1) + (u2x2 + ũ2)Φ( v2x2 + ṽ2√ 1 + v21 ))N (x2|0, 1) dx2\n· · · N (xn|0, 1) dxn (65)\n= ∫ +∞ −∞ · · · ∫ +∞ −∞ ( v1u1√\n2π(v21 + v 2 2 + 1)\ne − (v3x3+ṽ3)\n2\n2(v21+v 2 2+1) + v2u2√ 2π(v21 + v 2 2 + 1) e − (v3x3+ṽ3) 2 2(v21+v 2 2+1) +\n(u3x3 + ũ3)Φ( v3x3 + ṽ3√ 1 + v21 + v 2 2 ))N (x3|0, 1) dx3 · · · N (xn|0, 1) dxn (66)\n= ∫ +∞ −∞ · · · ∫ +∞ −∞ ( v1u1 + v2u2√ 2π(v21 + v 2 2 + 1) e − (v3x3+ṽ3) 2 2(v21+v 2 2+1) + (u3x3 + ũ3)Φ( v3x3 + ṽ3√ 1 + v21 + v 2 2 ))\nN (x3|0, 1) dx3 · · · N (xn|0, 1) dxn (67)\n= vTu√\n2π(vTv + 1) e − a′2 2(vT v+1) + b′Φ( a′√ vTv + 1 ) (68)\nEquation 64 is derived from Lemma 1. Applying Lemma 1 and conducting definite integration over the exponential function, we can derive Equation 66. Similar calculation step can be conducted iteratively to derive Equation 68, which completes the proof.\nA.4 ADDITIONAL EXPERIMENTS TO VERIFY THE EFFECTIVENESS OF THE IMPORTANCE SAMPLING BASED POSTERIOR UTILITY ESTIMATION\nAs described in Section 3.2, importance sampling technique can be adopted to estimate the value of posterior utility. However, it is apparent that the number of samples for each posterior utility estimation would highly affect the results. Extra experiments are conducted to verify the effectiveness of the importance sampling based posterior utility estimation as shown in Table 2. When we vary the number of samples per estimation from 1 to 1000, the Root Mean Square Error (RMSE) between the estimated values and the exact values of the posterior utilities decreases from 0.0369 to 0.0011, which indicates that larger number of samples per estimation leads to more accurate approximation. We also utilize the proposed ADMM-based solution to maximize the expected posterior utility and record the mean and variance of the expected posterior utilities of the last 50 optimization iterations for each sampling setting. For the case with the number of samples per estimation set to 100 (or 1000), the variance of the expected posterior utilities is very small while the learned final value of expected posterior utility is around 0.2681, which is similar to the result of optimization with exact value of posterior utility and thus verifies the effectiveness of the proposed importance sampling based posterior utility estimation.\nA.5 ADDITIONAL EXPERIMENTS TO ILLUSTRATE HOW THE POLICY ADJUST W.R.T. DIFFERENT LEVELS OF USER PREFERENCE OR UNCERTAINTY\nWe also provide analysis about how different levels of user preference would affect the final learned policy over items. For a randomly sampled user, we vary w̄0 from -1.0 to 1.5 and keep the values of other dimensions unchanged. We also randomly pick 16 items from candidate item set, sort these items according to the value of dimension 0 of the item mean vector and denote the sorted items as i1, i2,· · · ,i16. For each case (with specific value for w̄0), we record the final learned policy (recommendation probabilities) over the picked items and present the heat map in Figure 4 where darker color represents larger recommendation probability. 
As shown in Figure 4, as the value of w̄_0 increases, the learned recommendation probabilities of items with a large value along dimension 0 (e.g., i15 and i16) increase while those of items with a small value decrease. Thus, to maximize its commercial benefits, the platform tends to recommend items that match the user's preference on item dimensions with strong user preference. Similar results can be derived when we analyse how the learned policy adjusts to different levels of user uncertainty over item dimensions. Thus, the proposed framework reveals that the policy adjusts more for item dimensions with higher preference or uncertainty so as to maximize the commercial benefits of the platform." } ]
2020
PURE: AN UNCERTAINTY-AWARE RECOMMENDATION FRAMEWORK FOR MAXIMIZING EXPECTED POSTERIOR UTILITY
SP:86435186f0d117c14bbf6d300053dd46884ea061
[ "This paper is a review of model-based approaches of integrating causal inference to reinforcement learning (RL) in different environments (application areas). The authors provide software to analyse how three types of models (“monolithic”, i.e. latent space models without a graph-like structure of the latent space, graph neural networks (GNN) and “modular”, i.e. the C-SWM model (Kipf et al., 2020)) perform in two artificial “environments” devised by the authors (physics and chemistry) based on a number of metrics, some of them also proposed by the authors. The main contributions are the platform for evaluating models in the environments and the insights from the experiments performed on the selected models (taken from existing literature)." ]
Inducing causal relationships from observations is a classic problem in machine learning. Most work in causality starts from the premise that the causal variables themselves are observed. However, for AI agents such as robots trying to make sense of their environment, the only observables are low-level variables like pixels in images. To generalize well, an agent must induce high-level variables, particularly those which are causal or are affected by causal variables. A central goal for AI and causality is thus the joint discovery of abstract representations and causal structure. However, we note that existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs which are impossible to manipulate parametrically (e.g., number of nodes, sparsity, causal chain length, etc.). In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them. In order to systematically probe the ability of methods to identify these variables and structures, we design a suite of benchmarking RL environments. We evaluate various representation learning algorithms from the literature and find that explicitly incorporating structure and modularity in models can help causal induction in model-based reinforcement learning.
[]
[ { "authors": [ "Fabien Baradel", "Natalia Neverova", "Julien Mille", "Greg Mori", "Christian Wolf" ], "title": "Cophy: Counterfactual learning of physical dynamics", "venue": "arXiv preprint arXiv:1909.12000,", "year": 2019 }, { "authors": [ "Peter Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Jimenez Rezende" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Tristan Deleu", "Nasim Rahaman", "Rosemary Ke", "Sébastien Lachapelle", "Olexa Bilaniuk", "Anirudh Goyal", "Christopher Pal" ], "title": "A meta-transfer objective for learning to disentangle causal mechanisms", "venue": null, "year": 1901 }, { "authors": [ "Maxime Chevalier-Boisvert", "Dzmitry Bahdanau", "Salem Lahlou", "Lucas Willems", "Chitwan Saharia", "Thien Huu Nguyen", "Yoshua Bengio" ], "title": "Babyai: First steps towards grounded language learning with a human in the loop", "venue": "arXiv preprint arXiv:1810.08272,", "year": 2018 }, { "authors": [ "Karl Cobbe", "Oleg Klimov", "Chris Hesse", "Taehoon Kim", "John Schulman" ], "title": "Quantifying generalization in reinforcement learning", "venue": "arXiv preprint arXiv:1812.02341,", "year": 2018 }, { "authors": [ "Ishita Dasgupta", "Jane Wang", "Silvia Chiappa", "Jovana Mitrovic", "Pedro Ortega", "David Raposo", "Edward Hughes", "Peter Battaglia", "Matthew Botvinick", "Zeb Kurth-Nelson" ], "title": "Causal reasoning from meta-reinforcement learning", "venue": null, "year": 1901 }, { "authors": [ "Pim de Haan", "Dinesh Jayaraman", "Sergey Levine" ], "title": "Causal confusion in imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Daniel Eaton", "Kevin Murphy" ], "title": "Exact bayesian structure learning from uncertain interventions", "venue": "In Artificial Intelligence and Statistics, pp", "year": 2007 }, { "authors": [ "Scott Elliott Fahlman" ], "title": "A planning system for robot construction tasks", "venue": "Artificial intelligence,", "year": 1974 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Anirudh Goyal", "Alex Lamb", "Jordan Hoffmann", "Shagun Sodhani", "Sergey Levine", "Yoshua Bengio", "Bernhard Schölkopf" ], "title": "Recurrent independent mechanisms", "venue": null, "year": 1909 }, { "authors": [ "Anirudh Goyal", "Alex Lamb", "Phanideep Gampa", "Philippe Beaudoin", "Sergey Levine", "Charles Blundell", "Yoshua Bengio", "Michael Mozer" ], "title": "Object files and schemata: Factorizing declarative and procedural knowledge in dynamical systems", "venue": null, "year": 2006 }, { "authors": [ "Stephen James", "Zicong Ma", "David Rovick Arrojo", "Andrew J Davison" ], "title": "Rlbench: The robot learning benchmark & learning environment", "venue": "IEEE Robotics and Automation Letters,", "year": 2020 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", 
"Laurens van der Maaten", "Li Fei-Fei", "C Lawrence Zitnick", "Ross Girshick" ], "title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Nan Rosemary Ke", "Olexa Bilaniuk", "Anirudh Goyal", "Stefan Bauer", "Hugo Larochelle", "Chris Pal", "Yoshua Bengio" ], "title": "Learning neural causal models from unknown interventions", "venue": null, "year": 1910 }, { "authors": [ "Nan Rosemary Ke", "Jane Wang", "Jovana Mitrovic", "Martin Szummer", "Danilo J Rezende" ], "title": "Amortized learning of neural causal representations", "venue": "arXiv preprint arXiv:2008.09301,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Thomas Kipf", "Elise van der Pol", "Max Welling" ], "title": "Contrastive learning of structured world models", "venue": "arXiv preprint arXiv:1911.12247,", "year": 2019 }, { "authors": [ "Marlos C Machado", "Marc G Bellemare", "Erik Talvitie", "Joel Veness", "Matthew Hausknecht", "Michael Bowling" ], "title": "Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2018 }, { "authors": [ "Sarthak Mittal", "Alex Lamb", "Anirudh Goyal", "Vikram Voleti", "Murray Shanahan", "Guillaume Lajoie", "Michael Mozer", "Yoshua Bengio" ], "title": "Learning to combine top-down and bottom-up signals in recurrent neural networks with attention over modules", "venue": "arXiv preprint arXiv:2006.16981,", "year": 2020 }, { "authors": [ "Suraj Nair", "Yuke Zhu", "Silvio Savarese", "Li Fei-Fei" ], "title": "Causal induction from visual observations for goal directed tasks", "venue": null, "year": 1910 }, { "authors": [ "Alex Nichol", "Vicki Pfau", "Christopher Hesse", "Oleg Klimov", "John Schulman" ], "title": "Gotta learn fast: A new benchmark for generalization in rl", "venue": "arXiv preprint arXiv:1804.03720,", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Ian Osband", "Yotam Doron", "Matteo Hessel", "John Aslanides", "Eren Sezener", "Andre Saraiva", "Katrina McKinney", "Tor Lattimore", "Csaba Szepezvari", "Satinder Singh" ], "title": "Behaviour suite for reinforcement learning", "venue": null, "year": 1908 }, { "authors": [ "Jonas Peters", "Peter Bühlmann", "Nicolai Meinshausen" ], "title": "Causal inference by using invariant prediction: identification and confidence intervals", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2016 }, { "authors": [ "Jonas Peters", "Dominik Janzing", "Bernhard Schölkopf" ], "title": "Elements of causal inference: foundations and learning algorithms", "venue": "MIT press,", "year": 2017 }, { "authors": [ "Danilo J Rezende", "Ivo Danihelka", "George Papamakarios", "Nan Rosemary Ke", "Ray Jiang", "Theophane Weber", "Karol Gregor", "Hamza Merzic", "Fabio Viola", "Jane Wang" ], "title": "Causally correct partial models for reinforcement learning", "venue": 
"arXiv preprint arXiv:2002.02836,", "year": 2020 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In Proceedings of The 31st International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Julian Schrittwieser", "Ioannis Antonoglou", "Thomas Hubert", "Karen Simonyan", "Laurent Sifre", "Simon Schmitt", "Arthur Guez", "Edward Lockhart", "Demis Hassabis", "Thore Graepel" ], "title": "Mastering atari, go, chess and shogi by planning with a learned model", "venue": "arXiv preprint arXiv:1911.08265,", "year": 2019 }, { "authors": [ "David Silver", "Kamil Ciosek" ], "title": "Compositional planning using optimal option models", "venue": "arXiv preprint arXiv:1206.6473,", "year": 2012 }, { "authors": [ "Richard S Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "ACM Sigart Bulletin,", "year": 1991 }, { "authors": [ "Andrea Tacchetti", "H Francis Song", "Pedro AM Mediano", "Vinicius Zambaldi", "Neil C Rabinowitz", "Thore Graepel", "Matthew Botvinick", "Peter W Battaglia" ], "title": "Relational forward models for multi-agent learning", "venue": "arXiv preprint arXiv:1809.11044,", "year": 2018 }, { "authors": [ "Sjoerd Van Steenkiste", "Michael Chang", "Klaus Greff", "Jürgen Schmidhuber" ], "title": "Relational neural expectation maximization: Unsupervised discovery of objects and their interactions", "venue": "arXiv preprint arXiv:1802.10353,", "year": 2018 }, { "authors": [ "Rishi Veerapaneni", "John D Co-Reyes", "Michael Chang", "Michael Janner", "Chelsea Finn", "Jiajun Wu", "Joshua Tenenbaum", "Sergey Levine" ], "title": "Entity abstraction in visual model-based reinforcement learning", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Nicholas Watters", "Loic Matthey", "Matko Bosnjak", "Christopher P Burgess", "Alexander Lerchner" ], "title": "Cobra: Data-efficient model-based rl through unsupervised object discovery and curiosity-driven exploration", "venue": null, "year": 1905 }, { "authors": [ "Terry Winograd" ], "title": "Understanding natural language", "venue": "Cognitive psychology,", "year": 1972 }, { "authors": [ "Patrick H Winston" ], "title": "Learning structural descriptions from examples", "venue": null, "year": 1970 }, { "authors": [ "Kexin Yi", "Chuang Gan", "Yunzhu Li", "Pushmeet Kohli", "Jiajun Wu", "Antonio Torralba", "Joshua B Tenenbaum" ], "title": "Clevrer: Collision events for video representation and reasoning", "venue": null, "year": 1910 }, { "authors": [ "Tianhe Yu", "Deirdre Quillen", "Zhanpeng He", "Ryan Julian", "Karol Hausman", "Chelsea Finn", "Sergey Levine" ], "title": "Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning", "venue": null, "year": 1910 }, { "authors": [ "Xun Zheng", "Bryon Aragam", "Pradeep K Ravikumar", "Eric P Xing" ], "title": "DAGs with NO TEARS: Continuous optimization for structure learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning methods have made immense progress on many reinforcement learning (RL) tasks in recent years. However, the performance of these methods still pales in comparison to human abilities in many cases. Contemporary deep reinforcement learning models have a ways to go to achieve robust generalization (Nichol et al., 2018), efficient planning over flexible timescales (Silver & Ciosek, 2012), and long-term credit assignment (Osband et al., 2019). Model-based methods in RL (MBRL) can potentially mitigate this issue (Schrittwieser et al., 2019). These methods observe sequences of state-action pairs, and from these observations are able to learn a self-supervised model of the environment. With a well-trained world model, these algorithms can then simulate the environment and look ahead to future events to establish better value estimates, without requiring expensive interactions with the environment (Sutton, 1991). Model-based methods can thus be far more sample-efficient than their model-free counterparts when multiple objectives are to be achieved in the same environment. However, for model-based approaches to be successful, the learned models must capture relevant mechanisms that guide the world, i.e., they must discover the right causal variables and structure. Indeed, models sensitive to causality have been shown to be robust and easily transferable (Bengio et al., 2019; Ke et al., 2019). As a result, there has been a recent surge of interest in learning causal models for deep reinforcement learning (de Haan et al., 2019; Dasgupta et al., 2019; Nair et al., 2019; Goyal et al., 2019; Rezende et al., 2020). Yet, many challenges remain, and a systematic framework to modulate environment causality structure and evaluate models’ capacity to capture it is currently lacking, which motivates this paper.\nWhat limits the use of causal modeling approaches in many AI tasks and realistic RL settings is that most of the current causal learning literature presumes abstract domain representations in which the cause and effect variables are explicit and given (Pearl, 2009). Methods are needed to automate the inference and identification of such causal variables (i.e. causal induction) from low-level state\nrepresentations (like images). Although one solution is manual labeling, it is often impractical and in some cases impossible to manually label all the causal variables. In some domains, the causal structure may not be known. Further, critical causal variables may change from one task to another, or from one environment to another. And in unknown environments, one ideally aims for an RL agent that could induce the causal structure of the environment from observations and interventions. In this work, we seek to evaluate various model-based approaches parameterized to exploit structure of environments purposfully designed to modulate causal relations. We find that modular network architectures appear particularly well suited for causal learning. Our conjecture is that causality can provide a useful source of inductive bias to improve the learning of world models.\nShortcomings of current RL development environments, and a path forward. Most existing RL environments are not a good fit for investigating causal induction in MBRL, as they have a single fixed causal graph, lack proper evaluation and have entangled aspects of causal learning. For instance, many tasks have complicated causal structures as well as unobserved confounders. 
These issues make it difficult to measure progress for causal learning. As we look towards the next great challenges for RL and AI, there is a need to better understand the implications of varying different aspects of the underlying causal graph for various learning procedures.

Hence, to systematically study various aspects of causal induction (i.e., learning the right causal graph from pixel data), we propose a new suite of environments as a platform for investigating inductive biases, causal representations, and learning algorithms. The goal is to disentangle distinct aspects of causal learning by allowing the user to choose and modulate various properties of the ground truth causal graph, such as the structure and size of the graph, the sparsity of the graph and whether variables are observed or not (see Figure 1 (a)-(d)). We also provide evaluation criteria for measuring causal induction in MBRL that we argue help measure progress and facilitate further research in these directions. We believe that the availability of standard experiments and a platform that can easily be extended to test different aspects of causal modeling will play a significant role in speeding up progress in MBRL.

Insights and causally sufficient inductive biases. Using our platform, we investigate the impact of explicit structure and modularity for causal induction in MBRL. We evaluated two typical monolithic models (autoencoders and variational autoencoders) and two typical models with explicit structure: graph neural networks (GNNs) and modular models (shown in Figure 5). Graph neural networks (GNNs) have a factorized representation of variables and can model undirected relationships between variables. Modular models also have a factorized representation of variables, along with directed edges between variables, which can model directed relationships such as A causing B, but not the other way around. We investigated the performance of such structured approaches on learning from causal graphs with varying complexity, such as the size of the graph, the sparsity of the graph and the length of cause-effect chains (Figure 1 (a) - (d)).

The proposed environment gives novel insights in a number of settings. In particular, we found that even our naive implementation of modular networks can scale significantly better compared to other models (including graph neural networks). This suggests that explicit structure and modularity, such as factorized representations and directed edges between variables, help with causal induction in MBRL. We also found that graph neural networks, such as the ones from Kipf et al. (2019), are good at modeling pairwise interactions and significantly outperform monolithic models under this setting. However, they have difficulty modeling complex causal graphs with long cause-effect chains, such as the chain graph (a demonstration of chain graphs is found in Figure 1 (i)). Another finding is that evaluation metrics such as likelihood and ranking loss do not always correspond to the performance of these models in downstream RL tasks." }, { "heading": "2 ENVIRONMENTS FOR CAUSAL INDUCTION IN MODEL-BASED RL", "text": "Causal models are frequently described using graphs in which the edges represent causal relationships. In these structural causal models, the existence of a directed edge from A to B indicates that intervening on A directly impacts B, and the absence of an edge indicates no direct interventional impact (see Appendix A for formal definitions).
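As a minimal illustration of these edge and intervention semantics (a toy example of our own, not taken from the paper), consider a two-variable structural causal model A → B: intervening on A changes B's distribution, while intervening on B leaves A untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(do_a=None, do_b=None, n=10000):
    """Toy SCM A -> B with B = A + noise; do(A)/do(B) are hard interventions
    that replace the corresponding mechanism with a constant."""
    a = rng.normal(size=n) if do_a is None else np.full(n, do_a)
    b = a + rng.normal(size=n) if do_b is None else np.full(n, do_b)
    return a, b

# Intervening on A shifts B's mean; intervening on B does not affect A.
print(sample(do_a=2.0)[1].mean())   # ~2.0
print(sample(do_b=2.0)[0].mean())   # ~0.0
```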
In parallel, world models in MBRL describe the underlying data generating process of the environment by modeling the next state given the current state-action pair, where the actions are interventions in the environment. Hence, learning world models in MBRL can be seen as a causal induction problem. Below, we first outline how a collection of simple causal structures can capture real-world MBRL cases, and we propose a set of elemental environments to express them for training. Second, we describe precise ways to evaluate models in these environments." }, { "heading": "2.1 MINI-ENVIRONMENTS: EXPLICIT CASES FOR CAUSAL MODULATION IN RL", "text": "The ease with which an agent learns a task greatly depends on the structure of the environment's underlying causal graph. For example, it might be easier to learn causal relationships in a collider graph (see Figure 1(a)) where all interactions are pairwise, meaning that an intervention on one variable Xi impacts no more than one other variable Xj; hence the cause-effect chain has a length of at most 1. However, causal graphs such as full graphs (see Figure 1 (a)) can have more complex causal interactions, where intervening on one variable can impact up to n − 1 variables for graphs of size n (see Figure 1). Therefore, one important aspect of understanding a model's performance on causal induction in MBRL is to analyze how well the model performs on causal graphs of varying complexity.

Important factors that contribute to the complexity of discovering the causal graph are the structure, size, sparsity of edges and length of cause-effect chains of the causal graph (Figure 1). The presence of unobserved variables also adds to the complexity. The size of the graph increases complexity because the number of possible graphs grows super-exponentially with the size of the graph (Eaton & Murphy, 2007; Peters et al., 2016; Ke et al., 2019). The sparsity of graphs also impacts the difficulty of learning, as observed in (Ke et al., 2019). Given graphs of the same size, denser graphs are often more challenging to learn. Furthermore, the length of the cause-effect chains can also impact learning. We have observed in our experiments that graphs with shorter cause-effect lengths such as colliders (Figure 1 (a)) can be easier to model as compared to chain graphs with longer cause-effect chains. Finally, unobserved variables, which commonly exist in the real world, can greatly impact learning, especially if they are confounding causes (shared causes of observed variables).

Taking these factors into account, we designed two suites of (toy) environments: the physics environment and the chemistry environment, which we discuss in more detail in the following section. They are designed with a focus on the underlying causal graph and thus have a minimalist design that is easy to visualize." }, { "heading": "2.1.1 PHYSICS ENVIRONMENT: WEIGHTED-BLOCK PUSHING", "text": "The physics environment simulates very simple physics in the world. It consists of blocks of different, unique weights. The rule for interaction between blocks is that heavier objects can push lighter ones. Interventions amount to moving a particular block, and the consequence depends on whether the block next to it (if present) is heavier or lighter. For an accurate world model, inferring the weights becomes essential. Additionally, one can allow the weight of the objects to be either observed through the intensity of the color, or unobserved, leading to two environment settings described below.
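To make the push rule concrete, the following toy sketch (our own 1-D simplification of the environment, ignoring chains of pushes) shows how an intervention's effect depends on relative weight:

```python
def push(positions, weights, idx, direction):
    """Toy 1-D weighted-block push rule: block `idx` attempts to move one
    step in `direction` (+1 or -1). A lighter neighbor is pushed along,
    a heavier neighbor blocks the move. `positions` maps block -> cell."""
    target = positions[idx] + direction
    occupant = next((j for j, p in positions.items() if p == target), None)
    if occupant is None:
        positions[idx] = target                   # free cell: just move
    elif weights[idx] > weights[occupant]:
        positions[occupant] = target + direction  # heavier pushes lighter
        positions[idx] = target
    # else: the neighbor is heavier, so the intervention has no effect
    return positions
```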
The underlying causal graph is an acyclic tournament, shown in Figure 3. For more details about the setup, please refer to Appendix E.

Fully observed setting. In the fully observed setting, all objects are given a particular color and the weight of each block is represented by the intensity of the color. Once the agent learns this underlying causal structure, it does not have to perform interventions on new objects in order to infer how they will interact with the others.

Unobserved setting. In this setting, the weight of each object is not directly observable from its color. The agent thus needs to interact with the objects in order to understand the order of weights associated with the blocks. In this case, the weight of objects needs to be inferred through interventions. We consider two sub-divisions of this setting: FixedUnobserved, where there is a fixed assignment between the shapes of the objects and their weights, and Unobserved, where there is no fixed assignment between the shape and the weight, hence making it a more challenging environment. We refer the reader to Appendix E.2 for details." }, { "heading": "2.1.2 CHEMISTRY ENVIRONMENT", "text": "The chemistry environment enables more complexity in the causal structure of the world by allowing arbitrary causal graphs. This is depicted by simple chemical reactions, where the state of an element can cause changes to another variable's state. The environment consists of a number of objects whose positions are kept fixed and which are thus uniquely identifiable.

The interactions between different objects take place according to the underlying causal graph, which can either be a randomly generated DAG or specified by the user. An interaction consists of changing the color (state) of a variable. At this point, the color of all variables affected by this variable (according to the causal graph) can change. Interventions change a block's color unconditionally, thus cutting the graph edges linking it with its parents in the graph. All transitions are probabilistic and defined by conditional probability tables (CPTs). A visualization of the environment can be found in Figure 4.

This environment allows for a complete and thorough testing of causal models, as there are various degrees of complexity that can be easily tuned, such as: (1) Complexity of the graph: We can test any model on many different graphs, thus ensuring that a model's performance is not limited to a few select graphs. (2) Stochasticity: By tuning the skewness of the probability distribution of each object we can test how good a given model is at modelling data uncertainty. In addition to this, we can also tune the number of objects or the number of colors to test whether the model generalizes to larger graphs and more colors. A causally correct model should be able to infer the causal relationships between observed objects, as well as their respective color distribution and its dependence on a causal parent's distribution." }, { "heading": "2.2 EVALUATING CAUSAL MODELS", "text": "In much of the existing literature, evaluation of learned causal models is based on the structural difference between the learned graph and the ground-truth graph (Peters et al., 2016; Zheng et al., 2018). However, this may not be applicable for most deep RL algorithms, as they do not necessarily learn an explicit causal structure (Dasgupta et al., 2019; Ke et al., 2020).
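To make the chemistry environment's generative process concrete, here is a toy sketch under our own simplifying assumptions: nodes are topologically ordered, and a uniform resampling of a child's color stands in for the environment's CPTs.

```python
import numpy as np

def random_dag(n, p, rng):
    """Random DAG on nodes 0..n-1: include edge i -> j (i < j) w.p. p."""
    return {(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p}

def intervene(colors, edges, node, new_color, n_colors, rng):
    """do(color[node] = new_color): set the node's color unconditionally
    (cutting its incoming edges), then update affected descendants in
    topological order. Uniform resampling is a placeholder for real CPTs."""
    colors = list(colors)
    colors[node] = new_color
    changed = {node}
    for i in range(len(colors)):   # nodes 0..n-1 are topologically ordered
        if i == node:
            continue
        if any((p, i) in edges and p in changed for p in range(i)):
            colors[i] = int(rng.integers(n_colors))
            changed.add(i)
    return colors

rng = np.random.default_rng(0)
edges = random_dag(5, p=0.5, rng=rng)
print(intervene([0, 1, 2, 0, 1], edges, node=0, new_color=3, n_colors=4, rng=rng))
```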
Even if a structure is learned, it may not be unique, as several variable permutations can be equivalent, introducing an additional evaluation burden. Another possibility is to exhaustively evaluate models on all possible intervention predictions and all environment states, a process that quickly becomes intractable even for small environments. We therefore propose a few evaluation methods that can be used as surrogate metrics to measure the model's performance on recovering the correct causal structure.

Predicting Intervention Outcomes. While it may not be feasible to predict all intervention outcomes in an RL environment, we propose that evaluating predictions on a subset of interventions provides an informative evaluation. Here, the test data is collected from the same environment used in training, ensuring a single underlying causal graph. Test data is generated from new episodes that are unseen during training. All interventions (actions) in the test episodes are randomly sampled and we evaluate the model's performance on this test set.

Zero Shot Transfer. Here, we test the model's ability to generalize to unseen test environments, where the environment does not have exactly the same causal graph as training, but training and test causal graphs share some similarity. For example, in the observed Physics environment, a model that has learned the underlying causal relationship between color intensity and weight would be able to generalize to new variables with a novel color intensity.

Downstream RL Tasks. Downstream RL tasks that require a good understanding of the underlying causal graph of the environment are also good metrics for measuring the model's performance. For example, in the physics environment, we can provide the model with a target configuration in the form of some specific arrangement of blocks on a grid, and the model needs to perform actions in the environment to reach the target configuration. Models that capture causal relationships between objects should achieve the target configuration more easily (as they can predict intervention outcomes). For more details about this setup, please refer to Appendix C.

Metrics. We also evaluate the learned models on ranking metrics in the latent space as well as reconstruction-based metrics in the observation space (Kipf et al., 2019). In particular, we measure and report Hits at Rank 1 (H@1), Mean Reciprocal Rank (MRR) and reconstruction loss for evaluation in standard as well as transfer testing settings. We report these metrics for 1, 5 and 10 steps of prediction in the latent space (see Appendix B).

3 MODELS
A large variety of neural network models have been proposed as world models in MBRL. These models can roughly be divided into two categories: monolithic models and models that have structure and modularity. Monolithic models typically have no explicit structure (other than layers). Some typical monolithic models are Autoencoders and Variational Autoencoders (Kingma & Welling, 2013; Rezende et al., 2014). Conversely, structured models have explicit architecture built into (or learned by) the model. Examples of such models are ones based on graph neural networks (Battaglia et al., 2016; Van Steenkiste et al., 2018; Kipf et al., 2019; Veerapaneni et al., 2020) and modular models (Ke et al., 2020; Goyal et al., 2019; Mittal et al., 2020; Goyal et al., 2020).
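Before turning to the individual model families, the ranking metrics referenced above (H@1 and MRR) can be sketched as follows — a minimal version assuming, as is common following Kipf et al. (2019), that each predicted latent is ranked against all reference latents in the evaluation batch by Euclidean distance:

```python
import numpy as np

def ranking_metrics(pred, targets):
    """pred: (B, d) predicted latent states; targets: (B, d) true latents.
    Each prediction is ranked against all B targets by Euclidean distance;
    rank 1 means the true target is nearest. Returns (H@1, MRR)."""
    d = np.linalg.norm(pred[:, None, :] - targets[None, :, :], axis=-1)  # (B, B)
    true_dist = d[np.arange(len(pred)), np.arange(len(pred))]            # diagonal
    ranks = 1 + (d < true_dist[:, None]).sum(axis=1)   # strictly closer targets
    return float(np.mean(ranks == 1)), float(np.mean(1.0 / ranks))
```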
We picked some commonly used models from these categories and evaluated their performance to understand their ability for causal induction in MBRL.

To disentangle the architectural biases and effects of different training methodologies, we trained all the models on both likelihood-based and contrastive losses. All models share three common components: encoder, decoder and transition model. We follow a similar training procedure as in Ha & Schmidhuber (2018); Kipf et al. (2019). Details of the architectures as well as the training protocols and losses can be found in Appendix D." }, { "heading": "3.1 MONOLITHIC MODELS", "text": "We evaluate causal induction on two commonly used monolithic models: multilayered autoencoders and variational autoencoders. We follow a similar setup as in Ha & Schmidhuber (2018). These models do not have strong inductive biases other than the number of layers used." }, { "heading": "3.2 MODULAR AND STRUCTURED MODELS", "text": "Several forms of structure can be included in neural networks, including modularity, factorized variables, and directed rules. Taking these three factors into account, we consider two types of structured models in our paper: graph neural networks (GNNs) and so-called modular networks. Graph neural networks (GNNs) (Gilmer et al., 2017; Tacchetti et al., 2018; Battaglia et al., 2018; Kipf et al., 2019) are a widely adopted class of relational models that have factorized representations of variables and model pairwise interactions between objects while being permutation invariant. In particular, we consider the C-SWM model (Kipf et al., 2019), which is a state-of-the-art GNN used for modeling object interactions. Similar to most GNNs, the C-SWM model learns factorized representations of different objects, but for modelling dynamics it considers all possible pairwise interactions, and hence the transition model is monolithic (i.e., not a modular transition model).

Modular networks, on the other hand, are composed of an initial encoder that factorizes inputs (images), followed by a modular transition model (MTM) M. This internal model is tasked with creating separate factored representations for each object in the environment, while taking into account all other objects' representations. This model also learns interactions between objects. The rules learned here are directed rules." }, { "heading": "4 EXPERIMENTS", "text": "Our experiments seek to answer the following questions: (a) Do explicit structure and modularity help causal induction in MBRL? If so, what types of structure provide a good inductive bias for causal induction in MBRL? (b) How do different objective functions (likelihood or contrastive) impact learning? (c) How do different models scale to complex causal graphs? (d) Do prediction metrics (likelihood and ranking metrics) correspond to better downstream RL performance? (e) What are good evaluation criteria for causal induction in MBRL?

We report the performance of our models on both the Physics and the Chemistry environments, and refer the readers to Appendix D for implementation details. All models are trained using the procedure described in Section D.2 and are evaluated based on ranking and likelihood metrics on 1, 5 and 10 step predictions. For the Chemistry environment, we evaluate the models on causal graphs with varying complexity, namely chain, collider and full graphs. These graphs vary in the sparsity of edges and the length of cause-effect chains.
For the Physics environment, we evaluate the model in the fully observed setting as well as the unobserved setting." }, { "heading": "4.1 EXPLICIT STRUCTURE AND CAUSAL INDUCTION", "text": "We found that for both the Physics and the Chemistry environments, models with explicit structure outperform monolithic models on both prediction metrics and downstream RL performance. In particular, models with explicit structure (GNNs and modular models) scale better to graphs of larger size and longer cause-effect chains.

The Physics environment has a complex underlying causal graph (a full graph; see Figure 1 (a)). We found that GNNs performed well in this environment with 3 variables. They achieved good prediction metrics (Figure 7) and high RL performance (Figure 13) even at longer timescales. However, their performance drops significantly on environments with 5 objects, both in terms of prediction metrics (Figure 8) and RL performance (Figure 14). We also see in Figures 8 and 14 that modular models scale much better compared to all other models, suggesting that they hold an advantage for larger causal graphs. Further, modular models and GNNs, when evaluated in zero-shot settings, outperform monolithic models by a significant margin (Figures 19, 20 and Tables 15, 16).

For the chemistry environment, we find that modular models outperform all other models for almost all causal graphs in terms of both prediction metrics (Fig 23) and RL performance (Fig 25). This is especially true on more complex causal graphs, such as chain and full graphs, which have long cause-effect chains. This suggests that modular models scale better to more complex causal graphs.

Overall, these results suggest that structure, and in particular modularity, helps causal induction in MBRL when scaling up to larger and more complex causal graphs. The performance comparisons between modular networks and C-SWM (Kipf et al., 2019) suggest that both factorized representations of variables and directed edges between variables can help causal induction in MBRL." }, { "heading": "4.2 COMPLEXITY OF THE UNDERLYING CAUSAL GRAPH", "text": "There are several ways to vary complexity in a causal graph: the size of the graph, the sparsity of edges and the length of cause-effect chains (Figure 1). Increasing the size of the graph significantly impacts all models' performance. We evaluate models on the Physics environments with 3 objects (Figure 7) and 5 objects (Figure 8) and find that increasing the number of objects from 3 to 5 has a significant impact on performance. Modular models achieve over 90 on ranking metrics over 10-step prediction for 3 objects, while for 5 objects they achieve only 50 (almost half the performance on 3 objects). A similar pattern is found in almost all models. Another factor impacting the complexity of the graph is the length of the cause-effect chain. We see that collider graphs are the easiest to learn, with modular models and autoencoders significantly outperforming all other models (Figure 23). This is because the collider graph has short pairwise interactions, i.e., an intervention on any node in a collider graph can impact at most one other node. Chain and full graphs are significantly more challenging because of longer cause-effect chains. For a chain or a full graph of n nodes, an intervention on the kth node can impact all the subsequent (n − k) nodes. Modeling interventions on chain and full graphs requires modeling more than pairwise relationships, hence making it much more challenging.
We find that modular models slightly outperform all other models on these graphs." }, { "heading": "4.3 PREDICTION METRICS AND RL PERFORMANCE", "text": "As discussed in Section 2.2, there are multiple evaluation metrics based on either prediction metrics or RL performance. The performance of a model on one metric may not necessarily transfer to another. We would like to analyze whether this is the case for the models trained under various environments. We first note that while the ranking metrics were relatively good for most models on the physics environments, most of the models only did slightly better than a random policy on downstream RL, especially on larger graphs (Figures 7 - 12 and Tables 3 - 8 for ranking metrics; Figures 13 - 18 and Tables 9 - 14 for downstream RL). Figures 21, 22 and 27 show scatter plots for each pair of losses, with one loss on each axis. While there is some correlation between ranking metric and RL performance (Modular and GNN; Figure 21), we did not find this trend to be consistent across models and environment settings. We feel that these results give further evidence of the need to evaluate on RL performance." }, { "heading": "4.4 TRAINING OBJECTIVES AND LEARNING", "text": "Likelihood loss and contrastive loss (Oord et al., 2018; Kipf et al., 2019) are two frequently used objectives for training world models in MBRL. We trained the models under each of these objective functions to understand how they impact learning. In almost all cases, models with explicit structure (modular models and GNNs) trained on the contrastive loss perform better in terms of ranking loss compared to those trained on the likelihood loss (refer to Figures 7 - 12). We do not see a very clear trend between the training objective and downstream RL performance, but we do see a few cases where contrastively trained models performed much better than others (refer to Figures 6, 13, 17, 18 and Tables 9, 13, 14).

For other key insights and experimental conclusions on different environments, we refer the readers to Appendix E.6 for the physics environment and Appendix F.3 for the chemistry environment." }, { "heading": "5 RELATED WORK", "text": "Video Prediction and Visual Question Answering. There exist a number of video prediction (Yi et al., 2019; Baradel et al., 2019) and visual question answering (Johnson et al., 2017) datasets that also make use of a blocks world for visual representation. Though these datasets can appear visually similar to ours at first glance, they lack two essential ingredients for systematically evaluating models for causal induction in MBRL. The first is that they do not allow active interventions, which makes it challenging to evaluate model-based reinforcement learning algorithms. Another key point is that these environments do not allow one to systematically perturb different aspects of causal graphs, hence preventing a systematic study of model performance for causal induction. RL Environments. There exist several benchmarks for multi-task learning for robotics (Meta-World (Yu et al., 2019) and RLBench (James et al., 2020)) and for the video gaming domain (Arcade Learning Environment, CoinRun (Cobbe et al., 2018), Sonic Benchmark (Machado et al., 2018), MazeBase (Nichol et al., 2018) and BabyAI (Chevalier-Boisvert et al., 2018)).
However, as mentioned earlier, these benchmarks do not allow one to systematically control different aspects of causal models (such as the structure, the sparsity of edges and the size of the graph), hence making it difficult to systematically study causal induction in MBRL.
Block World. The AI community has been using the “blocks world” for decades as a testbed for various AI problems, including learning theory (Winston, 1970), natural language (Winograd, 1972), and planning (Fahlman, 1974). The blocks world allows one to easily vary different aspects of the underlying causal structure, and also allows interventions to be performed on many high-level variables of the environment, giving rise to a large space of tasks with well-defined relations between them." }, { "heading": "6 DISCUSSIONS AND CONCLUSIONS", "text": "In our work, we focus on studying various model-based approaches for causal induction in model-based RL. We highlighted the limitations of existing benchmarks and introduced a novel suite of environments that can help measure progress and facilitate research in this direction. We evaluated various models under many different settings and discussed the essential problems and challenges in combining both fields, i.e., ingredients that we believe are common in the real world, such as modular factorization of the objects and interactions between objects governed by unknown rules. Using the proposed evaluation framework, we demonstrate that structural inductive biases are beneficial to learning causal relationships and yield significantly improved performance in learning world models.
There are several interesting future directions that can be taken from here. One direction is extending the environments to settings such as meta-learning, where a different causal graph is set for each episode of training. Another interesting direction is extending this to an environment where cause and effect do not happen at a fixed timescale. For example, if a person smokes, it can take a variable amount of time until they get cancer. This is very relevant for reinforcement learning, as it is tightly related to credit assignment in RL." } ]
null
null
SP:ec1ff351fa8fb2cd61f9662a9d0e7db6531fcb4f
[ "The authors present a neural-based decompilation framework. They generate synthetic input data in order to make sure source code examples have a consistent code style that can be more easily learned. They predict types of variables and the actual code in two separate steps. For both steps, they employ a custom transformer architecture where evey second layer of the encoder is a graph neural network. There are two separate encoders of this kind, one conditions on a CFG obtained from static analysis of the input assembly, while the other one conditions on the partial output AST that has been generated thus far." ]
Binary decompilation is a powerful technique for analyzing and understanding software when source code is unavailable. It is a critical problem in the computer security domain. With the success of neural machine translation (NMT), recent efforts on neural-based decompilers show promising results compared to traditional approaches. However, several key challenges remain: (i) Prior neural-based decompilers focus on simplified programs without considering sophisticated yet widely-used data types such as pointers; furthermore, many high-level expressions map to the same low-level code (expression collision), which incurs critical decompilation performance degradation; (ii) State-of-the-art NMT models (e.g., transformer and its variants) mainly deal with sequential data; this is inefficient for decompilation, where the input and output data are highly structured. In this paper, we propose N-Bref1, a new framework for neural decompilers that addresses the two aforementioned challenges with two key design principles: (i) N-Bref designs a structural transformer with three key design components for better comprehension of structural data: an assembly encoder, an abstract syntax tree encoder, and a tree decoder, extending transformer models in the context of decompilation. (ii) N-Bref introduces a program generation tool that can control the complexity of code generation and removes expression collisions. Extensive experiments demonstrate that N-Bref outperforms previous neural-based decompilers by a margin of 6.1%/8.8% accuracy in data type recovery and source code generation. In particular, N-Bref decompiled human-written Leetcode programs with complex library calls and data types with high accuracy.
[]
[ { "authors": [ "Wasi Uddin Ahmad", "Saikat Chakraborty", "Baishakhi Ray", "Kai-Wei Chang" ], "title": "A transformer-based approach for source code summarization", "venue": "arXiv preprint arXiv:2005.00653,", "year": 2020 }, { "authors": [ "Tiffany Bao", "Jonathan Burket", "Maverick Woo", "Rafael Turner", "David Brumley" ], "title": "BYTEWEIGHT}: Learning to recognize functions in binary code", "venue": "In 23rd {USENIX} Security Symposium ({USENIX} Security", "year": 2014 }, { "authors": [ "I.D. Baxter", "A. Yahin", "L. Moura", "M. Sant’Anna", "L. Bier" ], "title": "Clone detection using abstract syntax trees", "venue": "In Proceedings. International Conference on Software Maintenance (Cat. No. 98CB36272),", "year": 1998 }, { "authors": [ "Tal Ben-Nun", "Alice Shoshana Jakobovits", "Torsten Hoefler" ], "title": "Neural code comprehension: A learnable representation of code semantics", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "David Brumley", "Ivan Jager", "Thanassis Avgerinos", "Edward J. Schwartz" ], "title": "Bap: A binary analysis platform", "venue": "Computer Aided Verification,", "year": 2011 }, { "authors": [ "David Brumley", "JongHyup Lee", "Edward J Schwartz", "Maverick Woo" ], "title": "Native x86 decompilation using semantics-preserving structural analysis and iterative control-flow structuring", "venue": "In Presented as part of the 22nd {USENIX} Security Symposium ({USENIX} Security", "year": 2013 }, { "authors": [ "Bo Chen", "Le Sun", "Xianpei Han" ], "title": "Sequence-to-action: End-to-end semantic graph generation for semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Towards synthesizing complex programs from inputoutput examples", "venue": "arXiv preprint arXiv:1706.01284,", "year": 2017 }, { "authors": [ "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Tree-to-tree neural networks for program translation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Execution-guided neural program synthesis", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Cristina Cifuentes" ], "title": "Reverse compilation techniques", "venue": null, "year": 1994 }, { "authors": [ "Marcella Cornia", "Matteo Stefanini", "Lorenzo Baraldi", "Rita Cucchiara" ], "title": "M2̂: Meshed-memory transformer for image captioning", "venue": "arXiv preprint arXiv:1912.08226,", "year": 2019 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime Carbonell", "Quoc V Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": null, "year": 1901 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Li Dong", "Mirella Lapata" ], "title": "Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 33–43, Berlin, Germany, August 2016", "venue": "Association for Computational Linguistics. doi: 10.18653/v1/P16-1004. 
URL https://www.aclweb.org/anthology/P16-1004", "year": 2016 }, { "authors": [ "Li Dong", "Mirella Lapata" ], "title": "Coarse-to-fine decoding for neural semantic parsing", "venue": "arXiv preprint arXiv:1805.04793,", "year": 2018 }, { "authors": [ "MV Emmerik", "Trent Waddington" ], "title": "Using a decompiler for real-world source recovery", "venue": "In 11th Working Conference on Reverse Engineering,", "year": 2004 }, { "authors": [ "Cheng Fu", "Huili Chen", "Haolan Liu", "Xinyun Chen", "Yuandong Tian", "Farinaz Koushanfar", "Jishen Zhao" ], "title": "Coda: An end-to-end neural program decompiler", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Jingxuan He", "Pesho Ivanov", "Petar Tsankov", "Veselin Raychev", "Martin Vechev. Debin" ], "title": "Predicting debug information in stripped binaries", "venue": "In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2018 }, { "authors": [ "Omer Katz", "Yuval Olshaker", "Yoav Goldberg", "Eran Yahav" ], "title": "Towards neural decompilation", "venue": "arXiv preprint arXiv:1905.08325,", "year": 2019 }, { "authors": [ "Jeremy Lacomis", "Pengcheng Yin", "Edward Schwartz", "Miltiadis Allamanis", "Claire Le Goues", "Graham Neubig", "Bogdan Vasilescu" ], "title": "Dire: A neural approach to decompiled identifier naming", "venue": "34th IEEE/ACM International Conference on Automated Software Engineering (ASE),", "year": 2019 }, { "authors": [ "JongHyup Lee", "Thanassis Avgerinos", "David Brumley" ], "title": "Tie: Principled reverse engineering of types in binary programs", "venue": null, "year": 2011 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard Zemel" ], "title": "Gated graph sequence neural networks", "venue": "arXiv preprint arXiv:1511.05493,", "year": 2015 }, { "authors": [ "Zhen Li", "Deqing Zou", "Shouhuai Xu", "Xinyu Ou", "Hai Jin", "Sujuan Wang", "Zhijun Deng", "Yuyi Zhong" ], "title": "Vuldeepecker: A deep learning-based system for vulnerability detection", "venue": "In 25th Annual Network and Distributed System Security Symposium,", "year": 2018 }, { "authors": [ "Zhiqiang Lin", "Xiangyu Zhang", "Dongyan Xu" ], "title": "Automatic reverse engineering of data structures from binary execution", "venue": "In Proceedings of the 11th Annual Information Security Symposium,", "year": 2010 }, { "authors": [ "Wang Ling", "Phil Blunsom", "Edward Grefenstette", "Karl Moritz Hermann", "Tomáš Kočiský", "Fumin Wang", "Andrew Senior" ], "title": "Latent predictor networks for code generation. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2016 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Sifei Luan", "Di Yang", "Celeste Barnaby", "Koushik Sen", "Satish Chandra" ], "title": "Aroma: Code recommendation via structural code search", "venue": "Proceedings of the ACM on Programming Languages,", "year": 2019 }, { "authors": [ "Charith Mendis", "Alex Renda", "Saman Amarasinghe", "Michael Carbin" ], "title": "Ithemal: Accurate, portable and fast basic block throughput estimation using deep neural networks", "venue": "arXiv preprint arXiv:1808.07412,", "year": 2018 }, { "authors": [ "Lili Mou", "Ge Li", "Lu Zhang", "Tao Wang", "Zhi Jin" ], "title": "Convolutional neural networks over tree structures for programming language processing", "venue": "arXiv preprint arXiv:1409.5718,", "year": 2014 }, { "authors": [ "Anh Tuan Nguyen", "Tung Thanh Nguyen", "Tien N. Nguyen" ], "title": "Lexical statistical machine translation for language migration", "venue": "In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering,", "year": 2013 }, { "authors": [ "Maxim Rabinovich", "Mitchell Stern", "Dan Klein" ], "title": "Abstract syntax networks for code generation and semantic parsing", "venue": "arXiv preprint arXiv:1704.07535,", "year": 2017 }, { "authors": [ "Nathan E Rosenblum", "Xiaojin Zhu", "Barton P Miller", "Karen Hunt" ], "title": "Learning to analyze binary computer", "venue": null, "year": 2008 }, { "authors": [ "A. Sanfeliu", "K. Fu" ], "title": "A distance measure between attributed relational graphs for pattern recognition", "venue": "IEEE Transactions on Systems, Man, and Cybernetics,", "year": 1983 }, { "authors": [ "Abigail See", "Peter J. Liu", "Christopher D. Manning" ], "title": "Get to the point: Summarization with pointer-generator networks. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2017 }, { "authors": [ "Zhan Shi", "Kevin Swersky", "Daniel Tarlow", "Parthasarathy Ranganathan", "Milad Hashemi" ], "title": "Learning execution through neural code fusion", "venue": "arXiv preprint arXiv:1906.07181,", "year": 2019 }, { "authors": [ "Eui Chul Richard Shin", "Dawn Song", "Reza Moazzezi" ], "title": "Recognizing functions in binaries with neural networks", "venue": "In 24th {USENIX} Security Symposium ({USENIX} Security", "year": 2015 }, { "authors": [ "Kai Sheng Tai", "Richard Socher", "Christopher D Manning" ], "title": "Improved semantic representations from tree-structured long short-term memory networks", "venue": "arXiv preprint arXiv:1503.00075,", "year": 2015 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Minjie Wang", "Lingfan Yu", "Da Zheng", "Quan Gan", "Yu Gai", "Zihao Ye", "Mufei Li", "Jinjing Zhou", "Qi Huang", "Chao Ma", "Ziyue Huang", "Qipeng Guo", "Hao Zhang", "Haibin Lin", "Junbo Zhao", "Jinyang Li", "Alexander J Smola", "Zheng Zhang" ], "title": "Deep graph library: Towards efficient and scalable deep learning on graphs", "venue": "ICLR Workshop on Representation Learning on Graphs and Manifolds,", "year": 2019 }, { "authors": [ "K. Yakdan", "S. Dechand", "E. Gerhards-Padilla", "M. Smith" ], "title": "Helping johnny to analyze malware: A usability-optimized decompiler and malware analysis user study", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Xuejun Yang", "Yang Chen", "Eric Eide", "John Regehr" ], "title": "Finding and understanding bugs in c compilers", "venue": "In Proceedings of the 32nd ACM SIGPLAN conference on Programming language design and implementation,", "year": 2011 }, { "authors": [ "Fangke Ye", "Shengtian Zhou", "Anand Venkat", "Ryan Marucs", "Nesime Tatbul", "Jesmin Jahan Tithi", "Paul Petersen", "Timothy Mattson", "Tim Kraska", "Pradeep Dubey" ], "title": "Misim: An end-to-end neural code similarity system", "venue": "arXiv preprint arXiv:2006.05265,", "year": 2020 }, { "authors": [ "Pengcheng Yin", "Graham Neubig" ], "title": "A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2017 }, { "authors": [ "Pengcheng Yin", "Graham Neubig" ], "title": "Tranx: A transition-based neural abstract syntax parser for semantic parsing and code generation", "venue": "arXiv preprint arXiv:1810.02720,", "year": 2018 }, { "authors": [ "Yaqin Zhou", "Shangqing Liu", "Jingkai Siow", "Xiaoning Du", "Yang Liu" ], "title": "Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Qihao Zhu", "Yingfei Xiong", "Yican Sun", "Lili Mou", "Lu Zhang" ], "title": "Treegen: A tree-based transformer architecture for code generation", "venue": null, "year": 1911 }, { "authors": [ "Cornia" ], "title": "2019) for training N-Bref and transformer baseline", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Decompilation, which is a process of recovering source code from binary, is useful in many situations where it is necessary to analyze or understand software for which source code is not available. For example, decompilation is highly valuable in many security and forensics applications (Lin et al. (2010); Lee et al. (2011); Brumley et al. (2011)). Given a binary executable, an ideal decompiler generates the high-level program that preserves both the semantics and the functionality of the source code. However, this process is difficult as the data structure and semantics are largely destroyed or obfuscated during the compilation. Inspired by remarkable performance in neural machine translation (NMT) tasks (Liu et al. (2019); Vaswani et al. (2017); Dai et al. (2019); Devlin et al. (2018); Dong & Lapata (2016)), recent works (Fu et al. (2019); Katz et al. (2019)) leverage NMT model for neural-based decompilation and achieve promising performance on small code snippets.\nTo make neural-based decompilation useful in practice, many challenges remain: (C1) Current stateof-the-art neural architectures for machine translation – transformer (Vaswani et al. (2017)) or its variants (Dai et al. (2019); Devlin et al. (2018); Liu et al. (2019)) – focused on sequential data (e.g., language), while neural decompilers deal with data with intrinsic structures (e.g., tree/graph) and long-range dependencies. (C2) The main decompilation task consists of many sub-tasks (e.g., datatype recovery, control/dataflow recovery). Training one neural network cannot solve them all. (C3) Practical data types (e.g., pointers) are not modeled and compiling configurations need to be known beforehand (Fu et al. (2019)). (C4) Due to a lack of unification in terms of library usage, variable type, and/or control-flow complexity, a simple crawling from public repositories does not\n1 N-Bref is the abbreviation for “neural-based binary reverse engineering framework”\nwork well. Source code of different styles can be compiled into identical binary code (i.e., “expression collision” or EC) and yield issues when evaluating decomplied code against original source code. To our best knowledge, no code generation toolkit with configurable code complexity exists.\nIn this paper, we present N-Bref, an end-to-end neural-based decompiler framework that learns to decompile the source code to assembly. For (C1), we design a back-bone structural transformer by incorporating inductive Graph Neural Networks (GNNs) (Hamilton et al. (2017)) to represent the low-level code (LLC) as control/dataflow dependency graphs and source code as Abstract Syntax Tree (AST). To better model long-range correlations in the structural representations, we add a graph neural network after each of the self-attention layers in the transformer. The AST decoder expands the AST of the source code in a tree fashion to better capture the dependency of each predicted node. Also, we adopt memory augmentation (Cornia et al. (2019)) and new tokenizing methods to improve the scalability of our neural networks with the growing size of programs. The backbone network is learned to iteratively generate AST for source code from structured representation of assembly.\nFor (C2) and (C3), we decouple decompilation into two sub-tasks: data type solver (DT-Solver) and source code generator (SC-Gen), both use the same backbone structural transformer with different parameters. 
The output of the data type solver is used as the decoder input of the source code generation. For (C4), we design a dataset generator to generate training data and to test and analyze the performance of different design principles across configurable code complexity. Different from conventional dataset generators (Yang et al. (2011); IntelC++compiler (2017)) used in programming language studies, our generator produces code styles similar to those written by human programmers, has a unified source code representation that avoids EC, has configurable complexity and data types to facilitate factor analysis, and is specifically designed for learning-based methodologies.
Extensive experiments show that on our new metrics, N-Bref outperforms the transformer baseline / the previous neural-based decompiler (Fu et al. (2019)) by 3.5%/6.1% and 5.5%/8.8% in data type recovery and source code generation tasks, respectively. Furthermore, on 5 human-written Leetcode solutions, N-Bref shows 4.1%/6.0% and 6.0%/9.7% margins over the transformer / the previous neural decompiler in data type recovery and source code generation, respectively. We also perform a comprehensive study of the design components in neural-based decompilers across different dataset configurations. In summary, this paper makes the following contributions:
We construct an end-to-end decompilation system by integrating an LLC encoder, an AST encoder, an AST decoder, and a set of novel embedding methods in a holistic manner. Our new architectures bridge the gap between low-level code and high-level code by transforming both of them into a graph space.
We perform a comprehensive analysis of the influence of each neural-based decompiler design component on the overall program recovery accuracy across different dataset configurations. We corroborate the design performance on various generated benchmarks and Leetcode tasks.
We boost decompilation performance by decomposing the decompilation process into separate tasks: data type recovery and AST generation. In addition, we present corresponding new metrics to evaluate data type recovery and source code generation.
We develop the first dataset generation tool for neural-based decompiler development and testing. It randomly generates programs with configurable complexity and data types; it also unifies the source code representation to prevent “expression collision”." }, { "heading": "2 PRELIMINARIES OF DECOMPILERS", "text": "Decompilation takes an executable file as input and attempts to create high-level source code that is more semantically meaningful and can be compiled back. Figure 1 shows a low-level code snippet disassembled from a stripped binary and the corresponding high-level program.
A commonly used low-level code (LLC) is assembly (ASM). An assembly program is a sequence of instructions that can be executed on a particular processor architecture (e.g., MIPS, x86-64). The first token of each instruction is called an “opcode”, which specifies the operation to be performed by the instruction. Many instructions in a program operate on processor registers (a small amount of fast storage in the processor) or immediate values to perform arithmetic operations, such as shifting (e.g., shl, shr), floating-point multiplications (e.g., mulss), etc. 
Other instructions include (1) memory instructions that load (Figure 1(b), Line 1) or store (Line 9) data from memory/register to register/memory; (2) branch instructions that conditionally (Line 6) or unconditionally (Line 10) redirect program execution to a different sequence.
2 Complete assembly code and graph are shown in Appendix H & I.
Each instruction has a certain internal structure, depending on the opcode. For example, in Line 8 of Figure 1(b), the first operand is a floating-point value in memory, and mulss multiplies this value with the destination register (xmm0) and stores the result back to xmm0. Besides, connections also exist between instructions: (i) branch instructions (e.g., je, jmp) reveal the ‘control flow’ of the high-level program; (ii) the register that stores the new value of mulss (Line 8) is consumed later as a source register (Line 9). These data movements reveal the ‘data flow’ of the program. In this paper, we formulate the low-level instructions as a graph, using the instruction structure, control flow, and data flow between nodes, as shown in Figure 1(b).
High-level programming languages can be represented by their equivalent abstract syntax trees (ASTs) (Baxter et al. (1998)) during code generation (Figure 1(a)). This representation has many advantages over a sequential representation: (i) adjacent nodes are logically closer in an AST compared with sequential representations, (ii) error propagation in sequential expansion can be alleviated in a tree decoder, and (iii) the AST grammar helps prevent erroneous predictions." }, { "heading": "3 N-BREF OVERVIEW", "text": "In this section, we provide an overview of our design components with an illustrative example. Figure 2 shows an example of the prediction procedure.
The Backbone Structural Transformer. Our structural transformer has three components: (1) an LLC encoder, (2) an AST encoder, and (3) an AST decoder (detailed in Sec. 5). The LLC encoder takes as input the low-level instructions converted from binaries using a disassembler. The AST encoder takes a previous (partial) AST as input, and the predictions of the AST decoder are AST nodes, which can be equivalently converted to the high-level program. As mentioned earlier, we formulate the input low-level code as graphs and the high-level code as tree structures.
As the AST of the data declarations is very distinct from the rest of the code (Figure 1(a)) in a high-level program, we decompose decompilation into two tasks: the data type solver (DT-Solver) and the source code generator (SC-Gen). Both have the backbone structural transformer.
Prediction Procedure. Figure 1 shows an example of the code recovery process of N-Bref. The assembly graph and the high-level AST are the inputs of the LLC encoder and the AST encoder, respectively. The input of the AST decoder is the tree path from the root node to the expansion node. Initially, a single root node is fed into the AST encoder/decoder. Once a new node is generated by the decoder in each step, we update the AST and use it as the AST encoder input in the next prediction step. We expand the AST in a breadth-first (BFS) fashion.
An AST contains explicit terminal nodes, i.e., tokens with no children, such as registers, numerics, variable references, and variable types. Non-terminal nodes (e.g., the binary operator ‘=’) must have children, otherwise there is a syntax error. A branch stops expanding when its leaf nodes are all terminal nodes. Note that during training, we apply ‘teacher forcing’ by attaching the correct node label to the AST encoder at each step. 
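A minimal Python sketch of this expansion loop follows; it is our reading of the procedure, the "model" callable is a hypothetical stand-in for the LLC encoder, AST encoder, and AST decoder, and we let each node emit children until an ‘eos’ token.

from collections import deque
from itertools import count

def expand_ast(model, asm_graph, terminal_types, root_label="prog"):
    # BFS expansion of the output AST; nodes are (unique id, label) pairs
    # so that repeated labels (e.g., two '=' operators) stay distinct.
    ids = count()
    root = (next(ids), root_label)
    children, parent = {root: []}, {root: None}

    def path_to(node):
        # Root-to-node label path, fed to the AST decoder as its query.
        p = []
        while node is not None:
            p.append(node[1])
            node = parent[node]
        return p[::-1]

    queue = deque([root])
    while queue:
        node = queue.popleft()
        while True:
            # Hypothetical interface: predict the label of node's next child
            # given the assembly graph, the partial AST, and the tree path.
            label = model(asm_graph, children, path_to(node))
            if label == "eos":              # this node has no further children
                break
            child = (next(ids), label)
            children[node].append(child)
            children[child], parent[child] = [], node
            if label not in terminal_types:
                queue.append(child)         # only non-terminals expand further
    return children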
(Appendix E gives the formal version of this algorithm.)
Cascading DT-Solver and SC-Gen. As shown in Figure 1, we divide the AST into two parts: (i) the AST of the data types and (ii) the AST of the main code body. Each part is generated using the DT-Solver and the SC-Gen, respectively. This method allows the network to focus on each task individually and resolves more complicated data types. During testing, the DT-Solver first generates the left part of the AST in Figure 1, then the SC-Gen continues the expansion from this intermediate result. During training, the initial data type input to the SC-Gen is the program's golden (ground-truth) data types." }, { "heading": "4 METHODOLOGY: DATA GENERATOR", "text": "In this section, we detail the data generator designed in N-Bref. We design the data generator so that it has no expression collision (EC). For example, ‘unary’ operators are converted to ‘binary’ operators (e.g., i++ becomes i = i + 1) and all ‘while’ loops are transformed into ‘for’ loops. Experimentally, we observe that our data generator is free of EC and performance improves. EC hurts performance because (1) the same input assembly can be mapped to multiple equivalent outputs; (2) extra high-level semantics result in extra token dimensions; (3) training under EC is both difficult and slow due to label adjustment at runtime.
The generator is configurable with multiple hyper-parameters (Table 1), which makes it easy for N-Bref to accommodate different decompilation tasks. It also allows us to analyze which components in the pipeline improve scalability and performance (see Sec. 6).
For each data point in the dataset, we sample b^s_depth, b^s_size and b^s_num from a uniform distribution between 1 and a user-specified maximal value (Table 1). The number of sampled variables (var^s_num) of a program is related to b^s_num and b^s_depth following a Poisson distribution (equations in Appendix D). The generator also takes the libraries lib_in that are pre-defined by the user as potential generated components. If a function call is sampled, the data generator filters out the variables that do not match its input/output types (Line 4 of Figure 1(a)).
In summary, b_depth and E_c control the difficulty of the control/data flow, while b_size and b_num control the length of the code. For example, the code snippet in Figure 1(a) has a configuration of [E_c, b_depth, b_size, b_num] = [2, 1, 1, 1]. Note that in N-Bref, the program is compiled using gcc with no optimizations. Previous works (Brumley et al. (2013); Lee et al. (2011); Lin et al. (2010)) also disabled optimizations, because many optimizations change variable types and rewrite source code (e.g., loop-invariant optimization), which would result in unfair accuracy evaluations." }, { "heading": "5 METHODOLOGY: PIPELINE", "text": "Here, we present the details of the backbone structural transformer in N-Bref.
LLC Encoder. As shown in Figure 1(b), we first formulate the assembly code as a graph by adding the following edges between nodes: (i) between the ‘opcode’ node of branch instructions and all their possible next instructions (control flow); (ii) between an instruction's ‘opcode’ and its ‘operands’ (i.e., registers / immediate values); (iii) between the same register node of different instructions (register read-after-write), which indicates data dependency (a small construction sketch is given below). 
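The following is a sketch of this graph construction; it is our illustration rather than the paper's code, the instruction format and the branch-opcode set are assumptions, and the read-after-write rule is simplified to chaining successive occurrences of the same register.

import networkx as nx

BRANCH_OPS = {"jmp", "je", "jne", "jle"}  # illustrative subset of x86-64 branches

def build_asm_graph(insts):
    # insts: list of dicts like {"op": "movss", "args": ["%xmm0", "-36(%rbp)"],
    # "jump_to": []}, in program order.
    g = nx.Graph()
    last_use = {}  # register name -> most recent operand node touching it
    for i, ins in enumerate(insts):
        op_node = ("op", i, ins["op"])
        g.add_node(op_node)
        # (i) control flow: fall-through successor plus any branch targets
        succs = [j for j in [i + 1] if j < len(insts)]
        if ins["op"] in BRANCH_OPS:
            succs += ins["jump_to"]
        for j in succs:
            g.add_edge(op_node, ("op", j, insts[j]["op"]))
        for k, arg in enumerate(ins["args"]):
            arg_node = ("arg", i, k, arg)
            g.add_edge(op_node, arg_node)       # (ii) opcode-operand edges
            if arg.startswith("%"):             # bare x86 registers only
                if arg in last_use:             # (iii) same-register data flow
                    g.add_edge(arg_node, last_use[arg])
                last_use[arg] = arg_node
    return g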
Different from Shi et al. (2019), we do not add redundant pseudo nodes to indicate node positions, because this method is not scalable for long programs due to the exponential size of the input when pseudo nodes are used. Instead, we directly concatenate all one-hot meta-features of a node together, namely the register/instruction type (var_t / inst), the position in the instruction (n_pos), the node id (n_id), and the numerical field (n_num), during the tokenizing process. If the token is a number, we represent it in binary format to help the transformer generalize to unseen numerical values (e.g., 12 is represented as n_num = [0,0,0,0,1,1,0,0], a 16-by-1 vector). This representation method can greatly reduce the length of the transformer input and makes the decompiler more robust for long programs.
The tokenized vector of each node (h_0 = [var_t; inst; n_pos; n_block; n_id; n_num; n_zeros]^T, h_0 ∈ R^{d×1}) is fed into an embedding graph neural network (GNN), GraphSAGE (Hamilton et al. (2017)), an inductive framework that can generalize representations to unseen graphs. We pad n_zeros to the input vector h_0 to match the GNN output feature size d. Note that to better represent the data flow in assembly code, we leverage a character embedding for registers (var_t = [c_1; c_2; c_3]). For instance, if the register is $rax, we break it into ‘r’, ‘a’, ‘x’. That is because the naming policy of x86-64 (also MIPS) hardware registers indicates their underlying connections: $eax is the lower 32 bits of register $rax, and $ax/$ah is the lower 16/8 bits of register $eax.
After obtaining the assembly graph V and each node's representation h_0, each node v (where v ∈ V) aggregates the feature representations of its sampled neighbors:
h^l_{N(v)} = max(σ(W^l h^l_u + b^l)), ∀u ∈ N(v)   (1)
Here, h^l_u represents the hidden state of a node's neighbors and N(v) represents the set of neighbors of the node v. W^l is a trainable matrix (d-by-d), b^l is a bias vector, and σ is the sigmoid activation function. We use element-wise max-pooling (max) as the aggregator to collect the states from the neighbors. The aggregation vector is concatenated with the current state of the node h^l_v as the input to a fully-connected layer to obtain the new state:
h^{l+1}_v = W^{l+1}([h^l_v, h^l_{N(v)}])   (2)
Here, W^{l+1} ∈ R^{d×2d} is the trainable embedding matrix and h^{l+1}_v (a d-by-1 vector) is the output of the current GNN layer. [·, ·] is the concatenation operation.
AST Encoder. The AST encoder encodes the AST from the AST decoder to guide future tree expansions. In this work, we treat the AST as a graph (V) and embed it using a GNN in the same way as the LLC encoder, following Eq. (1)(2). The input to the GNN includes the tokenized AST node feature (n_feat) and a boolean indicating whether the node is expanded in this step (n_expand). The input vector h_v = [n_expand; n_feat] (v ∈ V) is fed into a GNN and the output (h'_v) is added to the positional encoding result:
h''_v = h'_v + W_1 h^depth_v + W_2 h^idx_v   (3)
Here h^depth_v and h^idx_v are the one-hot vectors of node v's depth in the tree and its position among its parent's children. W_1 and W_2 are trainable embedding matrices. The output hidden states are fed into our self-attention module (Sec. 5.1). At the end of the AST encoder, we integrate the AST encoder output (H_ast) and the LLC encoder output (H_llc) using an additional multi-head attention layer with H_llc as inputs K and V, and H_ast as Q (Figure 3). The result (H'_ast) is used by the networks downstream.
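A compact PyTorch sketch of one such embedding layer (Eqs. (1)-(2)) might look as follows; the neighbor-list interface is our assumption, and the actual implementation uses GraphSAGE through the DGL library.

import torch
import torch.nn as nn

class SageLayer(nn.Module):
    # One layer following Eq. (1)-(2): sigmoid-transform neighbor states,
    # element-wise max-pool them, concatenate with the node's own state,
    # and apply a linear map.
    def __init__(self, d):
        super().__init__()
        self.w_l = nn.Linear(d, d)                     # W^l and b^l in Eq. (1)
        self.w_next = nn.Linear(2 * d, d, bias=False)  # W^{l+1} in Eq. (2)

    def forward(self, h, neighbors):
        # h: [N, d] node states; neighbors[v]: index list of v's neighbors
        # (assumes every node has at least one neighbor, e.g., a self-loop).
        msg = torch.sigmoid(self.w_l(h))
        h_nv = torch.stack([msg[nb].max(dim=0).values for nb in neighbors])
        return self.w_next(torch.cat([h, h_nv], dim=-1))  # h^{l+1}, Eq. (2)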
AST Decoder. The AST decoder takes the encoding result from the previous stage as input. The querying node of the AST decoder is represented as a path from the root to itself, using the same methods proposed in Zhu et al. (2019) and Chen et al. (2018a). This method reduces the length of the input to the decoder. The results from the low-level code encoder H_llc and the AST encoder H'_ast are integrated into the decoder using two attention layers following Eq. (4), as shown in Figure 3. The output of the AST decoder is mapped into its output space of dimension d_o×1 using another fully-connected layer, where d_o is the number of possible tokens of the high-level code. After the new prediction is generated, we update the AST for the next time step (Figure 2)." }, { "heading": "5.1 MEMORY AND GRAPH AUGMENTED SELF-ATTENTION MODULE", "text": "Decompilation is hardly a word-by-word translation like natural language. Each individual token in the low-level code or the AST must be interpreted in the context of its surrounding semantics. To capture the prior knowledge of programming structures and to emphasize the node connections after embedding, we leverage two additional modules in the transformer attention: (i) memory augmentation and (ii) graph augmentation. The formal descriptions are given below.
Memory augmentation. We propose a memory-augmented attention layer similar to the method in Cornia et al. (2019). The injected prior information is trained and does not depend on the input data. The traditional transformer's building block is the self-attention layer, which takes three sets of vectors (queries Q, keys K and values V) as input. The layer first computes the similarity distribution between Q and K and uses the resulting probabilities to compute a weighted sum of V (equations in Vaswani et al. (2017)). In N-Bref, we add two trainable matrices per head as an extra input to the transformer for memory augmentation, and the computation is adjusted to:
H = MultiHead(Q, K, V) = Concat(head_1, ..., head_t) W^O   (4)
head_i = Attention(Q', K', V') = softmax(Q'K'^T / √d) V'   (5)
where Q' = Q W_qi, V' = [V W_vi, M_vi], K' = [K W_ki, M_ki]   (6)
Here, t is the number of parallel attention heads. (W_qi, W_ki, W_vi) are trainable matrices with a dimension of d × (d/t). W^O has the dimension d × d. M_vi, M_ki ∈ R^{d_m×d}, and d_m controls the number of memory slots. Note that we remove the positional embedding of the original transformer for the LLC encoder, as the position information is integrated into the GNN embedding.
Graph augmentation. We propose to emphasize the connections of assembly nodes after the attention layer. The output of multi-head attention layer s can be viewed as a matrix H_t = [h_{s,0}, h_{s,1}, ..., h_{s,N}], H_t ∈ R^{d×N}, where N is the number of nodes in the assembly graph. We first convert H_t back to a graph structure using the edge information of the assembly code. The new graph with connected hidden states is the input to an additional GNN layer (Eq. (1)(2)) after each self-attention layer.
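As an illustration, a single memory-augmented head (Eqs. (4)-(6)) can be sketched in PyTorch as below; the per-head dimensionality and the scaling constant are our assumptions, and the full module would run t such heads in parallel.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentedHead(nn.Module):
    # One attention head with learned, input-independent memory slots
    # appended to the projected keys and values.
    def __init__(self, d, d_head, d_m):
        super().__init__()
        self.wq = nn.Linear(d, d_head, bias=False)         # W_qi
        self.wk = nn.Linear(d, d_head, bias=False)         # W_ki
        self.wv = nn.Linear(d, d_head, bias=False)         # W_vi
        self.m_k = nn.Parameter(torch.randn(d_m, d_head))  # M_ki
        self.m_v = nn.Parameter(torch.randn(d_m, d_head))  # M_vi

    def forward(self, q, k, v):          # q: [Lq, d]; k and v: [Lk, d]
        q_, k_, v_ = self.wq(q), self.wk(k), self.wv(v)
        k_ = torch.cat([k_, self.m_k], dim=0)  # keys gain d_m memory slots
        v_ = torch.cat([v_, self.m_v], dim=0)  # so do the values
        att = F.softmax(q_ @ k_.t() / k_.size(-1) ** 0.5, dim=-1)
        return att @ v_                  # one head_i of Eq. (5)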
" }, { "heading": "6 EVALUATION", "text": "" }, { "heading": "6.1 EXPERIMENTAL SETUP", "text": "We assess the performance of N-Bref on various benchmarks generated by our dataset generator with various difficulty levels, and on Leetcode solutions (Problems (2017); details in Appendix B), as shown in Table 2. Each binary is disassembled using the GNU binary utilities to obtain the assembly code. To generate the AST of the C code, we implement the parser using the clang compiler bindings for Python. Our dataset generator is built on csmith (Yang et al. (2011)), a code generation tool for compiler debugging. The original tool is not suitable for our case, and thus N-Bref modifies most of the implementation for decompilation tasks. For the neural network setting discussed in Figure 3, we choose [N_1, N_2, N_3, t] = [3, 2, 2, 8] (t is the number of heads used in N-Bref), an embedding dimensionality d = 256, and d_m = 128 memory slots. The training/evaluation is implemented using PyTorch 1.4 and the DGL library (Wang et al. (2019)). Details about the training hyper-parameters and settings are included in Appendix A.
Complexity arguments and benchmarks. This section describes the tasks in our evaluation. We randomly generate 25,000 pairs of high-level programs and the corresponding assembly code in each task for network training (60%), validation (20%) and evaluation (20%). We mainly focus on tuning b_depth and b_num (see Table 1). We set E_c = 3, b_size = 3, bias = 2 and test lib_in with different complexities: (i) <math.h> and (ii) <string.h>. Function recursion is allowed during code generation. Other than function calls, normal expressions (“+, −, ∗, /, ||, &, ==, ∧”, etc.) are also possible operators during code generation. (Code examples in Appendix C.)
Metrics. We evaluate the performance of N-Bref using token accuracy. For the evaluation of SC-Gen, we expand the decompiled AST from the root to match the structure of the golden AST (AST_G). The token accuracy is calculated as:
acc = num(AST = AST_G) / num(AST_G)   (7)
We also show the evaluation result using graph edit distance (Sanfeliu & Fu (1983)), without enforcing the match between the decompiled AST and AST_G during graph generation, in Appendix G.
This metric is fair for evaluating decompilation tasks as we remove all ECs. Eq. 7 can also evaluate sequence outputs by treating a sequence as a tree.
For the DT-Solver, we have two metrics: (i) macro accuracy (Acc_mac) and (ii) micro accuracy (Acc_mic). Acc_mac treats unsigned and signed variables as the same. This is because unsigned and signed variables have no difference in assembly code except through hints from type-revealing functions (for example, the return value of strlen() must be an unsigned integer). Note that we do not recover numerical values that are directly assigned to a variable; we replace them with a token ‘num’. These values exist in the stack memory or in the assembly instructions and can be resolved after the AST is correctly recovered using additional analysis methods. One of our future directions is to leverage a pointer network (See et al., 2017) to directly copy numerical values to the output.
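For concreteness, here is a small Python sketch of the token-accuracy computation in Eq. (7); it is our illustration, and the (label, children) tuple encoding of trees is an assumption.

def token_accuracy(pred_ast, gold_ast):
    # Expand the predicted AST to match the golden AST's shape and count
    # position-wise label matches; a missing predicted subtree is all-wrong.
    def walk(p, g):
        correct = int(p is not None and p[0] == g[0])
        total = 1
        p_kids = p[1] if p is not None else []
        for i, gk in enumerate(g[1]):
            pk = p_kids[i] if i < len(p_kids) else None
            c, t = walk(pk, gk)
            correct, total = correct + c, total + t
        return correct, total
    c, t = walk(pred_ast, gold_ast)
    return c / t

gold = ("=", [("var", []), ("+", [("var", []), ("num", [])])])
pred = ("=", [("var", []), ("*", [("var", []), ("num", [])])])
print(token_accuracy(pred, gold))  # 4 of 5 nodes match -> 0.8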
" }, { "heading": "6.2 RESULTS", "text": "Performance impact of each design component. To study the effectiveness of potential design principles, we perform a thorough sensitivity study, as shown in Figure 4.
In SC-Gen, we observe that with the growth of code length and complexity, each component of N-Bref preserves more performance compared to the baseline transformer. The transformer with the LLC encoder shows the least performance degradation when complexity and length increase (as shown by the slope). This is because the GNN in N-Bref connects instructions that are distant from each other, alleviating the performance drop. Expanding the source code as a tree structure (AST encoder + decoder) also prevents accuracy degradation.
For the DT-Solver, increasing b_size improves performance, because short programs do not have enough semantics to identify variable types. Also, performance declines when the program complexity (b_depth) increases; we assume that is because intricate control flow complicates the analysis of the data flow. The traditional decompiler REWARD shows a large performance drop along the b_depth axis. That is because the dynamic analysis in REWARD covers a single control path; as such, it has limited performance on complex control flow. We also tested many other design options, but they could not achieve better scalability and performance empirically compared to N-Bref.
Comparison to previous works. N-Bref yields the highest token accuracy across all benchmarks (91.1% on average), as shown in Table 2, in both data type recovery and AST generation. N-Bref achieves a 5.5% and 8.8% margin over the transformer baseline and Ins2AST, a previous neural-based program decompiler (Fu et al. (2019)). The encoder in Ins2AST leverages an N-ary Tree LSTM, which cannot collect information for instructions that are far apart but logically adjacent. Our LLC encoder, on the other hand, leverages GNN embedding and graph augmentation to emphasize node connections. We do not show the results of traditional decompilers (RetDec (2017); Hex-Rays (2017)), as they do not preserve any semantics and achieve very low token accuracy under Eq. 7. (Examples of traditional decompilation results are shown in Fu et al. (2019).)
For type recovery, N-Bref also achieves 3.55% / 6.1% / 30.3% average margins over the transformer, Ins2AST, and REWARD, respectively. The traditional decompiler REWARD leverages type-revealing instructions and does not consider other low-level representations. Also, REWARD focuses on a single path of the program executed using dynamic analysis; as such, it cannot handle control flow properly. N-Bref uses static analysis and considers all paths during execution.
For the baseline transformer and N-Bref, we also present Acc_mic in parentheses in Table 2. The gap between Acc_mac and Acc_mic is reduced when the program gets longer (reduced from 28.5% to 20.2% for <math.h> using N-Bref). That is because longer programs have more type-revealing instructions/functions, which help the network identify data types (‘unsigned/signed’) correctly. Also, N-Bref shows a higher tolerance to complexity and length growth compared to other works. For lib_in = <string.h>, the token accuracy drops by 6.1% from the easiest to the hardest setting, compared to 9.5% / 8.2% accuracy drops for the baseline transformer and Ins2AST, respectively. That is because the GNN can gather information from adjacent nodes in large assembly graphs. Also, the AST decoder can effectively prevent error propagation through tree expansion when the source code grows larger, unlike sequence generation where early prediction errors affect later nodes.
We also select 5 Leetcode Problems (2017) solutions in C as an auxiliary test dataset and train a new model with complexity (b_depth = 2, b_size = 4) using the <string.h> library. The results show that N-Bref is able to recover real benchmarks and achieves 6% / 9.7% margins over the transformer and the previous neural-based decompiler. This means N-Bref is able to generalize to human-written code, and the complexity of the datasets we generate can cover real-world applications.
Ablation Study. Table 3 shows the ablation study of the techniques in N-Bref. Graph augmentation in the LLC and AST encoders (1st column) helps increase accuracy by 1.1%. Depth and child-index positional encoding improves performance by 0.53%. When replacing our positional encoding method with that of Ahmad et al. (2020), accuracy drops by 0.23%. The ‘node representation’ refers to the character embedding for assembly registers and the concatenation of meta-features (details in Sec. 5). 
Removing these techniques leads to a 1.8% accuracy drop on average. Memory augmentation helps to capture the prior knowledge of the code structure, and removing it causes a 1.7% performance drop. Also, splitting the decompilation task into two parts yields a 2.5% improvement in accuracy." }, { "heading": "7 RELATED WORK", "text": "Data Type Recovery and Program Decompilation. There has been a long line of research on reverse engineering of binary code (Cifuentes (1994); Emmerik & Waddington (2004); Brumley et al. (2011); Bao et al. (2014); Rosenblum et al. (2008); Yakdan et al. (2016)). Commercialized decompilers (Hex-Rays (2017); RetDec (2017)) do not preserve the semantics of the code, which means their recovered code is very distinct from the source code (see examples in Fu et al. (2019)). For type recovery, traditional methods (Lee et al. (2011); Lin et al. (2010)) leveraged type-revealing operations or function calls as hints to infer variable types. These methods incur an accuracy drop when there are not enough type-revealing semantics. N-Bref proposes a learning method that can collect more fine-grained information and achieve better accuracy in type inference. For control/data-flow recovery, Fu et al. (2019) and Katz et al. (2019) propose neural network methods for decompilation. However, both works are based on sequence-to-sequence neural networks and are tested on simple programs. N-Bref leverages a structural transformer-based model that achieves better results.
Neural Networks for Code Generation. Neural networks have been used for code generation in prior works (Ling et al. (2016); Yin & Neubig (2017); Rabinovich et al. (2017); Yin & Neubig (2018)). These prior efforts differ from the decompilation task, as their input is input-output pairs (Chen et al. (2019; 2017)), a description of the code usage (Zhu et al. (2019)), or other domain-specific languages (Nguyen et al. (2013)). The abstract syntax tree (AST) was used in these recent works (Chen et al. (2018b); Yin & Neubig (2018)). Yet, most of the works leverage Tree LSTMs (Tai et al. (2015); Dong & Lapata (2018)) or convolutional neural networks (Chen et al. (2018a)). N-Bref demonstrates the effectiveness of the transformer in a decompilation framework. Neural Networks for Binary Analysis. There is a significant body of work on binary analysis using neural networks, such as predicting execution throughput (Mendis et al. (2018)), guiding branch predictions (Shi et al. (2019)), program analysis (Ben-Nun et al. (2018)) and verification (Li et al. (2015)). Most of these works use RNNs to encode binary or assembly code (Mendis et al. (2018); Ben-Nun et al. (2018)). Shin et al. (2015) propose to use an RNN to identify function entry points in binaries. GNNs were used in some of these works to encode memory heap information (Li et al. (2015)) or assembly code (Shi et al. (2019)), but the original representation methods are not scalable, as they add many pseudo nodes, and the node representation is not suitable for the transformer. He et al. (2018) and Lacomis et al. (2019) use naive NMT models to predict debug information and to assign meaningful names to variables from binaries, yet they do not leverage the structural programming information. Many designs in N-Bref are easy to integrate with various neural-based binary analysis tasks where the input is also low-level code." }, { "heading": "8 CONCLUSIONS AND FUTURE WORK", "text": "In this paper, we present N-Bref, an end-to-end framework that customizes the design of a neural-based decompiler. 
N-Bref designs a dataset generator that removes expression collisions and generates random programs with any complexity configuration. N-Bref disentangles decompilation into two parts, data type recovery and AST generation, and incorporates a new architecture that facilitates structural translation tasks by integrating a structural encoder/decoder into the transformer. New embedding/representation techniques help to further improve accuracy. Experiments show that N-Bref outperforms previous decompilers and the state-of-the-art transformer.
Meanwhile, we observe that many other challenges remain, for example: (i) reverse engineering binaries that have been optimized or obfuscated is still challenging; (ii) directly recovering numerical values from assembly code requires more effort. We leave these more challenging problem setups as future work." }, { "heading": "A TRAINING SETUP AND HYPER-PARAMETERS", "text": "We ran our experiments on Amazon EC2 using a p3.16xlarge instance, which contains Nvidia Tesla V100 GPUs with 16GB main memory. Hyper-parameters for Lang2logic and Ins2AST are selected using cross-validation with grid search. We present the hyper-parameters used by the different neural networks in Table 4. The number of GNN layers in N-Bref is set to 3. We use an Adam optimizer with β_1 = 0.9, β_2 = 0.98, the setting of the original transformer (Vaswani et al. (2017)). We add label smoothing and a dropout rate of 0.3. Weights of attentive layers are initialized from a uniform distribution, while weights of feed-forward layers are initialized using the same techniques as Vaswani et al. (2017). Other scheduling methods (e.g., learning rate, warm-up steps) are the same as in Cornia et al. (2019) for training N-Bref and the transformer baseline." }, { "heading": "B LEETCODE SOLUTIONS EXAMPLES", "text": "We present examples of the tested Leetcode solutions in Figures 1 and 2. The tested tasks include “Isomorphic Strings”, “Multiply Strings”, “Longest Palindromic Substring”, “Implement strStr()”, and “ZigZag Conversion”. Many easy problems are too short (e.g., “Length of the Last Word”) to assess the performance of N-Bref, and some of them use their own user-defined functions, which is beyond the scope of N-Bref." }, { "heading": "C EXAMPLES OF N-BREF GENERATED PROGRAMS", "text": "We present the dataset examples in Figures 3 and 4. We define char, short, int, long as ‘int8_t’, ‘int16_t’, ‘int32_t’ and ‘int64_t’ to simplify the tokenizing process (64-bit machine). ‘uint’ refers to the ‘unsigned’ type.
Figure 3: <math.h> randomly generated example with b^s_num ← 1, b^s_depth ← 4, b^s_size ← 2.
Figure 4: <string.h> randomly generated example with b^s_num ← 4, b^s_depth ← 2, b^s_size ← 1.
[The two program listings for Figures 3 and 4 are omitted here: the two-column extraction interleaved them line by line and they are not reliably recoverable.]
" }, { "heading": "D EQUATIONS OF POISSON DISTRIBUTIONS FOR VARIABLE NUMBERS", "text": "The variable number (var_num, or v) generated for a program follows a Poisson distribution (Eq. 8), where λ = b^s_num + b^s_depth + bias, as discussed in the Evaluation section.
P(v) = (λ^v e^{−λ}) / v!,  v = 0, 1, 2, 3, ...   (8)" }, { "heading": "E FORMAL ALGORITHM FOR PREDICTIONS", "text": "Algorithm 1: Algorithm for N-Bref prediction.
INPUT: Assembly graph G_asm; root node γ; terminal node types T; LLC encoder, AST encoder, AST decoder (LLC_en, AST_en, AST_de); N-Bref model (Model).
OUTPUT: Complete AST G_ast.
1: Q ← [γ]
2: G_ast.update(γ)
3: while Q is not empty do
4:   node ← Q.pop()
5:   child ← model(LLC_en = G_asm, AST_en = G_ast, AST_de = Tree_path(node))
6:   if child is not ‘eos’ then
7:     G_ast.update(child)
8:     if child ∉ T then
9:       Q.append(child)
10: Return: G_ast" }, { "heading": "F PERFORMANCE IN GRAPH EDIT DISTANCE", "text": "We test the performance of N-Bref using the graph edit distance (GED), calculated as in Eq. 9. The distance is the minimum number of operations (i.e., node substitutions and node insertions) needed to change our output AST (AST) into the golden AST (AST_G).
GED(AST, AST_G) = min_{e_1,...,e_k} Σ_{i=1}^{k} Cost(e_i)   (9)
Here, e_i denotes the i-th operation changing AST into AST_G. In our testing, we set Cost(e) = 1. The maximum possible GED between an AST and AST_G is the number of nodes in AST_G. Note that when e_i substitutes a node from a non-terminal to a terminal type, the branch below the original node is automatically deleted.
The tree expansion algorithm used to generate the AST is shown above in Appendix E. Table 5 shows the GED of N-Bref and the transformer baseline. N-Bref shows a 40.4% reduction in graph edit distance on average compared to the traditional transformer.
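A short sketch of Eq. (9) with unit costs, assuming both ASTs are given as networkx graphs whose nodes carry a 'label' attribute; note that the exact search is exponential-time and therefore only practical for small trees, and that networkx also counts deletions, which Eq. (9) does not mention.

import networkx as nx

def ast_ged(ast_pred, ast_gold):
    # Nodes match for free when their labels agree; every other edit
    # operation contributes unit cost, i.e., Cost(e) = 1.
    return nx.graph_edit_distance(
        ast_pred, ast_gold,
        node_match=lambda a, b: a.get("label") == b.get("label"),
    )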
" }, { "heading": "G PERFORMANCE OF N-BREF IN OTHER BINARY ANALYSIS TASKS", "text": "N-Bref's structural transformer architecture and its low-level code encoding and representation methods are easy to integrate with various neural-based binary analysis tasks, as their input is also low-level code, enabling advances in tasks such as vulnerability detection, crypto algorithm classification, malware detection, etc.
We tried out two tasks using N-Bref's encoder and low-level representation methods to analyze binary code:
(i) Identifying binary code vulnerabilities (Table 6). We test the performance of N-Bref on vulnerability detection using the Devign dataset, which includes 25,872 data points collected from the commit differences of the FFmpeg and QEMU repositories. Using the commit ids given in the Devign dataset (Zhou et al. (2019)), we clone the old repository, compile it, and extract the binary of the function from the compiled project. We successfully generate 10,302 binaries (out of the 25,872 data points given), as many project commits in the dataset cannot be compiled.
(ii) Measuring binary code similarity (Table 7). We test N-Bref on the POJ-104 tasks (Mou et al. (2014)) by compiling them into binaries and using the same metrics as MISIM (Ye et al. (2020)) to evaluate the performance of N-Bref.
In the vulnerability detection task, N-Bref shows a 3.0% margin over the transformer baseline on binaries and a 4.08% margin over a BiLSTM-based vulnerability detector (Li et al. (2018)) on high-level source code using the same amount of data. For the code similarity measure, N-Bref achieves a 3.85% MAP@R increase compared to the transformer baseline and shows 5.0%/20.16% better MAP@R than Aroma (Luan et al. (2019)) and NCC (neural code comprehension; Ben-Nun et al. (2018)), which are code-searching frameworks that operate on high-level code. Note that binaries are more abstract and more difficult to analyze compared to high-level code." }, { "heading": "H COMPLETE ASSEMBLY CODE FOR FIGURE 1.", "text": "#include <string.h>
char* foo(float l0, int* l1) {
  char l2[4];
  short l3 = 2;
  float l4 = 0x9p+1;
  if (strchr(l2, l1[0])) {
    l0 = l4 * l0;
  }
  else {
    l2[l3] = 7;
  }
  return l2;
}
foo:
.L1:
  pushq %rbp
  movq %rsp, %rbp
  subq $48, %rsp
  movss %xmm0, -36(%rbp)
  movq %rdi, -48(%rbp)
  movq %fs:40, %rax
  movq %rax, -8(%rbp)
  xorl %eax, %eax
  movw $2, -18(%rbp)
  movss .LC0(%rip), %xmm0
  movss %xmm0, -16(%rbp)
  movq -48(%rbp), %rax
  movl (%rax), %edx
  leaq -12(%rbp), %rax
  movl %edx, %esi
  movq %rax, %rdi
  call strchr@PLT
  testq %rax, %rax
  je .L2
  movss -36(%rbp), %xmm0
  mulss -16(%rbp), %xmm0
  movss %xmm0, -36(%rbp)
  jmp .L3
.L2:
  movswl -18(%rbp), %eax
  movb $7, -12(%rbp,%rax)
.L3:
  movl $0, %eax
  movq -8(%rbp), %rcx
  xorq %fs:40, %rcx
  je .L5
.L5:
  leave
I COMPLETE ASSEMBLY CODE GRAPH FOR FIGURE 1
Figure 5: A complete assembly graph of Figure 1. Note that eax is the lower 32-bit of rax, so it has a data dependency with rax for lines 12-13. Also, when there is no destination node in the instruction (e.g., line 9), the destination should be the memory location represented by the address register (e.g., rbp) and an offset (e.g., -36 in line 9)." } ]
2020
N-BREF: A HIGH-FIDELITY DECOMPILER EXPLOITING PROGRAMMING STRUCTURES
SP:0586aff632d77cbf60cefae509a93bc22c95655e
[ "This paper applies techniques from meta-learning to derive and end-to-end update rule for a workflow involving backtranslation, specificically maximizing translation performance of the forward model, while updating the backward model to produce backtranslations that are maximally useful to improve the forward model's quality (as measured on a meta validation set). The approach is evaluated on WMT EN-DE and EN-FR and compared against a simple sampling strategy for backtranslation, and dual learning. In addition, the paper considers a multilingual setup where the translation direction is low-resource, and the initial backtranslation model is trained on a mix of parallel data from the language pair of interest, as well as auxiliary data with a high-resource, related source language. " ]
Back-translation (Sennrich et al., 2016) is an effective strategy for improving the performance of Neural Machine Translation (NMT) by generating pseudo-parallel data. However, several recent works have found that better translation quality of the pseudo-parallel data does not necessarily lead to a better final translation model, while lower-quality but more diverse data often yields stronger results instead. In this paper we propose a new way to generate pseudo-parallel data for back-translation that directly optimizes the final model performance. Specifically, we propose a meta-learning framework where the back-translation model learns to match the forward-translation model's gradients on the development data with those on the pseudo-parallel data. In our evaluations on both the standard WMT En-De'14 and WMT En-Fr'14 datasets, as well as in a multilingual translation setting, our method leads to significant improvements over strong baselines.
[]
[ { "authors": [ "REFERENCES Atilim Gunes Baydin", "Robert Cornish", "David Martínez-Rubio", "Mark Schmidt", "Frank Wood" ], "title": "Online learning rate adaptation with hypergradient descent", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Yong Cheng", "Wei Xu", "Zhongjun He", "Wei He", "Hua Wu", "Maosong Sun", "Yang Liu" ], "title": "Semi-supervised learning for neural machine translation", "venue": "In ACL,", "year": 2016 }, { "authors": [ "Jonathan Clark", "Chris Dyer", "Alon Lavie", "Noah Smith" ], "title": "Better hypothesis testing for statistical machine translation: Controlling for optimizer instability", "venue": "In ACL,", "year": 2011 }, { "authors": [ "Anna Currey", "Antonio Valerio Miceli Barone", "Kenneth Heafield" ], "title": "Copied monolingual data improves low-resource neural machine translation", "venue": "In WMT,", "year": 2017 }, { "authors": [ "Zi-Yi Dou", "Antonios Anastasopoulos", "Graham Neubig" ], "title": "Dynamic data selection and weighting for iterative back-translation", "venue": "arXiv preprint arXiv:2004.03672,", "year": 2020 }, { "authors": [ "Sergey Edunov", "Myle Ott", "Michael Auli", "David Grangier" ], "title": "Understanding back-translation at scale", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In ICML, 2017", "year": 2017 }, { "authors": [ "Jiatao Gu", "Yong Wang", "Yun Chen", "Victor O.K. Li", "Kyunghyun Cho" ], "title": "Meta-learning for lowresource neural machine translation", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "Caglar Gulcehre", "Orhan Firat", "Kelvin Xu", "Kyunghyun Cho", "Loic Barrault", "Huei-Chi Lin", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "On using monolingual corpora in neural machine translation", "venue": "arXiv preprint arXiv:1503.03535,", "year": 2015 }, { "authors": [ "Junxian He", "Jiatao Gu", "Jiajun Shen", "Marc’Aurelio Ranzato" ], "title": "Revisiting self-training for neural sequence generation", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Vu Cong Duy Hoang", "Philipp Koehn", "Gholamreza Haffari", "Trevor Cohn" ], "title": "Iterative back-translation for neural machine translation", "venue": "In ACL, 2018", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Jimmy Lei Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Taku Kudo", "John Richardson" ], "title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "Dmitry Lepikhin", "HyoukJoong Lee", "Yuanzhong Xu", "Dehao Chen", "Orhan Firat", "Yanping Huang", "Maxim Krikun", "Noam Shazeer", "Zhifeng Chen" ], "title": "Gshard: Scaling giant models with conditional computation and automatic sharding", "venue": "Arxiv 2006.16668,", "year": 2020 }, { "authors": [ "Yu-Hsiang Lin", "Chian-Yu Chen", "Jean Lee", "Zirui Li", "Yuyan Zhang", "Mengzhou Xia", "Shruti Rijhwani", "Junxian He", "Zhisong Zhang", "Xuezhe Ma", "Antonios Anastasopoulos", "Patrick Littell", "Graham Neubig" ], "title": "Choosing transfer languages for cross-lingual learning", "venue": "In ACL,", "year": 2019 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: differentiable architecture", "venue": "search. 
2019", "year": 2019 }, { "authors": [ "Graham Neubig", "Junjie Hu" ], "title": "Rapid adaptation of neural machine translation to new languages. EMNLP, 2018", "venue": null, "year": 2018 }, { "authors": [ "Graham Neubig", "Zi-Yi Dou", "Junjie Hu", "Paul Michel", "Danish Pruthi", "Xinyi Wang" ], "title": "compare-mt: A tool for holistic comparison of language generation systems", "venue": "In NAACL,", "year": 2019 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In ACL,", "year": 2002 }, { "authors": [ "Hieu Pham", "Qizhe Xie", "Zihang Dai", "Quoc V. Le" ], "title": "Meta pseudo labels", "venue": "Arxiv 2003.10580,", "year": 2020 }, { "authors": [ "Ye Qi", "Devendra Singh Sachan", "Matthieu Felix", "Sarguna Padmanabhan", "Graham Neubig" ], "title": "When and why are pre-trained word embeddings useful for neural machine translation", "venue": null, "year": 2018 }, { "authors": [ "Jurgen Schmidhuber" ], "title": "Learning to control fast-weight memories: An alternative to dynamic recurrent networks", "venue": "In Neural Computation,", "year": 1992 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Improving neural machine translation models with monolingual data", "venue": "In ACL,", "year": 2016 }, { "authors": [ "Kaitao Song", "Xu Tan", "Tao Qin", "Jianfeng Lu", "Tie-Yan Liu" ], "title": "Mass: Masked sequence to sequence pre-training for language generation", "venue": "arXiv preprint arXiv:1905.02450,", "year": 1905 }, { "authors": [ "Xabier Soto", "Dimitar Shterionov", "Alberto Poncelas", "Andy Way" ], "title": "Selecting backtranslated data from multiple sources for improved neural machine translation", "venue": "In ACL,", "year": 2020 }, { "authors": [ "Felipe Petroski Such", "Aditya Rawal", "Joel Lehman", "Kenneth O. Stanley", "Jeff Clune" ], "title": "Generative teaching networks: Accelerating neural architecture search by learning to generate synthetic training data", "venue": "arxiv,", "year": 2019 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V. Le" ], "title": "Sequence to sequence learning with neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Xinyi Wang", "Graham Neubig" ], "title": "Target conditioned sampling: Optimizing data selection for multilingual neural machine translation", "venue": "In ACL,", "year": 2019 }, { "authors": [ "Xinyi Wang", "Hieu Pham", "Philip Arthur", "Graham Neubig" ], "title": "Multilingual neural machine translation with soft decoupled encoding", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Xinyi Wang", "Hieu Pham", "Paul Mitchel", "Antonis Anastasopoulos", "Jaime Carbonell", "Graham Neubig" ], "title": "Optimizing data usage via differentiable rewards", "venue": "In arxiv, 2019b. URL https: //arxiv.org/abs/1911.10088", "year": 1911 }, { "authors": [ "Xinyi Wang", "Yulia Tsvetkov", "Graham Neubig" ], "title": "Balancing training for multilingual neural machine translation", "venue": "In ACL,", "year": 2020 }, { "authors": [ "Ronald J. 
Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "Yingce Xia", "Di He", "Tao Qin", "Liwei Wang", "Nenghai Yu", "Tie-Yan Liu", "Wei-Ying Ma" ], "title": "Dual learning for machine translation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "While Neural Machine Translation (NMT) delivers state-of-the-art performance across many translation tasks, this performance is usually contingent on the existence of large amounts of training data (Sutskever et al., 2014; Vaswani et al., 2017). Since large parallel training datasets are often unavailable for many languages and domains, various methods have been developed to leverage abundant monolingual corpora (Gulcehre et al., 2015; Cheng et al., 2016; Sennrich et al., 2016; Xia et al., 2016; Hoang et al., 2018; Song et al., 2019; He et al., 2020). Among such methods, one particularly popular approach is back-translation (BT; Sennrich et al. (2016)).\nIn BT, in order to train a source-to-target translation model, i.e., the forward model, one first trains a target-to-source translation model, i.e., the backward model. This backward model is then employed to translate monolingual data from the target language into the source language, resulting in a pseudoparallel corpus. This pseudo-parallel corpus is then combined with the real parallel corpus to train the final forward translation model. While the resulting forward model from BT typically enjoys a significant boost in translation quality, we identify that BT inherently carries two weaknesses.\nFirst, while the backward model provides a natural way to utilize monolingual data in the target language, the backward model itself is still trained on the parallel corpus. This means that the backward model’s quality is as limited as that of a forward model trained in the vanilla setting. Hoang et al. (2018) proposed iterative BT to avoid this weakness, but this technique requires multiple rounds of retraining models in both directions which are slow and expensive.\nSecond, we do not understand how the pseudo-parallel data translated by the backward model affects the forward model’s performance. For example, Edunov et al. (2018) has observed that pseudoparallel data generated by sampling or by beam-searching with noise from the backward model train better forward models, even though these generating methods typically result in lower BLEU scores compared to standard beam search. While Edunov et al. (2018) associated their observation to the diversity of the generated pseudo-parallel data, diversity alone is obviously insufficient – some degree of quality is necessary as well.\nIn summary, while BT is an important technique, training a good backward model for BT is either hard or slow and expensive, and even if we have a good backward model, there is no single recipe how to use it to train a good forward model.\nIn this paper, we propose a novel technique to alleviate both aforementioned weaknesses of BT. Unlike vanilla BT, which keeps the trained backward model fixed and merely uses it to generate pseudo-\nparallel data to train the forward model, we continue to update the backward model throughout the forward model’s training. Specifically, we update the backward model to improve the forward model’s performance on a held-out set of ground truth parallel data. We provide an illustrative example of our method in Fig. 1, where we highlight how the forward model’s held-out set performance depends on the pseudo-parallel data sampled from the backward model. This dependency allows us to mathematically derive an end-to-end update rule to continue training the backward model throughout the forward model’s training. 
As our derivation technique is similar to meta-learning (Schmidhuber, 1992; Finn et al., 2017), we name our method Meta Back-Translation (MetaBT).
In theory, MetaBT effectively resolves both aforementioned weaknesses of vanilla BT. First, the backward model continues its training based on its own generated pseudo-parallel data, and hence is no longer limited by the available parallel data. Furthermore, MetaBT only pre-trains one backward model and then trains a single pair of forward and backward models, eschewing the expense of the multiple iterations of Iterative BT (Hoang et al., 2018). Second, since MetaBT updates its backward model in an end-to-end manner based on the forward model's performance on a held-out set, MetaBT no longer needs to explicitly understand the effect of its generated pseudo-parallel data on the forward model's quality.
Our empirical experiments verify the theoretical advantages of MetaBT with definitive improvements over strong BT baselines in various settings. In particular, on the classical benchmark of WMT En-De 2014, MetaBT leads to a +1.66 BLEU score gain over sampling-based BT. Additionally, we discover that MetaBT allows us to extend the initial parallel training set of the backward model by including parallel data from slightly different languages. Since MetaBT continues to refine the backward model, the negative effect of the language discrepancy is eventually abated throughout the forward model's training, yielding gains of up to +1.20 BLEU for low-resource translation tasks." }, { "heading": "2 A PROBABILISTIC PERSPECTIVE OF BACK-TRANSLATION", "text": "To facilitate the discussion of MetaBT, we introduce a probabilistic framework to interpret BT. Our framework helps to analyze the advantages and disadvantages of a few methods for generating pseudo-parallel data, such as sampling, beam search, and beam search with noise (Sennrich et al., 2016; Edunov et al., 2018). Analyses of these generating methods within our framework also motivate MetaBT and further allow us to mathematically derive MetaBT's update rules in § 3.
Our Probabilistic Framework. We treat a language S as a probability distribution over all possible sequences of tokens. Formally, we denote by $P_S(x)$ the distribution of a random variable $x$, each instance of which is a sequence of tokens. To translate from a source language S into a target language T, we learn the conditional distribution $P_{S,T}(y|x)$ for sentences from the languages S and T with a parameterized probabilistic model $P(y|x; \theta)$. Ideally, we learn $\theta$ by minimizing the objective
$$J(\theta) = \mathbb{E}_{x,y \sim P_{S,T}(x,y)}[\ell(x, y; \theta)], \quad \text{where } \ell(x, y; \theta) = -\log P(y|x; \theta). \quad (1)$$
Since $P_{S,T}(x, y) = P_{S,T}(y) P_{S,T}(x|y) = P_T(y) P_{S,T}(x|y)$, we can refactor $J(\theta)$ from Eq. 1 as
$$J(\theta) = \mathbb{E}_{y \sim P_T(y)} \mathbb{E}_{x \sim P_{S,T}(x|y)}[\ell(x, y; \theta)]. \quad (2)$$
Motivating BT. In BT, since it is not feasible to draw exact samples $y \sim P_T(y)$ and $x \sim P_{S,T}(x|y)$, we rely on two approximations. First, instead of sampling $y \sim P_T(y)$, we collect a corpus $D_T$ of monolingual data in the target language T and draw the samples $y \sim \text{Uniform}(D_T)$. Second, instead of sampling $x \sim P_{S,T}(x|y)$, we derive an approximate distribution $\hat{P}(x|y)$ and sample $x \sim \hat{P}(x|y)$. Before we explain the derivation of $\hat{P}(x|y)$, let us state that with these approximations, the objective $J(\theta)$ from Eq. 2 becomes the following BT objective:
$$\hat{J}_{BT}(\theta) = \mathbb{E}_{y \sim \text{Uniform}(D_T)} \mathbb{E}_{x \sim \hat{P}(x|y)}[\ell(x, y; \theta)]. \quad (3)$$
Rather unsurprisingly, $\hat{P}(x|y)$ in Eq. 3 above is derived from a pre-trained parameterized backward translation model $P(x|y; \psi)$. For example:
• $\hat{P}(x|y) \triangleq \mathbb{1}[x = \arg\max_{\dot{x}} P(\dot{x}|y; \psi)]$ results in BT via beam search (Sennrich et al., 2016).
• $\hat{P}(x|y) \triangleq P(x|y; \psi)$ results in BT via sampling (Edunov et al., 2018).
• $\hat{P}(x|y) \triangleq \mathbb{1}[x = \arg\max_{\dot{x}} \tilde{P}(\dot{x}|y; \psi)]$ results in BT via noisy beam search (Edunov et al., 2018), where $\tilde{P}(x|y; \psi)$ denotes the joint distribution of the backward model $P(x|y; \psi)$ and the noise.
A minimal sketch of these three generating modes is given right after this list.
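The following PyTorch sketch makes the three choices concrete with one decoding step per mode. Here `step_logits_fn` stands in for any autoregressive backward model P(x|y; ψ), and greedy decoding stands in for beam search with width 1; the interface is illustrative, not the paper's implementation.

```python
import torch

def generate_pseudo_source(step_logits_fn, y, mode, max_len=50, bos=1, eos=2):
    """Decode a pseudo-source x_hat under one of the three P_hat choices."""
    prefix = [bos]
    for _ in range(max_len):
        logits = step_logits_fn(y, prefix)        # next-token logits, shape (V,)
        if mode == "beam":                        # argmax ~ beam search (width 1)
            tok = int(logits.argmax())
        elif mode == "sample":                    # P_hat(x|y) := P(x|y; psi)
            tok = int(torch.distributions.Categorical(logits=logits).sample())
        elif mode == "noisy-beam":                # argmax of noise-perturbed scores
            u = torch.rand_like(logits).clamp_min(1e-9)
            tok = int((logits - torch.log(-torch.log(u))).argmax())  # Gumbel noise
        else:
            raise ValueError(mode)
        prefix.append(tok)
        if tok == eos:
            break
    return prefix

# e.g., with a dummy model over a 100-token vocabulary:
dummy = lambda y, prefix: torch.randn(100)
x_hat = generate_pseudo_source(dummy, y="ein Satz", mode="sample")
```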
Therefore, we have shown that in our probabilistic framework for BT, three common techniques for generating pseudo-parallel data from a pre-trained backward model correspond to different derivations of $\hat{P}(x|y)$ from the backward model's distribution $P(x|y; \psi)$. Our framework naturally motivates two questions: (1) given a translation task, how do we tell which derivation of $\hat{P}(x|y)$ from $P(x|y; \psi)$ is better than another? and (2) can we derive better choices for $\hat{P}(x|y)$ from a pre-trained backward model $P(x|y; \psi)$ according to the answer to question (1)?
Metric for the Generating Methods. In the existing literature, the answer to our first question is relatively straightforward. Most papers view the method of generating pseudo-parallel data as a hyper-level design choice, i.e. similar to the choice of an architecture like the Transformer or an LSTM, and hence practitioners choose one method over another based on the performance of the resulting forward model on held-out validation sets.
Automatically Deriving Good Generating Methods. We now turn to the second question that our probabilistic framework motivates. Thanks to the generality of our framework, every choice for $\hat{P}(x|y)$ results in an optimization objective. Using this objective, we can train a forward model and measure its validation performance to evaluate our choice of $\hat{P}(x|y)$. This process of choosing and evaluating $\hat{P}(x|y)$ can be posed as the following bi-level optimization problem:
$$\text{Outer loop: } \hat{P}^* = \arg\max_{\hat{P}} \; \text{ValidPerformance}(\theta^*_{\hat{P}}),$$
$$\text{Inner loop: } \theta^*_{\hat{P}} = \arg\min_{\theta} \hat{J}_{BT}(\theta; \hat{P}), \quad \text{where } \hat{J}_{BT}(\theta; \hat{P}) = \mathbb{E}_{y \sim \text{Uniform}(D_T)} \mathbb{E}_{x \sim \hat{P}(x|y)}[\ell(x, y; \theta)]. \quad (4)$$
The optimal solution of this bi-level optimization problem can potentially train a forward model that generalizes well, as the forward model learns on a pseudo-parallel dataset and yet achieves good performance on a held-out validation set. Unfortunately, directly solving this optimization problem is not feasible. Not only is the inner loop quite expensive, as it includes training a forward model from scratch according to $\hat{P}$, but the outer loop is also poorly defined, as we do not place any restriction on the space that $\hat{P}$ can take. Next, in § 3, we introduce a restriction on the space that $\hat{P}$ can take, and show that our restriction turns the task of choosing $\hat{P}$ into a differentiable problem which can be solved with gradient descent." }, { "heading": "3 META BACK-TRANSLATION", "text": "Continuing our discussion from § 2, we design Meta Back-Translation (MetaBT), which finds a strategy to generate pseudo-parallel data from a pre-trained backward model such that a forward model trained on the generated pseudo-parallel data achieves strong performance on a held-out validation set.
The Usage of "Validation" Data. Throughout this section, readers will see that MetaBT makes extensive use of the "validation" set to provide feedback for refining the pseudo-parallel data's generating strategy. 
Thus, to avoid nullifying the meaning of a held-out validation set, we henceforth refer to the ground-truth parallel dataset on which the forward model's performance is measured throughout its training as the meta validation dataset, denoted by $D_{\text{MetaDev}}$. Besides this meta validation set, we also have a separate validation set for hyper-parameter tuning and model selection.
A Differentiable Bi-level Optimization Problem. We now discuss MetaBT, starting by formulating a differentiable version of Problem 4. Suppose we have pre-trained a parameterized backward translation model $P(x|y; \psi)$. Instead of designing the generating distribution $\hat{P}(x|y)$ by applying actions such as sampling or beam search to $P(x|y; \psi)$, we let $\hat{P}(x|y) \triangleq P(x|y; \psi)$ and continue to update the backward model's parameters $\psi$ throughout the course of training the forward model. Clearly, under this association $\hat{P}(x|y) \triangleq P(x|y; \psi)$, the parameters $\psi$ control the generating distribution of the pseudo-parallel data used to train the forward model. By setting the differentiable parameters $\psi$ as the optimization variable of the outer loop, we turn the intractable Problem 4 into a differentiable one:
$$\text{Outer loop: } \psi^* = \arg\max_{\psi} \; \text{Performance}(\theta^*(\psi), D_{\text{MetaDev}}),$$
$$\text{Inner loop: } \theta^*(\psi) = \arg\min_{\theta} \mathbb{E}_{y \sim \text{Uniform}(D_T)} \mathbb{E}_{x \sim \hat{P}(x|y)}[\ell(x, y; \theta)]. \quad (5)$$
Bi-level optimization problems in which both the outer and inner loops operate on differentiable variables, like Problem 5, have appeared repeatedly in the recent meta-learning literature, spanning many areas such as learning initializations (Finn et al., 2017), learning hyper-parameters (Baydin et al., 2018), designing architectures (Liu et al., 2019), and reweighting examples (Wang et al., 2019b). We thus follow their successful techniques and design a two-phase alternating update rule for the forward model's parameters $\theta$ in the inner loop and the backward model's parameters $\psi$ in the outer loop:
Phase 1: Update the Forward Parameters $\theta$. Given a batch of monolingual target data $y \sim \text{Uniform}(D_T)$, we sample the pseudo-parallel data $(\hat{x} \sim P(x|y; \psi), y)$ and update $\theta$ as if $(\hat{x}, y)$ were real data. For simplicity, assuming that $\theta$ is updated via gradient descent on $(\hat{x}, y)$ with a learning rate $\eta_\theta$, we have:
$$\theta_t = \theta_{t-1} - \eta_\theta \nabla_\theta \ell(\hat{x}, y; \theta_{t-1}) \quad (6)$$
Phase 2: Update the Backward Parameters $\psi$. Note that Eq. 6 means that $\theta_t$ depends on $\psi$, because $\hat{x}$ is sampled from a distribution parameterized by $\psi$. This dependency allows us to compute the meta validation loss of the forward model at $\theta_t$, which we denote by $J(\theta_t(\psi), D_{\text{MetaDev}})$, and back-propagate this loss to compute the gradient $\nabla_\psi J(\theta_t(\psi), D_{\text{MetaDev}})$. Once we have this gradient, we can perform a gradient-based update on the backward parameters $\psi$ with learning rate $\eta_\psi$:
$$\psi_t = \psi_{t-1} - \eta_\psi \nabla_\psi J(\theta_t(\psi), D_{\text{MetaDev}}) \quad (7)$$
Computing $\nabla_\psi J(\theta_t(\psi), D_{\text{MetaDev}})$. Our derivation of this gradient utilizes two techniques: (1) the chain rule to differentiate $J(\theta_t(\psi), D_{\text{MetaDev}})$ with respect to $\psi$ via $\theta_t$; and (2) the log-gradient trick from the reinforcement learning literature (Williams, 1992) to propagate gradients through the sampling of the pseudo-source $\hat{x}$. We refer readers to § A.1 for the full derivation. Here, we present the final result:
$$\nabla_\psi J(\theta_t(\psi), D_{\text{MetaDev}}) \approx -\big[\nabla_\theta J(\theta_t, D_{\text{MetaDev}})^\top \cdot \nabla_\theta \ell(\hat{x}, y; \theta_{t-1})\big] \cdot \nabla_\psi \log P(\hat{x}|y; \psi) \quad (8)$$
In our implementation, we leverage recent advances in higher-order AutoGrad tools to efficiently compute the gradient dot-product term via Jacobian-vector products. By alternating the update rules in Eq. 6 and Eq. 7, we have the complete MetaBT algorithm; a self-contained sketch of one such alternating step is given below.
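The following self-contained PyTorch sketch runs one alternating step. `sample_xhat` and `loss_fn` are placeholder hooks for the NMT-specific pieces (they must return a log-probability differentiable in ψ and a loss that touches every forward parameter, respectively), and plain SGD stands in for the optimizers actually used; this illustrates Eqs. 6-8, it is not the paper's code.

```python
import torch

def metabt_step(fwd, bwd, y_mono, meta_batch, sample_xhat, loss_fn,
                lr_theta=0.1, lr_psi=0.01):
    """One alternating MetaBT update (Eqs. 6-8), written for clarity."""
    # Phase 1: sample x_hat ~ P(x|y; psi) and take one step on the forward model.
    x_hat, log_p = sample_xhat(bwd, y_mono)       # log_p differentiable in psi
    train_loss = loss_fn(fwd, x_hat, y_mono)
    g_train = torch.autograd.grad(train_loss, list(fwd.parameters()))
    with torch.no_grad():
        for p, g in zip(fwd.parameters(), g_train):
            p -= lr_theta * g                     # Eq. 6, taken at theta_{t-1}

    # Phase 2: reward = alignment of the dev gradient (at theta_t) with the
    # training gradient (at theta_{t-1}), matching Eq. 9 below.
    dev_loss = loss_fn(fwd, *meta_batch)
    g_dev = torch.autograd.grad(dev_loss, list(fwd.parameters()))
    with torch.no_grad():
        reward = sum((gd * gt).sum() for gd, gt in zip(g_dev, g_train))

    # REINFORCE-style surrogate for Eq. 8: descending -reward * log P(x_hat|y; psi)
    # pushes psi toward samples whose gradients align with the meta validation set.
    bwd_loss = -reward * log_p
    g_bwd = torch.autograd.grad(bwd_loss, list(bwd.parameters()))
    with torch.no_grad():
        for p, g in zip(bwd.parameters(), g_bwd):
            p -= lr_psi * g                       # Eq. 7
    return float(train_loss), float(dev_loss)
```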
Remark: An Alternative Interpretation of MetaBT. The update rule of the backward model in Eq. 8 strongly resembles the REINFORCE equation from the reinforcement learning literature. This similarity suggests that the backward model is trained as if it were an agent in reinforcement learning. From this perspective, the backward model is trained so that the pseudo-parallel data sampled from it would maximize the "reward":
$$R(\hat{x}) = \nabla_\theta J(\theta_t, D_{\text{MetaDev}})^\top \cdot \nabla_\theta \ell(\hat{x}, y; \theta_{t-1}) \quad (9)$$
Since this dot-product measures the directional similarity of the two gradients, MetaBT can be interpreted as optimizing the backward model so that the forward model's gradient on pseudo-parallel data sampled from the backward model is similar to the forward model's gradient computed on the meta validation set. This is a desirable goal because the reward guides the backward model's parameters to favor samples that are similar to those in the meta validation set." }, { "heading": "4 A MULTILINGUAL APPLICATION OF METABT", "text": "We find that the previous interpretation of MetaBT in Section 3 leads to a rather unexpected application of MetaBT. Specifically, we consider the situation where the language pair of interest S-T has very limited parallel training data. In such a situation, all BT approaches suffer from a serious disadvantage: since the backward model needs to be trained on the parallel data T-S, when the amount of parallel data is small, the resulting backward model has very low quality. The pseudo-parallel corpus generated from the low-quality backward model can contaminate the training signals of the forward model (Currey et al., 2017).
To compensate for the lack of initial parallel data to train the backward model, we propose to use parallel data from a related language S'-T for which we can collect substantially more data. Specifically, we train the backward model on the union of the parallel data T-S' and T-S, instead of only T-S. Since this procedure results in a substantially larger set of parallel training data, the obtained backward model has a higher quality. However, since the extra S'-T parallel data dominates the training set of the backward model, the pseudo source sentences sampled from the resulting backward model would have more features of the related language S', rather than of our language of interest S.
In principle, MetaBT can fix this discrepancy by adapting the backward model using the forward model's gradient on the meta validation set, which only contains parallel data for S-T. This would move the back-translated pseudo source sentences closer to our language of interest S." }, { "heading": "5 EXPERIMENTS", "text": "We evaluate MetaBT in two settings: (1) a standard back-translation setting to verify that MetaBT can create more effective training data for the forward model, and (2) a multilingual NMT setting to confirm that MetaBT is also effective when the backward model is pre-trained on a related language pair as discussed in § 4." }, { "heading": "5.1 DATASET AND PREPROCESSING", "text": "Standard. For the standard setting, we consider two large datasets: WMT En-De 2014 and WMT En-Fr 2014 (data: http://www.statmt.org/wmt14/), tokenized with SentencePiece (Kudo & Richardson, 2018) using a joint vocabulary size of 32K for each dataset (see the sketch below). We filter all training datasets, keeping only sentence pairs where both source and target have no more than 200 tokenized subwords, resulting in a parallel training corpus of 4.5M sentence pairs for WMT En-De and 40.8M sentence pairs for WMT En-Fr. 
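As a side note, the joint subword tokenization above can be reproduced along these lines (the file names and training sentence are illustrative; the options shown are a minimal subset of SentencePiece's):

```python
import sentencepiece as spm

# Train a joint 32K-subword model on the concatenated source+target text.
spm.SentencePieceTrainer.Train(
    "--input=train.en-de.txt --model_prefix=joint32k --vocab_size=32000")

sp = spm.SentencePieceProcessor()
sp.Load("joint32k.model")
print(sp.EncodeAsPieces("Machine translation needs data."))
```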
For the target monolingual data, we collect 250M sentences in German and 61M sentences in French, both from the WMT news datasets between 2007 and 2017. After de-duplication, we filter out the sentences that have more than 200 subwords, resulting in 220M German sentences and 60M French sentences.
Multilingual. The multilingual setting uses the multilingual TED talk dataset (Qi et al., 2018), which contains parallel data from 58 languages to English. We focus on translating 4 low-resource languages to English: Azerbaijani (az), Belarusian (be), Galician (gl), and Slovak (sk). Each low-resource language is paired with a corresponding related high-resource language: Turkish (tr), Russian (ru), Portuguese (pt), and Czech (cs). We follow the setting from prior work (Neubig & Hu, 2018; Wang et al., 2019a) and use SentencePiece with a separate vocabulary of 8K for each language." }, { "heading": "5.2 BASELINES", "text": "Our first baseline is No BT, where we train all systems using parallel data only. For the standard setting, we simply train the NMT model on the WMT parallel data. For the multilingual setting, we train the model on the concatenation of the parallel training data from both the low-resource language and the high-resource language. The No BT baseline helps to verify the correctness of our model implementations. For the BT baselines, we consider two strong candidates:
• MLE: we sample the pseudo source sentences from a fixed backward model trained with MLE. This baseline is the same as sampling-based BT (Edunov et al., 2018). We choose sampling instead of beam search and beam search with noise, as Edunov et al. (2018) found sampling to be stronger than beam search and on par with noisy beam search. Our data usage, as specified in § 5.1, is also the same as Edunov et al. (2018) on WMT. We call this baseline MLE to signify the fact that the backward model is trained with MLE and then kept fixed throughout the course of the forward model's learning.
• DualNMT (Xia et al., 2016): this baseline further improves the quality of the backward model using reinforcement learning, with a reward that combines the language model score and the reconstruction score from the forward model.
Note that for the multilingual setting, we use top-10 sampling, which we find performs better than sampling from the whole vocabulary in our preliminary experiments (a minimal top-k sampling sketch follows).
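A minimal sketch of such top-k sampling (the tensor interface is illustrative): the next-token distribution is renormalized over only the k most likely candidates before sampling.

```python
import torch

def top_k_sample(logits, k=10):
    """Renormalize over the k most likely tokens and sample from them only."""
    values, indices = torch.topk(logits, k)
    choice = torch.distributions.Categorical(logits=values).sample()
    return int(indices[choice])

# e.g. next_token = top_k_sample(backward_model_logits, k=10)
```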
" }, { "heading": "5.3 IMPLEMENTATION", "text": "We use the Transformer-Base architecture (Vaswani et al., 2017) for all forward and backward NMT models in our experiments. All hyper-parameters can be found in § A.2. We choose Transformer-Base instead of Transformer-Large because MetaBT requires storing in memory both the forward model and the backward model, as well as the two-step gradients for meta-learning, which together exceed our 16GB of accelerator memory when we try running Transformer-Large. We further discuss this in § 7.
For the standard setup, we pre-train the backward model on the WMT parallel corpora. In the meta-learning phases described in § 3, we initialize the parameters ψ0 using this pre-trained checkpoint. From this checkpoint, at each training step, our forward model is updated using two sources of data: (1) a batch from the parallel training data, and (2) a batch of sentences from the monolingual data, whose source sentences are sampled by the backward model.
For the multilingual setup, we pre-train the backward model on the reverse direction of the parallel data from both the low-resource and the high-resource languages. From this checkpoint, at each meta-learning step, the forward model receives two sources of data: (1) a batch of the parallel data from the low-resource language; and (2) a batch of the target English data from the high-resource language, which is fed into the BT model to sample the pseudo-source data." }, { "heading": "5.4 RESULTS", "text": "We report the BLEU scores (Papineni et al., 2002) for all models and settings in Tab. 1. From the table, we observe that the most consistent baseline is MLE, which significantly improves over the No BT baseline. Meanwhile, DualNMT's performance is much weaker, losing to MLE on all tasks except for az-en, where its margin is only +0.19 BLEU over No BT. For WMT En-Fr, we even observe that DualNMT often results in numerical instability before reaching a BLEU score of 34, and thus we do not report the result. By comparing the baselines' performance, we can see that continuing to train the backward model so that it outperforms MLE is a challenging task.
Despite this challenge, MetaBT consistently outperforms all baselines in both settings. In particular, compared to the best baselines, MetaBT's gain is up to +1.20 BLEU in the low-resource multilingual setting, and is +1.66 BLEU for WMT En-De 14. We remark that WMT En-De 14 is a relatively classical benchmark for NMT and that a gain of +1.66 BLEU on this benchmark is very significant. While the gain of MetaBT over MLE on WMT En-Fr is somewhat smaller (+0.51 BLEU), our statistical test shows that the gain is still significant. Therefore, our experimental results confirm the theoretical advantages of MetaBT. Next, in § 5.5, we investigate the behaviors of MetaBT to further understand how the method controls the generating process of the pseudo-parallel data.
5.5 ANALYSIS
MetaBT Flexibly Avoids both Overfitting and Underfitting. We demonstrate two contrasting behaviors of MetaBT in Fig. 2 and Fig. 3. In Fig. 2, MetaBT generates pseudo-parallel data for the forward model to learn in WMT En-Fr. Since WMT En-Fr is large (40.8 million parallel sentences), the Transformer-Base forward model underfits. By "observing" the forward model's underfitting, perhaps via a low meta validation performance, the backward model generates pseudo-parallel data to which the forward model assigns high probability, hence reducing the learning difficulty for the forward model. In contrast, Fig. 3 shows that for WMT En-De, the pseudo-parallel data generated by the backward model leads to a higher training loss for the forward model. Since WMT En-De has only 4.5 million parallel sentences, which is about 10x smaller than WMT En-Fr, we suspect that MetaBT generates harder pseudo-parallel data to keep the forward model from overfitting. In both cases, we have no control over the behaviors of MetaBT, and hence we suspect that MetaBT can appropriately adjust its behavior depending on the forward model's learning state.
MetaBT Samples Pseudo-Parallel Data Closer to the Meta Validation Set. 
After showing that MetaBT can affect the forward model's training in opposite ways, we now show that MetaBT actually tries to generate pseudo-parallel data that are close to the meta validation data. Note that this is the expected behavior of MetaBT, since the ultimate objective is for the forward model to perform well on this meta validation set. We focus on the multilingual setting because this setting highlights the vast difference between the parallel data and the meta validation data. In particular, recall from § 4 that in order to translate a low-resource language S into language T, we use extra data from a language S' which is related to S but has abundant parallel data S'-T. Meanwhile, the meta validation set only consists of parallel sentences in S-T.
In Fig. 4, we group the sentences sampled throughout the forward model's training into 10 bins based on the training steps at which they are generated, and plot, for each bin, the percentage of words in the pseudo source sentences that are from the vocabulary of S. As seen from the figure, MetaBT keeps increasing this vocabulary coverage throughout training, indicating that it favors sentences that are more similar to the meta validation data, which are from the low-resource language S.
(Figure: histogram of len(output) − len(reference), binned from below −20 to 21 and above, with counts on the y-axis, comparing the MLE and MetaBT systems.)" }, { "heading": "6 RELATED WORK", "text": "Our work is related to methods that leverage monolingual data either on the source side (He et al., 2020) or on the target side (Sennrich et al., 2016; Edunov et al., 2018) to improve the final translation quality. Going beyond vanilla BT, IterativeBT (Hoang et al., 2018) trains multiple rounds of backward and forward models and observes further improvement. While MetaBT cannot improve the backward model's quality to the same extent, MetaBT is also much cheaper than the multiple training rounds of IterativeBT. DualNMT (Xia et al., 2016) jointly optimizes the backward model with the forward model, but relies on indirect indicators, leading to weak performance as we showed in § 5.4.
As MetaBT essentially learns to generate pseudo-parallel data for effective training, MetaBT is a natural extension of many methods that learn to re-weight or to select extra data for training. For example, Soto et al. (2020) and Dou et al. (2020) select back-translated data from different systems using heuristics, while Wang & Neubig (2019); Lin et al. (2019); Wang et al. (2019b; 2020) select the multilingual data that is most helpful for a forward model. We find the relationship between MetaBT and these methods analogous to the relationship between sampling from a distribution and computing the distribution's density.
The meta-learning technique in our method has also been applied to other tasks, such as learning initialization points (Finn et al., 2017; Gu et al., 2018), designing architectures (Liu et al., 2019), generating synthetic input images (Such et al., 2019), and pseudo labeling (Pham et al., 2020)." }, { "heading": "7 LIMITATION, FUTURE WORK, AND CONCLUSION", "text": "We propose Meta Back-Translation (MetaBT), an algorithm that learns to adjust a back-translation model to generate data that are most effective for the training of the forward model. 
Our experiments show that MetaBT outperforms strong existing methods in both a standard NMT setting and a multilingual setting.
As discussed in § 5.3, the large memory footprint is a current weakness that makes it impossible to apply MetaBT to larger models. However, the resulting Transformer-Base model trained by MetaBT still outperforms Transformer-Large models trained in the standard settings. Since the smaller Transformer-Base model is cheaper to deploy, MetaBT still has its value. In the future, we expect this memory limitation to be lifted, e.g. when better technology, such as automated model parallelism (Lepikhin et al., 2020) or more powerful accelerators, becomes available. When that happens, MetaBT will better realize its potential." } ]
2020
META BACK-TRANSLATION
SP:ea8234f4533090e0cfe197ddb70f375f3ed49418
[ "The paper proposes an efficient framework to search for the optimal initial learning rate to train neural networks. The key idea is to introduce Knowledge Gain, a metric derived from the singular values of each layer, to indicate the convergency quality of training. Taking advantage of the metric, a logarithmic grid search algorithm (AutoHyper) is proposed to search for the optimal learning rate according to Eq 5 via short-time training (e.g. for 5 epochs), which is demonstrated to be very efficient and take effect to some extent. " ]
Hyper-parameter optimization (HPO) is critical to training high-performing Deep Neural Networks (DNN). Current methodologies fail to define an analytical response surface (Bergstra & Bengio, 2012) and remain a training bottleneck due to their use of additional internal hyper-parameters and lengthy manual evaluation cycles. We demonstrate that the low-rank factorization of the convolution weights of intermediate layers of a CNN can define an analytical response surface. We quantify how this surface acts as an auxiliary to optimizing training metrics. We introduce a fully autonomous dynamic tracking algorithm – autoHyper – that performs HPO on the order of hours for various datasets including ImageNet and requires no manual intervention or a priori knowledge. Our method – using a single RTX2080Ti – is able to select a learning rate within 59 hours for AdaM (Kingma & Ba, 2014) on ResNet34 applied to ImageNet and improves top-1 test accuracy by 4.93% over the default learning rate. In contrast to previous methods, we empirically prove that our algorithm and response surface generalize well across model, optimizer, and dataset selection, removing the need for extensive domain knowledge to achieve high levels of performance.
[]
[ { "authors": [ "Takuya Akiba", "Shotaro Sano", "Toshihiko Yanase", "Takeru Ohta", "Masanori Koyama" ], "title": "Optuna: A next-generation hyperparameter optimization framework", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "The Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "James Bergstra", "Dan Yamins", "David D Cox" ], "title": "Hyperopt: A python library for optimizing the hyperparameters of machine learning algorithms", "venue": "In Proceedings of the 12th Python in science conference,", "year": 2013 }, { "authors": [ "Dami Choi", "Christopher J. Shallue", "Zachary Nado", "Jaehoon Lee", "Chris J. Maddison", "George E. Dahl" ], "title": "On empirical comparisons of optimizers for deep learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Gonzalo I Diaz", "Achille Fokoue-Nkoutche", "Giacomo Nannicini", "Horst Samulowitz" ], "title": "An effective algorithm for hyperparameter optimization of neural networks", "venue": "IBM Journal of Research and Development,", "year": 2017 }, { "authors": [ "Tobias Domhan", "Jost Tobias Springenberg", "Frank Hutter" ], "title": "Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves", "venue": "In Twenty-Fourth International Joint Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of machine learning research,", "year": 2011 }, { "authors": [ "Katharina Eggensperger", "Matthias Feurer", "Frank Hutter", "James Bergstra", "Jasper Snoek", "Holger Hoos", "Kevin Leyton-Brown" ], "title": "Towards an empirical foundation for assessing bayesian optimization of hyperparameters", "venue": "In NIPS workshop on Bayesian Optimization in Theory and Practice,", "year": 2013 }, { "authors": [ "Stefan Falkner", "Aaron Klein", "Frank Hutter" ], "title": "Bohb: Robust and efficient hyperparameter optimization at scale, 2018", "venue": null, "year": 2018 }, { "authors": [ "Matthias Feurer", "Aaron Klein", "Katharina Eggensperger", "Jost Springenberg", "Manuel Blum", "Frank Hutter" ], "title": "Efficient and robust automated machine learning", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Matthias Feurer", "Jost Tobias Springenberg", "Frank Hutter" ], "title": "Initializing bayesian hyperparameter optimization via meta-learning", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Ian J. Goodfellow", "Yoshua Bengio", "Aaron Courville" ], "title": "Deep Learning", "venue": null, "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2015 }, { "authors": [ "Xin He", "Kaiyong Zhao", "Xiaowen Chu" ], "title": "Automl: A survey of the state-of-the-art", "venue": "arXiv preprint arXiv:1908.00709,", "year": 2019 }, { "authors": [ "Mahdi S. Hosseini", "Konstantinos N. 
Plataniotis" ], "title": "Adas: Adaptive scheduling of stochastic gradients, 2020", "venue": null, "year": 2020 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Max Jaderberg", "Valentin Dalibard", "Simon Osindero", "Wojciech M Czarnecki", "Jeff Donahue", "Ali Razavi", "Oriol Vinyals", "Tim Green", "Iain Dunning", "Karen Simonyan" ], "title": "Population based training of neural networks", "venue": "arXiv preprint arXiv:1711.09846,", "year": 2017 }, { "authors": [ "Stanislaw Jastrzebski", "Maciej Szymczak", "Stanislav Fort", "Devansh Arpit", "Jacek Tabor", "Kyunghyun Cho", "Krzysztof Geras" ], "title": "The break-even point on optimization trajectories of deep neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Kirthevasan Kandasamy", "Gautam Dasarathy", "Junier B Oliva", "Jeff Schneider", "Barnabás Póczos" ], "title": "Gaussian process bandit optimisation with multi-fidelity evaluations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Zohar Karnin", "Tomer Koren", "Oren Somekh" ], "title": "Almost optimal exploration in multi-armed bandits", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2014 }, { "authors": [ "Aaron Klein", "Stefan Falkner", "Simon Bartels", "Philipp Hennig", "Frank Hutter" ], "title": "Fast bayesian optimization of machine learning hyperparameters on large datasets", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Lars Kotthoff", "Chris Thornton", "Holger H Hoos", "Frank Hutter", "Kevin Leyton-Brown" ], "title": "Autoweka 2.0: Automatic model selection and hyperparameter optimization in weka", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Liam Li", "Kevin Jamieson", "Afshin Rostamizadeh", "Ekaterina Gonina", "Moritz Hardt", "Benjamin Recht", "Ameet Talwalkar" ], "title": "Massively parallel hyperparameter tuning", "venue": "arXiv preprint arXiv:1810.05934,", "year": 2018 }, { "authors": [ "Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Yuanzhi Li", "Colin Wei", "Tengyu Ma" ], "title": "Towards explaining the regularization effect of initial large learning rate in training neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Gang Luo" ], "title": "A review of automatic selection methods for machine learning algorithms and hyperparameter values", "venue": "Network Modeling Analysis in Health Informatics and Bioinformatics,", "year": 2016 }, { "authors": [ "Liangchen Luo", "Yuanhao Xiong", "Yan Liu", "Xu Sun" ], "title": "Adaptive gradient methods with dynamic bound of learning rate, 2019", "venue": null, "year": 2019 }, { "authors": [ "Dougal Maclaurin", "David Duvenaud", "Ryan Adams" ], "title": "Gradient-based hyperparameter optimization through reversible learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Shinichi Nakajima", "Masashi Sugiyama", "S Derin Babacan", "Ryota Tomioka" ], "title": "Global analytic solution of fully-observed variational bayesian matrix factorization", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Roman Novak", "Yasaman Bahri", "Daniel A. 
Abolafia", "Jeffrey Pennington", "Jascha Sohl-Dickstein" ], "title": "Sensitivity and generalization in neural networks: an empirical study", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Prabhu Teja Sivaprasad", "Florian Mai", "Thijs Vogels", "Martin Jaggi", "François Fleuret" ], "title": "Optimizer benchmarking needs to account for hyperparameter tuning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Jasper Snoek", "Hugo Larochelle", "Ryan P Adams" ], "title": "Practical bayesian optimization of machine learning algorithms", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Jasper Snoek", "Oren Rippel", "Kevin Swersky", "Ryan Kiros", "Nadathur Satish", "Narayanan Sundaram", "Mostofa Patwary", "Mr Prabhat", "Ryan Adams" ], "title": "Scalable bayesian optimization using deep neural networks", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Jost Tobias Springenberg", "Aaron Klein", "Stefan Falkner", "Frank Hutter" ], "title": "Bayesian optimization with robust bayesian neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Kevin Swersky", "Jasper Snoek", "Ryan P Adams" ], "title": "Multi-task bayesian optimization. In Advances in neural information processing", "venue": null, "year": 2004 }, { "authors": [ "Kevin Swersky", "Jasper Snoek", "Ryan Prescott Adams" ], "title": "Freeze-thaw bayesian optimization", "venue": "arXiv preprint arXiv:1406.3896,", "year": 2014 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning", "venue": null, "year": 2012 }, { "authors": [ "Sharan Vaswani", "Aaron Mishkin", "Issam H. Laradji", "Mark Schmidt", "Gauthier Gidel", "Simon Lacoste-Julien" ], "title": "Painless stochastic gradient: Interpolation, line-search, and convergence rates", "venue": "CoRR, abs/1905.09997,", "year": 2019 }, { "authors": [ "Ashia C. Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nathan Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning, 2017", "venue": null, "year": 2017 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks, 2016", "venue": null, "year": 2016 }, { "authors": [ "Steven R Young", "Derek C Rose", "Thomas P Karnowski", "Seung-Hwan Lim", "Robert M Patton" ], "title": "Optimizing deep learning hyper-parameters through an evolutionary algorithm", "venue": "In Proceedings of the Workshop on Machine Learning in High-Performance Computing Environments,", "year": 2015 }, { "authors": [ "Tong Yu", "Hong Zhu" ], "title": "Hyper-parameter optimization: A review of algorithms and applications", "venue": "arXiv preprint arXiv:2003.05689,", "year": 2020 }, { "authors": [ "Xiang Zhang", "Xiaocong Chen", "Lina Yao", "Chang Ge", "Manqing Dong" ], "title": "Deep neural network hyperparameter optimization with orthogonal array tuning", "venue": "In International Conference on Neural Information Processing,", "year": 2019 }, { "authors": [ "Wilson" ], "title": "2017) for AdaM, RMSProp, and AdaGrad", "venue": null, "year": 2017 }, { "authors": [ "Wilson" ], "title": "We additionally contribute that AdaS does generalize well. 
We also highlight SLS’ multi-order-of-magnitude tolerance to initial learning rate as well as the stability of the AdaS optimizer, particularly when applied on TinyImageNet", "venue": "Jastrzebski et al", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "The choice of Hyper-Parameters (HP) – such as initial learning rate, batch size, and weight decay – has shown to greatly impact the generalization performance of Deep Neural Network (DNN) training (Keskar et al., 2017; Wilson et al., 2017; Li et al., 2019; Yu & Zhu, 2020). By increasing the complexity of network architectures (from high to low parameterized models) and training datasets (class number and samples), the manual intervention to tune these parameters for optimization becomes a practically expensive and highly challenging task. Therefore, the problem of Hyper-Parameter Optimization (HPO) becomes central to developing highly efficient training workflows.\nRecent studies shift the gear toward development of a meaningful metric measure to explain effective HP tuning for DNN training. This is done in several behavioural studies, including changes in loss surfaces (Keskar et al., 2017), input perturbation analysis (Novak et al., 2018), and the energy norm of the covariance of gradients (Jastrzebski et al., 2020), just to name a few. In fact, the abstract formulation of the HPO problem, as highlighted by Bergstra & Bengio (2012), can be modelled by\nλ∗ ← arg min λ∈Λ {Ex∼M [L(x;Aλ(X(train))]}, (1)\nwhere, X(train) and x are random variables, modelled by some natural distributionM , that represent the train and validation data, respectively, L(·) is some expected loss, and Aλ(X(train)) is a learning algorithm that maps X(train) to some learned function, conditioned on the hyper-parameter set λ. Note that this learned function, denoted as f(θ;λ;X(train)), involves its own inner optimization problem. The HPO in (1) highlights two optimization problems of which optimization over λ cannot occur until optimization over f(θ;λ;X(train)) is complete. This fact applies heavy computational burden for HPO. Bergstra & Bengio (2012) reduce this burden by attempting to solve the following\nλ∗ ← arg min λ∈Λ τ(λ), (2)\nwhere τ is called the hyper-parameter response function or response surface, and Λ is some set of choices for λ (i.e. the search space). The goal of the response surface is to introduce an auxiliary\n1\nUnder review as a conference paper at ICLR 2021\nfunction parameterized by λ of which its minimization is directly correlated to minimization of the objective function f(θ). Little advancements in an analytical model of the response surface has led to estimating it by (a) running multiple trials of different HP configurations (e.g. grid searching), using evaluation against validation sets as an estimate to τ ; or (b) characterizing the distribution model of a configuration’s performance metric (e.g. cross-validation performances) to numerically define a relationship between τ and λ.\nAn important shift occurred when Bergstra & Bengio (2012) showed that random searching is more efficient to grid searching, particularly when optimizing high-dimensional HP sets. To mitigate the time complexity and increase overall performance, subsequent methods attempted to characterize the distribution model for such random configurations (Snoek et al., 2012; Eggensperger et al., 2013; Feurer et al., 2015a;b; Klein et al., 2017; Falkner et al., 2018) or employed population control (Young et al., 2015; Jaderberg et al., 2017) or early-stopping (Karnin et al., 2013; Li et al., 2017; 2018). 
However, these methods suffer from (a) additional internal HPs that require manual tuning facilitated by extensive domain knowledge; (b) heavy computational overhead, whereby the optimization process takes days to weeks in most cases (Li et al., 2017; Falkner et al., 2018; Yu & Zhu, 2020); (c) poor generalization across model selection, datasets, and general experimental configurations (e.g. optimizers); and (d) strong dependence on manually defined search ranges that heavily influence results (Choi et al., 2020; Sivaprasad et al., 2020). Importantly, these ranges are generally chosen based on intuition, expert domain knowledge, or some form of a priori knowledge.
In this paper, we employ the notion of knowledge gain (Hosseini & Plataniotis, 2020) to model a response surface – solvable with low computational overhead – and use it to perform automatic HPO that does not require any a priori knowledge while still achieving competitive performance against baselines and existing state-of-the-art (SOTA) methods. Our goal is therefore to develop an algorithm that is fully autonomous and domain independent and can achieve competitive performance (not necessarily superior performance). We restrict our response surface to consider a single HP, namely the initial learning rate η, and support this choice by noting that the initial learning rate is the most sensitive and important HP for final model performance (Goodfellow et al., 2016; Bergstra & Bengio, 2012; Yu & Zhu, 2020) (see also Figure 10 in Appendix C). We demonstrate how our method's optimization directly correlates with optimizing model performance. Finally, we provide empirical measures of the computational requirements of our algorithm and present thorough experiments on a diverse set of Convolutional Neural Networks (CNNs) and Computer Vision datasets that demonstrate the generalization of our response surface.
The main contributions of this work are as follows:
1. Inspired by knowledge gain, we introduce a well-defined, analytical response surface using the low-rank factorization of convolution weights (Equation 5).
2. We propose a dynamic tracking algorithm with low computational overhead, on the order of minutes and hours, dubbed autoHyper, to optimize our response surface and conduct HPO.
3. This algorithm requires no domain knowledge, human intuition, or manual intervention, and is not bound by a manually set search space, allowing for completely automatic setting of the initial learning rate; a novelty for deep learning practitioners." }, { "heading": "1.1 RELATED WORKS", "text": "We leave extensive analysis of the related works to established surveys (Luo, 2016; He et al., 2019; Yu & Zhu, 2020) but present a general overview here. Grid search and manual tuning techniques that require extensive domain knowledge try various configurations and retain the best. Random search (Bergstra & Bengio, 2012) was proven to be more efficient, particularly in high-dimensional cases, but these methods suffer from redundancy and high computational overhead. Bayesian optimization techniques (Snoek et al., 2012; Eggensperger et al., 2013; Feurer et al., 2015a;b; Klein et al., 2017) attempt to characterize the distribution model of the random HP configurations. They fail to properly define the response surface τ and resort to estimating it by rationalizing a Gaussian process over the sampled points. 
The use of neural networks instead of Gaussian processes to model the generalization performance was shown to have better computational performance (Snoek et al., 2015; Springenberg et al., 2016). Furthermore, the early stopping methods (Karnin et al., 2013; Li et al., 2017; 2018) spawn various configurations with equal resource distributions, successively stopping poor-performing configurations and reassigning resources dynamically. Population-based training (PBT) methods (Young et al., 2015; Jaderberg et al., 2017) follow an evolutionary approach by spawning various experimental configurations and adapting poor-performing trials to warm restart with inherited learnable parameters and HPs. In addition, other methods such as orthogonal array tuning (Zhang et al., 2019), box-constrained derivative-free optimization (Diaz et al., 2017), the reverse dynamics algorithm for SGD optimization (Maclaurin et al., 2015), and hybrid methods (Swersky et al., 2013; 2014; Domhan et al., 2015; Falkner et al., 2018; Kandasamy et al., 2016) exist but demonstrate no significant benefits over the previous techniques. Generally, each of these methods suffers from high computational overheads – on the order of days to weeks to converge – as well as additional internal HPs that heavily influence performance and generalization. In recent years, many Python libraries have also been developed that include these optimization methods (Bergstra et al., 2013; Kotthoff et al., 2017; Akiba et al., 2019)." }, { "heading": "2 A NEW RESPONSE SURFACE MODEL", "text": "In this section, we motivate and develop a new response surface model τ(λ) based on the low-rank factorization of convolutional weights in a CNN. Unlike the common approach of cross-validation performance measures, we define a new measure of the well-posedness of the intermediate layers of a CNN and relate this measure to the general performance of the network. We start by adopting the low-rank measure of convolution weights." }, { "heading": "2.1 KNOWLEDGE GAIN VIA LOW-RANK FACTORIZATION", "text": "Consider a four-way array (4-D tensor) W ∈ R^{N1×N2×N3×N4} as the convolution weights of an intermediate layer of a CNN (N1 and N2 being the height and width of the kernel, and N3 and N4 the input and output channel sizes, respectively). Under the convolution operation, the input feature maps F^I ∈ R^{W×H×N3} are mapped to an arbitrary output feature map F^O ∈ R^{W×H×N4} by
F^O_{:,:,i4} = Σ_{i3=1}^{N3} F^I_{:,:,i3} ∗ W_{:,:,i3,i4}.
The weights are processed as
W (4-D tensor) --unfold--> W_d (2-D matrix) --factorize, then decompose--> Ŵ_d + E_d, where Ŵ_d = Û_d Σ̂_d V̂_d^T.
We note the importance of factorizing the unfolded matrix W_d using a low-rank factorization (we use the Variational Bayesian Matrix Factorization (VBMF) (Nakajima et al., 2013)). Without this factorization, the presence of noise will inhibit proper analysis. This noise E_d will "capture" the randomness of initialization, and ignoring it will allow us to better analyze our unfolded matrices and make our response surface robust to the initialization method.
Following the definition of Knowledge Gain (KG) from Hosseini & Plataniotis (2020), one can now define a metric for each network layer using the norm energy of the low-rank factorization as
G_d(Ŵ_d) = (1 / (N_d · σ_1(Ŵ_d))) Σ_{i=1}^{N'_d} σ_i(Ŵ_d),  (3)
where σ_1 ≥ σ_2 ≥ . . . ≥ σ_{N_d} are the associated low-rank singular values in descending order. Here N_d = rank{Ŵ_d}, and the unfolding can be done in either the input or output channels, i.e. d ∈ {3, 4}.
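To make Eq. (3) concrete, the following is a minimal sketch of a per-layer KG computation. Note that a plain SVD with a hard singular-value cutoff is used here as a stand-in for the VBMF factorization of the paper, and the function name and cutoff value are illustrative assumptions:

```python
import torch

def knowledge_gain(weight: torch.Tensor, d: int = 4, rank_tol: float = 1e-2) -> float:
    """Sketch of the KG metric in Eq. (3) for one conv layer.

    weight: 4-D conv tensor of shape (N1, N2, N3, N4), following the
    paper's layout. d selects the unfolding mode: 3 (input channels)
    or 4 (output channels).
    """
    # Unfold: rows indexed by the chosen channel mode, columns by the rest.
    Wd = weight.movedim(d - 1, 0).reshape(weight.shape[d - 1], -1)
    s = torch.linalg.svdvals(Wd)              # singular values, descending
    s_hat = s[s > rank_tol * s[0]]            # crude low-rank part (VBMF surrogate)
    if len(s_hat) == 0:
        return 0.0
    Nd = len(s_hat)                           # rank of the retained factorization
    return float(s_hat.sum() / (Nd * s_hat[0]))  # normalized energy in [0, 1]
```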
For more information on KG as well as its efficient algorithmic computation, we refer the reader to Hosseini & Plataniotis (2020).
The metric defined in (3) is normalized such that G_d ∈ [0, 1] and can be used to probe CNN layers to monitor their efficiency in the carriage of information from input to output feature maps. We can further parameterize the KG by the HP set λ, epoch t, and network layer ℓ as Ḡ_{d,t,ℓ}(λ). A perfect network and set of HPs would yield Ḡ_{d,T,ℓ}(λ) = 1 ∀ℓ ∈ [L], where L is the number of layers in the network and T is the last epoch. In this case, each network layer functions as a better autoencoder through iterative training, and the carriage of information throughout the network is maximized. Conversely, Ḡ_{d,T,ℓ}(λ) = 0 indicates that the information flow is so weak that the mapping is effectively random (‖E_d‖ is maximized)." }, { "heading": "2.2 DEFINITION OF NEW RESPONSE FUNCTION", "text": "Interestingly, if Ḡ_{d,t,ℓ}(λ) = 0 in early stages of training, it is evidence that no learning has occurred, indicative of an initial learning rate that is too small (no progress has been made to reduce the randomization). It then becomes useful to track the zero-valued KGs within a network's intermediate layers' input and output channels, which effectively becomes a measure of channel rank. We denote this rank per epoch as follows:
Z_t(λ) ← (1 / 2L) Σ_{ℓ∈[L]} Σ_{d∈{3,4}} ⌊1 − Ḡ_{d,t,ℓ}(λ)⌋,
where Z_t(λ) ∈ [0, 1). Finally, we define the average rank across T epochs as
Z(λ) ← (1 / T) Σ_{t∈[T]} Z_t(λ).  (4)
Note that Z(λ) ∈ [0, 1). The average rank measure in (4) is therefore a normalized count of the zero-valued KGs of the low-rank factorization across all layers' input and output unfolded tensor arrays.
Relating to the notion of HPO and the response surface, we return to (1) and (2). Where previously the nature of these two optimization problems was poorly understood or practically unsolvable, we propose a new problem that is well understood and practically solvable (on the computational order of hours). To solve for the optimal HP set λ, we look at the following optimization problem
λ∗ ← arg min_λ 1 − Z(λ), subject to ‖∇_λ Z(λ)‖²₂ ≤ ε,  (5)
where ε ∈ [0, 1) is some small conditioning error. Returning to Equation 2, our response surface is therefore defined as τ = 1 − Z(λ), subject to ‖∇_λ Z(λ)‖²₂ ≤ ε. Note that we now simplify our problem to only consider λ = η. Also, we do not explicitly calculate the gradient ∇_λ Z(λ), but rather use this constraint to guide our dynamic tracking algorithm (see Section 3). To explain this somewhat counterintuitive formulation, we analyze Figures 1(a) & 2, which demonstrate that as learning rates increase, Z(η) plateaus to zero. Specifically, we notice that optimal learning rates lie towards the inception of the plateau of Z(η), before Z(η) = 0. This can also be seen in Figures 7 & 8 in Appendix A. Therefore, we wish to design our response surface such that the solution lies at the inception of the plateauing region (see the red dot in Figure 1(a)). In this case, our constraint ‖∇_λ Z(λ)‖²₂ ≤ ε promotes learning rates that lie along this plateauing region, while arg min_λ 1 − Z(λ) promotes, of those learning rates in the plateauing region, a learning rate that lies towards the inception of this region.
More generally, high Z(η) indicates a learning rate that is too small, as intermediate layers do not make sufficient progress in early stages of learning and therefore their KGs remain very low.
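For concreteness, a minimal sketch of the average-rank measure in Eq. (4), assuming per-epoch lists of KG values (e.g., from the knowledge_gain() sketch above); the helper name and input layout are illustrative:

```python
import math

def average_rank(kg_history: list) -> float:
    """Sketch of Z(λ) from Eq. (4), averaged over the first T epochs.

    kg_history[t] holds, for epoch t, the KG values Ḡ_{d,t,ℓ} for every
    layer ℓ and both unfoldings d ∈ {3, 4} (a list of length 2L).
    """
    z_per_epoch = []
    for kgs in kg_history:
        # floor(1 - G) is 1 iff G == 0, i.e., the unfolding carries no knowledge.
        zeros = sum(math.floor(1.0 - g) for g in kgs)
        z_per_epoch.append(zeros / len(kgs))     # Z_t(λ), with len(kgs) = 2L
    return sum(z_per_epoch) / len(z_per_epoch)   # Z(λ)
```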
The observation that high Z(η) flags an overly small learning rate follows Li et al. (2019), in which larger initial learning rates result in better generalization performance. Promotion of these larger learning rates is therefore achieved by our gradient constraint in Equation 5. Conversely, too large a learning rate can over-regulate a network and, therefore, we wish not to minimize Z(η) completely but to tune it to be sufficiently small to arrive at the inception of its plateau, creating a sort of kick-start. This is achieved by the balance between our minimization and constraint in Equation 5.
Finally, we choose T = 5 in our experiments. The early phase of training has been shown to be an important criterion for optimal model performance (Jastrzebski et al., 2020; Li et al., 2019) and we therefore wish to only consider 5 epochs to ensure optimization within this phase. Additionally, Figure 2 and Figures 7 & 8 in Appendix A tell us that Z(η) stabilizes after 5 epochs." }, { "heading": "2.3 EMPIRICAL EVALUATION OF NEW RESPONSE MODEL", "text": "Figure 1(a) visualizes results of our method on ResNet34 on CIFAR10 optimized with AdaM. The learning rate selected by our method results in the lowest training loss and highest top-1 training accuracy over the 5-epoch range we consider. We note the importance of stopping at the inception of this plateau region: even though higher learning rates, highlighted by the gray-scale lines/markers, result in potentially lower Z(η), they do not guarantee lower training losses or higher training accuracies. We conclude that our response surface is a strong auxiliary to training loss, and optimizing HPs relative to our response surface will in fact optimize towards training loss and accuracy.
Figure 1(b) displays the histogram of Z(η) values over various experimental configurations (see subsection 4.1). Note the presence of a multimodal distribution that peaks at low Z(η) but, importantly, not zero. This visualizes our method's tendency to converge to a consistent range of values for Z(η) irrespective of experimental configuration, showing the generalization of our response surface." }, { "heading": "3 AUTOHYPER: AUTOMATIC TUNING OF INITIAL LEARNING RATE", "text": "The pseudo-code for autoHyper is presented in Algorithm 1. Analyzing Equation 5, we state that the optimal solution lies at the inception of the plateauing region of Z(η). To find this region, autoHyper first initializes a logarithmic grid space, from ηmin = 1 × 10−4 to ηmax = 0.1 with S = 20 steps, denoted by Ω. It iterates through each ηi ∈ Ω, i ∈ {0, . . . , 19}, and computes Z(ηi), until a plateau is reached. Once a plateau is reached, Ω is reset such that ηmin and ηmax "zoom" towards the learning rates at the plateau. This process is repeated recursively until no significant difference between ηmin and ηmax exists. On average, this recursion occurs 3 to 4 times and, as shown in Figure 3, the number of trialled learning rates remains very low (between 10-30 on average). Importantly, our algorithm is not constrained by its initial grid space. As it tracks Z(η) over learning rates, it may grow and shrink its search bounds dynamically.
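Before the full pseudo-code, a simplified sketch of the grid-and-"zoom" search may help. Here rank_fn is an assumed callback that trains for 5 epochs at a given learning rate and returns Z(η) per Eq. (4); the growth factor and plateau tolerance are illustrative choices, not values from the paper:

```python
import numpy as np

def auto_hyper(rank_fn, lr_min=1e-4, lr_max=0.1, steps=20, alpha=5e-5, power=0.8):
    """Simplified sketch of autoHyper's grid-and-zoom search (cf. Algorithm 1)."""
    while lr_max - lr_min >= alpha:                  # stop when bounds coincide
        omega = np.geomspace(lr_min, lr_max, steps)  # logarithmic grid Ω
        ranks, zoomed = [], False
        for i, lr in enumerate(omega):
            z = rank_fn(lr)                          # Z(η_i) per Eq. (4)
            ranks.append(z)
            if z == 1.0:                             # all KGs zero: η_min too small
                lr_min, zoomed = min(lr * 2, lr_max / 2), True  # assumed growth rule
                break
            if z == 0.0 and i >= 2:                  # all KGs non-zero: zoom in
                lr_min, lr_max, zoomed = omega[i - 2], omega[i], True
                break
            # rate-of-change test on the cumulative product of Z raised to 0.8,
            # a monotonically decreasing sequence since Z ∈ [0, 1)
            delta = np.cumprod(ranks) ** power
            if i >= 1 and abs(delta[-1] - delta[-2]) < 1e-3:  # assumed tolerance
                lr_min, lr_max, zoomed = omega[i - 1], omega[i], True
                break
        if not zoomed:                               # no plateau found: grow the space
            lr_min, lr_max = lr_max, lr_max * 10
    return lr_max
```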
This dynamic tracking permits our method to be fully autonomous and to require no human intuition in setting the initial grid space.
Algorithm 1 autoHyper
Require: grid space function Ψ, learning rate significant-difference delta α = 5 × 10−5, and rate of change function ζ
1: procedure RESTART( )
2:   learning rate index i = 0
3:   Ω = Ψ(ηmin, ηmax, S)
4: end procedure
5: RESTART( )
6: while True do
7:   if i = |Ω| then // increase search space since no plateau has been found yet
8:     set ηmin = ηmax, increase ηmax and RESTART( )
9:   end if
10:  if ηmax − ηmin < α then // limits of search space are not significantly different
11:    return ηmax
12:  end if
13:  with ηi ← Ωi, train for 5 epochs
14:  compute rank: Z(ηi) per Equation 4
15:  if Z(ηi) = 1.0 then // all KGs are zero-valued, ηmin is too small
16:    increase ηmin and RESTART( )
17:  end if
18:  if i = 0 and Z(ηi) < 0.5 and this is the first run then // initial ηmin is too large
19:    reduce ηmin and RESTART( )
20:  else
21:    if Z(ηi) = 0.0 then // all KGs are non-zero, don't search further, perform "zoom"
22:      set ηmin = Ωi−2, ηmax = Ωi and RESTART( )
23:    end if
24:    compute rate of change of Z(ηi): δ ← ζ({Z(η0), . . . , Z(ηi)})
25:    if rate of change plateaus then // perform "zoom"
26:      set ηmin = Ωi−1, ηmax = Ωi and RESTART( )
27:    end if
28:  end if
29:  i += 1
30: end while
We note here that the choice of Ψ and ζ in Algorithm 1 (i.e. the grid space and rate of change functions, respectively) will have a significant effect on the final generated learning rate.
[Figure 4: rate-of-change behaviour for (a) ResNet34/AdaM and (b) EffNetB0/AdaS β = 0.8.]
Since Z(ηi) is not guaranteed to monotonically decrease, as shown in Figure 4(b), we employ the cumulative product of Z(ηi) as our rate of change function, which is a monotonically decreasing function – since Z(ηi) ∈ [0, 1) – and is therefore always guaranteed to converge. The cumulative product (to the power of 0.8) is a good choice because (a) it is always guaranteed to plateau (since 0 ≤ Z(ηi) < 1), which removes the need for some manually tuned threshold, and (b) it dampens noise well. Because the cumulative product on its own degrades to zero rather quickly in many scenarios, raising it to the power of 0.8 regulates this effect. This power is technically tune-able; however, we show empirically in Figures 4(a) and 4(b) that 0.8 behaves well for both stable and unstable architectures. Refer to Figure 9 in Appendix C for the performance results of EfficientNetB0." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we conduct an ablative study of our algorithm autoHyper and response surface on various network architectures trained using various optimizers and applied to image classification datasets. We also compare autoHyper against the existing SOTA: Random Search." }, { "heading": "4.1 EXPERIMENTAL SETUPS", "text": "Ablative study. All experiments are run using an RTX2080Ti, 3 cores of an Intel Xeon Gold 6246 processor, and 64 gigabytes of RAM. In our ablative study, we run experiments on CIFAR10 (Krizhevsky et al., 2009), CIFAR100 (Krizhevsky et al., 2009), TinyImageNet (Li et al.), and ImageNet (Russakovsky et al., 2015). On CIFAR10 and CIFAR100, we apply ResNet18 (He et al., 2015), ResNet34 (He et al., 2015), ResNeXt50 (Xie et al., 2016), and DenseNet121 (Huang et al., 2017). On TinyImageNet and ImageNet, we apply ResNet34.
For architectures applied to CIFAR10 and CIFAR100, we train using AdaM (Kingma & Ba, 2014), AdaBound (Luo et al., 2019), AdaGrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), AdaS (β = {0.8, 0.9, 0.95, 0.975}) (Hosseini & Plataniotis, 2020) (with early-stop), and SLS (Vaswani et al., 2019). For ResNet34 applied to TinyImageNet, we train using AdaM, AdaBound, AdaGrad, and AdaS (β = {0.8, 0.9, 0.95, 0.975}). For ResNet34 applied to ImageNet, we train using AdaM and AdaS (β = {0.9, 0.95, 0.975}). Note that the β in AdaS-variants is known as the gain factor and trades between performance and convergence rate: a low β converges faster but at the cost of performance, and vice-versa. For each experimental setup we ran one training sequence using suggested learning rates (baseline) and one training sequence using learning rates generated by autoHyper (see Tables 1-4 in Appendix B). Refer to Appendix B for additional details on the ablative study.
Comparison to Random Search. Because Random Search generally requires iterative manual refinement and is highly sensitive to the manually set search space (Choi et al., 2020; Sivaprasad et al., 2020), we attempt a fair comparison by providing the same initial search space that autoHyper starts with, and allowing for the same number of trials that autoHyper takes (see Figure 3). We note, however, that this does provide Random Search with a slight advantage, since a priori knowledge of how many trials to consider is not provided to autoHyper. See Appendix D for additional detail." }, { "heading": "4.2 RESULTS", "text": "Consistent performance across architectures, datasets, and optimizers. We visualize the primary results of each experiment in Figure 5(a) (additional results are shown in Figure 11 in Appendix C). From these figures we see how well our method generalizes across experimental configurations by noting the consistency in top-1 test accuracies when training using the autoHyper-generated initial learning rate vs. the baseline. Further, when there is a loss of performance using an initial learning rate generated by autoHyper, we identify that this loss is < 1% in all experiments except three: on CIFAR100, the baselines of ResNeXt50 trained using AdaM, ResNeXt50 trained using RMSProp, and DenseNet121 trained using AdaBound achieve 1.2%, 2.28% and 1.9% better top-1 test accuracy, respectively. We note, importantly, that when accounting for the standard deviation of each of these results, only the DenseNet121 experiment maintains its > 1% improvement. Refer to Appendix C (Tables 5-8) for the tabulated results of each experiment. We also highlight how autoHyper is able to generalize across experimental setups whereas Random Search cannot (see Figure 5(b)). Because Random Search (and other SOTA methods) depend heavily on their manually defined parameters, such as the epoch budget or initial search space, generalization across experimental setups is not feasible, as demonstrated here. In contrast, we have shown that autoHyper is perfectly capable of performing well no matter the experimental setting, without need for manual intervention/refinement of any kind; a novelty.
Fully autonomous discovery of optimal learning rates. Importantly, we highlight how our method is able to fully autonomously tune the initial learning rate and achieve very competitive performance.
Whereas traditional HPO methods (like Random Search) are extremely sensitive to the initialization of the search space, which would normally require extensive domain or a priori knowledge to set, our method is not: given a new dataset, model, and/or other hyper-parameter configurations, a practitioner could simply call our algorithm to automatically set a very competitive initial learning rate. If truly superior performance is required, one could perform more extensive HPO around the autoHyper-suggested learning rate, removing the need to perform iterative manual refinement.
Superior performance over existing SOTA. As visualized in Figure 13, although Random Search proves competitive for AdaBound and AdaM applied on CIFAR10 and CIFAR100, it cannot find a competitive learning rate for AdaS β = 0.9 or AdaGrad, and performs worse for AdaM applied on TinyImageNet. AdaGrad applied on TinyImageNet loses as much as 4% top-1 test accuracy. This highlights how autoHyper can automatically find more competitive learning rates than Random Search given the same computational budget, and with significantly less manual intervention. These results additionally highlight why validation loss (or accuracy) cannot be used as a substitute for our metric (see Figure 14 in subsection D.2 for additional discussion).
Drastic improvements in AdaM applied to TinyImageNet and ImageNet. ResNet34 trained using AdaM and applied to TinyImageNet and ImageNet achieves final improvements of 3.14% and 4.93% in top-1 test accuracy, respectively (see Table 5 in Appendix C). Such improvements come at a minimal cost using our method, requiring 13 trials (4 hours) and 16 trials (59 hours) for TinyImageNet and ImageNet, respectively (see Figure 3).
Extremely fast and consistent convergence rates. We visualize the convergence rates of our method in Figure 3. Importantly, we identify the consistency of required trials per optimizer across architecture and dataset selection, as well as the low convergence times. The longest convergence time for our method is on ResNet34 trained using AdaS β = 0.95 applied to ImageNet, which took 31 trials and a total of 114 hours. We note that our method exhibits less consistent results when optimizing using SLS, as SLS tends to result in high Z(η) over multiple epochs and different learning rates. Despite this, our method still converges and results in competitive performance.
Performance improvement over increased epoch budgets. In reference to Table 6 in Appendix C, we highlight that, early in training, only 12 of the 29 experimental configurations trained using the initial learning rate suggested by autoHyper outperform the baseline. However, as training progresses, by the end of the fixed epoch budget, 18 of the 29 experiments trained using the initial learning rate suggested by autoHyper outperform the baselines. Further, in many of the cases where baselines perform better, they remain within the standard deviation of trials, and are therefore not significantly better. These results are surprising, as our goal with this method was to achieve competitive results in tuning the initial learning rate; however, in more than half the cases, our method results in increased performance at a significantly smaller computational cost."
}, { "heading": "5 CONCLUSION", "text": "tify that autoHyper could be adapted to simultaneously optimize multiple HPs by tracking tangents across this surface towards the minimum, but leave this to future work.\n9\nUnder review as a conference paper at ICLR 2021" }, { "heading": "A RANK BEHAVIOUR OVER MULTIPLE EPOCHS", "text": "" }, { "heading": "B ADDITIONAL EXPERIMENTAL DETAILS FOR SUBSECTION 4.1", "text": "We note the additional configurations for our experimental setups.\nDatasets: For CIFAR10 and CIFAR100, we perform random cropping to 32 × 32 and random horizontal flipping on the training images and make no alterations to the test set. For TinyImageNet,\n13\nUnder review as a conference paper at ICLR 2021\nwe perform random resized cropping to 64 × 64 and random horizontal flipping on the training images and center crop resizing to 64× 64 on the test set. For ImageNet, we follow He et al. (2015) and perform random resized cropping to 224 × 244 and random horizontal flipping and 256 × 256 resizing with 224× 224 center cropping on the test set. Additional Configurations: Experiments on CIFAR10, CIFAR100, and TinyImageNet used minibatch sizes of 128 and ImageNet experiments used mini-batch sizes of 256. For weight decay, 5× 10−4 was used for AdaS-variants on CIFAR10 and CIFAR100 experiments and 1× 10−4 for all optimizers on TinyImageNet and ImageNet experiments, with the exception of AdaM using a weight decay of 7.8125× 10−6. For AdaS-variant, the momentum rate for momentum-SGD was set to 0.9. All other hyper-parameters for each respective optimizer remained default as reported in their original papers. For CIFAR10 and CIFAR100, we use the manually tuned suggested learning rates as reported in Wilson et al. (2017) for AdaM, RMSProp, and AdaGrad. For TinyImageNet and ImageNet, we use the suggested learning rates as reported in each optimizer’s respective paper. Refer to Tables 1-4 to see exactly which learning rates were used, as well as the learning rates generated by autoHyper. CIFAR10, CIFAR100, and TinyImageNet experiments were trained for 5 trials with a maximum of 250 epochs and ImageNet experiments were trained for 3 trials with a maximum of 150 epochs. Due to AdaS’ stable test accuracy behaviour as demonstrated by Hosseini & Plataniotis (2020), an early-stop criteria, monitoring testing accuracy, was used for CIFAR10, CIFAR100, and ImageNet experiments. For CIFAR10 and CIFAR100, a threshold of 1× 10−3 for AdaSβ = 0.8 and 1× 10−4 for AdaSβ = {0.9, 0.95} and patience window of 10 epochs. For ImageNet, a threshold of 1× 10−4 for AdaSβ = {0.8, 0.9, 0.95} and patience window of 20 epochs. No early stop is used for AdaSβ = 0.975.\nLearning Rates: We report every learning rate in Tables 1-4.\nUnder review as a conference paper at ICLR 2021" }, { "heading": "C ADDITIONAL RESULTS FOR SUBSECTION 4.2", "text": "Large deviation from the suggested initial learning rates. Referring to Tables 1-4 & 9, we notice variation in autoHyper suggested learning rates as compared to the author-suggested and Random Search-selected ones. The learning rates generated by our method reveal the “blind spots” that the authors originally overlooked in their HPO. Interestingly, however, we note the similarity in initial learning for ResNet34 trained using AdaM on CIFAR10, and can confirm this as an optimal learning rate. Importantly, our method is significantly quicker than the grid searching technique employed by Wilson et al. (2017). Observations on the generalization characteristics of optimizers. 
Figure 5(a) identifies the poor generalization characteristics of AdaM, AdaBound, AdaGrad, and SLS, where they consistently achieve low training losses but do not exhibit equivalently high top-1 test accuracies. We note that these observations are similar to those made by Wilson et al. (2017); Li et al. (2019); Jastrzebski et al. (2020). We additionally contribute that AdaS does generalize well. We also highlight SLS' multi-order-of-magnitude tolerance to the initial learning rate as well as the stability of the AdaS optimizer, particularly when applied on TinyImageNet." }, { "heading": "D COMPARISON AGAINST STATE OF THE ART (RANDOM SEARCH)", "text": "D.1 SETUP
The search space is set to [1 × 10−4, 0.1] and a loguniform (see SciPy) distribution is used for sampling. This is motivated by the fact that autoHyper also uses a logarithmically-spaced grid space. Initial tests with a uniform sampling distribution showed slightly worse results, as favouring smaller learning rates benefits the optimizers we considered. In keeping with autoHyper's design, the learning rate that resulted in the lowest training loss after 5 epochs was chosen. One could also track validation accuracy; however, as visualized in Figures 5(a) & 13, validation loss is more stable for the datasets we are considering. This selection could be altered if the dataset being used exhibits a different behaviour; however, this would be a manual alteration at the discretion of the practitioner – one that does not need to be made if using autoHyper.
D.2 ADDITIONAL DISCUSSION AND RESULTS
Can you replace Z(η) with validation loss? Replacing Z(η) with validation loss does not work because greedily taking validation loss (or accuracy) is neither stable nor domain independent. Analyzing Figures 5(a) & 9, validation loss/accuracy is unstable since either the network (EfficientNetB0 in Figure 9) or the dataset (TinyImageNet in Figure 5(a)) results in unstable top-1 test accuracy/test loss scores that are unreliable to track. See also Figure 14, which demonstrates the inability to track validation loss/accuracy for various learning rates. Further, validation accuracy/loss can vary greatly based on initialization, whereas our method does not vary, due to its low-rank factorization. Finally, our metric, Z(η), is always guaranteed to be maximized with a sufficiently small learning rate and zero with sufficiently large learning rates; therefore we can always dynamically adapt our search range to the proper range. This fact is not so true for tracking validation accuracy/loss.
Additionally, low validation loss does not correlate with high validation accuracy (an additional figure, Figure 12, in Appendix C shows this). One might then suggest taking the k best-performing learning rates based on validation accuracy/loss and focusing on those, but this requires manually defining k and then attempting manually defined Grid/Random Search refinements around those areas, with manual heuristics to indicate when to stop searching, whereas our method is fully automatic and self-converges.
Not to mention, this would take more time.
In summation, existing SOTA methods like Random Search cannot compete with autoHyper when given similar budgets and minimal manual intervention/refinement. This displays autoHyper's prominent feature of being a low-cost, fully automatic algorithm to search for optimal hyper-parameter bounds (namely, in this work, the initial learning rate). Future work could include using autoHyper to quickly discover this optimal hyper-parameter range, and then further refining using more extensive HPO methods with greater budgets if truly superior performance is required; this could further alleviate much of the manual refinement that currently plagues existing SOTA methods." } ]
2020
null
SP:3d65be849f99ab19e16eab84ba7cd7748d3ed8ad
[ "The paper proposes a new method, UA-GAN to train GANs in a federated learning setup. The method simulates a central discriminator D_ua such that the odds values of the central discriminator is equivalent to the weighted sum of the local discriminators. The central generator is then trained based on the simulated central discriminator. The paper provides theoretical analysis on its proposed method and conducts experiments on toy datasets and mixtures of real world datasets to simulate a federated learning setup." ]
Recently, Generative Adversarial Networks (GANs) have demonstrated their potential in federated learning, i.e., learning a centralized model from data privately hosted by multiple sites. A federated GAN jointly trains a centralized generator and multiple private discriminators hosted at different sites. A major theoretical challenge for the federated GAN is the heterogeneity of the local data distributions. Traditional approaches cannot guarantee to learn the target distribution, which is a mixture of the highly different local distributions. This paper tackles this theoretical challenge, and for the first time, provides a provably correct framework for federated GAN. We propose a new approach called Universal Aggregation, which simulates a centralized discriminator via carefully aggregating the mixture of all private discriminators. We prove that a generator trained with this simulated centralized discriminator can learn the desired target distribution. Through synthetic and real datasets, we show that our method can learn the mixture of largely different distributions where existing federated GAN methods fail.
[]
[ { "authors": [ "Dan Alistarh", "Demjan Grubic", "Jerry Li", "Ryota Tomioka", "Milan Vojnovic" ], "title": "Qsgd: Communicationefficient sgd via gradient quantization and encoding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "George J Annas" ], "title": "Hipaa regulations-a new era of medical-record privacy", "venue": "New England Journal of Medicine,", "year": 2003 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Fan Yang", "William W Cohen", "Russ R Salakhutdinov" ], "title": "Good semisupervised learning that requires a bad gan", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Lawrence O Gostin", "Laura A Levit", "Sharyl J Nass" ], "title": "Beyond the HIPAA privacy rule: enhancing privacy, improving health through research", "venue": null, "year": 2009 }, { "authors": [ "Sudipto Guha", "Piotr Indyk", "Andrew McGregor" ], "title": "Sketching information divergences", "venue": "In International Conference on Computational Learning Theory (COLT),", "year": 2007 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of Wasserstein GANs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Corentin Hardy", "Erwan Le Merrer", "Bruno Sericola" ], "title": "Md-gan: Multi-discriminator generative adversarial networks for distributed datasets", "venue": "IEEE International Parallel and Distributed Processing Symposium (IPDPS),", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Peter Kairouz", "H Brendan McMahan", "Brendan Avent", "Aurélien Bellet", "Mehdi Bennis", "Arjun Nitin Bhagoji", "Keith Bonawitz", "Zachary Charles", "Graham Cormode", "Rachel Cummings" ], "title": "Advances and open problems in federated learning", "venue": "arXiv preprint arXiv:1912.04977,", "year": 2019 }, { "authors": [], "title": "Challenges in the e-health regulatory policy", "venue": "HOW DEEP IS YOUR LAW? BREXIT. TECHNOLOGIES. 
MODERN CONFLICTS,", "year": 2017 }, { "authors": [ "Jakub Konečnỳ", "H Brendan McMahan", "Felix X Yu", "Peter Richtárik", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: Strategies for improving communication efficiency", "venue": "arXiv preprint arXiv:1610.05492,", "year": 2016 }, { "authors": [ "Abhishek Kumar", "Prasanna Sattigeri", "Tom Fletcher" ], "title": "Semi-supervised learning with gans: Manifold invariance with improved inference", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Xiang Li", "Wenhao Yang", "Shusen Wang", "Zhihua Zhang" ], "title": "Communication efficient decentralized training with multiple local updates", "venue": "arXiv preprint arXiv:1910.09126,", "year": 2019 }, { "authors": [ "Xiang Li", "Kaixuan Huang", "Wenhao Yang", "Shusen Wang", "Zhihua Zhang" ], "title": "On the convergence of fedavg on non-iid data", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Xiangru Lian", "Ce Zhang", "Huan Zhang", "Cho-Jui Hsieh", "Wei Zhang", "Ji Liu" ], "title": "Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jianhua Lin" ], "title": "Divergence measures based on the shannon entropy", "venue": "IEEE Transactions on Information theory,", "year": 1991 }, { "authors": [ "Huidong Liu", "Xianfeng Gu", "Dimitris Samaras" ], "title": "A two-step computation of the exact GAN Wasserstein distance", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Huidong Liu", "Xianfeng Gu", "Dimitris Samaras" ], "title": "Wasserstein gan with quadratic transport cost", "venue": "In The IEEE International Conference on Computer Vision (ICCV), October", "year": 2019 }, { "authors": [ "Ming-Yu Liu", "Xun Huang", "Arun Mallya", "Tero Karras", "Timo Aila", "Jaakko Lehtinen", "Jan Kautz" ], "title": "Few-shot unsupervised image-to-image translation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Aguera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "H Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "arXiv preprint arXiv:1602.05629,", "year": 2016 }, { "authors": [ "Rebecca T Mercuri" ], "title": "The hipaa-potamus in health care data security", "venue": "Communications of the ACM,", "year": 2004 }, { "authors": [ "Lars Mescheder", "Andreas Geiger", "Sebastian Nowozin" ], "title": "Which training methods for gans do actually converge", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori 
Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Lawrence Mo" ], "title": "Internet-based font server, January", "venue": "US Patent App", "year": 2002 }, { "authors": [ "Lam M Nguyen", "Phuong Ha Nguyen", "Marten van Dijk", "Peter Richtárik", "Katya Scheinberg", "Martin Takáč" ], "title": "Sgd and hogwild! convergence without the bounded gradients assumption", "venue": "arXiv preprint arXiv:1802.03801,", "year": 2018 }, { "authors": [ "Guo-Jun Qi" ], "title": "Loss-sensitive generative adversarial networks on lipschitz densities", "venue": "International Journal of Computer Vision,", "year": 2019 }, { "authors": [ "Hui Qu", "Yikai Zhang", "Qi Chang", "Zhennan Yan", "Chao Chen", "Dimitris Metaxas" ], "title": "Learn distributed gan with temporary discriminators", "venue": "arXiv preprint arXiv:2007.09221,", "year": 2020 }, { "authors": [ "Benjamin Recht", "Christopher Re", "Stephen Wright", "Feng Niu" ], "title": "Hogwild: A lock-free approach to parallelizing stochastic gradient descent", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training GANs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Jonathan JM Seddon", "Wendy L Currie" ], "title": "Cloud computing and trans-border health data: Unpacking us and eu healthcare regulation and compliance", "venue": "Health policy and technology,", "year": 2013 }, { "authors": [ "Tamar Rott Shaham", "Tali Dekel", "Tomer Michaeli" ], "title": "Singan: Learning a generative model from a single natural image", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Virginia Smith", "Chao-Kai Chiang", "Maziar Sanjabi", "Ameet S Talwalkar" ], "title": "Federated multi-task learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Hoang Thanh-Tung", "Truyen Tran", "Svetha Venkatesh" ], "title": "Improving generalization and stability of generative adversarial networks", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Kang Wei", "Jun Li", "Ming Ding", "Chuan Ma", "Howard H Yang", "Farhad Farokhi", "Shi Jin", "Tony QS Quek", "H Vincent Poor" ], "title": "Federated learning with differential privacy: Algorithms and performance analysis", "venue": "IEEE Transactions on Information Forensics and Security,", "year": 2020 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Bangzhou Xin", "Wei Yang", "Yangyang Geng", "Sheng Chen", "Shaowei Wang", "Liusheng Huang" ], "title": "Private fl-gan: Differential privacy synthetic data generation based on federated learning", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Qiang Yang", "Yang 
Liu", "Tianjian Chen", "Yongxin Tong" ], "title": "Federated machine learning: Concept and applications", "venue": "ACM Transactions on Intelligent Systems and Technology (TIST),", "year": 2019 }, { "authors": [ "Hao Yu", "Rong Jin", "Sen Yang" ], "title": "On the linear speedup analysis of communication efficient momentum sgd for distributed non-convex optimization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ruixiang Zhang", "Tong Che", "Zoubin Ghahramani", "Yoshua Bengio", "Yangqiu Song" ], "title": "Metagan: An adversarial approach to few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yue Zhao", "Meng Li", "Liangzhen Lai", "Naveen Suda", "Damon Civin", "Vikas Chandra" ], "title": "Federated learning with non-iid data", "venue": "arXiv preprint arXiv:1806.00582,", "year": 2018 }, { "authors": [ "Ligeng Zhu", "Zhijian Liu", "Song Han" ], "title": "Deep leakage from gradients", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "VGG Simonyan", "Zisserman" ], "title": "11-layer model is used for the downstream classification task. We pad the image to 32 × 32 and then randomly crop them to 28 × 28 with a batch size of 64 as input. The model is trained with SGD optimizer using a learning rate of 0.01 for 150 epochs. Dataset Details: One of our foundational datasets is the Font dataset. It is created from 2500+ fonts of digits taken from the Google Fonts database", "venue": "Similar to MNIST,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Generative Adversarial Networks (GANs) have attracted much attention due to their ability to generate realistic-looking synthetic data (Goodfellow et al., 2014; Zhang et al., 2018; Liu et al., 2019b; Shaham et al., 2019; Dai et al., 2017; Kumar et al., 2017). In order to obtain a powerful GAN model, one needs to use data with a wide range of characteristics (Qi, 2019). However, these diverse data are often owned by different sources, and to acquire their data is often infeasible. For instance, most hospitals and research institutions are unable to share data with the research community, due to privacy concerns (Annas et al., 2003; Mercuri, 2004; lex, 2014; Gostin et al., 2009) and government regulations (Kerikmäe, 2017; Seddon & Currie, 2013).\nTo circumvent the barrier of data sharing for GAN training, one may resort to Federated Learning (FL), a promising new decentralized learning paradigm (McMahan et al., 2017). In FL, one trains a centralized model but only exchanges model information with different data sources. Since the central model has no direct access to data at each source, privacy concerns are alleviated (Yang et al., 2019; Kairouz et al., 2019). This opens the opportunity for a federated GAN, i.e., a centralized generator with multiple local and privately hosted discriminators (Hardy et al., 2019). Each local discriminator is only trained on its local data and provides feedback to the generator w.r.t. synthesized data (e.g., gradient). A federated GAN empowers GAN with much more diversified data without violating privacy constraints.\nDespite the promises, a convincing approach for training a federated GAN remains unknown. The major challenge comes from the non-identical local distributions from multiple data sources/entities. The centralized generator is supposed to learn a mixture of these local distributions from different entities, whereas each discriminator is only trained on local data and learns one of the local distributions. The algorithm and theoretical guarantee of traditional single-discriminator GAN (Goodfellow et al., 2014) do not easily generalize to this federated setting. A federated GAN should integrate feedback from local discriminators in an intelligent way, so that the generator can ‘correctly’ learn the mixture distribution. Directly averaging feedbacks from local discriminators (Hardy et al., 2019) results in a strong bias toward common patternsowever, such non-identical distribution setting is classical in federated learning (Zhao et al., 2018; Smith et al., 2017; Qu et al., 2020) and characteristic of local data improves the diversity of data.\nIn this paper, we propose the first theoretically guaranteed federated GAN, that can correctly learn the mixture of local distributions. Our method, called Universal Aggregation GAN (UA-GAN), focuses on the odds value rather than the predictions of local discriminators. We simulate an unbiased centralized discriminator whose odds value approximates that of the mixture of local discriminators. We prove that by aggregating gradients from local discriminators based on the odds value of the central discriminator, we are guaranteed to learn the desired mixture of local distributions.\nA second theoretical contribution of this paper is an analysis of the quality of the federated GAN when the local discriminators cannot perfectly learn with local datasets. 
This is a real concern in a federated learning setting; the quantity and quality of local data can be highly variable given the limitations of real-world institutions/sites. Classical theoretical analysis of GANs (Goodfellow et al., 2014) assumes an optimal discriminator. To understand the consequences of suboptimal discriminators, we develop a novel analysis framework for the Jensen-Shannon divergence loss (Goodfellow et al., 2014; Lin, 1991) through the odds values of the local discriminators. We show that when the local discriminators behave suboptimally, the approximation error of the learned generator degrades linearly in that suboptimality.
It is worth noting that our theoretical result on suboptimality also applies to the classical GAN. To the best of our knowledge, this is the first suboptimality bound on the federated or classical GAN.
In summary, the contributions are threefold.
• We propose UA-GAN, a novel federated GAN approach that aggregates feedback from local discriminators through their odds values rather than their posterior probabilities.
• We prove that UA-GAN correctly learns the mixture of local distributions when they are perfectly modeled by the local discriminators.
• We prove that when the discriminators are suboptimal in modeling their local distributions, the generator's approximation error is also linear in that suboptimality. We also show that our bound is tight.
We show with various experiments that our method (UA-GAN) outperforms state-of-the-art federated GAN approaches both qualitatively and quantitatively.
Training on large-scale heterogeneous datasets makes it possible to unleash the power of GANs. Federated GANs show their promise in utilizing an unlimited amount of sensitive data without privacy and regulatory concerns. Our method, as the first theoretically guaranteed federated GAN, is one step further in building such a foundation. Fig. 1 shows the workflow of UA-GAN." }, { "heading": "2 RELATED WORK", "text": "Generative Adversarial Networks (GANs) have enjoyed much success in various machine learning and computer vision tasks (Zhang et al., 2018; Liu et al., 2019b; Shaham et al., 2019; Dai et al., 2017; Kumar et al., 2017). Numerous methods have been proposed for GAN training, such as Spectral Normalization (SN) (Miyato et al., 2018), zero-centered gradient penalty (Mescheder et al., 2018; Thanh-Tung et al., 2019), WGAN (Arjovsky et al., 2017), WGAN-GP (Gulrajani et al., 2017), WGAN-TS (Liu et al., 2018), WGAN-QC (Liu et al., 2019a), etc. A common approach in practice is the conditional GAN (cGAN) (Mirza & Osindero, 2014), which uses supervision from data (e.g., class labels) to improve GAN's performance.
Multi-discriminator/-generator GANs have been proposed for various learning tasks. To train these GANs, one common strategy is to directly exchange generator/discriminator model parameters during training (Xin et al., 2020; Hardy et al., 2019). This is very expensive in communication; a simple ResNet18 (He et al., 2016a) has 11 million parameters (40MB). Closest to us is MD-GAN (Hardy et al., 2019), which aggregates feedback (gradients) from local discriminators through averaging. It also swaps parameters between discriminators. None of these methods provides a theoretical guarantee like ours. Meanwhile, our method is the only one without model swapping, and is thus much more efficient in bandwidth consumption.
Federated Learning (FL) (Kairouz et al., 2019; McMahan et al., 2016) offers the opportunity to integrate sensitive datasets from multiple sources through distributed training. Much work has been done tackling practical concerns in FL, such as convergence under the non-IID data assumption (Yu et al., 2019; Lian et al., 2017; Li et al., 2020), decentralized SGD without freezing parameters (Recht et al., 2011; Nguyen et al., 2018), communication efficiency (Konečnỳ et al., 2016; Li et al., 2019), and provable privacy guarantees (Alistarh et al., 2017; Wei et al., 2020). The federated GAN is also of great interest from a federated learning perspective. A successful federated GAN makes it possible to train a centralized model (e.g., a classifier) using data synthesized by the centralized generator. This becomes a solution when an existing trained FL model needs to be replaced and updated by advanced machine learning approaches, as one can retrain the model at any time using the generator. It also alleviates some privacy concerns of FL, e.g., the gradient leakage problem (Zhu et al., 2019)." }, { "heading": "3 METHOD", "text": "To introduce our algorithm, we first introduce notations and formalize the mixture distribution learning problem. Next, we present our Universal Aggregation approach and prove that it is guaranteed to learn the target mixture distribution. We also analyze the suboptimality of the model when the local discriminators are suboptimal. For ease of exposition, we mostly use the ordinary GAN to illustrate the algorithm and prove its theoretical properties. At the end of this section, we extend the algorithm, as well as its theoretical guarantees, to the conditional GAN (cGAN) (Mirza & Osindero, 2014). The empirical results in this work are established on the cGAN, since its training is much more controllable, thanks to the additional supervision by the auxiliary variable (e.g., classes of images).
Notations and problem formulation. We assume a cross-silo FL setting (Kairouz et al., 2019), i.e., K entities hosting K private datasets D1, ..., DK, with sizes n1, ..., nK. The total data size is n = Σ_{j=1}^{K} nj. The overall goal is to learn a target mixture distribution
p(x) = Σ_{j=1}^{K} πj pj(x),  (1)
in which a component distribution pj(x) is approximated by the empirical distribution of the j-th local dataset Dj. The mixing weight πj is computed using the fraction of dataset Dj: πj = nj/n. In general, different mixture components pi(x) and pj(x) may (but need not) be non-identical, namely, ∃x such that pi(x) ≠ pj(x).
Universal Aggregation GAN: Now we are ready to introduce our multi-discriminator aggregation framework. A pseudo-code of the UA framework can be found in Algorithm 1.
Algorithm 1 Training Algorithm of UA-GAN.
1: Input: Batch size m, datasets {Dj}, dataset fractions {πj = nj/n}.
2: Output: G, Dj, ∀j ∈ [K].
3: for t = 1, · · · , T do
4:   {Work at the central server.}
5:   G generates synthetic data: x̂i = G(zi), i = 1, · · · , m.
6:   Send the batch of synthetic data Dsyn = {x̂1, · · · , x̂m} to all K sites.
7:   for j = 1, · · · , K do
8:     {Work at each local site.}
9:     Update the local discriminator, Dj, using real samples from Dj and the synthetic batch Dsyn, based on Eq. 2.
10:    Output predictions and gradients for the synthetic data, Dj(x̂i) and ∂Dj(x̂i)/∂x̂i, i = 1, · · · , m, and send them to the central server.
11:   end for
12:   {Work at the central server.}
13:   Simulate the value of Dua(x̂i) via Eq. 4, ∀i.
14:   Update G based on Eq. 5, using the gradients from the Dj's.
15: end for
We have a centralized (conditional) generator G(z) seeking to learn the global distribution p(x). In each local site, a discriminator Dj(x) has access to the local dataset Dj.
Note that data in Dj are only sampled from pj(x). During training, the generator sends a batch of synthetic data to all K sites. The j-th discriminator seeks to minimize the cross-entropy loss of the GAN from a local perspective (Goodfellow et al., 2014):
max_{Dj} V(G, Dj) = E_{x∼pj(x)}[log Dj(x)] + E_{z∼N(0,Id)}[log(1 − Dj(G(z)))].  (2)
To formalize the generator training, we first introduce the odds value. It is an essential quantity for our algorithm and its analysis.
Definition 1 (odds value). Given a probability φ ∈ (0, 1), its odds value is Φ(φ) ≜ φ / (1 − φ). Note the definition requires φ ≠ 1.
Also, it is straightforward to see that φ = Φ(φ) / (1 + Φ(φ)).
The central idea of the UA framework is to simulate a centralized discriminator Dua(x) which behaves like the mixture of all local discriminators (in terms of odds value). A well-behaved Dua(x) can then train the centralized generator G using its gradient, just like in a classical GAN.
We design Dua so that its odds value Φ(Dua(x)) is identical to the mixture of the odds values of the local discriminators:
Φ(Dua(x)) = Σ_{j=1}^{K} πj Φ(Dj(x)).  (3)
Given Φ(Dua(x)), we can compute Dua(x) as
Dua(x) = Φ(Dua(x)) / (1 + Φ(Dua(x))).  (4)
Once the central discriminator Dua is simulated, the generator can be computed by minimizing the generator loss:
min_G V(G, Dua) = E_{x∼p(x)}[log Dua(x)] + E_{z∼N(0,Id)}[log(1 − Dua(G(z)))].  (5)
Note that, mathematically, Eq. (5) can be directly written in terms of the local discriminators Dj (by substituting in Eqs. (3) and (4)). In implementation, the simulated central discriminator can be written as a PyTorch or TensorFlow layer.
Intuition. The reason we define Dua's behavior using a mixture of odds values instead of a mixture of the predictions is mathematical. It has been shown in Goodfellow et al. (2014) that a perfect discriminator learning a data distribution p(x) against a fixed generator distribution q(x) satisfies D(x) = p(x) / (p(x) + q(x)). It can be shown that only with the odds value equivalency can this optimal solution of the central discriminator D(x) be recovered when each individual discriminator is optimal, i.e., Dj(x) = pj(x) / (pj(x) + q(x)). This is not true if we define the central discriminator behavior using the average prediction, i.e., Dua = Σ_j πj Dj. More details can be found in Theorem 1 and its proof.
Remark 1 (Privacy Safety). For federated learning, it is essential to ensure that information about real data is not leaked outside the local site. This privacy safety is guaranteed in our method. To optimize G w.r.t. Eq. (5), we only need to optimize the second term and use the gradients on synthetic images G(z) from the local discriminators.
One important concern is the optimal discriminator condition. Dua(x) is designed to be optimal only when the Dj's are optimal. We need to investigate the consequence if the local discriminators Dj are suboptimal. We will provide an error bound of the learned distribution w.r.t. the suboptimality of the Dj's in Corollary 1." }, { "heading": "3.1 THEORETICAL ANALYSIS OF UA-GAN", "text": "In this section, we prove the theoretical guarantees of UA-GAN. First, we prove the correctness of the algorithm. We show that if all local discriminators can perfectly learn from their local data, the algorithm is guaranteed to recover the target distribution (Eq. (1)). Second, we discuss the quality of the learned generator distribution when the local discriminators are suboptimal, due to real-world constraints, e.g., insufficient local data.
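Before turning to the analysis, we note that the simulated discriminator of Eqs. (3)-(4) is simple to express as a PyTorch function, as suggested above. The following is a minimal sketch under the assumption that each local site returns its prediction Dj(x̂) in (0, 1); the function name and clamping constant are illustrative:

```python
import torch

def universal_aggregation(local_preds: torch.Tensor, pi: torch.Tensor,
                          eps: float = 1e-7) -> torch.Tensor:
    """Simulate D_ua from local discriminator outputs, per Eqs. (3)-(4).

    local_preds: (K, m) tensor with local_preds[j, i] = D_j(x̂_i) in (0, 1).
    pi: (K,) tensor of mixing weights π_j = n_j / n.
    Returns: (m,) tensor of simulated central predictions D_ua(x̂_i).
    """
    p = local_preds.clamp(eps, 1.0 - eps)          # keep odds values finite
    odds = p / (1.0 - p)                           # Φ(D_j(x̂_i))
    odds_ua = (pi.unsqueeze(1) * odds).sum(dim=0)  # Eq. (3): Σ_j π_j Φ(D_j)
    return odds_ua / (1.0 + odds_ua)               # Eq. (4)
```

The generator term of Eq. (5) can then be taken as torch.log(1 - universal_aggregation(...)).mean(); in the actual federated protocol, the gradients ∂Dj(x̂i)/∂x̂i are communicated by the sites rather than obtained by backpropagating through a shared graph.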
We show that the error of the learned distribution is linear to the suboptimality of the local discriminators. All results in this section are for original (unconditional) GANs. But they can be extended to conditional GANs (see Sec. (3.2)). Due to space constraints, we only state the major theorems and leave their proofs to the supplemental material." }, { "heading": "CORRECTNESS OF UA-GAN", "text": "The correctness theorem assumes all local discriminators behave optimally. Assumption 1 (Optimal Local Discriminator). Assume all local discriminators are optimal, i.e., they learn to predict whether a data is true/fake perfectly. Let q(x) be the probability of the current generator G. A local discriminator is optimal iff Dj(x) =\npj(x) q(x)+pj(x) .\nTheorem (4) states the unbiasedness of UA-GAN: with optimal local discriminators, the generator learns the target distribution. Theorem 1 (Correctness). Suppose all discriminators Dj’s are optimal. Dua(x) is computed via Eq. (3). Denote by q the density function of data generated by G. Let q∗(·) be the optimal distribution w.r.t. the Jenson Shannon divergence loss :\nq∗ := arg min q L(q) = Ex∼p(x)[logDua(x)] + Ex∼q(x)[log(1−Dua(x)]. (6)\nWe have q∗ equals to the true distribution, formally, q∗ = p.\nThe proof mainly establishes that when Dj’s are optimal, Dua is also optimal. With the optimality of Dua proved, the rest follows the correctness proof of the classic GAN (Theorem 1 in Goodfellow et al. (2014)). More details are in the supplemental material." }, { "heading": "ANALYSIS OF THE SUBOPTIMAL SETTING", "text": "In centralized learning setting, an optimal discriminator is a reasonable assumption since the model has access to all (hopefully sufficient) data. However, in federated GAN setting, available data in some site Dj may be limited. One natural question in this limited data scenario is: how would the generator behave if some local discriminators are suboptimal? We address this theoretical question.\nWe first focus on a single discriminator case. We show the behavior of a perturbed version of JensenShannon divergence loss Guha et al. (2007); Lin (1991); Csiszár et al. (2004). The suboptimality of a central discriminator D(x) is measured by the deviation in terms of the odds value. Denote by q(x) the generator distribution of the current G. Ideally, the odds value of an optimal discriminator should be p(x)/q(x). We show that a suboptimal D with δ deviation from the ideal odds value will result in O(δ) suboptimality in the target distribution.\nTheorem 2 (Suboptimality Bound for a Single Discriminator). Suppose a discriminator D̃(x) is a perturbed version of the optimal discriminator D(x), s.t. Φ(D̃(x)) = Φ(D(x))ξ(x) with |1− ξ(x)| ≤ δ and δ ≤ 1/8. Let q∗ be the optimal distribution of the Jensen-Shannon divergence loss based on the perturbed discriminator\nq∗ := arg min q L(q) = Ex∼p(x)[log D̃(x)] + Ex∼q(x)[log(1− D̃(x)]. (7)\nThen q∗ satisfies |q∗(x)/p(x)− 1| ≤ 16δ, ∀x.\nThis theorem shows that the ratio of the learned distribution q∗ is close to the target true distribution p when the suboptimality of Dj is small. To the best of our knowledge, this is the first bound on the consistency of Jensen-Shannon divergence with suboptimal discriminator, even for a classical GAN.\nNext, we show that the bound is also tight.\nTheorem 3 (Tightness of the Bound in Theorem (5)). Given a perturbed discriminator D̃(x) of the optimal one D(x), s.t. Φ(D̃(x)) = Φ(D(x))ξ(x) with |ξ(x) − 1| ≥ γ and γ ≤ 1/8. The optimal distribution q∗ as in Eq. 
(12) satisfies |q∗(x)/p(x)− 1| ≥ γ/16, ∀x.\nNext we extend these bounds for a single discriminator to our multi-discriminator setting. This is based on Theorem 5 and the linear relationship between the local discriminators and the central discriminator.\nCorollary 1 (Suboptimality Bound for UA-GAN). Assume suboptimal local discriminators D̃j(x) are the perturbed versions of the optimal ones Dj(x). And the suboptimality is bounded as: Φ(D̃j(x)) = Φ(Dj(x))ξj(x) with |ξj(x)−1| ≤ δ ≤ 1/8, ∀x. The centralized discriminator D̃ua(x) is computed using these perturbed local discriminators such that Φ(D̃ua(x)) = ∑K j=1 πjΦ(D̃j(x)). Let q∗ be the optimal distribution of the Jensen-Shannon divergence loss based on the perturbed UA discriminator D̃ua\nq∗ := arg min q L(q) = Ex∼p(x)[log D̃ua(x)] + Ex∼q(x)[log(1− D̃ua(x)]. (8)\nThen q∗ satisfies |q∗(x)/p(x) − 1| = O(δ). In particular, the optimal distribution q∗(x) has O(δ) total variation distance to the target distribution p(x).\nNote that the lowerbound of the suboptimality for single discriminator (Theorem 6)) can also be extended to UA-GAN similarly. Remark 2. The consistency gap in Corollary (2) assumes a uniform suboptimality bounded for all local discriminators. In practice, such assumption may not be informative if the sizes of Dj’s data are highly imbalanced. It would be interesting to relax such assumption and investigate the guarantees of UA-GAN w.r.t. the expected suboptimality of Dj’s." }, { "heading": "3.2 UNIVERSAL AGGREGATION FRAMEWORK FOR CONDITIONAL GAN", "text": "Our algorithm and analysis on unconditional GANs can be generalized to the more practical Conditional GAN Mirza & Osindero (2014). A conditional GAN learns the joint distribution of p(x, y). Here x represents an image or a vectorized data, and y is an auxiliary variable to control the mode of generated data (e.g., the class label of an image/data). Conditional GAN is much better to train in practice and is the common choice in most existing works. This is indeed the setting we use in our experiments.\nThe target distribution of the learning problem becomes a joint mixture distribution: p(x, y) = ∑ j πjωj(y)pj(x, y),\nin which πj = nj/n and ωj(y) is the proportion of class y data within the j-th local dataset Dj . We assume πj , and the dictionary of y and its fractions in each Dj , ωj(y) are known to the public. In practice, such knowledge will be used for generating y. Formally, y ∼ ∑K j=1 πjωj(y).\nTo design the UA-GAN for the conditional GAN, the odds value aggregation in formula Eq. (3) needs to be adapted to:\nΦ(Dua(x|y)) = K∑ j=1 πjωj(y)Φ(Dj(x|y)).\nThe computation of Dua(x|y) and the update of G and Dj’s need to be adjusted accordingly. The theoretical guarantees for unconditional GANs can also be established for a conditional GAN. Due to space limitation, we leave details to the supplemental material." }, { "heading": "4 EXPERIMENTS", "text": "On synthetic and real-world datasets, we verify that UA-GAN can learn the target distribution from both i.i.d and non-identical local datasets. We focus on conditional GAN Mirza & Osindero (2014) setting as it is the common choice in practice." }, { "heading": "4.1 SYNTHETIC EXPERIMENT", "text": "We evaluate UA-GAN on a toy dataset. See Fig. 2 first row. The toy dataset has 4 datasets, generated by 4 Gaussians centered at (10, 10), (10,-10), (-10,10), (-10,-10) with variance 0.5. Data samples are shown as blue points. 
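This is only an illustrative sketch (assuming NumPy; the paper's own data-generation code is not shown, and the sample size of 1000 per site is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
centers = [(10, 10), (10, -10), (-10, 10), (-10, -10)]

# One local dataset per site: an isotropic Gaussian with variance 0.5.
local_datasets = [
    rng.normal(loc=c, scale=np.sqrt(0.5), size=(1000, 2)) for c in centers
]

# Input noise for the central generator: centered at the origin, variance 0.5.
noise = rng.normal(loc=0.0, scale=np.sqrt(0.5), size=(1000, 2))
```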
The central generator takes Gaussian noise centered at (0, 0) with variance 0.5 (green points) and learns to transform it into points matching the mixture distribution (orange points). The first figure shows that the generator successfully recovers the Gaussian mixture; the contours show the central discriminator $D_{ua}$ computed according to Eq. (3). The generated points (orange) are evenly distributed near the centers of the 4 local Gaussians. The 2nd to 5th figures show the predictions of the local discriminators ($D_j$'s), overlaid with their respective samples $D_j$ in blue. In the second row, we show the results of the average scheme, called Avg-GAN, which averages the local discriminators' outputs to train the generator. The first figure shows that the learned aggregate $D_{avg} = \frac{1}{K}\sum_j D_j$ is almost flat, and the generated samples (orange) collapse near the origin. Although each individual discriminator provides valid gradient information (see the 2nd to 5th figures for the $D_j$'s), naively averaging their outputs cannot recover the target distribution when the local distributions are symmetric. The third row shows the results of MD-GAN (Hardy et al. (2019)). MD-GAN also adopts the average scheme, but randomly swaps discriminator parameters during training. Like Avg-GAN, MD-GAN cannot learn the four Gaussians.
" }, { "heading": "4.2 REAL-WORLD MIXTURE DATASETS", "text": "
We evaluate our method on several mixture datasets, both i.i.d. and non-identical.

Datasets. Three real-world datasets are used to construct the mixtures: MNIST (LeCun et al. (1998)), Fashion-MNIST (Xiao et al. (2017)), and a Font dataset. We create the Font dataset from 2500+ fonts of digits taken from the Google Fonts database (Mo (2002)). Like MNIST, it consists of 10 classes of 28×28 grayscale images, with 60k samples for training and 29k samples for test. To make the differences between font and handwritten images clearer, we highlight the Font images with a circle when building the dataset. Using these foundation datasets, we create 2 mixture datasets with non-identical local datasets: (1) non-identical MNIST+Fashion; (2) non-identical MNIST+Font. We uniformly sample Fashion/Font data for all 10 distributed sites; these are common patterns across all sites. Meanwhile, each individual site additionally receives MNIST data of one distinct class among 0-9; these are the distinguishing features of the different sites. Ideally, a federated GAN should be able to learn both the common patterns and the site-specific data. Please see the supplemental material for more details and samples from these mixture datasets.

Baselines. We compare UA-GAN with Avg-GAN and another state-of-the-art federated GAN, MD-GAN (Hardy et al. (2019)). We also include two additional baselines. Centralized GAN trains in the classic setting, i.e., with one centralized discriminator that has access to all local datasets; comparing against this baseline shows how much information is lost in the distributed training setting. The other baseline is Real, which uses real data from the mixture datasets for evaluation; this is the upper bound for all GAN methods. More details can be found in the supplementary material.

Evaluation metrics. We adopt the Inception score (IS) (Salimans et al. (2016)) and Frechet Inception distance (FID) (Heusel et al. (2017)) to measure the quality of the generated images.
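For reference, FID compares Gaussian fits of real and generated network activations. A minimal sketch (our own, assuming the activation matrices from an Inception-style feature extractor are already available) is:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    # Fit a Gaussian (mu, Sigma) to each set of activations.
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):  # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(((mu_r - mu_f) ** 2).sum()
                 + np.trace(cov_r + cov_f - 2.0 * covmean))
```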
Since GANs are often used to train downstream classifiers, we also evaluate the methods by training a classifier on the generated data and report the classification accuracy as an evaluation metric. This is especially useful in federated learning: a centralized classifier can be trained on data generated by a federated GAN without seeing the real data from the private sites.

Discussion. The quantitative results on the two non-identical mixture datasets are shown in Table 1. UA-GAN significantly outperforms the other two federated GANs, Avg-GAN and MD-GAN. Its performance is even close to that of the centralized GAN, showing that our algorithm mitigates the challenge of distributed training to a satisfying degree.

The superior performance of UA-GAN is illustrated by the qualitative examples in Fig. 3. On the MNIST+Fashion dataset (subfigures a-c), the average aggregation strategy used by Avg-GAN cannot effectively aggregate the outputs of the non-identical local discriminators; it therefore only learns to generate the common patterns, e.g., Fashion images (Fig. 3(a)). MD-GAN fails to produce high-quality images (Fig. 3(b)), probably because the discriminator swapping makes training insufficiently stable. Meanwhile, our UA-GAN generates the mixture with both the common patterns (Fashion images) and the site-specific images (different digits from MNIST) at high quality. A similar phenomenon can be observed for MNIST+Font (subfigures d-f): Avg-GAN only learns the common pattern (computerized fonts from the Font dataset), MD-GAN produces low-quality images, whereas UA-GAN also learns the high-quality site-specific handwritten digits (MNIST).

Note that we also compare the methods on mixture datasets with i.i.d. local distributions, i.e., all local datasets sampled in the same way from the real datasets. In the i.i.d. setting, all federated GANs and the centralized GAN perform similarly. More results are included in the supplemental material.
" }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "
In this work, we proposed a provably correct federated GAN. It simulates a centralized discriminator by carefully aggregating the feedback of all local discriminators. We proved that the generator learns the target distribution, and we analyzed the error bound when the discriminators are suboptimal due to local dataset limitations. A well-trained federated GAN empowers GANs to learn from diversified datasets. It can also serve as a data provider for training task-specific models without accessing or storing privacy-sensitive data.
" }, { "heading": "A PROOFS OF THEOREMS IN SECTION 3", "text": "
We recall the definition of the odds value.

Definition 2 (odds value). Given a probability $\phi \in (0, 1)$, its odds value is $\Phi(\phi) \triangleq \frac{\phi}{1-\phi}$. Note the definition requires $\phi \neq 1$.
" }, { "heading": "A.1 ANALYSIS OF OPTIMAL DISCRIMINATOR", "text": "
We recall the correctness theorem of UA-GAN.

Theorem 4 (Correctness). Suppose all discriminators $D_j$ are optimal and $D_{ua}(x)$ is computed via Eq. (11). The optimal distribution of the Jensen-Shannon divergence loss

$$\arg\min_q L(q) = \mathbb{E}_{x \sim p(x)}[\log D_{ua}(x)] + \mathbb{E}_{x \sim q(x)}[\log(1 - D_{ua}(x))] \quad (9)$$

is $q^* = p$, where q is the density (mass) function of G(z).

To prove the theorem, we first introduce the following lemma, which is similar to Proposition 1 in Goodfellow et al. (2014). We include the lemma and its proof here for completeness.

Lemma 1.
When the generator G is fixed, the optimal discriminator $D_j(x)$ is:

$$D_j(x) = \frac{p_j(x)}{p_j(x) + q(x)} \quad (10)$$

Proof:

$$\max_{D_j} V_j(D_j) = \max_{D_j} \int_x p_j(x) \log D_j(x) + q(x) \log(1 - D_j(x))\, dx \le \int_x \max_{D_j} \left\{ p_j(x) \log D_j(x) + q(x) \log(1 - D_j(x)) \right\} dx$$

Setting $D_j(x) = \frac{p_j(x)}{p_j(x)+q(x)}$ maximizes each component of the integrand and thus makes the inequality hold with equality.

Proof of Theorem 4: Suppose that in each training step every discriminator achieves its maximal criterion in Lemma 1. The simulated $D_{ua}(x)$ becomes:

$$D_{ua}(x) = \frac{\sum_{j=1}^{K} \pi_j \frac{D_j^*(x)}{1 - D_j^*(x)}}{1 + \sum_{j=1}^{K} \pi_j \frac{D_j^*(x)}{1 - D_j^*(x)}} \quad (11)$$

Given that the discriminators $D_1, \ldots, D_K$ behave optimally, $\frac{D_j(x)}{1 - D_j(x)} = \frac{p_j(x)}{q(x)}$, which implies $\sum_j \pi_j \frac{D_j(x)}{1 - D_j(x)} = \frac{\sum_j \pi_j p_j(x)}{q(x)} = \frac{D_{ua}(x)}{1 - D_{ua}(x)}$. By the aggregation formula (11), the simulated discriminator is $D_{ua}(x) = \frac{\sum_j \pi_j p_j(x)}{\sum_j \pi_j p_j(x) + q(x)} = \frac{p(x)}{p(x)+q(x)}$. With the discriminator at this optimum, the loss function for the generator becomes:

$$\min_q L(q) = \mathbb{E}_{x \sim p(x)}[\log D_{ua}(x)] + \mathbb{E}_{\hat{x} \sim q(\hat{x})}[\log(1 - D_{ua}(\hat{x}))] = \int_x p(x) \log \frac{p(x)}{p(x)+q(x)} + q(x) \log \frac{q(x)}{p(x)+q(x)}\, dx$$

This loss function has the optimal solution $q = p$ by Theorem 1 in Goodfellow et al. (2014).
" }, { "heading": "A.2 ANALYSIS OF SUB-OPTIMAL DISCRIMINATOR", "text": "
We provide the proofs of Theorems 5 and 6 and Corollary 2.

Theorem 5 (Suboptimality Bound for a Single Discriminator). Suppose a discriminator $\tilde{D}(x)$ is a perturbed version of the optimal discriminator D(x), s.t. $\Phi(\tilde{D}(x)) = \Phi(D(x))\xi(x)$ with $|1 - \xi(x)| \le \delta$ and $\delta \le 1/8$. Let $q^*$ be the optimal distribution of the Jensen-Shannon divergence loss based on the perturbed discriminator:

$$q^* = \arg\min_q L(q) = \mathbb{E}_{x \sim p(x)}[\log \tilde{D}(x)] + \mathbb{E}_{x \sim q(x)}[\log(1 - \tilde{D}(x))]. \quad (12)$$

Then $q^*$ satisfies $|q^*(x)/p(x) - 1| \le 16\delta$, $\forall x$.

Lemma 2. Suppose $0 < |a|, |b| \le \frac{1}{8}$ and $\log \rho = \log(\frac{1}{2} + a) + b$ with $0 < \rho < 1$. Then $1 - 2|a| - 2|b| \le \frac{\rho}{1-\rho} \le 1 + 4|a| + 4|b|$.

Proof: We use the following fact:

$$1 + x \le e^x \le 1 + 2x, \quad \text{for } 0 < x < \tfrac{1}{8}. \quad (13)$$

From $\log \rho = \log(\frac{1}{2} + a) + b$ we have $\rho = (\frac{1}{2} + a)e^b$, thus:

$$(\tfrac{1}{2} - |a|)(1 - 2|b|) \le \rho \le (\tfrac{1}{2} + |a|)(1 + 2|b|),$$
$$\tfrac{1}{2} - |a| - |b| \le \rho \le \tfrac{1}{2} + |a| + |b| + 2|ab|,$$
$$\tfrac{1}{2} - |a| - |b| - 2|ab| \le 1 - \rho \le \tfrac{1}{2} + |a| + |b|,$$
$$\frac{\tfrac{1}{2} - |a| - |b|}{\tfrac{1}{2} + |a| + |b|} \le \frac{\rho}{1-\rho} \le \frac{\tfrac{1}{2} + |a| + |b| + 2|ab|}{\tfrac{1}{2} - |a| - |b| - 2|ab|},$$
$$1 - 2|a| - 2|b| \le \frac{\rho}{1-\rho} \le 1 + 4|a| + 4|b|. \quad (14)$$

Lemma 3. Suppose $h(x)/p(x) \ge \frac{1}{2}$. The following loss function is strongly convex:

$$L(q) = \int_x p(x) \log \frac{h(x)}{h(x)+q(x)} + q(x) \log \frac{q(x)}{q(x)+h(x)}\, dx \quad (15)$$

Proof: The first-order derivative of L(q) is $\frac{\partial L(q)}{\partial q(x)} = \frac{h(x)-p(x)}{q(x)+h(x)} + \log \frac{q(x)}{q(x)+h(x)}$, and the second-order derivative is $\frac{\partial^2 L(q)}{\partial q(x)^2} = \frac{q(x)p(x) + h(x)^2}{(q(x)+h(x))^2 q(x)} > 0$. (The Hessian has no off-diagonal elements.)

The proof of Theorem 5 is given below. The theorem focuses on the Jensen-Shannon divergence loss (Guha et al. (2007); Lin (1991)). We stress that the analysis is general and works for both the original GAN and the conditional GAN.
" }, { "heading": "Proof of Theorem 5:", "text": "
Note that $\Phi(D(x)) = p(x)/q(x)$, so the $\xi(x)$-perturbed odds value gives $\Phi(\tilde{D}(x)) = \Phi(D(x))\xi(x) = \frac{p(x)\xi(x)}{q(x)}$. Let $h(x) = p(x)\xi(x)$; then $1 - \delta \le \frac{h(x)}{p(x)} \le 1 + \delta$ and $1 - \delta \le \frac{p(x)}{h(x)} \le 1 + 2\delta$. The perturbed discriminator is $\tilde{D}(x) = \frac{h(x)}{h(x)+q(x)}$, and the generator distribution q(x) minimizes the perturbed Jensen-Shannon divergence loss:

$$L(q) = \int_x p(x) \log \frac{h(x)}{h(x)+q(x)} + q(x) \log \frac{q(x)}{h(x)+q(x)}\, dx + \lambda \left( \int_x q(x)\, dx - 1 \right), \quad (16)$$

where λ is the Lagrange multiplier. Lemma 3 verifies the convexity of this loss under the regulated behavior of h(x). The derivative of L(q) w.r.t. q(x),

$$-\frac{p(x)}{q(x)+h(x)} + \log q(x) + 1 - \log(q(x)+h(x)) - \frac{q(x)}{q(x)+h(x)} + \lambda,$$

must equal 0 for every x (Boyd & Vandenberghe (2004)). Thus we have the following equation:

$$\frac{h(x)-p(x)}{q(x)+h(x)} + \log \frac{q(x)}{q(x)+h(x)} = -\lambda \quad (17)$$

Let $\Delta(x) = \frac{h(x)-p(x)}{q(x)+h(x)}$, which is bounded as $-\delta \le \Delta(x) \le \delta$. Multiplying both sides of (17) by $h(x)+q(x)$ and integrating over x,

$$\int_x (h(x)+q(x))\Delta(x) + (h(x)+q(x)) \log \frac{q(x)}{q(x)+h(x)}\, dx = -\int_x \lambda (h(x)+q(x))\, dx,$$

which, using $\int_x (h(x)+q(x))\, dx \approx 2$ (since $\int_x h(x)\,dx = \int_x p(x)\xi(x)\,dx = 1$ up to an $O(\delta)$ correction), gives $\Delta_1 + \Delta_2 = -\lambda$, where

$$\Delta_1 = \frac{1}{2}\int_x (h(x)+q(x))\Delta(x)\, dx, \qquad \Delta_2 = \frac{1}{2}\int_x (h(x)+q(x)) \log \frac{q(x)}{q(x)+h(x)}\, dx.$$

Plugging this value of λ back into Equation (17) we have:

$$\log \frac{q(x)}{q(x)+h(x)} = \Delta_1 + \Delta_2 - \Delta(x) \quad (18)$$

By the uniform bound $|\Delta(x)| \le \delta$ we have $1 - \frac{3}{2}\delta \le e^{\Delta_1} \le 1 + \frac{3}{2}\delta$ and $1 - \frac{3}{2}\delta \le e^{-\Delta(x)} \le 1 + \frac{3}{2}\delta$. Taking exponentials on both sides of Equation (18),

$$\frac{q(x)}{q(x)+h(x)} = e^{\Delta_2}(1 \pm 4\delta),$$

so we can pin down $e^{\Delta_2}$ via the identity

$$q(x) = (q(x)+h(x))\, e^{\Delta_2}(1 \pm 4\delta). \quad (19)$$

Since this holds for every x, integrating Equation (19) gives $e^{\Delta_2} = \frac{1}{2}(1 \pm 4\delta)$, i.e., $\Delta_2 = \log(\frac{1}{2} \pm 2\delta)$. Thus $\log \frac{q(x)}{q(x)+h(x)} = \log(\frac{1}{2} \pm 2\delta) \pm \delta$. Applying Lemma 2 (with $|a| \le 2\delta$ and $|b| \le \delta$) and $h(x)/p(x) = 1 \pm \delta$, we conclude $q(x)/p(x) = 1 \pm 16\delta$.

Theorem 6 (Tightness of the Bound in Theorem 5). Given a perturbed discriminator $\tilde{D}(x)$ of the optimal one D(x), s.t. $\Phi(\tilde{D}(x)) = \Phi(D(x))\xi(x)$ with $|\xi(x) - 1| \ge \gamma$ and $\gamma \le 1/8$, the optimal distribution $q^*$ as in Eq. (12) satisfies $|q^*(x)/p(x) - 1| \ge \gamma/16$, $\forall x$.

Proof: By steps similar to the proof of Theorem 5 we have:

$$L(q) = \int_x p(x) \log \frac{h(x)}{h(x)+q(x)} + q(x) \log \frac{q(x)}{h(x)+q(x)}\, dx + \lambda \left( \int_x q(x)\, dx - 1 \right). \quad (20)$$

Setting the derivative of L(q) w.r.t. q(x) to 0 gives:

$$\frac{h(x)-p(x)}{q(x)+h(x)} + \log \frac{q(x)}{q(x)+h(x)} = -\lambda \quad (21)$$

Let $\Delta(x) = \frac{h(x)-p(x)}{q(x)+h(x)}$. By the assumption $|\xi(x) - 1| \ge \gamma$ we have

$$|\Delta(x)| = \frac{|h(x)-p(x)|}{q(x)+h(x)} = \frac{|\xi(x)-1|}{\frac{h(x)}{p(x)} + \frac{q(x)}{p(x)}}.$$

Since $|\xi(x)-1| \le \frac{1}{8}$, we have $q(x)/p(x) \le 2$, and thus $|\Delta(x)| \ge \gamma/8$.

Next, multiplying both sides of (21) by $h(x)+q(x)$ and integrating over x,

$$\int_x (h(x)+q(x))\Delta(x) + (h(x)+q(x)) \log \frac{q(x)}{q(x)+h(x)}\, dx = -\int_x \lambda (h(x)+q(x))\, dx,$$

which gives $\Delta_1 + \Delta_2 = -\lambda$ with $\Delta_1, \Delta_2$ defined as in the proof of Theorem 5. Plugging λ back into Equation (21):

$$\log \frac{q(x)}{q(x)+h(x)} = \Delta_1 + \Delta_2 - \Delta(x) \quad (22)$$

Next we analyze the term $\Delta_1 - \Delta(x)$:

$$\Delta_1 - \Delta(x) = \frac{1}{2}\int_x (h(x)+q(x))\Delta(x)\, dx - \Delta(x) = \frac{1}{2}\int_x (h(x)-p(x))\, dx - \Delta(x) = -\Delta(x),$$

where the last equality uses the fact that the perturbed density h integrates to 1, so the first term vanishes. Since $|\Delta(x)| \ge \gamma/8$, we can derive that $\left|\frac{e^{\Delta_2}}{1/2} - 1\right| > \gamma/16$. By an analysis similar to that of Theorem 5, we have $|q(x)/p(x) - 1| \ge \gamma/64$.

Corollary 2 (Suboptimality Bound for UA-GAN). Assume the suboptimal local discriminators $\tilde{D}_j(x)$ are perturbed versions of the optimal ones $D_j(x)$, with suboptimality bounded as $\Phi(\tilde{D}_j(x)) = \Phi(D_j(x))\xi_j(x)$, $|\xi_j(x) - 1| \le \delta \le 1/8$, $\forall x$. The centralized discriminator $\tilde{D}_{ua}(x)$ is computed from these perturbed local discriminators such that $\Phi(\tilde{D}_{ua}(x)) = \sum_{j=1}^{K} \pi_j \Phi(\tilde{D}_j(x))$. Let $q^*$ be the optimal distribution of the Jensen-Shannon divergence loss based on the perturbed UA discriminator $\tilde{D}_{ua}$:

$$q^* = \arg\min_q L(q) = \mathbb{E}_{x \sim p(x)}[\log \tilde{D}_{ua}(x)] + \mathbb{E}_{x \sim q(x)}[\log(1 - \tilde{D}_{ua}(x))]. \quad (23)$$

Then $q^*$ satisfies $|q^*(x)/p(x) - 1| = O(\delta)$. In particular, the optimal distribution $q^*(x)$ has $O(\delta)$ total variation distance to the target distribution p(x).

Proof: Let $v_j = \frac{D_j(x)}{1 - D_j(x)}$ be the odds values of the optimal discriminators $D_j(x)$, and $\tilde{v}_j$ the odds values of the suboptimal discriminators $\tilde{D}_j(x)$. It suffices to show that $\left| \frac{\sum_j \pi_j \tilde{v}_j}{\sum_j \pi_j v_j} - 1 \right| \le \delta$, i.e., that the aggregated odds value is itself a δ-perturbation, and then apply Theorem 5.
" }, { "heading": "B ADDITIONAL EXPERIMENTAL DETAILS AND RESULTS", "text": "
Implementation Details: Here we summarize the details of the networks used in the experiments. Our UA-GAN has one centralized generator and multiple local discriminators. The generator consists of two fully-connected layers (for the input noise and label, respectively), five residual blocks (He et al. (2016b)) and three upsampling layers. Each discriminator has two convolutional layers (for the image and label, respectively), five residual blocks and three average-pooling layers. LeakyReLU activations are used in both the generator and the discriminators. During training, we apply 1 gradient update to the discriminators in each round. Each model is trained with the Adam optimizer for 400 epochs with a batch size of 256. The learning rate is initially 0.0002 and linearly decays to 0 starting from epoch 200. The VGG (Simonyan & Zisserman (2014)) 11-layer model is used for the downstream classification task. We pad the images to 32×32 and then randomly crop them to 28×28, with a batch size of 64 as input. The model is trained with the SGD optimizer using a learning rate of 0.01 for 150 epochs.

Dataset Details: One of our foundational datasets is the Font dataset. It is created from 2500+ fonts of digits taken from the Google Fonts database. Similar to MNIST, it consists of 10 classes of 28×28 grayscale images, with 60k samples for training and 29k samples for test. Based on MNIST, Fashion-MNIST and the Font dataset, we create both i.i.d. and non-identical mixture datasets. Details on the non-identical datasets are provided in the main paper. Here we provide details on the two i.i.d. datasets: (1) i.i.d. MNIST+Fashion; (2) i.i.d. MNIST+Font. Each of the 10 distributed sites contains 10% of the mixture dataset, uniformly sampled (without replacement) from MNIST and Fashion/Font.

Algorithm 2 Precise Training Algorithm of UA-GAN.
1: Input: Batch size m, datasets $\{D_j\}$, dataset sizes $\{\pi_j = n_j/n\}$.
2: Output: G, $D_j$, $\forall j \in [K]$.
3: for number of total training iterations do
4:   for number of iterations to train the discriminators do
5:     {Work at the central server.}
6:     G generates synthetic data: $\hat{x}_i = G(z_i)$, $i = 1, \ldots, m$.
7:     Send the batch of synthetic data $D_{syn} = \{\hat{x}_1, \ldots, \hat{x}_m\}$ to all K sites.
8:     for $j = 1, \ldots, K$ do
9:       {Work at each local site.}
10:      Uniformly randomly choose m real samples $\{x_1^j, \ldots, x_m^j\}$ from $D_j$.
11:      Update the parameters $\theta_j$ of the local discriminator $D_j$ using $\nabla_{\theta_j} \frac{1}{m}\sum_{i=1}^{m} \left[ \log(D_j(x_i^j)) + \log(1 - D_j(\hat{x}_i)) \right]$
12:     end for
13:   end for
14:   {Work at the central server.}
15:   G generates synthetic data: $\hat{x}_i = G(z_i)$, $i = 1, \ldots, m$.
16:   Send the batch of synthetic data $D_{syn} = \{\hat{x}_1, \ldots, \hat{x}_m\}$ to all K sites.
17:   for $j = 1, \ldots, K$ do
18:     {Work at each local site.}
19:     Output the predictions and gradients for the synthetic data, $D_j(\hat{x}_i)$ and $\partial D_j(\hat{x}_i)/\partial \hat{x}_i$, $i = 1, \ldots, m$, and send them to the central server.
20:   end for
21:   {Work at the central server.}
22:   Simulate the value of $D_{ua}(\hat{x}_i)$ via Eq. (4), $\forall i$.
23:   Update the parameters $\theta_G$ of G by descending the stochastic gradient:

$$\frac{1}{m}\sum_{i=1}^{m} \frac{\partial \log(1 - D_{ua}(\hat{x}_i))}{\partial D_{ua}(\hat{x}_i)} \cdot \frac{\partial D_{ua}}{\partial \Phi(D_{ua})} \sum_{j=1}^{K} \left[ \frac{\partial \Phi(D_{ua})}{\partial D_j(\hat{x}_i)} \cdot \frac{\partial D_j(\hat{x}_i)}{\partial \hat{x}_i} \right] \frac{\partial \hat{x}_i}{\partial \theta_G}$$

24: end for
The gradient-based updates can use any standard gradient-based learning rule.
" }, { "heading": "B.1 EMPIRICAL RESULTS ON I.I.D. DATASETS", "text": "
The quantitative results on the i.i.d. mixture datasets are shown in Table 2. One can see that all three distributed GAN methods achieve comparable performance. The qualitative examples in Fig. 4 likewise show that the three methods achieve similar results. This suggests that all three approaches can be used to train a distributed GAN when the local datasets have i.i.d. distributions, e.g., when the data is uniformly shuffled before being sent to each discriminator. Note that, at similar performance, UA-GAN has a much smaller communication cost than MD-GAN, since UA-GAN does not swap model parameters during training.
" }, { "heading": "B.2 ADDITIONAL EMPIRICAL RESULTS ON NON-IDENTICAL DISTRIBUTION", "text": "
We provide additional synthetic images for the non-identical distribution cases; see Fig. 5. With the average aggregation method, the synthetic images produced by Avg-GAN and MD-GAN contain only Fashion images in (a), (c) and only Font images in (b), (d). Our method, shown in (e) and (f), captures the different patterns in MNIST+Fashion/Font and generates diverse images.
" }, { "heading": "B.3 EMPIRICAL RESULTS OF MIXING THREE DATASETS", "text": "
We report the results of mixing the three datasets MNIST, Fashion-MNIST and Font. In the non-identical setting, each site receives MNIST data of one distinct class among 0-9; these are the distinguishing features of the different sites. Fashion and Font data are uniformly sampled for all 10 distributed sites; these are the common patterns across all sites. In the identical setting, all three datasets are uniformly distributed across the 10 sites. The quantitative results are shown in Table 3, and the synthetic images in Fig. 6 and Fig. 7. With the average aggregation method, the synthetic images produced by Avg-GAN and MD-GAN contain only Fashion and Font images (Fig. 6(a), (b)). Our method (Fig. 6(c)) captures the different patterns in MNIST+Fashion+Font and generates diverse images.
" }, { "heading": "B.4 EMPIRICAL RESULTS IN LARGER FEDERATED LEARNING SETTING", "text": "
We report the results of the distributed GAN methods (Avg-GAN, MD-GAN and UA-GAN) at a larger scale (n = 50 nodes). We uniformly split each site of the non-identical MNIST+Fashion dataset into 5 distributed sites, yielding in total 50 non-identical MNIST+Fashion datasets with 2380 MNIST and Fashion images each. The quantitative results are shown in Table 4, and the synthetic images in Fig. 8.
" }, { "heading": "B.5 EMPIRICAL RESULTS IN UNCONDITIONAL SETTING", "text": "
We report the results when using unconditional GANs in all methods (Centralized, Avg-GAN, MD-GAN and UA-GAN). The quantitative results are shown in Table 5 and Table 6, and the synthetic images in Fig. 9 and Fig. 10. In the unconditional setting, the condition variable (the label) is not given, so the synthetic data cannot be directly used to train a classification model; we therefore do not report classification accuracy in Table 5 and Table 6.
" }, { "heading": "B.6 EMPIRICAL RESULTS OF IMBALANCED DATASETS IN DIFFERENT SITES", "text": "
We report the results when the sizes of the 10 sites are not the same. Based on the non-identical MNIST+Fashion dataset, we reduce the sample sizes of the first 5 sites by half and keep the other 5 sites unchanged.
In this case, the numbers of images in each site are shown in Table 7. The quantitative results are shown in Table 8. The synthetic images are shown in Fig. 11." } ]
2020
null
SP:f08a22a7a003f9fff3d1366b923ed961489d9158
[ "This paper proposes diff pruning, an alternative paradigm for parameter-efficient transfer learning of pre-trained models. Similar to adapters, diff pruning leaves the body of the pre-trained model unchanged. Rather than inserting additional task-specific parameters into the pre-trained model, diff pruning adds reparameterizes the parameters of the transferred model $\\theta_\\tau$ by adding a diff vector $\\delta_\\tau$ to them: $\\theta_\\tau = \\theta_{\\text{pretrained}} + \\delta_\\tau$. Parameter efficiency is achieved by regularizing $\\theta_\\tau$ to be sparse. The authors achieve this by using a relaxed mask vector to approximate the $L_0$ norm. They also propose a way to control for a specific sparsity rate via projection onto the $L_0$ ball after training and to enforce group sparsity that takes the model's structure into account. The approach is evaluated on the GLUE benchmark where it achieves competitive performance to full fine-tuning a BERT Large model and adapters while being more parameter-efficient than both of them." ]
While task-specific finetuning of deep networks pretrained with self-supervision has led to significant empirical advances in NLP, their large size makes the standard finetuning approach difficult to apply to multi-task, memory-constrained settings, as storing the full model parameters for each task becomes prohibitively expensive. We propose diff pruning as a simple approach to enable parameter-efficient transfer learning within the pretrain-finetune framework. This approach views finetuning as learning a task-specific "diff" vector that is applied on top of the pretrained parameter vector, which remains fixed and is shared across different tasks. The diff vector is adaptively pruned during training with a differentiable approximation to the L0-norm penalty to encourage sparsity. Diff pruning becomes parameter-efficient as the number of tasks increases, as it requires storing only the nonzero positions and weights of the diff vector for each task, while the cost of storing the shared pretrained model remains constant. We find that models finetuned with diff pruning can match the performance of fully finetuned baselines on the GLUE benchmark while only modifying 0.5% of the pretrained model's parameters per task.
[ { "affiliations": [], "name": "DIFF PRUNING" } ]
[ { "authors": [ "Sanjeev Arora", "Yingyu Liang", "Tengyu Ma" ], "title": "A Simple but Tough-to-Beat Baseline for Sentence Embeddings", "venue": "Proceedings of ICLR,", "year": 2017 }, { "authors": [ "Ilya Sutskever", "Dario Amodei" ], "title": "Language Models are Few-Shot Learners", "venue": null, "year": 2020 }, { "authors": [ "Rich Caruana" ], "title": "Multitask Learning", "venue": "Machine Learning,", "year": 1997 }, { "authors": [ "Daniel Cer", "Yinfei Yang", "Sheng-yi Kong", "Nan Hua", "Nicole Limtiaco", "Rhomni St. John", "Noah Constant", "Mario Guajardo-Cespedes", "Steve Yuan", "Chris Tar", "Brian Strope", "Ray Kurzweil" ], "title": "Universal sentence encoder for English", "venue": "In Proceedings of EMNLP: System Demonstrations,", "year": 2018 }, { "authors": [ "Tianlong Chen", "Jonathan Frankle", "Shiyu Chang", "Sijia Liu", "Yang Zhang", "Zhangyang Wang", "Michael Carbin" ], "title": "The Lottery Ticket Hypothesis for Pre-trained", "venue": "BERT Networks", "year": 2020 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Urvashi Khandelwal", "Christopher D. Manning", "Quoc V. Le" ], "title": "BAM! Born-Again Multi-Task Networks for Natural Language Understanding", "venue": "In Proceedings of ACL,", "year": 2019 }, { "authors": [ "Alexis Conneau", "Douwe Kiela", "Holger Schwenk", "Loic Barrault", "Antoine Bordes" ], "title": "Supervised Learning of Universal Sentence Representations from Natural Language Inference Data", "venue": "In Proceedings of EMNLP,", "year": 2017 }, { "authors": [ "Andrew Dai", "Quoc V. Le" ], "title": "Semi-Supervised Sequence Learning", "venue": "In Proceedings of NIPS,", "year": 2015 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "venue": "In Proceedings of NAACL,", "year": 2019 }, { "authors": [ "Robert French" ], "title": "Catastrophic forgetting in connectionist networks", "venue": "Trends in cognitive sciences,", "year": 1999 }, { "authors": [ "Prakhar Ganesh", "Yao Chen", "Xin Lou", "Mohammad Ali Khan", "Yin Yang", "Deming Chen", "Marianne Winslett", "Hassan Sajjad", "Preslav Nakov" ], "title": "Compressing Large-Scale Transformer-Based Models: A Case Study on BERT", "venue": null, "year": 2002 }, { "authors": [ "Mitchell A. Gordon", "Kevin Duh", "Nicholas Andrews" ], "title": "Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning", "venue": "In Proceedings of Rep4NLP", "year": 2020 }, { "authors": [ "Song Han", "Huizi Mao", "William J. 
Dally" ], "title": "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding", "venue": "In Proceedings of ICLR,", "year": 2016 }, { "authors": [ "Felix Hill", "Kyunghyun Cho", "Anna Korhonen" ], "title": "Learning distributed representations of sentences from unlabelled data", "venue": "In Proceedings of ACL,", "year": 2016 }, { "authors": [ "Neil Houlsby", "Andrei Giurgiu", "Stanislaw Jastrzebski", "Bruna Morrone", "Quentin de Laroussilhe", "Andrea Gesmundo", "Mona Attariyanand Sylvain Gelly" ], "title": "Parameter-efficient transfer learning for nlp", "venue": "In Proceedings of ICML,", "year": 2019 }, { "authors": [ "Jeremy Howard", "Sebastian Ruder" ], "title": "Universal Language Model Fine-tuning for Text Classification", "venue": "In Proceedings of ACL,", "year": 2018 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical Reparameterization with Gumbel-Softmax", "venue": "In Proceedings of ICLR,", "year": 2017 }, { "authors": [ "Xiaoqi Jiao", "Yichun Yin", "Lifeng Shang", "Xin Jiang", "Xiao Chen", "Linlin Li", "Fang Wang", "Qun Liu" ], "title": "TinyBERT: Distilling BERT for Natural Language Understanding", "venue": null, "year": 1909 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A. Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska", "Demis Hassabis", "Claudia Clopath", "Dharshan Kumaran", "Raia Hadsell" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the National Academy of Sciences,", "year": 2017 }, { "authors": [ "Ryan Kiros", "Yukun Zhu", "Ruslan Salakhutdinov", "Richard S. Zemel", "Antonio Torralba", "Raquel Urtasun", "Sanja Fidler" ], "title": "Skip-Thought Vectors", "venue": "In Proceedings of NeurIPS,", "year": 2015 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations", "venue": "In Proceedings of ICLR,", "year": 2020 }, { "authors": [ "Quoc V. Le", "Tomas Mikolov" ], "title": "Distributed Representations of Sentences and Documents", "venue": "In Proceedings of ICML,", "year": 2014 }, { "authors": [ "Cheolhyoung Lee", "Kyunghyun Cho", "Wanmo Kang" ], "title": "Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models", "venue": "In Proceedings of ICLR,", "year": 2020 }, { "authors": [ "Sang-Woo Lee", "Jin-Hwa Kim", "Jaehyun Jun", "Jung-Woo Ha", "Byoung-Tak Zhang" ], "title": "Overcoming catastrophic forgetting by incremental moment matching", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Nelson F. Liu", "Matt Gardner", "Yonatan Belinkov", "Matthew E. Peters", "Noah A. 
Smith" ], "title": "Linguistic Knowledge and Transferability of Contextual Representations", "venue": "In Proceedings of ACL,", "year": 2019 }, { "authors": [ "Xiaodong Liu", "Pengcheng He", "Weizhu Chen", "Jianfeng Gao" ], "title": "Multi-Task Deep Neural Networks for Natural Language Understanding", "venue": "In Proceedings of ACL,", "year": 2019 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "RoBERTa: A Robustly", "venue": "Optimized BERT Pretraining Approach", "year": 2019 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient Episodic Memory for Continual Learning", "venue": "In Proceedings of NeurIPS,", "year": 2017 }, { "authors": [ "Christos Louizos", "Max Welling", "Diederik P", "Kingma" ], "title": "Learning Sparse Neural Networks through L0 Regularization", "venue": "In Proceedings of ICLR,", "year": 2018 }, { "authors": [ "Bryan McCann", "James Bradbury", "Caiming Xiong", "Richard Socher" ], "title": "Learned in translation: Contextualized word vectors", "venue": "In Proceedings of NeurIPS", "year": 2017 }, { "authors": [ "Antonio Valerio Miceli Barone", "Barry Haddow", "Ulrich Germann", "Rico Sennrich" ], "title": "Regularization techniques for fine-tuning in neural machine translation", "venue": "In Proceedings of EMNLP,", "year": 2017 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient Estimation of Word Representations in Vector Space", "venue": null, "year": 2013 }, { "authors": [ "German I. Parisi", "Ronald Kemker", "Jose L. Part", "Christopher Kanan", "Stefan Wermter" ], "title": "Continual Lifelong Learning with Neural Networks: A Review", "venue": null, "year": 2018 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep Contextualized Word Representations", "venue": "In Proceedings of NAACL,", "year": 2018 }, { "authors": [ "Jonas Pfeiffer", "Aishwarya Kamath", "Andreas Ruckle", "Kyunghyun Cho amd Iryna Gurevych" ], "title": "AdapterFusion: Non-Destructive Task Composition for Transfer Learning", "venue": null, "year": 2020 }, { "authors": [ "Jonas Pfeiffer", "Andreas Ruckle", "Clifton Poth", "Aishwarya Kamath", "Ivan Vulic", "Sebastian Ruder", "Iryna Gurevych Kyunghyun Cho" ], "title": "AdapterHub: A Framework for Adapting Transformers", "venue": null, "year": 2020 }, { "authors": [ "Jonas Pfeiffer", "Ivan Vulic", "Iryna Gurevych", "Sebastian Ruder" ], "title": "MAD-X: An Adapter-based Framework for Multi-task Cross-lingual Transfer", "venue": null, "year": 2020 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": null, "year": 2018 }, { "authors": [ "Alec Radford", "Jeff Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language Models are Unsupervised Multitask Learners", "venue": null, "year": 2019 }, { "authors": [ "Evani Radiya-Dixit", "Xin Wang" ], "title": "How fine can fine-tuning be? Learning efficient language models", "venue": "In Proceedings of AISTATS,", "year": 2020 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Katherine Lee Adam Roberts", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J. 
Liu" ], "title": "Exploring the Limits of Transfer Learning with a Unified Text-toText Transformer", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Samyam Rajbhandari", "Jeff Rasley", "Olatunji Ruwase", "Yuxiong He" ], "title": "ZeRO: Memory Optimizations Toward Training Trillion", "venue": "Parameter Models", "year": 2019 }, { "authors": [ "S. Rebuffi", "A. Vedaldi", "H. Bilen" ], "title": "Efficient Parametrization of Multi-domain Deep Neural Networks", "venue": "In Proceedings of CVPR,", "year": 2018 }, { "authors": [ "Nils Reimers", "Iryna Gurevych" ], "title": "Sentence-BERT: Sentence Embeddings using Siamese BERTNetworks", "venue": "In Proceedings of EMNLP,", "year": 2019 }, { "authors": [ "Victor Sanh", "Lysandre Debut", "Julien Chaumond", "Thomas Wolf" ], "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "venue": "In Proceedings of 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing,", "year": 2019 }, { "authors": [ "Victor Sanh", "Thomas Wolf", "Alexander M. Rush" ], "title": "Movement Pruning: Adaptive Sparsity by Fine-Tuning", "venue": null, "year": 2005 }, { "authors": [ "Timo Schick", "Hinrich Schutze" ], "title": "It’s Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners", "venue": null, "year": 2009 }, { "authors": [ "Jonathan Schwarz", "Jelena Luketina", "Wojciech M. Czarnecki", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & Compress: A scalable framework for continual learning", "venue": "In Proceedings of ICML,", "year": 2018 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehong Kim", "Jiwon Kim" ], "title": "Continual Learning with Deep Generative Replay", "venue": "In Proceedings of NeurIPS", "year": 2017 }, { "authors": [ "Mohammad Shoeybi", "Mostofa Patwary", "Raul Puri", "Patrick LeGresley", "Jared Casper", "Bryan Catanzaro" ], "title": "Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism", "venue": null, "year": 1909 }, { "authors": [ "Asa Cooper Stickland", "Iain Murray" ], "title": "BERT and PALs: Projected attention layers for efficient adaptation in multi-task learning", "venue": "In Proceedings of ICML,", "year": 2019 }, { "authors": [ "Sandeep Subramanian", "Adam Trischler", "Yoshua Bengio", "Christopher J. 
Pal" ], "title": "Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning", "venue": "In Proceedings of ICLR,", "year": 2018 }, { "authors": [ "Siqi Sun", "Yu Cheng", "Zhe Gan", "Jingjing Liu" ], "title": "Patient Knowledge Distillation for BERT Model Compression", "venue": "In Proceedings of EMNLP,", "year": 2019 }, { "authors": [ "Zhiqing Sun", "Hongkun Yu", "Xiaodan Song", "Renjie Liu", "Yiming Yang", "Denny Zhou" ], "title": "MobileBERT: a compact task-agnostic BERT for resource-limited devices", "venue": "In Proceedings of ACL,", "year": 2020 }, { "authors": [ "Ian Tenney", "Dipanjan Das", "Ellie Pavlick" ], "title": "BERT Rediscovers the Classical NLP Pipeline", "venue": "In Proceedings of ACL,", "year": 2019 }, { "authors": [ "Iulia Turc", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Well-Read Students Learn Better: On the Importance of Pre-training", "venue": "Compact Models", "year": 2019 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In Proceedings of ICLR,", "year": 2019 }, { "authors": [ "Georg Wiese", "Dirk Weissenborn", "Mariana Neves" ], "title": "Neural domain adaptation for biomedical question answering", "venue": "In Proceedings of CoNLL,", "year": 2017 }, { "authors": [ "John Wieting", "Mohit Bansal", "Kevin Gimpel", "Karen Livescu" ], "title": "Towards Universal Paraphrastic Sentence Embeddings", "venue": "In Proceedings of ICLR,", "year": 2016 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz", "Joe Davison", "Sam Shleifer", "Patrick von Platen", "Clara Ma", "Yacine Jernite", "Julien Plu", "Canwen Xu", "Teven Le Scao", "Sylvain Gugger", "Mariama Drame", "Quentin Lhoest", "Alexander M. Rush" ], "title": "Huggingface’s transformers: Stateof-the-art natural language processing", "venue": "ArXiv, abs/1910.03771,", "year": 2019 }, { "authors": [ "John M. Wu", "Yonatan Belinkov", "Hassan Sajjad", "Nadir Durrani", "Fahim Dalvi", "James Glass" ], "title": "Similarity Analysis of Contextual Word Representation Models", "venue": "In Proceedings of ACL,", "year": 2020 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "venue": "In Proceedings of NeurIPS,", "year": 2019 }, { "authors": [ "Minghua Zhang", "Yunfang Wu", "Weikang Li", "Wei Li" ], "title": "Learning universal sentence representations with mean-max attention autoencoder", "venue": "In Proceedings of EMNLP,", "year": 2018 }, { "authors": [ "Yan Zhang", "Ruidan He", "Zuozhu Liu", "Kwan Hui Lim", "Lidong Bing" ], "title": "An Unsupervised Sentence Embedding Method byMutual Information Maximization", "venue": "In Proceedings of EMNLP,", "year": 2020 }, { "authors": [ "Mengjie Zhao", "Tao Lin", "Martin Jaggi", "Hinrich Schutze" ], "title": "Masking as an Efficient Alternative to Finetuning for Pretrained Language Models", "venue": null, "year": 2004 } ]
[ { "heading": "1 INTRODUCTION", "text": "Task-specific finetuning of pretrained deep networks has become the dominant paradigm in contemporary NLP, achieving state-of-the-art results across a suite of natural language understanding tasks (Devlin et al., 2019; Liu et al., 2019c; Yang et al., 2019; Lan et al., 2020). While straightforward and empirically effective, this approach is difficult to scale to multi-task, memory-constrained settings (e.g. for on-device applications), as it requires shipping and storing a full set of model parameters for each task. Inasmuch as these models are learning generalizable, task-agnostic language representations through self-supervised pretraining, finetuning the entire model for each task is an especially inefficient use of model parameters.\nA popular approach to parameter-efficiency is to learn sparse models for each task where a subset of the final model parameters are exactly zero (Gordon et al., 2020; Sajjad et al., 2020; Zhao et al., 2020; Sanh et al., 2020). Such approaches often face a steep sparsity/performance tradeoff, and a substantial portion of nonzero parameters (e.g. 10%-30%) are still typically required to match the performance of the dense counterparts. An alternative is to use multi-task learning or feature-based transfer for more parameter-efficient transfer learning with pretrained models (Liu et al., 2019b; Clark et al., 2019; Stickland & Murray, 2019; Reimers & Gurevych, 2019; Feng et al., 2020). These methods learn only a small number of additional parameters (e.g. a linear layer) on top of a shared model. However, multi-task learning generally requires access to all tasks during training to prevent catastrophic forgetting (French, 1999), while feature-based transfer learning (e.g. based on taskagnostic sentence representations) is typically outperformed by full finetuning (Howard & Ruder, 2018).\nAdapters (Rebuffi et al., 2018) have recently emerged as a promising approach to parameterefficient transfer learning within the pretrain-finetune paradigm (Houlsby et al., 2019; Pfeiffer et al., 2020a;b;c). Adapter layers are smaller, task-specific modules that are inserted between layers of a pretrained model, which remains fixed and is shared across tasks. These approaches do not require access to all tasks during training making them attractive in settings where one hopes to obtain and share performant models as new tasks arrive in stream. Houlsby et al. (2019) find that adapter layers trained on BERT can match the performance of fully finetuned BERT on the GLUE benchmark (Wang et al., 2019a) while only requiring 3.6% additional parameters (on average) per task.\nIn this work, we consider a similar setting as adapters but propose a new diff pruning approach with the goal of even more parameter-efficient transfer learning. Diff pruning views finetuning as learning a task-specific difference vector that is applied on top of the pretrained parameter vector, which remains fixed and is shared across different tasks. In order to learn this vector, we reparameterize the task-specific model parameters as θtask = θpretrained +δtask, where the pretrained parameter vector θpretrained is fixed and the task-specific diff vector δtask is finetuned. The diff vector is regularized with a differentiable approximation to the L0-norm penalty (Louizos et al., 2018) to encourage sparsity. 
This approach becomes parameter-efficient as the number of tasks increases, since it only requires storing the nonzero positions and weights of the diff vector for each task. The cost of storing the shared pretrained model remains constant and is amortized across multiple tasks. On the GLUE benchmark (Wang et al., 2019a), diff pruning can match the performance of the fully finetuned BERT baselines while finetuning only 0.5% of the pretrained parameters per task, making it a potential alternative to adapters for parameter-efficient transfer learning.
" }, { "heading": "2 BACKGROUND: TRANSFER LEARNING FOR NLP", "text": "
The field of NLP has recently seen remarkable progress through transfer learning with a pretrain-and-finetune paradigm, which initializes a subset of the model parameters for all tasks from a pretrained model and then finetunes on a task-specific objective. Pretraining objectives include context prediction (Mikolov et al., 2013), autoencoding (Dai & Le, 2015), machine translation (McCann et al., 2017), and, more recently, variants of language-modeling objectives (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019).

Here we consider applying transfer learning to multiple tasks. We consider a setting with a potentially unknown set of tasks, where each $\tau \in \mathcal{T}$ has an associated training set $\{x_\tau^{(n)}, y_\tau^{(n)}\}_{n=1}^{N}$.1 For all tasks, the goal is to produce (possibly tied) model parameters $\theta_\tau$ to minimize the empirical risk,

$$\min_{\theta_\tau} \frac{1}{N}\sum_{n=1}^{N} \mathcal{L}\left( f(x_\tau^{(n)}; \theta_\tau), y_\tau^{(n)} \right) + \lambda R(\theta_\tau)$$

where $f(\cdot;\theta)$ is a parameterized function over the input (e.g., a neural network), $\mathcal{L}(\cdot,\cdot)$ is a loss function (e.g., cross-entropy), and $R(\cdot)$ is an optional regularizer with hyperparameter λ. This multi-task setting can use the pretrain-then-finetune approach by simply learning independent parameters for each task; however, the large size of pretrained models makes this approach exceedingly parameter-inefficient. For example, widely adopted models such as BERT-BASE and BERT-LARGE have 110M and 340M parameters, respectively, while contemporaries such as T5 (Raffel et al., 2020), Megatron-LM (Shoeybi et al., 2019), and Turing-NLG (Rajbhandari et al., 2019) have parameter counts in the billions. Storing the fully finetuned models becomes difficult even for a moderate number of tasks.2 A classic approach to tackling this parameter inefficiency (Caruana, 1997) is to train a single shared model (along with a task-specific output layer) against multiple tasks through joint training. However, the usual formulation of multi-task learning requires the set of tasks $\mathcal{T}$ to be known in advance in order to prevent catastrophic forgetting (French, 1999),3 making it unsuitable for applications in which the set of tasks is unknown (e.g., when tasks arrive in a stream).

1 Therefore our setup is different from the classic multitask setting, which usually assumes that the set of tasks is known.
2 An intriguing line of work suggests that large-scale language models can be used without finetuning for a variety of tasks if given the appropriate context (Radford et al., 2019; Brown et al., 2020). While interesting, these models generally underperform task-specific models and require billions of parameters, though recent work suggests that they can be made substantially smaller (Schick & Schutze, 2020).
3 However, work on continual learning mitigates these issues to an extent (Shin et al., 2017; Lopez-Paz & Ranzato, 2017; Lee et al., 2017; Kirkpatrick et al., 2017; Parisi et al., 2018).
" }, { "heading": "3 DIFF PRUNING", "text": "
Diff pruning formulates task-specific finetuning as learning a diff vector $\delta_\tau$ that is added to the pretrained model parameters $\theta_{pretrained}$. We first reparameterize the task-specific model parameters,

$$\theta_\tau = \theta_{pretrained} + \delta_\tau,$$
which results in the following empirical risk minimization problem:

$$\min_{\delta_\tau} \frac{1}{N}\sum_{n=1}^{N} \mathcal{L}\left( f(x_\tau^{(n)}; \theta_{pretrained} + \delta_\tau), y_\tau^{(n)} \right) + \lambda R(\theta_{pretrained} + \delta_\tau).$$

This trivial reparameterization is equivalent to the original formulation. Its benefit comes in the multi-task setting, where the cost of storing the pretrained parameters $\theta_{pretrained}$ is amortized across tasks and the only marginal cost for new tasks is the diff vector. If we can regularize $\delta_\tau$ to be sparse such that $\|\delta_\tau\|_0 \ll \|\theta_{pretrained}\|_0$, then this approach becomes more parameter-efficient as the number of tasks increases. We can specify this goal with an L0-norm penalty on the diff vector,

$$R(\theta_{pretrained} + \delta_\tau) = \|\delta_\tau\|_0 = \sum_{i=1}^{d} \mathbb{1}\{\delta_{\tau,i} \neq 0\}.$$

3.1 DIFFERENTIABLE APPROXIMATION TO THE L0-NORM

This regularizer is difficult to optimize directly, as it is non-differentiable. In order to approximate this L0 objective, we follow the standard approach for gradient-based learning with L0 sparsity using a relaxed mask vector (Louizos et al., 2018). This approach relaxes a binary vector into continuous space and then multiplies it with a dense weight vector to determine how much of the weight vector is applied during training. After training, the mask is deterministic and a large portion of the diff vector is true zero.

To apply this method, we first decompose $\delta_\tau$ into a binary mask vector multiplied with a dense vector,

$$\delta_\tau = z_\tau \odot w_\tau, \qquad z_\tau \in \{0,1\}^d, \; w_\tau \in \mathbb{R}^d.$$

We can now instead optimize an expectation with respect to $z_\tau$, whose distribution $p(z_\tau; \alpha_\tau)$ is initially Bernoulli with parameters $\alpha_\tau$:

$$\min_{\alpha_\tau, w_\tau} \mathbb{E}_{z_\tau \sim p(z_\tau; \alpha_\tau)}\left[ \frac{1}{N}\sum_{n=1}^{N} \mathcal{L}\left( f(x_\tau^{(n)}; \theta_{pretrained} + z_\tau \odot w_\tau), y_\tau^{(n)} \right) + \lambda \|\delta_\tau\|_0 \right].$$

This objective is still difficult in practice because $z_\tau$ is discrete (which would require the score-function gradient estimator), but the expectation suggests empirically effective relaxations. We follow prior work (Louizos et al., 2018; Wang et al., 2019b) and relax $z_\tau$ into continuous space $[0,1]^d$ with a stretched Hard-Concrete distribution (Jang et al., 2017; Maddison et al., 2017), which allows the use of pathwise gradient estimators. Specifically, $z_\tau$ is now defined as a deterministic and (sub)differentiable function of a sample u from a uniform distribution:

$$u \sim U(0,1)^d, \quad s_\tau = \sigma\left( \log u - \log(1-u) + \alpha_\tau \right), \quad \bar{s}_\tau = s_\tau \times (r - l) + l, \quad z_\tau = \min(1, \max(0, \bar{s}_\tau)).$$

Here $l < 0$ and $r > 1$ are two constants used to stretch $s_\tau$ into the interval $(l, r)^d$ before it is clamped to $[0,1]^d$ with the $\min(1, \max(0, \cdot))$ operation.
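A minimal PyTorch-style sketch of this relaxation, together with the expected-L0 penalty stated in the next equation, may help; this is our own illustration under the hyperparameters reported later (l = -1.5, r = 1.5), not the authors' code:

```python
import math
import torch

def hard_concrete_mask(alpha: torch.Tensor, l: float = -1.5, r: float = 1.5) -> torch.Tensor:
    # u ~ U(0,1); concrete sample; stretch to (l, r); clamp to [0, 1].
    u = torch.rand_like(alpha)
    s = torch.sigmoid(torch.log(u) - torch.log(1.0 - u) + alpha)
    return torch.clamp(s * (r - l) + l, 0.0, 1.0)

def expected_l0(alpha: torch.Tensor, l: float = -1.5, r: float = 1.5) -> torch.Tensor:
    # Closed form E[||delta||_0] = sum_i sigma(alpha_i - log(-l / r)).
    return torch.sigmoid(alpha - math.log(-l / r)).sum()

# delta = z * w is what gets added to the frozen pretrained parameters.
alpha = torch.randn(8, requires_grad=True)
w = torch.zeros(8, requires_grad=True)
delta = hard_concrete_mask(alpha) * w
penalty = expected_l0(alpha)
```

Because the clamp saturates, many coordinates of the sampled mask are exactly zero, while the pathwise gradient still reaches `alpha` through the unsaturated coordinates.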
In this case we have a differentiable closed-form expression for the expected L0-norm:

$$\mathbb{E}\left[\|\delta_\tau\|_0\right] = \sum_{i=1}^{d} \mathbb{E}\left[\mathbb{1}\{z_{\tau,i} > 0\}\right] = \sum_{i=1}^{d} \sigma\left( \alpha_{\tau,i} - \log\frac{-l}{r} \right).$$

Thus the final optimization problem is given by

$$\min_{\alpha_\tau, w_\tau} \mathbb{E}_{u \sim U(0,1)^d}\left[ \frac{1}{N}\sum_{n=1}^{N} \mathcal{L}\left( f(x_\tau^{(n)}; \theta_{pretrained} + z_\tau \odot w_\tau), y_\tau^{(n)} \right) \right] + \lambda \sum_{i=1}^{d} \sigma\left( \alpha_{\tau,i} - \log\frac{-l}{r} \right),$$

and we can now utilize pathwise gradient estimators to optimize the first term with respect to $\alpha_\tau$, since the expectation no longer depends on it.4 After training, we obtain the final diff vector $\delta_\tau$ by sampling u once to obtain $z_\tau$ (which is not necessarily a binary vector but has a significant number of dimensions exactly zero due to the clamping function) and setting $\delta_\tau = z_\tau \odot w_\tau$.5

4 To reduce notational clutter, we subsume the parameters of the task-specific output layer, which is not pretrained, into $\theta_{pretrained}$. We do not apply the L0-norm penalty to these parameters during training.
5 We found sampling once to work as well as more complicated alternatives (e.g., based on multiple samples).

3.2 L0-BALL PROJECTION WITH MAGNITUDE PRUNING FOR SPARSITY CONTROL

Differentiable L0 regularization provides an effective way to achieve a high sparsity rate. However, it would be ideal to have more fine-grained control over the exact sparsity rate of the diff vector, especially for applications that require specific parameter budgets. Since λ is just the Lagrange multiplier for the constraint $\mathbb{E}[\|\delta_\tau\|_0] < \eta$ for some η, this could in principle be achieved by searching over different values of λ. However, we found it more efficient and empirically effective to achieve an exact sparsity rate by simply projecting onto the L0-ball after training.

Specifically, we use magnitude pruning on the diff vector $\delta_\tau$ and target a sparsity rate t% by keeping only the top $t\% \times d$ values in $\delta_\tau$.6 Note that unlike standard magnitude pruning, this is based on the magnitudes of the diff-vector values, not of the model parameters. As is usual in magnitude pruning, we found it important to further finetune $\delta_\tau$ with the nonzero masks fixed to maintain good performance (Han et al., 2016). Since this type of parameter-efficiency through projection onto the L0-ball can be applied without adaptive diff pruning,7 such an approach serves as one of our baselines in the empirical study.
" }, { "heading": "3.3 STRUCTURED DIFF PRUNING", "text": "
Diff pruning, as presented above, is architecture-agnostic and does not exploit the underlying model structure: each dimension of $z_\tau$ is independent from the others. While this makes the approach potentially more flexible, we might expect a better sparsity/performance tradeoff from a structured formulation that encourages active parameters to group together and other areas to be fully sparse. Motivated by this intuition, we first partition the parameter indices into G groups $\{g^{(1)}, \ldots, g^{(G)}\}$, where $g^{(j)}$ is the subset of parameter indices governed by group $g^{(j)}$.8 We then introduce a scalar $z_\tau^j$ (with the associated parameter $\alpha_\tau^j$) for each group $g^{(j)}$, and decompose the task-specific parameter difference for index $i \in g^{(j)}$ as $\delta_{\tau,i} = z_{\tau,i} \times z_\tau^j \times w_{\tau,i}$. The expected L0-norm is then given by

$$\mathbb{E}\left[\|\delta_\tau\|_0\right] = \sum_{j=1}^{G} \sum_{i \in g^{(j)}} \mathbb{E}\left[\mathbb{1}\{z_{\tau,i} \cdot z_\tau^j > 0\}\right] = \sum_{j=1}^{G} \sum_{i \in g^{(j)}} \sigma\left( \alpha_{\tau,i} - \log\frac{-l}{r} \right) \times \sigma\left( \alpha_\tau^j - \log\frac{-l}{r} \right),$$

and we can train with gradient-based optimization as before.
" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 MODEL AND DATASETS", "text": "
For evaluation we use the GLUE benchmark (Wang et al., 2019a), a popular benchmark for finetuning.
Following adapters (Houlsby et al., 2019), we test our approach on the following subset of the GLUE tasks: Multi-Genre Natural Language Inference (MNLI), where the goal is to predict whether the relationship between two sentences is entailment, contradiction, or neutral (we test on both MNLI-m and MNLI-mm, which respectively test on matched/mismatched domains); Quora Question Pairs (QQP), a classification task to predict whether two questions are semantically equivalent; Question Natural Language Inference (QNLI), where the model must predict whether a sentence is a correct answer to a given question; Stanford Sentiment Treebank (SST-2), a sentence classification task to predict the sentiment of movie reviews; Corpus of Linguistic Acceptability (CoLA), where the goal is to predict whether a sentence is linguistically acceptable or not; Semantic Textual Similarity Benchmark (STS-B), where the model must predict a similarity rating between two sentences; Microsoft Research Paraphrase Corpus (MRPC), where the goal is to predict whether two sentences are semantically equivalent; and Recognizing Textual Entailment (RTE), where the model must predict whether a second sentence is entailed by the first. For evaluation, the benchmark uses Matthew's correlation for CoLA, Spearman correlation for STS-B, F1 score for MRPC/QQP, and accuracy for MNLI/QNLI/SST-2/RTE.

6 Wang et al. (2019b) show that it is also possible to inject such a constraint softly into the training objective by regularizing the expected model size towards a certain rate. However, since the constraint is soft, this approach also makes it difficult to target an exact sparsity rate.
7 Concretely, one can obtain $\theta_\tau$ through usual finetuning, set $\delta_\tau = \theta_\tau - \theta_{pretrained}$, and then apply magnitude pruning followed by additional finetuning on $\delta_\tau$.
8 While groups can be defined in various ways, we found that defining groups based on each matrix/bias vector of the pretrained model was simple and worked well enough.

For all experiments, we use the BERT-LARGE model from Devlin et al. (2019), which has 24 layers, hidden size 1024, 16 attention heads, and 340M parameters. We use the Huggingface Transformers library (Wolf et al., 2019) to conduct our experiments.
" }, { "heading": "4.2 BASELINES", "text": "
We compare both structured and non-structured variants of diff pruning against the following baselines: Full finetuning, which fully finetunes BERT-LARGE as usual; Last-layer finetuning, which only finetunes the penultimate layer (along with the final output layer);9 Adapters from Houlsby et al. (2019), which train task-specific bottleneck layers between each layer of a pretrained model, where parameter-efficiency can be controlled by varying the size of the bottleneck layers; and Non-adaptive diff pruning, which performs diff pruning based on magnitude pruning alone (i.e., we obtain $\theta_\tau$ through usual finetuning, set $\delta_\tau = \theta_\tau - \theta_{pretrained}$, and then apply magnitude pruning followed by additional finetuning on $\delta_\tau$). For diff pruning we set our target sparsity rate to 0.5% and investigate the effect of different target sparsity rates in Section 5.1.
" }, { "heading": "4.3 IMPLEMENTATION DETAILS AND HYPERPARAMETERS", "text": "
Diff pruning introduces additional hyperparameters l, r (for stretching the Hard-Concrete distribution) and λ (for weighting the approximate L0-norm penalty). We found $l = -1.5$, $r = 1.5$, $\lambda = 1.25 \times 10^{-7}$ to work well across all tasks. We also initialize the weight vector $w_\tau$ to 0, and $\alpha_\tau$ to a positive vector (we use 5) to encourage $z_\tau$ to be close to 1 at the start of training.
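For concreteness, this setup might look like the following minimal sketch (our own illustration; the small dimensionality is for readability, and the optimizer choice here is an assumption rather than a detail reported by the paper):

```python
import torch

d = 1024                       # illustrative; in practice d matches the model size
l, r, lam = -1.5, 1.5, 1.25e-7

# w starts at 0 so theta_task equals theta_pretrained at initialization;
# alpha starts at 5 so the relaxed mask z is close to 1 early in training.
w = torch.zeros(d, requires_grad=True)
alpha = torch.full((d,), 5.0, requires_grad=True)
opt = torch.optim.AdamW([w, alpha], lr=1e-5)
```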
While we mainly experiment with BERTLARGE to compare against prior work with adapters (Houlsby et al., 2019), in preliminary experiments we found these hyperparameters to work for finetuning RoBERTa (Liu et al., 2019c) and XLNet (Yang et al., 2019) models as well.\nFor all tasks we use a learning rate of 1 × 10−5 and perform a hyperparameter search over batch size ∈ {4, 6, 8, 10} and the number of epochs ∈ {2, 3, 4, 5}.10 However, we found the default settings used for regular finetuning as suggested in the original BERT paper to work well for most tasks. Finetuning with the fixed mask after projecting onto the L0-ball with magnitude pruning is done with a learning rate of 5 × 10−5 for 3 or 5 epochs (3 epochs for QNLI, SST-2, MNLI-m, MNLI-mm, CoLA, QQP, 5 epochs for MRPC, STS-B, RTE). Grouping for the structured version of diff pruning is based on the matrix/bias vectors (i.e. parameters that belong to the same matrix or bias vector are assumed to be in the same group), which results in 393 groups.11" }, { "heading": "5 RESULTS AND ANALYSIS", "text": "Our main results on the GLUE benchmark are shown in Table 1. Structured diff pruning can match the performance of a fully finetuned BERTLARGE model while only requiring 0.5% additional parameters per task. Diff pruning without structured sparsity also performs well, though slightly worse than the structured approach. Non-adaptive diff pruning, which magnitude prunes the diff vector without learning the binary mask zτ, performs significantly worse, indicating the importance of learning the masking vector. Compared to adapters, diff pruning obtains similar performance while requiring fewer parameters per task, making it a potential alternative for parameter-efficient transfer learning.12 We now perform a series of analysis experiments on the validation set.\n9Wu et al. (2020) observe that finetuning later layers generally performs better than finetuning earlier layers.\n10For the larger QNLI, SST-2, MNLI-m, MNLI-mm, CoLA, QQP datasets, we use batch size of 8 over 3 epochs. For the smaller MRPC, STS-B, RTE datasets, we use batch size of 8 over 3 epochs.\n11This definition of groups is implementation-specific since it depends on how one concatenates the input vector before each affine layer. Our grouping is based on Huggingface's BERT implementation at commit 656e1386a296d696327a9db37de2ccccc79e2cc7 (available at https://github.com/huggingface/transformers/blob/656e1386a296d696327a9db37de2ccccc79e2cc7/src/transformers/modeling_bert.py). In preliminary experiments we found this simple definition to work well compared to alternative group definitions (e.g. based on individual neurons)." }, { "heading": "5.1 VARYING THE TARGET SPARSITY", "text": "In Figure 1 (left), we plot results on the GLUE validation set averaged across all tasks at target sparsity rates of 0.1%, 0.25%, 0.5%, 1.0% for the different baselines. Structured diff pruning consistently outperforms non-structured and non-adaptive variants across different sparsity rates. The advantage of adaptive methods becomes more pronounced at extreme sparsity rates. In Table 2, we report the breakdown of accuracy of structured diff pruning across different tasks and sparsity rates, where we observe that different tasks have different sensitivity to target sparsity rates. This suggests that we can obtain even greater parameter-efficiency through targeting task-specific sparsity rates in the diff vector." }, { "heading": "5.2 STRUCTURED VS.
NON-STRUCTURED DIFF PRUNING", "text": "Structured diff pruning introduces an additional mask per group, which encourages pruning of entire groups. This is less restrictive than traditional group sparsity techniques that have been used with L0-norm relaxations, which force all parameters in a group to share the same mask (Louizos et al., 2018; Wang et al., 2019b). However, we still expect entire groups to be pruned out more often in the structured case, which might bias the learning process towards either eliminating nonzero diffs completely or clustering them together. In Figure 1 (right), we indeed find that structured diff pruning leads to finetuned models that are much more likely to leave entire groups unchanged from their pretrained values (zero diffs)." }, { "heading": "5.3 TASK-SPECIFIC SPARSITY", "text": "Different layers of pretrained models have been argued to encode different information (Liu et al., 2019a; Tenney et al., 2019). Given that each task will likely recruit different kinds of language phenomena embedded in the hidden layers, we hypothesize that diff pruning will modify different parts of the pretrained model through task-specific finetuning. Figure 2 shows the percentage of nonzero diff parameters attributable to the different layers for each task. We find that different tasks indeed modify different parts of the network, although there are some qualitative similarities between some tasks, for example between QNLI & QQP (both must encode questions), and MRPC & STS-B (both must predict similarity between sentences). The embedding layer is very sparsely modified for all tasks. While some of the variations in the sparsity distributions are due to simple randomness, we do observe some level of consistency over multiple runs of the same task, as shown in Figure 3 of the appendix.\nThe ability to modify different parts of the pretrained model for each task could explain the improved parameter-efficiency of our approach compared to Houlsby et al. (2019)’s adapter layers, which can only read/write to the pretrained model at certain points of the computational graph.13 This potentially suggests that adapter layers with more fine-grained access into model internals (e.g. adapters for key/value/query transformations) might result in even greater parameter-efficiency. While left as future work, we also note that diff pruning can be applied in conjunction with adapters, which might further improve results.\n12However, diff pruning incurs additional storage cost due to storing the nonzero positions of the diff vector.\n5.4 EFFECT OF L0-BALL PROJECTION VIA MAGNITUDE PRUNING\nApplying magnitude pruning to project onto the L0-ball was crucial in achieving exact sparsity targets. As shown in Table 3, we observed little loss in performance through magnitude pruning. We re-iterate that it was crucial to finetune with the fixed mask in order to maintain good performance.14" }, { "heading": "5.5 SQUAD EXTRACTIVE QUESTION ANSWERING", "text": "To demonstrate the effectiveness of our approach beyond classification, we additionally experiment on the extractive question answering task SQuAD, which asks the model to select the answer span to a question given a Wikipedia paragraph. To make direct comparisons with Houlsby et al. (2019), we run all experiments on SQuAD v1.1. For diff pruning, we use the same general hyper-parameters as our full finetuning baseline.15 Results are shown in Table 4. Diff pruning is able to achieve comparable or better performance with only 1% additional parameters.
Notably, we see that our method can improve the F1 score of the full finetuning baseline by a significant margin (e.g. 90.8% ⇒ 93.2%) while modifying many fewer parameters (e.g., 100% ⇒ 1%), which potentially implies that diff pruning can have a useful regularization effect.\n13To simulate this restricted setting, we tried applying diff pruning only on the dense transformations just before the output of each layer (i.e. after self-attention layers), and observed much worse performance.\n14Even for the approach that does not apply magnitude pruning, we found it helpful to fix the mask zτ after an initial training phase and finetune just wτ.\n15https://huggingface.co/transformers/v2.5.1/examples.html" }, { "heading": "6 DISCUSSION", "text": "" }, { "heading": "6.1 MEMORY REQUIREMENTS", "text": "For training, our approach requires more memory than usual finetuning due to additionally optimizing ατ and wτ. This did not present a significant challenge for the pretrained models that we experimented with in this study, since the majority of GPU memory was utilized by the minibatch’s activation layers. However, this could present an issue as model sizes get larger and larger. While training efficiency was not a primary concern of this work, diff pruning takes approximately 1.5× to 2× more time per batch, which results in slower training. After training, storing the task-specific diff vector requires storing a compressed version with both the nonzero positions and weights, which incurs additional storage requirements." }, { "heading": "6.2 INFORMATION-EFFICIENT TRANSFER LEARNING", "text": "Efficiently representing pretrained models adapted to new tasks is becoming an increasingly important problem in contemporary NLP. This paper focuses on a rather narrow definition of efficiency—parameter-efficiency. An interesting direction might be to target generalizations of parameter-efficiency, for example information-efficiency, which aims to minimize the number of bits required to represent the task-specific model when given the pretrained model for free. This view can suggest other avenues for achieving information-efficient transfer learning: for example, “what is the minimum number of (potentially synthetic) datapoints that we can finetune BERT on to obtain a good task-specific model?”,16 or “what is the shortest prefix string that we can condition GPT3 on for it to become a good task-specific model?”" }, { "heading": "7 RELATED WORK", "text": "Multi-task learning Multi-task learning (Caruana, 1997), broadly construed, aims to learn models and representations that can be utilized across a diverse range of tasks, and offers a natural approach to training parameter-efficient deep models. Several works have shown that a single BERT model can obtain good performance across multiple tasks when jointly trained (Liu et al., 2019b; Clark et al., 2019; Stickland & Murray, 2019). Adapter layers, which are task-specific layers that read and write to layers of a shared model (Rebuffi et al., 2018), offer an alternative approach to multi-task learning that does not require access to all tasks during training, and have also been applied to obtain parameter-efficient BERT models (Houlsby et al., 2019; Pfeiffer et al., 2020a;b;c).
A related line of work targets extreme parameter-efficiency through task-agnostic sentence representations that can be used without finetuning for downstream tasks (Le & Mikolov, 2014; Kiros et al., 2015; Wieting et al., 2016; Hill et al., 2016; Arora et al., 2017; Conneau et al., 2017; Cer et al., 2018; Zhang et al., 2018; Subramanian et al., 2018; Zhang et al., 2020). Reimers & Gurevych (2019), building on the earlier work of Conneau et al. (2017), show that BERT finetuned on natural language inference obtains sentence representations that perform well across multiple sentence-level tasks. These feature-based transfer learning methods are however generally outperformed by fully finetuned models (Howard & Ruder, 2018).\n16Dataset distillation (Wang et al., 2018) tackles this question in the context of vision models.\nModel compression There has been much recent work on compressing models pretrained with self-supervision (see Ganesh et al. (2020) for a recent survey). A particularly promising line of work focuses on obtaining smaller pretrained models (for subsequent finetuning) through weight pruning (Gordon et al., 2020; Sajjad et al., 2020; Chen et al., 2020) and/or knowledge distillation (Sanh et al., 2019; Sun et al., 2019; Turc et al., 2019; Jiao et al., 2019; Sun et al., 2020). It would be interesting to see whether our approach can be applied on top of these smaller pretrained models for even greater parameter-efficiency.\nLearning to prune Our work is closely related to the line of work on learning to prune pretrained models with differentiable relaxations of binary masks (Wang et al., 2019b; Zhao et al., 2020; Sanh et al., 2020; Radiya-Dixit & Wang, 2020). While these works also enable parameter-efficient transfer learning, they generally apply the masks directly on the pretrained parameters instead of on the difference vector as in the present work.\nRegularization towards pretrained models Finally, diff pruning is also related to works which regularize the learning process towards pretrained models for continual learning (Kirkpatrick et al., 2017; Schwarz et al., 2018), domain adaptation (Wiese et al., 2017; Miceli Barone et al., 2017), and stable finetuning (Lee et al., 2020). These works typically do not utilize sparse regularizers and target a different goal than parameter-efficiency." }, { "heading": "8 CONCLUSION", "text": "We propose diff pruning as a simple approach for parameter-efficient transfer learning with pretrained models. Experiments on standard NLP benchmarks and models show that diff pruning can match the performance of fully finetuned baselines while requiring only a few additional parameters per task. We also propose a structured variant of diff pruning which provides further improvements. Future work will consider (i) applying this approach to other architectures (e.g. ConvNets for vision applications), (ii) injecting parameter-efficiency objectives directly into the pretraining process (to pretrain models that are better suited towards sparse transfer learning), and (iii) combining diff pruning with other techniques (e.g. adapters) to achieve even greater parameter-efficiency." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 CONSISTENCY OF NONZERO PARAMETERS", "text": "Figure 3 shows the percentage of modified parameters attributable to each layer across 5 runs of SST-2. We find that there is nontrivial variation in sparsity across runs, but also a degree of consistency. For example, the first layer is modified considerably more than other layers across all runs." }
2020
null
SP:83a6062cbcad4c8c40fe066abc7bd32a62f38b52
[ "The paper presents a promising idea to build interpretable models by combining discriminative and generative approach. The proposed model uses an invertible neural network to model the data distribution. The invertibility helps in transforming the learned feature vector back to the image domain. A linear discriminative classifier is trained on the feature vector to perform binary classification. Using the inverse function, the model generates a counterfactual explanation by inverting a modified logit score to create a new image as an explanation. The authors further construct an orthogonal basis using PCA, such that modifying feature vector in those directions results in no change in the classifier's prediction. Decomposing the feature space into such a basis helps discover potential biases in the dataset and the classification model. The experiments compare the proposed method's performance with fully discriminative models and post-hoc interpretability methods such as gradient-based saliency maps." ]
Current state of the art computer vision applications rely on highly complex models. Their interpretability is mostly limited to post-hoc methods which are not guaranteed to be faithful to the model. To elucidate a model’s decision, we present a novel interpretable model based on an invertible deep convolutional network. Our model generates meaningful, faithful, and ideal counterfactuals. Using PCA on the classifier’s input, we can also create “isofactuals” – image interpolations with the same outcome but meaningfully different visual features. Counter- and isofactuals can be used to identify positive and negative evidence in an image. This can also be visualized with heatmaps. We evaluate our approach against gradient-based attribution methods, which we find to produce meaningless adversarial perturbations. Using our method, we reveal biases in three different datasets. In a human subject experiment, we test whether non-experts find our method useful to spot spurious correlations learned by a model. Our work is a step towards more trustworthy explanations for computer vision. Code: https://anonymous.4open.science/r/ae263acc-aad1-42f8a639-aec20ff31fc3/
[]
[ { "authors": [ "Julius Adebayo", "Justin Gilmer", "Michael Muelly", "Ian Goodfellow", "Moritz Hardt", "Been Kim" ], "title": "Sanity checks for saliency maps", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Julius Adebayo", "Michael Muelly", "Ilaria Liccardi", "Been Kim" ], "title": "Debugging tests for model explanations, 2020", "venue": null, "year": 2020 }, { "authors": [ "Ahmed Alqaraawi", "Martin Schuessler", "Philipp Weiß", "Enrico Costanza", "Nadia Berthouze" ], "title": "Evaluating saliency map explanations for convolutional neural networks: A user study", "venue": "In Proceedings of the 25th International Conference on Intelligent User Interfaces,", "year": 2020 }, { "authors": [ "Niek Andresen", "Manuel Wöllhaf", "Katharina Hohlbaum", "Lars Lewejohann", "Olaf Hellwich", "Christa Thöne-Reineke", "Vitaly Belik" ], "title": "Towards a fully automated surveillance of well-being status in laboratory mice using deep learning: Starting with facial expression analysis", "venue": "Plos one,", "year": 2020 }, { "authors": [ "Sebastian Bach", "Alexander Binder", "Grégoire Montavon", "Frederick Klauschen", "Klaus-Robert Müller", "Wojciech Samek" ], "title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation", "venue": "PLoS ONE,", "year": 2015 }, { "authors": [ "Christian F Baumgartner", "Lisa M Koch", "Kerem Can Tezcan", "Jia Xi Ang", "Ender Konukoglu" ], "title": "Visual feature attribution using wasserstein gans", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Judy Borowski", "Roland S. Zimmermann", "Judith Schepers", "Robert Geirhos", "Thomas S.A. Wallis", "Matthias Bethge", "Wieland Brendel" ], "title": "Exemplary natural images explain cnn activations better than feature visualizations, 2020", "venue": null, "year": 2020 }, { "authors": [ "Wieland Brendel", "Matthias Bethge" ], "title": "Approximating cnns with bag-of-local-features models works surprisingly well on imagenet", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Chun-Hao Chang", "Elliot Creager", "Anna Goldenberg", "D. Duvenaud" ], "title": "Explaining image classifiers by counterfactual generation", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Chaofan Chen", "Oscar Li", "Daniel Tao", "Alina Barnett", "Cynthia Rudin", "Jonathan K Su" ], "title": "This looks like that: deep learning for interpretable image recognition", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Ricky T.Q. Chen", "Yulia Rubanova", "Jesse Bettencourt", "David Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Eric Chu", "Deb Roy", "Jacob Andreas" ], "title": "Are visual explanations useful? 
a case study in model-in-theloop prediction, 2020", "venue": null, "year": 2020 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "Nice: Non-linear independent components estimation", "venue": "CoRR, abs/1410.8516,", "year": 2015 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Finale Doshi-Velez", "Been Kim" ], "title": "Towards a rigorous science of interpretable machine learning", "venue": null, "year": 2017 }, { "authors": [ "Patrick Esser", "Robin Rombach", "Bjorn Ommer" ], "title": "A disentangling invertible interpretation network for explaining latent representations", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "T. Fitzpatrick" ], "title": "Ultraviolet-induced pigmentary changes: benefits and hazards", "venue": "Current problems in dermatology,", "year": 1986 }, { "authors": [ "Timnit Gebru", "Jamie Morgenstern", "Briana Vecchione", "Jennifer Wortman Vaughan", "Hanna Wallach", "Hal Daumé III au", "Kate Crawford" ], "title": "Datasheets for datasets, 2020", "venue": null, "year": 2020 }, { "authors": [ "R. Stuart Geiger", "Kevin Yu", "Yanlai Yang", "Mindy Dai", "Jie Qiu", "Rebekah Tang", "Jenny Huang" ], "title": "Garbage in, garbage out? do machine learning application papers in social computing report where human-labeled training data comes from", "venue": "In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency,", "year": 2020 }, { "authors": [ "Amirata Ghorbani", "James Wexler", "James Y Zou", "Been Kim" ], "title": "Towards automatic concept-based explanations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yash Goyal", "Amir Feder", "Uri Shalit", "Been Kim" ], "title": "Explaining classifiers with causal concept effect (cace), 2020", "venue": null, "year": 2020 }, { "authors": [ "Robert R Hoffman", "Shane T Mueller", "Gary Klein", "Jordan Litman" ], "title": "Metrics for explainable ai: Challenges and prospects", "venue": "arXiv preprint arXiv:1812.04608,", "year": 2018 }, { "authors": [ "Katharina Hohlbaum", "Bettina Bert", "Silke Dietze", "Rupert Palme", "Heidrun Fink", "Christa" ], "title": "ThöneReineke. Severity classification of repeated isoflurane anesthesia in c57bl/6jrj mice—assessing the degree of distress", "venue": "PLOS ONE, 12:1–21,", "year": 2017 }, { "authors": [ "Katharina Hohlbaum", "Bettina Bert", "Silke Dietze", "Rupert Palme", "Heidrun Fink", "Christa ThöneReineke" ], "title": "Impact of repeated anesthesia with ketamine and xylazine on the well-being of c57bl/6jrj mice", "venue": "PLOS ONE, 13(9):1–24,", "year": 2018 }, { "authors": [ "Jörn-Henrik Jacobsen", "Arnold W.M. Smeulders", "Edouard Oyallon" ], "title": "i-revnet: Deep invertible networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Alon Jacovi", "Yoav Goldberg" ], "title": "Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 
4198–4205,", "year": 2020 }, { "authors": [ "Harmanpreet Kaur", "Harsha Nori", "Samuel Jenkins", "Rich Caruana", "Hanna Wallach", "Jennifer Wortman Vaughan" ], "title": "Interpreting interpretability: Understanding data scientists’ use of interpretability tools for machine learning", "venue": "In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems,", "year": 2020 }, { "authors": [ "Been Kim", "Martin Wattenberg", "Justin Gilmer", "Carrie Cai", "James Wexler", "Fernanda Viegas" ], "title": "Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav)", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "P. Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1", "venue": "convolutions. ArXiv,", "year": 2018 }, { "authors": [ "Kimmo Kärkkäinen", "Jungseock Joo" ], "title": "Fairface: Face attribute dataset for balanced race, gender, and age, 2019", "venue": null, "year": 2019 }, { "authors": [ "Shusen Liu", "Bhavya Kailkhura", "Donald Loveland", "Yong Han" ], "title": "Generative counterfactual introspection for explainable deep learning, 2019", "venue": null, "year": 2019 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Matthew MacKay", "Paul Vicol", "Jimmy Ba", "Roger B Grosse" ], "title": "Reversible recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Radek Mackowiak", "Lynton Ardizzone", "Ullrich Köthe", "Carsten Rother" ], "title": "Generative classifiers as a basis for trustworthy computer vision", "venue": "arXiv preprint arXiv:2007.15036,", "year": 2020 }, { "authors": [ "Kaushalya Madhawa", "Katushiko Ishiguro", "Kosuke Nakago", "Motoki Abe" ], "title": "Graphnvp: An invertible flow model for generating molecular graphs", "venue": null, "year": 1905 }, { "authors": [ "Grégoire Montavon", "Alexander Binder", "Sebastian Lapuschkin", "Wojciech Samek", "Klaus-Robert Müller" ], "title": "Layer-wise relevance propagation: an overview", "venue": "In Explainable AI: interpreting, explaining and visualizing deep learning,", "year": 2019 }, { "authors": [ "Daniel Neurath" ], "title": "Analysis of the eye as a visual indicator for the well-being status of laboratory mice", "venue": "Unpublished Bachelor Thesis,", "year": 2020 }, { "authors": [ "Weili Nie", "Yang Zhang", "Ankit Patel" ], "title": "A theoretical explanation for perplexing behaviors of backpropagation-based visualizations", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "why should i trust you?”: Explaining the predictions of any classifier", "venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Robin Rombach", "Patrick Esser", "Björn Ommer" ], "title": "Making sense of cnns: Interpreting deep representations & their invariances with inns", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Hua Shen", "Ting-Hao Kenneth Huang" ], "title": "How Useful Are the Machine-Generated Interpretations to General Users? 
A Human Evaluation on Guessing the Incorrectly Predicted Labels", "venue": "In Proceedings of the Eighth AAAI Conference on Human Computation and Crowdsourcing (HCOMP20),", "year": 2020 }, { "authors": [ "Sumedha Singla", "Brian Pollack", "Junxiang Chen", "Kayhan Batmanghelich" ], "title": "Explanation by progressive exaggeration", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Leon Sixt", "Maximilian Granz", "Tim Landgraf" ], "title": "When explanations lie: Why many modified bp attributions fail", "venue": "In Proceedings of the 37th International Conference on Machine Learning. PMLR,", "year": 2020 }, { "authors": [ "Daniel Smilkov", "Nikhil Thorat", "Been Kim", "Fernanda Viégas", "Martin Wattenberg" ], "title": "Smoothgrad: removing noise by adding noise", "venue": null, "year": 2017 }, { "authors": [ "Mukund Sundararajan", "Ankur Taly", "Qiqi Yan" ], "title": "Axiomatic attribution for deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Sandra Wachter", "Brent Mittelstadt", "Chris Russell" ], "title": "Counterfactual explanations without opening the black box: Automated decisions and the gdpr", "venue": "Harvard Journal of Law & Technology,", "year": 2018 }, { "authors": [ "Jennifer Wortman Vaughan", "Hanna Wallach" ], "title": "A human-centered agenda for intelligible machine learning. This is a draft version of a chapter in a book to be published in the 2020", "venue": "timeframe.,", "year": 2020 }, { "authors": [ "H. Zhang", "X. Gao", "Jacob Unterman", "Tom J Arodz" ], "title": "Approximation capabilities of neural odes and invertible residual networks", "venue": "arXiv: Learning,", "year": 2019 } ]
[ { "heading": null, "text": "Current state of the art computer vision applications rely on highly complex models. Their interpretability is mostly limited to post-hoc methods which are not guaranteed to be faithful to the model. To elucidate a model’s decision, we present a novel interpretable model based on an invertible deep convolutional network. Our model generates meaningful, faithful, and ideal counterfactuals. Using PCA on the classifier’s input, we can also create “isofactuals”– image interpolations with the same outcome but visually meaningful different features. Counter- and isofactuals can be used to identify positive and negative evidence in an image. This can also be visualized with heatmaps. We evaluate our approach against gradient-based attribution methods, which we find to produce meaningless adversarial perturbations. Using our method, we reveal biases in three different datasets. In a human subject experiment, we test whether nonexperts find our method useful to spot spurious correlations learned by a model. Our work is a step towards more trustworthy explanations for computer vision. For code: https://anonymous.4open.science/r/ae263acc-aad1-42f8a639-aec20ff31fc3/" }, { "heading": "1 INTRODUCTION", "text": "The lack of interpretability is a significant obstacle for adopting Deep Learning in practice. As deep convolutional neural networks (CNNs) can fail in unforeseeable ways, are susceptible to adversarial perturbations, and may reinforce harmful biases, companies rightly refrain from automating high-risk applications without understanding the underlying algorithms and the patterns used by the model.\nInterpretable Machine Learning aims to discover insights into how the model makes its predictions. For image classification with CNNs, a common explanation technique are saliency maps, which estimate the importance of individual image areas for a given output. The underlying assumption, that users studying local explanations can obtain a global understanding of the model (Ribeiro et al., 2016), was, however, refuted. Several user studies demonstrated that saliency explanations did not significantly improve users’ task performance, trust calibration, or model understanding (Kaur et al., 2020; Adebayo et al., 2020; Alqaraawi et al., 2020; Chu et al., 2020). Alqaraawi et al. (2020) attributed these shortcomings to the inability to highlight global image features or absent ones, making it difficult to provide counterfactual evidence. Even worse, many saliency methods fail to represent the model’s behavior faithfully (Sixt et al., 2020; Adebayo et al., 2018; Nie et al., 2018). While no commonly agreed definition of faithfulness exists, it is often characterized by describing what an unfaithful explanation is (Jacovi & Goldberg, 2020). For example, if the method fails to create the same explanations for identically behaving models.\nTo ensure faithfulness, previous works have proposed building networks with interpretable components (e.g. ProtoPNet (Chen et al., 2018) or Brendel & Bethge (2018)) or mapping network activations to human-defined concepts (e.g. TCAV (Kim et al., 2018)). However, the interpretable network components mostly rely on fixed-sized patches and concepts have to be defined a priori.\nHere, we argue that explanations should neither be limited to patches and not rely on a priori knowledge. Instead, users should discover hypotheses in the input space themselves with faithful counterfactuals that are ideal, i.e. 
samples that exhibit changes that directly and exclusively correspond to changes in the network’s prediction (Wachter et al., 2018). We can guarantee this property by combining an invertible deep neural network z = ϕ(x) with a linear classifier y = wTϕ(x) + b. This yields three major advantages: 1) the model is powerful (it can approximate any function; Zhang et al., 2019), 2) the weight vector w of the classifier directly and faithfully encodes the feature importance of a target class y in the z feature space, and 3) human-interpretable explanations can be obtained by simply inverting explanations for the linear classifier back to input space.\nAs a local explanation for one sample x, we generate ideal counterfactuals by altering its feature representation z along the direction of the weight vector: z̃ = z + αw. The logit score can be manipulated directly via α. Inverting z̃ back to input space results in a human-understandable counterfactual x̃ = ϕ−1(z + αw). Any change orthogonal to w will create an “isofactual”, a sample that looks different but results in the same prediction. While many vectors are orthogonal to w, we find the directions that explain the highest variance of the features z using PCA. As the principal components explain all variance of the features, they can be used to summarize the model’s behavior globally.\nWe demonstrate the usefulness of our method on a broad range of evaluations. We compared our approach to gradient-based saliency methods and find that gradient-based counterfactuals are not ideal as they also change irrelevant features. We evaluated our method on three datasets, which allowed us to create hypotheses about potential biases in all three. After statistical evaluation, we confirmed that these biases existed. Finally, we evaluated our method’s utility against a strong baseline of example-based explanations in an online user study. We confirmed that participants could identify the patterns relevant to the model’s output and reject irrelevant ones. This work demonstrates that invertible neural networks provide interpretability that conceptually stands out against the more commonly used alternatives." }, { "heading": "2 METHOD", "text": "Throughout this work, we rely on the following definitions, which are based on Wachter et al. (2018):\nDefinition 2.1 (Counterfactual Example). Given a data point x and its prediction y, a counterfactual example is an alteration of x, defined as x̃ = x + ∆x, with an altered prediction ỹ = y + ∆y where ∆y ≠ 0. Samples x̄ with ∆y = 0 are designated “isofactuals”.\nAlmost any ∆x will match the counterfactual definition, including those that additionally change aspects which are unrelated to the model’s prediction, e.g. removing an object but also changing the background’s color. It is desirable to isolate the change most informative about a prediction:\nDefinition 2.2 (Ideal Counterfactual). Given a set of unrelated properties ξ(x) = {ξi(x)}, a sample x̃ is called an ideal counterfactual of x if all unrelated properties ξi remain the same.\nThe following paragraphs describe how we generate explanations using an invertible neural network ϕ : R^n → R^n. The forward function ϕ maps a data point x to a feature vector z = ϕ(x). Since ϕ is invertible, one can regain x by applying the inverse x = ϕ−1(z). We used the features z to train a binary classifier f(x) = wTz + b that predicts the label y.
In addition to the supervised loss, we also trained ϕ as a generative model (Dinh et al., 2015; 2016) to ensure that the inverted samples are human-understandable.\nCounterfactuals To create a counterfactual example x̃ for a datapoint x, we can exploit that w encodes feature importance in the z-space directly. To change the logit score of the classifier, we simply add the weight vector to the features z and then invert the result back to the input space: x̃ = ϕ−1(z + αw). Hence, for any sample x, we can create counterfactuals x̃ with an arbitrary change in logit value ∆y = αwTw by choosing α accordingly. Figure 1a shows several such examples. Since the generation (ϕ−1) and prediction (ϕ) are performed by the same model, we know that x̃ will correspond exactly to the logit offset αwTw. Consequently, x̃ is a faithful explanation.\nTo show that our counterfactuals are ideal, we have to verify that no property unrelated to the prediction is changed. For such a property ξ(x) = vTz, v has to be orthogonal to w.1 As the unrelated property ξ does not change for the counterfactual, ξ(x̃) = vT(z + αw) = vTz = ξ(x), we know that x̃ = ϕ−1(z + αw) is indeed an ideal counterfactual.\n1ξ(x) could actually be non-linear in the features z as long as the gradient ∂ξ/∂z is orthogonal to w.\nPCA Isosurface Since users can only study a limited number of examples, it is desirable to choose samples that summarize the model’s behavior well (Ribeiro et al., 2016; Alqaraawi et al., 2020). For counterfactual explanations, the change ∆x may vary significantly per example as ϕ(x) is a non-linear function. As each x has a unique representation z in the feature space, we want to find examples describing the different directions of the feature distribution. To isolate the effect of w, such examples would have the same prediction and only vary in features unrelated to the prediction.\nWe implement this by first removing the variation along w using a simple projection z⊥ = z − (wTz/wTw)w and then applying PCA on z⊥. The resulting principal components e1 . . . em are orthogonal to w except for the last principal component em, which has zero variance and can therefore be discarded. The principal components span a hyperplane αw + Σ_{i=1}^{m−1} βi ei. Since all samples on this hyperplane have the same prediction (a logit value of αwTw), it is an isosurface.\nAs a principal component ei is a vector in the z-space, we can create counterfactuals for it, ϕ−1(ei + αw), and understand how the changes of adding w differ per location in the z-space. The e1, . . . , em−1 are sorted by the explained variance, allowing us to prioritize the most relevant changes in the data. As the principal components cover the whole feature distribution, understanding the effect of w on them allows forming a global understanding of the model’s behavior.\nSaliency maps Saliency maps are supposed to draw attention to the features most relevant to a prediction. In our case, it is most reasonable to highlight the difference between x and the counterfactual x̃. We can also measure the difference in an intermediate feature map h. The saliency map of an intermediate layer can be resized to fit the input’s resolution as information remains local in convolutional networks. Per feature map location (i, j), we calculate the similarity measure m(i,j) = |∆hij| cos(∠(∆hij, hij)). The sign of the saliency map m depends on the alignment of the change ∆h with the feature vector h, i.e. the sign of cos(∠(∆hij, hij)). The magnitude is dominated by the length of the change |∆hij|.
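As a sketch of how the counterfactual and PCA-isosurface constructions above could be implemented in PyTorch (the `phi`/`phi_inv` interfaces and the use of `torch.pca_lowrank` are illustrative assumptions, not the authors' released code):

```python
import torch

@torch.no_grad()
def counterfactual(phi, phi_inv, w, x, alpha):
    """x~ = phi^{-1}(z + alpha * w): changes the logit by exactly alpha * w^T w."""
    z = phi(x)
    return phi_inv(z + alpha * w)

@torch.no_grad()
def isofactual_directions(phi, w, loader, q=8):
    """PCA on features with the w-direction projected out (Sec. 2): the resulting
    components e_i are orthogonal to w, so moving along them leaves the
    prediction unchanged (isofactuals)."""
    zs = torch.cat([phi(x) for x, _ in loader])        # (N, m) feature matrix
    z_perp = zs - torch.outer(zs @ w, w / (w @ w))     # remove variation along w
    _, s, v = torch.pca_lowrank(z_perp, q=q)           # top-q principal directions
    return v.T, s                                      # rows of v.T are the e_i
```

A grid like Figure 1b could then be generated by sweeping α over the desired logit range in `phi_inv(e_i + alpha * w)` for each principal component.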
Figure 1b presents saliency maps for the CELEBA Attractive label.\nModel Our invertible network follows the Glow architecture (Kingma & Dhariwal, 2018). The network is trained to map the data distribution to a standard normal distribution. We reduce the input dimensionality of (3, 128, 128) down to (786) by fading half of the channels out with each downsampling step. When generating a counterfactual, we reuse the z values faded out from the lower layers as they correspond to small details and noise. We have 7 downsampling steps and 351 flow layers. The network has 158,769,600 parameters in total. An important design decision is that the final layer’s output is not the input to the linear classifier. The PCA would fail to discover meaningful directions as the N(0, I) prior induces equal variance in all directions. The classifier uses the output of layer 321. The layers 322–351 are optimized using the standard unsupervised flow objective. For the first 321 layers, we also train on the classifier’s supervised loss (for details see Appendix A.1)." }, { "heading": "3 EVALUATION", "text": "We evaluated the ability to construct hypotheses about the model’s behavior on three datasets and with a user study. We focused on these aspects as our method is faithful by construction, needing no empirical confirmation. Instead, we use the strong faithfulness guarantees of our model to evaluate gradient-based attribution methods." }, { "heading": "3.1 HYPOTHESIS DISCOVERY", "text": "CelebA A claimed utility of our method is that it allows users to discover hypotheses about the features the model uses for prediction. We choose CELEBA (Liu et al., 2015), a popular face dataset, because it is a challenging dataset for feature attribution: how can an abstract concept such as attractiveness be linked to pixels? Additionally, it already contains annotations (e.g. make-up, accessories, hair), which makes it easier for us to accept or reject a given hypothesis about feature importance.\nWe especially focus on the Attractive class as it is unclear what the relevant features are. The CELEBA dataset in general and the class attractive in particular are ethically questionable. How can a subjective label, which depends on individual or even cultural preferences, be reduced to a binary label? Unfortunately, Liu et al. (2015) did not state the annotation process (which is considered good practice; Gebru et al., 2020; Geiger et al., 2020). Furthermore, the dataset was criticized for lacking diversity (Kärkkäinen & Joo, 2019).\nFigure 1b shows the first 8 principal components at different logit values. We base our investigation on them, as they cover the feature distribution well by construction. At this point, we invite the reader to study the explanations: What are your hypotheses about the features the model uses?\nStudying the counterfactuals in rows (3R, 5L, 6R, 8R), one might hypothesize that glasses influence the prediction of attractiveness negatively. To validate this, we analyzed our model’s predictions on the test set. Since glasses are a labeled feature of CELEBA, it is easy to test the hypothesis empirically. Only 3.5% of the portrait photos showing glasses were labeled as attractive by the model. Furthermore, the correlation of the presence of glasses and the logit score was r=-0.35.\nAnother insight noticeable in 1L is that the amount and density of facial hair changes the prediction. The correlation of the absence of facial hair with the attractiveness logit score was r=0.35.
At the same time, less head hair seemed to reduce attractiveness predictions in rows 1L, 2R, 4R. Row 6L paints the opposite picture, which illustrates the varying effect w can have on different datapoints. We found a correlation (r = 0.30) of hair-loss (a combination of baldness and receding hairline) with attractiveness.\nIndicative of higher attractiveness appears to be a more feminine appearance (e.g. 4R in Figure 1). This hints at a gender bias, which we confirmed as only 20.0% of men are predicted to be attractive, and the label male was negatively correlated with the prediction (r = −0.59). Further, it is noticeable that counterfactuals for higher attractiveness tend to have redder lips (1R, 2R, 4R and 5L). This hypothesis could also be confirmed as the label Wearing Lipstick is also positively correlated (r = 0.64). For age, similar patterns can be found in 1L, 3R, 8L (r = 0.44). Table 4 in Appendix D lists the correlation of all 40 attributes. Some attributes cannot be found in the principal components because the cropping hides them (double chin, necklace, necktie). Others describe local details such as arched eyebrows or earrings. While earrings do not show up in the counterfactuals, they are correlated with the model’s logit score by r=0.20. This might be because PCA tends to capture global image features while smaller local changes are scattered over many principal components. Another explanation could be that earrings are actually not that relevant: if we control for gender using partial correlation, the earrings are only correlated by r=-0.01.\nDarker skin color seems to influence the network negatively, as in principal components (2R, 3R, 6L) a light skin color suggests high attractiveness. Since CELEBA has no labels for skin color, we annotated 3250 randomly selected images: 249 photos matched the Fitzpatrick skin types V–VI and were labeled as dark skin (Fitzpatrick, 1986). For light skin, the percentage of Attractive was 52.0%. The same bias is contained in the model: r=-0.187 (-0.22, -0.15)95%.\nTwo4Two The TWO4TWO dataset (Anonymous, 2020) is a set of computer-generated images intended to evaluate interpretable ML – to test both humans and algorithms. While the dataset is simple, we control the data generation process and can create arbitrary images to test the model. The dataset contains two abstract animals, Sticky and Stretchy. For Sticky, the right arms are moved inwards and for Stretchy outwards (see Figure 2b). As the arms sometimes overlap, it is beneficial to also use the color, which is slightly predictive (blue for Stretchy and red for Sticky). Building blocks (cubes or spheres), bending, rotation, and background are sampled independently. For the TWO4TWO dataset, the invertible neural network ϕ was only trained on an unsupervised loss, i.e. the gradients of the classifier were detached. Probably due to the dataset’s simplicity, we had problems aligning the unsupervised and supervised losses well.\nThe principal components in Figure 2a suggest that the model indeed learned to use the color bias. We can confirm this by resampling only the color and measuring how the logit score is correlated: r=0.352. For the arms’ position, we found a correlation with the model’s probability of -0.798. Additionally, Sticky on the top seems to be more rotated, which we can also confirm as only changing the rotation results in a correlation of the logit score with the absolute value of the rotation: r=0.136 (0.11, 0.16)95%.
At high rotations, the model is more certain that it is a Sticky. Although not intended by the dataset, this bias can be well explained by the fact that ϕ was not trained on the supervised loss.\nBlack Mice We wanted to check our method on a dataset which, unlike the CelebA dataset, is not already known to have biases and which is harder for a human to understand. The BLACK MICE dataset (Andresen et al., 2020) contains images of laboratory mice after different treatments. The label to predict is related to the amount of pain. For a detailed discussion of the dataset, see Appendix ??. The main take-away point is that we find that the yellow bedding material, which is changed by our model’s counterfactuals, is indeed predictive of the label.\n3.2 COMPARISON OF THE GRADIENT OF x AND THE DIRECTIONAL DERIVATIVE dϕ−1/dw\nIn this evaluation, we propose a simple validity check for attribution methods and apply it to our method and gradient-based attribution methods. The idea is to relate saliency maps to counterfactuals. As saliency maps should highlight the features most influential for the outcome of a datapoint, amplifying these features should increase the prediction and therefore create a counterfactual. We propose the following test: integrate the raw feature attribution values and then check (1) whether the counterfactual increases the logit score and (2) whether the changes are in the direction of w or rather in the direction of unrelated properties. We measure (2) by calculating the changes in the directions of the principal components: ξ = Ez, where E is the matrix of all ei.\nWe construct an infinitesimal version of our counterfactuals by lim_{α→0} (ϕ−1(z + αw) − x)/(α|w|). This gives the directional derivative2 of the input w.r.t. the classifier weight: ∇wx = ∇wϕ−1 = dϕ−1(z)/dw. Moving the input x in the direction ∇wx will result in a move of z in the w direction.3\nWe evaluate the directional derivative against the raw gradient, which serves as a basis for many saliency methods (SmoothGrad, LRP, LRPαβ, the γ-rule, and integrated gradients (Smilkov et al., 2017; Bach et al., 2015; Montavon et al., 2019; Sundararajan et al., 2017)).4 Additionally, we include SmoothGrad (sm.g) and build two additional methods by penalizing changes in the unrelated properties ξ using a mean squared error with the ξ of the original image (pe.gr. for the gradient and pe.s.g for SmoothGrad). The integration is done by iterative steps in the direction of the integrated quantity, e.g. for the gradient we would calculate xt+1 = xt + γ∇xf(xt), where γ is a small step (see Appendix A.2 for all technical details).\n2 TCAV (Kim et al., 2018) uses the directional derivative of the network’s output w.r.t. a concept vector v: df/dv. In contrast to our method, TCAV computes the gradient of the forward model and not of the inverse ϕ−1.\n3 A reader familiar with differential geometry might recognize this as the pushforward of w using ϕ−1.\n4 The gradient and the directional derivative have a mathematical similarity which can be seen from the Jacobians: ∇xf = Jϕ(x)w and ∇wx = Jϕ−1(z)w.\nFigure 3b shows exemplary results of the integration for the Eyeglass dimension. While the gradient-based counterfactual increases the logit score by an order of magnitude, the resulting image is hardly different from the original. Only noise patterns appear – similar to adversarial examples. SmoothGrad results in both a lower logit score and even smaller changes to the image. Penalizing changes in unrelated properties only yields amplified noise patterns.
At the start of the integration, the difference in ξ0 is zero, which probably results in first moving along ξ. In contrast, integrating the directional derivative adds sunglasses to the astronaut – a meaningful counterfactual.\nWe measure the quality of a counterfactual by measuring how strongly unrelated factors change on 100 random samples and report the results in Figure 3c. Gradient-based counterfactuals thus do not only change the logit score but also all the other, unrelated factors. A user studying the gradient counterfactual could not differentiate between changes done to the prediction and the unrelated factors. The counterfactual based on the directional derivative keeps the independent factors almost unchanged, up to numerical imprecision." }, { "heading": "3.3 HUMAN SUBJECT STUDY", "text": "Our aim was to evaluate whether counterfactual interpolations can help lay users to form hypotheses about a model’s used patterns and potential biases. Evaluating explanation techniques with users is important, though a challenging endeavor, as it requires mimicking a realistic setting while avoiding overburdening participants (Doshi-Velez & Kim, 2017; Wortman Vaughan & Wallach, 2020).\nThe choice of the dataset is important for any evaluation. Some datasets introduce participants’ domain knowledge as a confounding factor (e.g. images of dog breeds), while others like CELEBA introduce subjectivity. Datasets can have many relevant features, creating an enormous amount of possible and valid hypotheses. If participants were allowed to develop hypotheses about them without limitation, this would require us to evaluate them mostly manually, which would be too labor-intensive. Asking participants to reason about pre-selected hypotheses prevents us from assessing their total understanding of the model, as there are potentially many relevant features.\nWe chose the TWO4TWO data set (Section 3.1) as it addresses these issues (Anonymous, 2020). The simple scenario enables us to control the available patterns and limit the number of feasible hypotheses, allowing for comparable quantitative analysis. Concretely, we assessed a participant’s judgment about the plausibility of six hypotheses. Three hypotheses were reasonable (sensitivity to spatial compositions, color, and rotation). Two others were not (sensitivity to background and shape of individual blocks). We also asked them to reason about the model’s maturity and measured their perception of the explanations using applicable statements taken from the Explanation Satisfaction Scale (Hoffman et al., 2018).\nBaseline Selection Many studies in machine learning solely demonstrate their method’s feasibility without a baseline comparison (e.g. Ribeiro et al. (2016); Singla et al. (2020)). In contrast, we carefully considered what would be the best alternative method available to allow users to discover hypotheses about a model. As discussed previously in this work, many feature attribution techniques suffer from a lack of faithfulness and fail to provide meaningful counterfactuals. If counterfactuals are meaningful and faithful to the model, they can be expected to look similar. Hence, comparing our method to other counterfactual generation methods (e.g. to GANs (Singla et al., 2020)) provides limited insight about their practical usefulness if there are alternative ways of discovering similar hypotheses. As for saliency maps, in addition to concerns about their faithfulness, there are also growing concerns about their practical usefulness.
While early works found they can calibrate users’ trust in a model (e.g. Ribeiro et al. (2016)), more recent works cast doubts about this claimed utility (Kaur et al., 2020; Chu et al., 2020). Studies found that while they are useful to direct users’ attention towards relevant features, they facilitate limited insight (Alqaraawi et al., 2020; Chu et al., 2020). Other studies found they may even harm users’ understanding about errors of the model (Shen & Huang, 2020). After all, users often seem to ignore them, relying predominantly on predictions instead when reasoning about a model (Chu et al., 2020; Adebayo et al., 2020).\nWhile we introduce a faithful saliency method, we do not claim that it would not suffer from the same usability problems, especially with lay users (see Figure 7 for examples generated for TWO4TWO). After all, our maps would need to be used in conjunction with counterfactuals, potentially adding a dependent variable (presence of saliency map) to the experiment. For these reasons, we decided against considering saliency maps in this evaluation.\nWe also did not consider methods based on infilling (e.g. Goyal et al. (2019)), as we expected them to suffer from similar usability problems. For example, as they explain features locally by removing them, paying no attention to overlapping features, they can be expected to remove the entire object from the scene when explaining the model’s bias towards the object’s color. This would leave the user puzzled as to which feature of the object (shape, position or color) is important.\nA simple alternative is to study the system’s predictions on exemplary input. Such reasoning on natural images to understand model behavior has surfaced as a strong baseline in another study (Borowski et al., 2020). Hence, we chose example-based explanations as our baseline treatment.\nExplanation Presentation Considering that participants’ attention is limited and to allow for a fair comparison, we wanted to provide the same amount of visual information in both conditions. We chose a 30x5 image grid (3 rows shown in Figure 4). Each column represented a logit range. Ranges were chosen so that high confidence predictions for Stretchy were shown in the far left column and high confidence predictions for Sticky in the far right. Less confident predictions were shown in the directly adjoining columns. The remaining middle column represented borderline cases. This visual design had prevailed throughout numerous iterations and ten pilot studies, as it allows users to quickly scan for similar features in columns and differing features in rows.\nBoth conditions only varied in the images that were used to populate the grid. In the baseline, the grid was filled with images drawn from the validation set that matched the corresponding logit ranges. In the counterfactual interpolations condition, only the diagonal of the grid was filled randomly with such “original” images. They were marked with a golden frame. The remaining cells were filled row-wise with counterfactuals of the original images that matched the corresponding column’s score range.\nOur online study was preregistered5 and followed a between-group design. Participants (N=60) were recruited from Prolific and needed to hold an academic degree with basic mathematical education. Participants were randomly but equally assigned to view either counterfactual interpolations or the baseline. Upon commencing the study on the Qualtrics platform, participants were shown handcrafted video instructions.
After that, they studied the image grid while rating their agreement with six statements on a 7-point Likert scale. Participants also rated their agreement with four applicable statements taken from the Explanation Satisfaction Scale (Hoffman et al., 2018).\nStudy Results and Discussion The significance of rating differences was assessed using a Kruskal-Wallis test. To account for multiple comparisons, we applied Bonferroni correction to all reported p-values. For a detailed assessment of all preregistered hypotheses, please refer to the Appendix (Section E.1). Figure 4a summarizes the responses.\n5see supplementary material\nCounterfactual interpolations allowed users to identify the model’s main pattern: the position of the arms of Stretchy and Sticky. They did this with high certainty, as 83.34% strongly agreed with the corresponding statement. They were more certain about this pattern than with the baseline technique (H(1) = 8.86, p = 0.018), even though the baseline technique also performed well for this task. The large majority (70%) also identified the color bias with counterfactual interpolations, while only 43% identified this bias using the baseline explanations. However, the difference in rating between conditions for the corresponding statement about the color bias was not significant (H(1) = 3.21, p = 0.42). Participants who had missed the color bias using our method were later asked to provide their reasoning. A participant stated: “I would think that the color would be relevant if I saw an example where it went from certain to very certain and only the color, brightness or intensity changed.” Such rule-based rather than probabilistic cognitive models of the network may have led users to reject the presence of the color bias even though we instructed them clearly that the interpolation would only change relevant features.\nTo our surprise, fewer participants noticed the network’s more subtle bias towards object rotation in both conditions. As Figure 4 indicates, participants were somewhat undecided about its relevance, leaning rather towards concluding that the network is not sensitive to rotation. As a limitation, we note that participants may not have noticed the rotation bias due to how we had phrased the corresponding statement. When we asked them to explain their reasoning, many explained that they instead focused on the rotation of the individual blocks rather than the whole animal.\nBoth explanation techniques allowed participants to confidently reject statements about irrelevant patterns (sensitivity to the background, sensitivity to the type of blocks). We argue this indicates a high quality of collected responses and a good utility of both explanation techniques. Somewhat worrying is participants’ assessment of the system’s maturity. They were very confident that the network has learned the right patterns and is ready to use for both techniques. Such bias towards model deployment has previously surfaced in other studies (Kaur et al., 2020).\nExplanation Satisfaction ratings were very high for both techniques (see Figure 10 in the Appendix), underlining that participants perceived both methods very well. While this also means that our method was unable to outperform the baseline, it also shows that our careful visual design and our clear instructions on how to use the explanation technique were well received. As a limitation, we note that participants may have found the introductory videos very informative, as many reported enjoying the study.
" }, { "heading": "4 RELATED WORK", "text": "Others have suggested methods for counterfactual generation. Chang et al. (2019) identify relevant regions by optimizing for sufficiency and necessity for the prediction. The classifier is then probed on counterfactuals that replace the relevant regions with heuristic or generative infilling. Goyal et al. (2019) find regions in a distractor image that would change the prediction if present. Both works assume that relevant features are localized, but for many datasets these may cover the entire image, e.g. changes due to gender or age in face images. Singla et al. (2020); Liu et al. (2019); Baumgartner et al. (2018) explain a black-box neural network by generating counterfactuals with GANs, which can produce counterfactuals of similar or even better visual quality. However, the GAN model does not have to align with the explained model perfectly, e.g. see Figure 3 in (Singla et al., 2020).\nThe TCAV method (Kim et al., 2018) estimates how much manually defined concepts influence the final prediction. Recent work has extended TCAV to discover concepts automatically using super-pixels (Ghorbani et al., 2019). Goyal et al. (2020) extend TCAV to causal effects of concepts and use a VAE as a generative model.\nBeing able to interpolate in feature space and to invert the resulting latent representations is one of the advantages of invertible networks (Jacobsen et al., 2018; Kingma & Dhariwal, 2018). Mackowiak et al. (2020) use invertibility to improve trustworthiness but focus on out-of-distribution and adversarial examples. Rombach et al. (2020) and Esser et al. (2020) employ invertible networks to better understand vanilla convolutional networks.\nOne example of an interpretable model is ProtoPNet (Chen et al., 2019). The feature maps of image patches that correspond to prototypical samples in the dataset are used for the final prediction. This way, a result can be explained by pointing to labeled patches. The method is limited to a fixed patch size and does not allow counterfactual reasoning. Another patch-based interpretable model is proposed in Brendel & Bethge (2018).\nOur combination of PCA and invertible neural networks for interpretability is novel. The finding that the directional derivative corresponds to ideal counterfactuals, whereas the gradient does not, has not been reported before. We are also not aware of any user study that has previously demonstrated that visual counterfactuals can help users identify biases of a neural network." }, { "heading": "5 DISCUSSION", "text": "A disadvantage of our method is that it requires an invertible network architecture — the weights of an existing CNN cannot be reused. Learning the input distribution entails additional computational costs when training an invertible neural network. For non-image domains such as natural language or graphs, the construction of an inverse is currently more difficult. However, first works have taken on the challenge (MacKay et al., 2018; Madhawa et al., 2019). Furthermore, learning the input distribution requires a larger network. Given that our method performed similarly to the baseline in the user study in all but one category, an obvious question is whether it is worth the additional effort.\nHowever, the same question applies to almost any explanation method and remains largely unanswered.
Unfortunately, user evaluations that include a reasonable baseline are very rare. An additional finding of this work is that explanation methods should be evaluated for their utility and usability against a reasonable baseline. For image classification, our work shows that studying the raw input and corresponding predictions is such a reasonable baseline. It has the potential to allow lay users to identify many, but not all, high-level features used for prediction. Even though we found a strong baseline, the user study also demonstrated that our method is useful to lay users, as they found two out of three relevant patterns and rejected two more irrelevant patterns. It also highlights that some more subtle patterns may still go unnoticed even when using our method.\nWe would like to argue that the additional effort required to implement invertibility may well be justified, especially in high-stakes domains. Combining an invertible neural network with a linear classifier enables the use of simple explanation techniques which are otherwise restricted to low-complexity models. Here, we can use them on a deep model with much greater predictive power. Counterfactuals can be created by simply using the weight vector of the classifier. In contrast to many other techniques, they are faithful to the model, changing only features relevant for the prediction. Since they can be inverted back to the input space, the high-level features they encode are human-interpretable. This allows users to discover hypotheses about the patterns the model uses, largely independent of their preconceptions about feature importance. Using our method, we found biases in three datasets, including some that have not been previously reported. As we have demonstrated in this work, invertibility has major advantages for interpretability." }, { "heading": "A APPENDIX: TECHNICAL DETAILS", "text": "A.1 NEURAL NETWORK ARCHITECTURE\nOur model follows the Glow model closely (Kingma & Dhariwal, 2018). Similarly, we use blocks of actnorm, invertible 1×1 convolution, and affine coupling layers. After 18 blocks, we add a reshuffle operation to reduce the spatial dimensions by a factor of 2, and half of the channels are faded out. The classification is done before the final mapping to the prior N(0, I). As described in Section 2, we added the classifier after layer 321, before the final layer 351. Let ϕ denote the first 321 layers and $\mu : \mathbb{R}^n \mapsto \mathbb{R}^n$ the remaining layers. We train ϕ both on a supervised loss from the classifier f(x) and on an unsupervised loss from matching the prior distribution N(0, I) and the log-determinant of the Jacobian. µ is trained only on the unsupervised loss:\n$$\arg\min_{\theta_\varphi, \theta_\mu, \theta_f} \; \mathcal{L}_{un}(\mu \circ \varphi(x)) + \beta\, \mathcal{L}_{sup}(w^T \varphi(x) + b,\, y_{true}). \tag{1}$$\nFor the supervised loss Lsup, we use the binary cross entropy, although our method is not restricted to this loss function and could easily be extended to more complex losses. As the unsupervised loss Lun, we use the commonly used standard flow loss obtained from the change-of-variables trick (Dinh et al., 2016). The unsupervised loss ensures that inverting the function results in realistic-looking images and can also be seen as a regularization.\nIn total, ϕ and µ have 158,769,600 parameters. We use the identical network architecture on all datasets.
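To make the objective in Eq. (1) concrete, the following is a minimal PyTorch-style sketch. It assumes hypothetical invertible modules `phi` and `mu` that each return a flattened output together with the log-determinant of their Jacobian; it illustrates the objective rather than reproducing the exact implementation used here.

```python
# Sketch of the joint objective in Eq. (1). `phi` covers the first 321 layers,
# `mu` the remaining ones; both are assumed to return (output, log_det_jacobian).
import math
import torch
import torch.nn.functional as F

def training_loss(phi, mu, w, b, x, y_true, beta=1.0):
    h, logdet_phi = phi(x)   # intermediate representation used by the classifier
    z, logdet_mu = mu(h)     # final latent code, matched to the prior N(0, I)

    # Unsupervised flow loss: negative log-likelihood under N(0, I) plus the
    # log-determinant terms from the change-of-variables formula.
    log_prior = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi)
    loss_un = -(log_prior + logdet_phi + logdet_mu).mean()

    # Supervised loss: binary cross entropy on the linear classifier w^T h + b
    # (the logits variant is used here for numerical stability).
    logits = h @ w + b
    loss_sup = F.binary_cross_entropy_with_logits(logits, y_true)

    return loss_un + beta * loss_sup
```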
A.2 DETAILS ON THE INTEGRATION IN SECTION 3\nIn Section 3.2, we integrated the gradient and the directional derivative. We used the torchdiffeq package. For Figure 3b, we integrated from t = [0, 11] using the midpoint method with 20 steps. Here, the integration was done in layer 40. As this was rather slow, we used 5 steps and t = [0, 4] to determine the differences in the unrelated factors ξ, again in layer 40, as shown in Figure 3c." }, { "heading": "B BLACK MICE", "text": "In this case study, we apply our method to the BLACK MICE dataset (Andresen et al., 2020). In contrast to CELEBA, the images vary more strongly in location, size, posture, and camera angle. The dataset contains a total of 32576 images of 126 individual mice. Andresen et al. (2020) trained a ResNet and reported an accuracy of 88.5±2.6% using 10-fold cross-validation. Our model achieves a similar accuracy of 86.5% tested on a single fold. The images were collected for earlier works (Hohlbaum et al., 2018; 2017). The mice were divided into three groups: castration, only anesthesia, or untreated. A binary label marks any signs of post-surgical/-anesthetic effects. According to Langford et al. (2010), typical features of pain are squeezed eyes, pulled-back ears, bulged cheeks and nose, and a change in whisker position.\nTogether with the authors of Andresen et al. (2020), we reviewed our model’s explanations. We confirmed that counterfactuals affect different image features accordingly: eyes, nose, ears, whiskers, and head position change in biologically plausible ways. The mice’s eyes seem to be less relevant to the network. For humans, squeezed eyes are a good indicator of pain. However, the counterfactuals only showed slight changes: sometimes the eyes blend into the surroundings. As neural networks perform well on the task using only the eyes (Neurath, 2020), we believe changes to our network architecture could preserve these details. Some other features may co-appear with image artifacts, e.g. the changes to the ears’ shape may also appear partially blended with the background.\nIntriguingly, the counterfactuals also show contrast changes in the surroundings (see Figure ??). The authors of Andresen et al. (2020) voiced the suspicion that this may be explained by how the photos were taken. Since mice after anesthesia or surgery predominantly drop the head and the nose tip points downwards, the camera angle may have been adjusted to get a better view of the animal and, in effect, show more of the yellow wooden bedding material on the cage floor.\nTo verify whether wooden bedding material is predictive, we annotated 1000 randomly selected images from our test set. Depending on the area at the bottom of the image covered by yellow wooden bedding material, we assigned each sample to one of three bins: (0) ≤ 5%, (1) ≤ 20%, (2) > 20%. This classification resulted in 346, 258, and 396 samples per bin. Of all samples, 44.7% were marked to show post-surgical/-anesthetic effects. Per bin, the label was unevenly distributed: 19.9%, 52.3%, 61.4%. We account for the unequal distribution of labels using partial correlation (see Appendix ??) and obtain the following values between the model’s output probabilities and the bins (95% CI): (0) -0.255 (-0.31, -0.20), (1) 0.026 (-0.04, 0.09), (2) 0.217 (0.16, 0.27).\nThe label “post-surgical/-anesthetic effects” is unequally distributed across the three bins (of size 346, 258, and 396). This can be problematic when we measure the correlation between a sample’s bin and the model’s logit score. The model has learned to predict lower scores for a negative label and vice versa. To account for this, we calculate the partial correlation between the model’s output probability and the bin class while using the label as a confounding variable, as sketched below.
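A minimal sketch of this partial-correlation computation, assuming hypothetical arrays `prob` (model output probabilities), `bin_cls` (bedding-material bin), and `label` (the binary pain label); the confidence intervals reported above would additionally require bootstrapping or a Fisher transform.

```python
# Pearson correlation of two variables after regressing out a confounder.
import numpy as np
from scipy.stats import pearsonr

def partial_corr(x, y, covar):
    design = np.column_stack([np.ones_like(covar, dtype=float), covar])
    # Residuals of x and y after a least-squares fit on the confounder.
    res_x = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    res_y = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return pearsonr(res_x, res_y)

# Hypothetical usage: correlation between the output probability and
# membership in bin 0, controlling for the ground-truth label.
prob = np.random.rand(1000)
label = np.random.randint(0, 2, size=1000).astype(float)
bin_cls = np.random.randint(0, 3, size=1000)
r, p = partial_corr(prob, (bin_cls == 0).astype(float), label)
print(f"partial r = {r:.3f} (p = {p:.3g})")
```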
In Table 2, we report both full and partial correlations, as well as the correlations of the bins with the label.\nThese results confirm a connection between the surroundings, the label, and the logit score. The hints our explanations gave about this bias in the data were correct. The changes in the surroundings can probably be explained by mice dropping their heads when in pain and by changes to the camera angle. As we could also confirm many characteristic features, the network does not base its decision solely on wooden bedding material. This case study highlights the practicability of our method in a real-world scenario." }, { "heading": "C TWO4TWO", "text": "D CELEBA" }, { "heading": "E USER-STUDY PREREGISTRATION AND HYPOTHESES", "text": "The study is preregistered at https://aspredicted.org/; we provide an anonymized PDF version in the supplementary material. Participants (N=60) of the study were required to be fluent in English and needed to have an approval rate of at least 95%. Given the demanding nature of the task and the complexity of the concepts used in the instructions, they also needed to hold an academic degree in Computer Science, Engineering, Finance, Mathematics, Medicine, Physics, or Psychology. Figure 10 summarizes the subjective ratings participants gave about the two explanation techniques used in the study.\nE.1 EVALUATION OF PREREGISTERED HYPOTHESES\nH1: The hypothesis “Studying the system’s predictions on the validation set (Baseline Explanation technique - referred to as Baseline) allows users to verify that the neural network (NN) is using the blocks spatial arrangement (Pattern 1) for its predictions of the abstract animals.” is confirmed, as 83.33% at least somewhat agree with the corresponding statement.\nH2: The hypothesis “Baseline does not allow users to detect the NN bias for colour (Pattern 2) and rotation (Pattern 3).” is confirmed. Only 46.66% of users at least somewhat disagree with the statement claiming that there is a rotation pattern, while only 43.33% at least somewhat agree (the rest are undecided). For the colour pattern, 50% at least somewhat disagree that there is such a pattern and only 43.33% at least somewhat agree.\nH3: Duplicate of H2 (copy-and-paste error during preregistration)\nH4: The hypothesis “Studying the system’s predictions with counterfactual interpolations as explanations (referred to as NNWI) allows users to verify that NN is using Pattern 1.” is confirmed, as 96.66% at least somewhat agree with the corresponding statement.\nH5: The hypothesis “Counterfactual interpolations allows users to detect Pattern 2 and Pattern 3.” is rejected. While 70% at least somewhat agree with the statement about Pattern 2, only 33.33% at least somewhat agree with the statement about Pattern 3.\nH6: The hypothesis “Counterfactual interpolations allows users to verify that NN is neither using the background of the image (Pattern 4) nor the surface structure of objects (Pattern 5).” is confirmed. The corresponding statements about Patterns 4 and 5 were at least somewhat disagreed with by 83.33% and 73.33%, respectively.\nH7: The hypothesis “Counterfactual interpolations allow users to detect Pattern 1 with higher confidence” is confirmed. Agreement with the corresponding statement was significantly different between conditions (p = 0.003) and on average higher for counterfactual interpolations (2.67) compared to the baseline (1.76).\nH8: The hypothesis “Counterfactual interpolations allow users to reject Pattern 4 and Pattern 5 with higher confidence” is rejected.
There was no significant difference in the certainty of disagreement with the corresponding statements.\nH9: The hypothesis “Counterfactual interpolations allow users to detect Pattern 2 and Pattern 3 with higher confidence.” is rejected since H5 was rejected. However, it is worth pointing out that for the statement about color, 70% of participants at least somewhat agreed with it if they received counterfactual interpolations, while only 43.33% did if they received example-based explanations.\nH10: The hypothesis “Counterfactual interpolations leads users to be more confident in the matureness of the system.” is rejected, as agreement with the corresponding statement was not significantly different across conditions. In both conditions, participants were rather confident in the system.\nH11: The hypothesis “Users are more satisfied with Counterfactual interpolations as explanations.” is rejected. Explanation Satisfaction ratings were very high in both conditions but not significantly different.\nF IMAGE SOURCE\nAs the copyright status of CELEBA is unclear and it includes images under no free license, we decided against showing any original CELEBA images in the paper. Instead, we show the following six images, all under permissive licenses: Obama (CC BY 3.0): https://commons.wikimedia.org/wiki/File:Official_portrait_of_Barack_Obama.jpg\nCommander, Eileen M. Collins (Public Domain): https://www.flickr.com/photos/nasacommons/16504233985/\nCarl Jacobi (Public Domain): https://de.wikipedia.org/wiki/Datei:Carl_Jacobi.jpg\nGrace Hopper (Public Domain): https://de.wikipedia.org/wiki/Datei:Grace_Hopper.jpg\nAlan Turing (CC BY 4.0): https://commons.wikimedia.org/wiki/File:%D0%A2%D1%8C%D1%8E%D1%80%D0%B8%D0%BD%D0%B3.jpg\nLyndsey Scott (CC BY 4.0): https://en.wikipedia.org/wiki/File:Lyndsey_Scott_being_combed.jpg" } ]
2020
INTERPRETABILITY THROUGH INVERTIBILITY: A DEEP CONVOLUTIONAL NETWORK WITH IDEAL COUNTERFACTUALS AND ISOSURFACES
SP:0b0a27b56520c182d6cdc92a338695f8a7813b83
[ "The work provided a nice new method with some performance gains by combining several existing techniques. The presentation was clear and organized, with the new method getting both better performance and some improvements in interpretability. It provides a variety of visual analyses that are typical of this area of research and present the contrasts between this work and prior efforts." ]
Procedurally-generated sparse reward environments pose significant challenges for many RL algorithms. The recently proposed impact-driven exploration method (RIDE) by Raileanu & Rocktäschel (2020), which rewards actions that lead to large changes (measured by ℓ2-distance) in the observation embedding, achieves state-of-the-art performance on such procedurally-generated MiniGrid tasks. Yet, the definition of “impact” in RIDE is not conceptually clear because its learned embedding space is not inherently equipped with any similarity measure, let alone ℓ2-distance. We resolve this issue in RIDE via contrastive learning. That is, we train the embedding with respect to cosine similarity, where we define two observations to be similar if the agent can reach one observation from the other within a few steps, and define impact in terms of this similarity measure. Experimental results show that our method performs similarly to RIDE on the MiniGrid benchmarks while learning a conceptually clear embedding space equipped with the cosine similarity measure. Our modification of RIDE also provides a new perspective which connects RIDE and episodic curiosity (Savinov et al., 2019), a different exploration method which rewards the agent for visiting states that are unfamiliar to the agent’s episodic memory. By incorporating episodic memory into our method, we outperform RIDE on the MiniGrid benchmarks.
[]
[ { "authors": [ "Yusuf Aytar", "Tobias Pfaff", "David Budden", "Thomas Paine", "Ziyu Wang", "Nando de Freitas" ], "title": "Playing hard exploration games by watching youtube", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Adrià Puigdomènech Badia", "Pablo Sprechmann", "Alex Vitvitskyi", "Daniel Guo", "Bilal Piot", "Steven Kapturowski", "Olivier Tieleman", "Martin Arjovsky", "Alexander Pritzel", "Andrew Bolt", "Charles Blundell" ], "title": "Never give up: Learning directed exploration strategies", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Deepak Pathak", "Amos J. Storkey", "Trevor Darrell", "Alexei A. Efros" ], "title": "Large-scale study of curiosity-driven learning", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos J. Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Andres Campero", "Roberta Raileanu", "Heinrich Küttler", "Joshua B Tenenbaum", "Tim Rocktäschel", "Edward Grefenstette" ], "title": "Learning with amigo: Adversarially motivated intrinsic goals", "venue": "arXiv preprint arXiv:2006.12122,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Maxime Chevalier-Boisvert", "Lucas Willems", "Suman Pal" ], "title": "Minimalistic gridworld environment for openai gym", "venue": "https://github.com/maximecb/gym-minigrid,", "year": 2018 }, { "authors": [ "Djork-Arné Clevert", "Thomas Unterthiner", "Sepp Hochreiter" ], "title": "Fast and accurate deep network learning by exponential linear units (elus)", "venue": "In 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Karl Cobbe", "Christopher Hesse", "Jacob Hilton", "John Schulman" ], "title": "Leveraging procedural generation to benchmark reinforcement learning", "venue": "arXiv preprint arXiv:1912.01588,", "year": 2019 }, { "authors": [ "Adrien Ecoffet", "Joost Huizinga", "Joel Lehman", "Kenneth O Stanley", "Jeff Clune" ], "title": "Go-explore: a new approach for hard-exploration problems", "venue": null, "year": 1901 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Rémi Munos", "Karen Simonyan", "Volodymyr Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning", "Shane Legg", "Koray Kavukcuoglu" ], "title": "IMPALA: scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term 
memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Christian Kauten" ], "title": "Super Mario Bros for OpenAI Gym. GitHub, 2018", "venue": "URL https://github. com/Kautenja/gym-super-mario-bros", "year": 2018 }, { "authors": [ "Heinrich Küttler", "Nantas Nardelli", "Thibaut Lavril", "Marco Selvatici", "Viswanath Sivakumar", "Tim Rocktäschel", "Edward Grefenstette" ], "title": "TorchBeast: A PyTorch Platform for Distributed RL", "venue": "arXiv preprint arXiv:1910.03552,", "year": 2019 }, { "authors": [ "Heinrich Küttler", "Nantas Nardelli", "Alexander H Miller", "Roberta Raileanu", "Marco Selvatici", "Edward Grefenstette", "Tim Rocktäschel" ], "title": "The nethack learning environment", "venue": "arXiv preprint arXiv:2006.13760,", "year": 2020 }, { "authors": [ "Marlos C Machado", "Marc G Bellemare", "Erik Talvitie", "Joel Veness", "Matthew Hausknecht", "Michael Bowling" ], "title": "Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2018 }, { "authors": [ "Jarryd Martin", "Suraj Narayanan Sasikumar", "Tom Everitt", "Marcus Hutter" ], "title": "Count-based exploration in feature space for reinforcement learning", "venue": "In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Georg Ostrovski", "Marc G Bellemare", "Aäron van den Oord", "Rémi Munos" ], "title": "Count-based exploration with neural density models", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Pierre-Yves Oudeyer", "Frederic Kaplan" ], "title": "What is intrinsic motivation? a typology of computational approaches", "venue": "Frontiers in neurorobotics,", "year": 2009 }, { "authors": [ "Pierre-Yves Oudeyer", "Frederic Kaplan", "Verena V Hafner" ], "title": "Intrinsic motivation systems for autonomous mental development", "venue": "IEEE transactions on evolutionary computation,", "year": 2007 }, { "authors": [ "Pierre-Yves Oudeyer", "Frederic Kaplan" ], "title": "How can we define intrinsic motivation", "venue": "In Proc. of the 8th Conf. 
on Epigenetic Robotics,", "year": 2008 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Roberta Raileanu", "Tim Rocktäschel" ], "title": "Ride: Rewarding impact-driven exploration for procedurally-generated environments", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Aravind Rajeswaran", "Kendall Lowrey", "Emanuel V Todorov", "Sham M Kakade" ], "title": "Towards generalization and simplicity in continuous control", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Sebastian Risi", "Julian Togelius" ], "title": "Increasing generality in machine learning through procedural content generation", "venue": "Nature Machine Intelligence,", "year": 2020 }, { "authors": [ "Nikolay Savinov", "Anton Raichuk", "Raphaël Marinier", "Damien Vincent", "Marc Pollefeys", "Timothy Lillicrap", "Sylvain Gelly" ], "title": "Episodic curiosity through reachability", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "A possibility for implementing curiosity and boredom in model-building neural controllers", "venue": "In Proc. of the international conference on simulation of adaptive behavior: From animals to animats,", "year": 1991 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Bradly C Stadie", "Sergey Levine", "Pieter Abbeel" ], "title": "Incentivizing exploration in reinforcement learning with deep predictive models", "venue": "arXiv preprint arXiv:1507.00814,", "year": 2015 }, { "authors": [ "Alexander L Strehl", "Michael L Littman" ], "title": "An analysis of model-based interval estimation for markov decision processes", "venue": "Journal of Computer and System Sciences,", "year": 2008 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Rmsprop: Divide the gradient by a running average of its recent magnitude. coursera: Neural networks for machine learning", "venue": null, "year": 2012 }, { "authors": [ "Amy Zhang", "Nicolas Ballas", "Joelle Pineau" ], "title": "A dissection of overfitting and generalization in continuous reinforcement learning", "venue": "arXiv preprint arXiv:1806.07937,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) algorithms aim to learn an optimal policy that maximizes expected reward from the environment. The search for better RL algorithms is motivated by the fact that many complex real-world problems can be formulated as RL problems. Yet, environments with sparse rewards, which often occur in the real-world, pose a significant challenge for RL algorithms that rely on random actions for exploration. Sparsity of the reward can make it extremely unlikely for the agent to stumble upon any positive feedback by chance. The agent may spend a long time simply exploring and not receiving a single positive reward.\nTo overcome this issue of exploration, several previous works have used intrinsic rewards (Schmidhuber, 1991; Oudeyer et al., 2007; 2008; Oudeyer & Kaplan, 2009). Intrinsic rewards, as the name suggests, are reward signals generated by the agent which can make RL algorithms more sample efficient by encouraging exploratory behavior that is more likely to encounter rewards. Previous works have used state novelty in the form of state visitation counts (Strehl & Littman, 2008) for tabular states, pseudo-counts for high-dimensional state spaces (Bellemare et al., 2016; Ostrovski et al., 2017; Martin et al., 2017), prediction error of random networks (Burda et al., 2019b), and curiosity about environment dynamics (Stadie et al., 2015; Pathak et al., 2017) as intrinsic rewards.\nAlthough such advances in exploration methods have enabled RL agents to achieve high rewards in notoriously difficult sparse reward environments such as Montezuma’s Revenge and Pitfall (Bellemare et al., 2013), many existing exploration methods use the same environment for training and testing (Bellemare et al., 2016; Pathak et al., 2017; Aytar et al., 2018; Ecoffet et al., 2019). As a result, agents trained in this fashion do not generalize to new environments. Indeed, several recent papers point out that deep RL agents overfit to the environment they were trained on (Rajeswaran et al., 2017; Zhang et al., 2018; Machado et al., 2018), leading to the creation of new benchmarks consisting of procedurally-generated environments (Cobbe et al., 2019; Risi & Togelius, 2020; Küttler et al., 2020).\nIn practice, agents often have to act in environments that are similar, but different from the environments they were trained on. Hence, it is crucial that the agent learns a policy that generalizes across diverse (but similar) environments. This adds another layer of difficulty, the diversity of environment layout for each episode, to the already challenging sparse reward structure. To tackle this challenge head-on, Raileanu & Rocktäschel (2020) focus on exploration in procedurally-generated environments and propose RIDE, an intrinsic rewarding scheme based on the “impact” of a new observation. Denoting the observation embedding function by φ, RIDE measures the impact of observation o′ by computing ‖φ(o′)− φ(o)‖2, where o is the previous observation. Similarly, Savinov et al. (2019) propose episodic curiosity (EC), an intrinsic rewarding scheme which rewards visiting states that are dis-similar to states in the agent’s episodic memory.\nRIDE uses forward and inverse dynamics prediction (Pathak et al., 2017) to train the observation embedding φ in a self-supervised manner. 
Hence, a question that one might ask is:\nWhat is the ℓ2-distance in this embedding space measuring?\nWe address this question by modifying the embedding training procedure, thereby changing the definition of impact. That is, we modify RIDE so that impact corresponds to an explicitly trained similarity measure in the embedding space, where we define two observations to be similar if they are reachable from each other within a few steps. The original definition of “impact” in RIDE is not conceptually clear because the learned embedding space is not inherently equipped with a similarity measure, let alone ℓ2-distance. It is still possible that RIDE’s measure of impact based on ℓ2-distance may implicitly correspond to some similarity measure in the embedding space, but we leave this investigation for future work.\nOur main contributions are 1) proposing a conceptually clear measure of impact by training observation embeddings explicitly with the cosine similarity objective instead of forward and inverse dynamics prediction, 2) providing a new perspective which connects RIDE and EC, and 3) outperforming RIDE via episodic memory extensions. We use SimCLR (Chen et al., 2020) to train the embedding function and propose a novel intrinsic rewarding scheme, which we name RIDE-SimCLR. As in EC, the positive pairs used in the contrastive learning component of RIDE-SimCLR correspond to pairs of observations which are within k steps of each other (referred to as “k-step reachability” in their work).\nFollowing the experimental setup of Raileanu & Rocktäschel (2020), we use MiniGrid (Chevalier-Boisvert et al., 2018) to evaluate our method as it provides a simple, diverse suite of tasks that allows us to focus on the issue of exploration instead of other issues such as visual perception. We focus on the comparison of our approach to RIDE since Raileanu & Rocktäschel (2020) report that RIDE achieves the best performance on all their MiniGrid benchmarks among other exploration methods such as intrinsic curiosity (ICM) by Pathak et al. (2017) and random network distillation (RND) by Burda et al. (2019b). We note that MiniGrid provides a sufficiently challenging suite of tasks for RL agents despite its apparent simplicity, as ICM and RND fail to learn any effective policies for some tasks due to the difficulty posed by procedurally-generated environments. Our experimental results show that RIDE-SimCLR performs similarly to RIDE on these benchmarks with the added benefit of having a conceptually clear similarity measure for the embedding space.\nOur qualitative analysis shows interesting differences between RIDE and RIDE-SimCLR. For instance, RIDE highly rewards interactions with controllable objects such as opening a door, which is not the case in RIDE-SimCLR. We also observe that our episodic memory extension improves the quantitative performance of both methods, which demonstrates the benefit of establishing a connection between RIDE and EC. The Never Give Up (NGU) agent by Badia et al. (2020) can be seen as a close relative of our memory extension of RIDE since it uses the ℓ2 distance in an embedding space trained with the same inverse dynamics objective to compute approximate counts of states and aggregates episodic memory to compute a novelty bonus.\nOur work is different from EC because we do not explicitly sample negative pairs for training the observation embedding network, and we use cosine similarity, instead of a separately trained neural network, to output similarity scores for pairs of observations.
We note that Campero et al. (2020) report state-of-the-art results on even more challenging MiniGrid tasks by training an adversarially motivated teacher network to generate intermediate goals for the agent (AMIGo), but we do not compare against this method since their agent receives full observations of the environment. Both RIDE and RIDE-SimCLR agents only receive partial observations." }, { "heading": "2 BACKGROUND", "text": "We consider the standard episodic RL setting in which an agent interacts with the environment to maximize its expected scalar reward for each episode. The interaction proceeds in discrete time steps and terminates at some fixed time T unless the goal is achieved earlier. At time step t, the agent receives observation ot and samples an action from its policy π(ot). The environment then provides the agent with a scalar reward rt, the next observation ot+1, and an end-of-episode indicator. The goal of the agent is to maximize the expected discounted reward $R = \sum_{t=1}^{T} \gamma^t r_t$, where rt is the (extrinsic) reward given by the environment at time step t.\nWhen extrinsic reward is sparse, standard RL algorithms such as PPO (Schulman et al., 2017) or IMPALA (Espeholt et al., 2018) often fail to learn a good policy. To overcome this challenge, previous works have proposed augmenting the reward function by $\hat{r}_t = r_t + w_i r^i_t$, where $r^i_t$ is the intrinsic reward and $w_i$ is a hyperparameter which controls the relative importance of $r^i_t$. The intrinsic reward $r^i_t$ is typically designed to be a dense reward function which pushes the policy towards exploratory behavior more likely to encounter positive extrinsic rewards.\nNote that intrinsic rewards can be used with any existing RL algorithms without altering their underlying network architecture or training procedure. It suffices to simply replace the extrinsic reward $r_t$ with the augmented reward $\hat{r}_t$. The observation embedding used to compute intrinsic rewards can be trained either 1) offline with respect to a uniformly random policy or 2) online with respect to the agent’s current policy. For a fair comparison with Raileanu & Rocktäschel (2020), we use the embedding trained online for computing intrinsic rewards." }, { "heading": "2.1 IMPACT-DRIVEN EXPLORATION (RIDE)", "text": "RIDE (Raileanu & Rocktäschel, 2020) is an intrinsic reward based on the magnitude of change in the observation embedding produced by the agent’s action (See Figure 1). More precisely, RIDE is defined as\n$$R_{\text{RIDE}} \equiv r^i_t(s_t, a_t) = \frac{\lVert \phi(o_{t+1}) - \phi(o_t) \rVert_2}{\sqrt{N_{ep}(s_{t+1})}}, \tag{1}$$\nwhere φ is the observation embedding function and Nep(s) is the number of times state s is visited within the current episode. The purpose of this discount by Nep(s) is to prevent the agent from going back and forth between a sequence of states with large ℓ2 differences.\nThe embedding function φ(o) used in RIDE is parametrized by a neural network and trained by minimizing forward and inverse dynamics prediction error (Pathak et al., 2017). Note that the policy network π has its own observation embedding network ψ, which is trained separately from φ. The embedding φ is only used to compute the intrinsic reward and never used for control, and the opposite holds for ψ. The purpose of using forward and inverse dynamics models to train the embedding is to store information that is useful for predicting the agent’s action or the effects actions have on the environment. This leads to learning an action-focused embedding space.
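As an illustration of the reward in Eq. (1), the following is a minimal sketch, assuming a hypothetical embedding network `phi` and a dict-based episodic state counter; it is not the reference implementation.

```python
# Sketch of the RIDE intrinsic reward in Eq. (1): impact divided by the
# square root of the within-episode visitation count of the next state.
import torch
from collections import Counter

episodic_counts = Counter()  # reset at the start of every episode

def ride_reward(phi, obs, next_obs, next_state_key) -> float:
    episodic_counts[next_state_key] += 1
    with torch.no_grad():
        impact = torch.norm(phi(next_obs) - phi(obs), p=2)
    return (impact / episodic_counts[next_state_key] ** 0.5).item()
```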
RIDE builds upon the intrinsic curiosity module (ICM) by Pathak et al. (2017), which uses the forward dynamics prediction error as intrinsic reward. The novelty in RIDE is using the ℓ2 distance between two different observations as a measure of qualitative change between the states. Indeed, visualizations of RIDE by Raileanu & Rocktäschel (2020) show that RIDE assigns higher intrinsic rewards for actions that qualitatively “change the dynamics”, such as opening doors or interacting with objects in MiniGrid. However, RIDE introduces conceptual difficulties, as the embedding space is not explicitly trained with any similarity measure. That is, the forward and inverse dynamics objective does not explicitly pull together or push apart embeddings of different observations. In ICM, the ℓ2 distance between φ(ot+1) and the prediction φ̂(ot+1) has a clear interpretation as the forward prediction “error”, since the forward dynamics objective explicitly minimizes this ℓ2 distance. RIDE, on the other hand, uses the ℓ2 distance between different observations ot and ot+1 as the intrinsic reward. Yet, the forward and inverse dynamics prediction objective does not specify which pairs of observation embeddings (oi, oj) should be pulled closer together and which should be pushed apart (in ℓ2 distance).\nThis does not mean that RIDE fails to capture qualitative changes in the dynamics. In fact, our visualizations (Figure 7) corroborate the findings of Raileanu & Rocktäschel (2020) by demonstrating that RIDE assigns higher rewards for actions such as opening doors. However, it is difficult to precisely define what having “different dynamics” means, let alone giving a quantitative definition of it. Moreover, the question of why the ℓ2 distance is larger for pairs of observations corresponding to such actions is not well-understood, and thus requires further investigation. Without understanding why, we cannot guarantee that RIDE will always assign higher rewards to actions we perceive as “significantly changing the dynamics”." }, { "heading": "2.2 EPISODIC CURIOSITY (EC)", "text": "EC (Savinov et al., 2019) is an intrinsic reward based on how “far” the next observation is from observations in the agent’s current episodic memory (See Figure 2). More precisely, EC is defined as\n$$R_{\text{EC}} \equiv r^i_t(s_t, a_t) = \beta - C(M_t, \phi(o_{t+1})), \tag{2}$$\nwhere β ∈ R is a scalar hyperparameter, C is a learned comparator network, and Mt is the agent’s episodic memory at time t.\nIntuitively, the comparator network C measures how “far” a given observation ot+1 is from the agent’s current episodic memory Mt. The episodic memory Mt = {φ(ot′)}t′≤t is the set of observation embeddings previously encountered in the current episode.¹ The comparator network C predicts whether o and o′ are reachable from each other within k steps, i.e., ŷ = C(φ(o), φ(o′)) ∈ [0, 1], where the true label y = 1 if the agent can reach o′ from o in k steps and y = 0 otherwise. The reachability threshold k is a hyperparameter. The parameters of C and φ are trained to minimize CrossEntropy(ŷ, y). The data used in the contrastive training is generated by taking a sequence of the agent’s observations o1, . . . , oN, and labeling a pair (oi, oj) positive if |i − j| ≤ k and negative if |i − j| ≥ γk. Here, γ is an additional hyperparameter needed to create a gap between positive and negative examples.\n¹The definition of episodic memory used in Savinov et al. (2019) is more general. For memory efficiency, they add φ(ot+1) to episodic memory only if it is sufficiently different from the observations already in Mt.
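Below is a minimal sketch of this labeling and training step, assuming hypothetical `phi` (embedding network) and `comparator` modules and a trajectory of observation tensors; it illustrates the procedure rather than reproducing the original implementation.

```python
# Sample (o_i, o_j, y) pairs: y = 1 if |i - j| <= k, y = 0 if |i - j| >= gamma*k,
# then train the comparator C with binary cross entropy (Sec. 2.2).
import random
import torch
import torch.nn.functional as F

def reachability_batch(obs_seq, k=2, gamma=5, batch_size=32):
    T = len(obs_seq)  # assumes T > 2 * gamma * k so far-apart pairs always exist
    batch = []
    for _ in range(batch_size):
        i = random.randrange(T)
        if random.random() < 0.5:  # positive: within k steps of the anchor
            j, y = min(T - 1, max(0, i + random.randint(-k, k))), 1.0
        else:                      # negative: at least gamma*k steps away
            j = random.choice([m for m in range(T) if abs(m - i) >= gamma * k])
            y = 0.0
        batch.append((obs_seq[i], obs_seq[j], y))
    return batch

def comparator_loss(phi, comparator, batch):
    o1 = torch.stack([a for a, _, _ in batch])
    o2 = torch.stack([b for _, b, _ in batch])
    y = torch.tensor([l for _, _, l in batch])
    y_hat = comparator(phi(o1), phi(o2)).squeeze(-1)  # reachability probability
    return F.binary_cross_entropy(y_hat, y)
```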
With slight abuse of notation, we write\n$$C(M_t, \phi(o_{t+1})) = A(c_1, \ldots, c_{|M_t|}) \in [0, 1], \tag{3}$$\nwhere $c_i = C(\phi(o_i), \phi(o_{t+1}))$ and A is an aggregation function. The aggregation function pools the individual reachability scores ci and outputs an aggregate reachability score for ot+1 and Mt. One simple example of the aggregation function is A = max." }, { "heading": "3 OUR METHOD", "text": "We propose a modification of RIDE to remedy the lack of a conceptually clear similarity measure in the embedding space. Instead of forward and inverse dynamics prediction, we use SimCLR (Chen et al., 2020) to train the observation embeddings with respect to a cosine similarity objective. This has the benefit of equipping the embedding space with a natural similarity measure. We present this direct modification of RIDE in Section 3.1. Our modification opens up a perspective through which we can view RIDE as a special case of EC. From this viewpoint, it is natural to extend RIDE with episodic memory. We present this extension in Section 3.2. A summary of all the methods considered in this work can be found in Table 1." }, { "heading": "3.1 RIDE WITH CONTRASTIVE UNSUPERVISED REPRESENTATIONS", "text": "We propose RIDE-SimCLR, which modifies RIDE by replacing the embedding training involving forward and inverse dynamics models with SimCLR. Denote by φ the embedding network learned via SimCLR. We define RIDE-SimCLR as\n$$R_{\text{SimCLR}} \equiv r^i_t(s_t, a_t) = \frac{1 - \cos(\phi(o_t), \phi(o_{t+1}))}{2\sqrt{N_{ep}(s_{t+1})}} \in [0, 1]. \tag{4}$$\nSimCLR trains representations by maximizing agreement between different perturbations of the same sample (referred to as the “anchor” sample). Given a sequence of consecutive agent observations o1, . . . , oN, we perturb each state in two different ways by taking a random number of steps sampled from {1, . . . , k} into the future and past (See Figure 3). This gives us 2N samples o′1, . . . , o′2N, among which there are N positive pairs. Note that reachability is dependent on the current behavior policy π. Unlike EC, however, SimCLR does not explicitly sample negative pairs. Instead, for each positive pair, the 2(N − 1) remaining samples are treated as negative examples. The loss function, referred to as NT-Xent (normalized temperature-scaled cross entropy) in Chen et al. (2020), for a positive pair (o′i, o′j) is defined as\n$$\ell(i, j) = -\log \frac{\exp(\cos(\phi(o'_i), \phi(o'_j))/\tau)}{\sum_{m \neq i} \exp(\cos(\phi(o'_i), \phi(o'_m))/\tau)},$$\nwhere τ is the temperature hyperparameter. The total contrastive loss is\n$$L = \frac{1}{2N} \sum_{m=1}^{N} \big[ \ell(2m-1, 2m) + \ell(2m, 2m-1) \big].$$\nWe minimize L with respect to the parameters of φ online. Because φ is explicitly trained to maximize the cosine similarity between positive pairs, cosine similarity becomes a natural similarity measure for the learned embedding space. We set the temperature scale τ = 0.1, the reachability parameter k = 2, and batch size N = L/(4k), where L is the unroll length of the policy gradient algorithm used to train the policy π." }, { "heading": "3.2 EPISODIC MEMORY EXTENSIONS", "text": "RIDE-SimCLR can be seen as a special case of EC by setting $C(o_i, o_j) = (1 - \cos(\phi(o_i), \phi(o_j)))/2$ and $A(c_1, \ldots, c_t) = c_t/\sqrt{N_{ep}(s_{t+1})}$ in Eq. (3).² Note that $N_{ep}(s_{t+1}) = |\{i \in [t] \mid c_i = 0\}|$. This perspective allows us to consider variations of RIDE and RIDE-SimCLR that use different aggregation methods on the episodic memory. Hence, we propose RIDE-MinAgg and EC-SimCLR, which aggregate values from the episodic memory with A = min instead of the discount by the visitation count. RIDE-MinAgg is RIDE with $A(c_1, \ldots, c_t) = \min_{i \in [t]} c_i$ and φ trained with forward and inverse dynamics prediction, and EC-SimCLR is RIDE-SimCLR with $A(c_1, \ldots, c_t) = \min_{i \in [t]} c_i$; a sketch of both components is given below.\n²Here, we are considering a slightly more general formulation of EC where the comparator network C “absorbs” the constant hyperparameter β and the negative sign in Eq. (2). In this case, the value output by C is proportional to dis-similarity rather than similarity.
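The following is a minimal PyTorch sketch of the NT-Xent loss over temporal positives and of the min-aggregation used by RIDE-MinAgg/EC-SimCLR, assuming the 2N embeddings are arranged so that indices (2m−1, 2m) form positive pairs; it is an illustrative re-implementation, not the authors’ exact code.

```python
import torch
import torch.nn.functional as F

def nt_xent(z: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """NT-Xent over 2N embeddings with consecutive positive pairs (0<->1, 2<->3, ...)."""
    z = F.normalize(z, dim=1)             # unit norm: dot product = cosine similarity
    sim = z @ z.t() / tau                 # (2N, 2N) temperature-scaled similarities
    sim.fill_diagonal_(float("-inf"))     # exclude m == i from the softmax denominator
    pos = torch.arange(z.shape[0]) ^ 1    # XOR flips the last bit: partner index
    return F.cross_entropy(sim, pos)      # mean of -log softmax at the positive

def min_agg_reward(memory: torch.Tensor, e_next: torch.Tensor) -> torch.Tensor:
    """A = min: smallest cosine dis-similarity between e_next and the memory."""
    c = (1 - F.cosine_similarity(memory, e_next.unsqueeze(0), dim=1)) / 2
    return c.min()
```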
" }, { "heading": "4 EXPERIMENTS", "text": "We evaluate our methods on procedurally-generated environments from MiniGrid. We compare our methods against RIDE, which achieves the previous state-of-the-art performance on the MiniGrid benchmarks. We report the median and max/min of the average episode return across 3 random seeds. The average episode return is computed as a rolling mean over the previous 100 episodes. We also evaluate RIDE and RIDE-SimCLR on the first level of Mario (Kauten, 2018) using only intrinsic rewards.\n4.1 ENVIRONMENT\nIn all our MiniGrid environments, the dynamics are deterministic and the agent’s observation is partial. The agent’s view (the highlighted part in Figure 4) is limited to a 7 × 7 square centered at the current location, and the agent cannot see through walls or closed doors. There are seven actions the agent can choose from: turn left or right, move forward, pick up or drop an object, toggle, and done.\nWe evaluate our methods on the following six benchmarks: KeyCorridorS3R3, KeyCorridorS4R3, KeyCorridorS5R3, MultiRoomN7S8, MultiRoomN12S10, and MultiRoomN7S4-NoisyTV. The NoisyTV variant implements a “noisy TV” in the MultiRoom environment with a ball that changes color whenever the special “switch channel” action is executed by the agent. Noisy TVs are known to cause problems for curiosity-based exploration methods by turning the agent into a “couch potato” (Burda et al., 2019a). The purpose of this task is to demonstrate that our methods do not suffer from this issue. Further details on the MiniGrid tasks can be found in Appendix B." }, { "heading": "4.2 EVALUATION", "text": "We evaluate the following four methods: RIDE, RIDE-SimCLR, RIDE-MinAgg, and EC-SimCLR. We use torchbeast (Küttler et al., 2019), an open-source implementation of IMPALA (Espeholt et al., 2018), as the base RL algorithm common to all the methods we evaluate, and RMSProp (Tieleman & Hinton, 2012) as the optimizer. All methods we evaluate use the same policy and value network architecture. Their only difference is the intrinsic reward. More details about our network architecture and hyperparameter settings can be found in Appendices A and C." }, { "heading": "5 RESULTS", "text": "" }, { "heading": "5.1 QUANTITATIVE RESULTS", "text": "Figure 5 shows our results on the MiniGrid benchmarks. RIDE-SimCLR performs similarly to RIDE on all the tasks except MultiRoomN12S10, on which it requires more frames to learn the optimal policy. We note that other exploration methods such as ICM and RND fail to learn any useful policies on this task (Raileanu & Rocktäschel, 2020). Hence, RIDE-SimCLR retains the quantitative performance of RIDE while having a conceptually clear measure of “impact”. Moreover, RIDE-MinAgg and EC-SimCLR are more sample-efficient than RIDE on more challenging tasks such as MultiRoomN7S8 and MultiRoomN12S10.
In fact, EC-SimCLR is the only method that learns a good policy for KeyCorridorS4R3 and KeyCorridorS5R3. Note, however, that we only run EC-SimCLR on KeyCorridorS5R3 since all other methods had already failed in the easier KeyCorridorS4R3 environment, and Campero et al. (2020) report that RIDE fails to learn any useful policy on KeyCorridorS4R3 and KeyCorridorS5R3. This demonstrates the benefit of establishing a connection between RIDE and EC. Viewing RIDE through the lens of episodic memory leads to variants of the intrinsic reward that are more sample-efficient." }, { "heading": "5.2 LEARNED EMBEDDING SPACE", "text": "We compare the observation embeddings learned using RIDE-SimCLR and RIDE. Denote by d(oi, oj) the dis-similarity measure used in each method. d corresponds to the ℓ2 distance between φ(oi) and φ(oj) in RIDE, and to the cosine dis-similarity 1 − cos(φ(oi), φ(oj)) in RIDE-SimCLR. We analyze how predictive d(oi, oj) is of the temporal distance between these two observations. To this end, we generate a balanced labelled dataset consisting of close observation pairs (≤ k steps) and far pairs (> γk steps) by running the trained RIDE agent policy for 100 episodes. The parameters were set to k = 2 and γ = 5. More details on the dataset can be found in Appendix D.\nFigure 6 shows the ROC curves of RIDE-SimCLR and RIDE. From the figure, we can see that the cosine similarity measure in RIDE-SimCLR is predictive of the temporal distance between observations. On the other hand, the ℓ2 distance between representations from RIDE is not aligned with the temporal distance. This is not surprising, since the embedding space in RIDE-SimCLR is explicitly trained to respect the temporal distance between observations and RIDE’s is not. However, this shows that the structure of the learned embeddings in RIDE-SimCLR is conceptually clear: temporally close observations are closer in cosine distance. Furthermore, it demonstrates that the intrinsic rewarding schemes of RIDE and RIDE-SimCLR, despite their apparent similarity, are qualitatively different." }, { "heading": "5.3 INTRINSIC REWARD VISUALIZATION", "text": "To understand the qualitative difference between RIDE-SimCLR and RIDE, we analyze which actions are encouraged by each method. We take RIDE-SimCLR and RIDE agents trained on procedurally-generated MultiRoomN7S8 environments and roll out the learned policies on a fixed environment. Table 2 shows the average intrinsic reward received by each action, and Figure 7 shows a heatmap of how the actions were rewarded in the agents’ trajectories. We also plot how the average intrinsic rewards change during training in Figure 10 of Appendix G. In Figure 11 of Appendix H, we provide a heatmap that shows how actions were rewarded after RIDE was trained on KeyCorridorS4R3 for 30M frames. Note that RIDE failed to learn a good policy for this hard environment. Hence, Figure 11 visualizes a failure mode of RIDE’s embedding training.\nThe results in Table 2 and Figure 7 demonstrate qualitative differences between RIDE-SimCLR and RIDE. As observed by Raileanu & Rocktäschel (2020), RIDE gives higher intrinsic reward for interaction with objects, such as opening doors, whereas RIDE-SimCLR gives higher rewards for turning left or right. An intuitive explanation for RIDE is that actions such as “open door” significantly change the dynamics, which in turn leads to a substantial change in RIDE’s action-focused embedding.
On the other hand, RIDE-SimCLR rewards the agent for moving away from where it has been, so actions that move the agent into a new room (which substantially changes the ego-centric partial view of the agent) are given higher rewards. Further investigation into why RIDE and RIDE-SimCLR assign high rewards to these actions is an interesting direction left for future work." }, { "heading": "5.4 EXPLORATION WITH NO EXTRINSIC REWARD", "text": "We analyze the exploration behavior of the RIDE-SimCLR agent in the absence of extrinsic reward. We train a RIDE-SimCLR agent on procedurally-generated MultiRoomN10S6 environments for 50M frames with only intrinsic reward as the signal. The agent is allowed to take 200 steps in every episode. We observe that even without any extrinsic reward, the RIDE-SimCLR agent learns a policy which explores all the rooms in the map, similar to the RIDE agent. Other agents trained with intrinsic rewards such as Count, ICM, and RND are known to fall short of reaching the final room within the given number of steps (Raileanu & Rocktäschel, 2020). The state visitation heatmap on a random instance of MultiRoomN10S6 can be found in Appendix E.\nMoreover, we compare RIDE-SimCLR to RIDE on the first level of Mario without extrinsic reward to determine its relative performance on environments different from MiniGrid. The results can be found in Appendix F. Our results show that RIDE-SimCLR matches the performance of RIDE on Mario. Note, however, that this singleton environment is not very challenging since even vanilla IMPALA is able to learn similarly good policies without any intrinsic rewards, although IMPALA does use extrinsic rewards (Raileanu & Rocktäschel, 2020)." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "We identify a conceptual issue in RIDE and remedy it by learning an observation embedding space naturally equipped with the cosine similarity measure. By training embeddings with SimCLR, we retain the strong performance of RIDE on procedurally-generated MiniGrid benchmarks while obtaining a conceptually clear similarity measure for the embedding space. Moreover, we make a connection between RIDE and EC. As a result, we outperform both RIDE and RIDE-SimCLR by changing the episodic memory aggregation function, which demonstrates the benefit of this novel perspective.\nDespite the apparent similarity between RIDE and RIDE-SimCLR, our analysis shows that these methods are qualitatively different. The ℓ2 distance in RIDE, perhaps unsurprisingly, is not predictive of the temporal distance between observations, unlike the cosine similarity in RIDE-SimCLR. In addition, the actions that are encouraged by each intrinsic rewarding scheme are different. It is possible that the ℓ2 distance in the embedding space learned with forward and inverse dynamics prediction corresponds to some notion of similarity. An interesting future work would be to theoretically and empirically investigate what the ℓ2 distance captures in this embedding space. For instance, what does a large ℓ2 distance in this space correspond to? Other interesting directions include making the intrinsic reward computation from episodic memory time- and space-efficient using techniques such as k-nearest neighbors, as was done in Badia et al. (2020), and proposing variants of EC by training embedding networks with an objective that implicitly trains a known similarity measure." }, { "heading": "A NETWORK ARCHITECTURE", "text": "All models have the same architecture for the policy and value network. The policy and value network share the same observation embedding network ψ. The embedding network consists of 3 convolutional layers with 3x3 kernels, stride of 2, padding of 1, 32 channels, and uses ELU non-linearity (Clevert et al., 2016). The output of ψ is then fed into an LSTM cell (Hochreiter & Schmidhuber, 1997) with 256 units. Two separate fully-connected layers are used on top of the LSTM output to predict the value and action, respectively.\nThe other embedding network φ uses the same architecture as ψ. Note that φ is only used to compute the intrinsic reward and its output is never given as an input to the policy network. The opposite is true for ψ.
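A minimal PyTorch sketch of this embedding network is given below, assuming MiniGrid’s compact 7x7x3 observations in channels-first layout; it illustrates the described architecture rather than reproducing the released code.

```python
import torch
import torch.nn as nn

class ObservationEmbedding(nn.Module):
    """Three conv layers (3x3 kernels, stride 2, padding 1, 32 channels) with ELU."""
    def __init__(self, in_channels: int = 3, hidden: int = 32):
        super().__init__()
        layers = []
        for i in range(3):
            layers += [nn.Conv2d(in_channels if i == 0 else hidden, hidden,
                                 kernel_size=3, stride=2, padding=1),
                       nn.ELU()]
        self.conv = nn.Sequential(*layers)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, 3, 7, 7); spatial size shrinks 7 -> 4 -> 2 -> 1.
        return self.conv(obs).flatten(start_dim=1)

embed = ObservationEmbedding()
features = embed(torch.zeros(1, 3, 7, 7))                     # shape (1, 32)
lstm = nn.LSTM(input_size=features.shape[1], hidden_size=256)  # policy-side LSTM
```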
" }, { "heading": "B MINIGRID", "text": "MiniGrid is a simple and lightweight gridworld gym environment. For all the tasks considered in this work, the dynamics are deterministic and the agent’s observation is partial. The agent’s view is limited to a 7x7 square centered at its current location, and the agent cannot see through walls or closed doors. An observation is given as a 7x7x3 tensor. Note that these values are not pixels. It is a partially observable view of the environment using a compact encoding, with 3 input values per visible grid cell. There are seven actions the agent can choose from: turn left or right, move forward, pick up or drop an object, toggle, and done. The agent can pick up and carry exactly one object (e.g., ball or key). To open a locked door, the agent has to be carrying a key matching the door’s color. The extrinsic reward given by the environment when the goal is reached is $r_t = 1 - 0.9 \cdot (t/t_{max})$, where the maximum episode length $t_{max}$ depends on the task.\nThe following are descriptions of the MiniGrid tasks we use for experiments.\n• KeyCorridorSXRY: the agent has to pick up an object behind a locked door. The key is hidden in another room and the agent has to explore the map to find it. The maximum episode length is $t_{max} = 30 \cdot X^2$.\n• MultiRoomNXSY: the agent must navigate through a procedurally-generated map consisting of X many rooms, each of size at most Y, to reach the goal. The maximum episode length is $t_{max} = 20 \cdot X$.\n• MultiRoomNXSY-NoisyTV: the agent must navigate through a procedurally-generated map consisting of X many rooms, each of size at most Y, and a “noisy TV” to reach the goal. The “noisy TV” is implemented with a ball that changes color whenever the special “switch channel” action is executed by the agent. The maximum episode length is $t_{max} = 20 \cdot X$." }, { "heading": "C HYPERPARAMETERS", "text": "The hyperparameters in Table 3 are common to all the experiments, with one exception: for RIDE-SimCLR on MultiRoomN12S10, we used learning rate 0.00005 because the network did not learn any useful policies with learning rate 0.0001. We decay the learning rate linearly so that it becomes 0 after the final epoch.\nRIDE/RIDE-MinAgg We used the same set of hyperparameters for RIDE and RIDE-MinAgg. As was done in Raileanu & Rocktäschel (2020), we used an intrinsic reward coefficient of 0.1 and an entropy coefficient of 0.0005 for MultiRoomN7S4-NoisyTV and KeyCorridorS3R3. We used an intrinsic reward coefficient of 0.5 and an entropy coefficient of 0.001 for MultiRoomN7S8 and MultiRoomN12S10. For all tasks, we used a batch size of 32.\nRIDE-SimCLR/EC-SimCLR For both methods, we ran a grid search over intrinsic reward coefficient ∈ {0.5, 0.1, 0.05, 0.025, 0.01} and entropy coefficient ∈ {0.001, 0.0005, 0.0001} for all tasks.
We used a batch size of 8 for both methods on all tasks.
For RIDE-SimCLR, we used intrinsic reward coefficient 0.01 and entropy coefficient 0.0005 for MultiRoomN7S4-NoisyTV and KeyCorridorS3R3, intrinsic reward coefficient 0.05 and entropy coefficient 0.0005 for MultiRoomN7S8, and intrinsic reward coefficient 0.01 and entropy coefficient 0.0001 for MultiRoomN12S10. For KeyCorridorS4R3, we used intrinsic reward coefficient 0.025 and entropy coefficient 0.0005.
For EC-SimCLR, we used intrinsic reward coefficient 0.01 for MultiRoomN7S4-NoisyTV and KeyCorridorS3R3, 0.05 for MultiRoomN7S8, and 0.025 for MultiRoomN12S10, KeyCorridorS4R3, and KeyCorridorS5R3. We used entropy coefficient 0.0005 for all tasks." }, { "heading": "D ROC CURVE DATA GENERATION", "text": "We first construct a balanced dataset of size 2000 by sampling 20 pairs per policy roll-out for a total of 100 roll-outs, where each roll-out is performed on a random instance of MultiRoomN7S8. We set hyperparameters k = 2 and γ = 5, and use a policy trained with RIDE on MultiRoomN7S8. We repeat the following procedure 10 times for each roll-out.
1. Given a roll-out τ = {o1, . . . , oT }, randomly sample an anchor observation ot.
2. Generate a positive pair (ot, op) by randomly sampling op where p ∈ [t− k, t+ k].
3. Generate a negative pair (ot, on) by randomly sampling on where n /∈ [t− γk, t+ γk].
4. Compute data points xp = d(ot, op) and xn = d(ot, on). Assign label 0 to xp and 1 to xn.
We note that this positive/negative pair generation procedure was used by Savinov et al. (2019) to train their reachability network in EC. The parameter γ in Step 3 can be thought of as a gap which separates the positive samples from the negative samples." }, { "heading": "E NO EXTRINSIC REWARD HEATMAP", "text": "RIDE-SimCLR is able to efficiently explore the state space without any extrinsic reward. Note the stark contrast with purely random exploration, which fails to even leave the first room within the given number of time steps." }, { "heading": "F NO EXTRINSIC REWARD ON MARIO", "text": "We also compare RIDE-SimCLR to RIDE on the first level of Mario without extrinsic reward to see if RIDE-SimCLR can match RIDE's performance on environments different from MiniGrid. As we can observe from Figure 9, RIDE-SimCLR matches the performance of RIDE on Mario. Note, however, that this singleton environment is not very challenging since even vanilla IMPALA is able to learn similarly good policies without any intrinsic rewards, although it does use extrinsic rewards (Raileanu & Rocktäschel, 2020).
The common hyperparameters we used for Mario are the same as the ones reported in Table 3, except for the unroll length, which we set to 20. For both RIDE and RIDE-SimCLR, we used an intrinsic reward coefficient of 0.05 and an entropy coefficient of 0.0001." }, { "heading": "G MEAN INTRINSIC REWARD DURING TRAINING", "text": "Figure 10 shows the changes in mean intrinsic rewards during training for MultiRoomN7S8, MultiRoomN12S10, KeyCorridorS3R3, and KeyCorridorS4R3.
H VISUALIZATION OF RIDE ON KEYCORRIDORS4R3
We visualize the intrinsic reward per action of RIDE for KeyCorridorS4R3 in Figure 11. Note that RIDE failed to learn a useful policy for this environment (see Figure 5). The purpose of Figure 11 is to visualize the embeddings learned via forward and inverse dynamics when the policy is far from optimal.
We can see that the “open door” actions receive lower rewards than in Figure 7 (where RIDE learned a good policy for the environment), while unnecessary turning actions are highly rewarded." } ]
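As an aside, the pair-generation procedure of Appendix D above can be rendered as a short Python sketch. This is an illustrative reconstruction rather than the authors' code: `dist` stands for the distance or similarity measure being probed (the ℓ2 distance for RIDE, cosine similarity for RIDE-SimCLR), indices are 0-based, and the handling of roll-out boundaries is an assumption.

```python
import random

def generate_pairs(rollout, dist, k=2, gamma=5, reps=10, rng=random):
    """Positive/negative pair sampling from one roll-out (Appendix D).

    rollout : list of observations [o_0, ..., o_{T-1}]
    dist    : callable d(o, o') in the embedding space being probed
    Returns (value, label) points: label 0 marks positive pairs (within
    k steps of the anchor), label 1 marks negatives (beyond gamma * k).
    """
    T = len(rollout)
    points = []
    for _ in range(reps):
        t = rng.randrange(T)                                   # anchor o_t
        positives = list(range(max(0, t - k), min(T, t + k + 1)))
        negatives = [n for n in range(T) if abs(n - t) > gamma * k]
        if not negatives:
            continue  # roll-out too short to yield a negative pair
        p, n = rng.choice(positives), rng.choice(negatives)
        points.append((dist(rollout[t], rollout[p]), 0))
        points.append((dist(rollout[t], rollout[n]), 1))
    return points
```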
2020
null
SP:a25b3465107fa32de50e4457b87eba134792e07b
[ "This work investigates federated optimization considering data heterogeneity, communication and computation limitations, and partial client participation. In contrast to past works, this paper focuses on deeper understanding of the effect of partial client participation on the convergence rate by considering biased client participation. The paper provides convergence analysis for any biased selection strategy, showing that the rate is composed of vanishing error term and non-vanishing bias term. The obtained rates explicitly show the effect of client selection strategy and the trade-off between convergence speed and the solution bias. Then it proposes a parametric family of biased selection strategy, called power-of-choice, which aims to speed up the convergence of the error term at the cost of possibly bigger bias term. Experiments are provided to highlight the benefits of the proposed pow-d strategy over the standard unbiased selection strategies." ]
Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing. Several works have analyzed the convergence of federated learning by accounting for data heterogeneity, communication and computation limitations, and partial client participation. However, they assume unbiased client participation, where clients are selected at random or in proportion to their data sizes. In this paper, we present the first convergence analysis of federated optimization for biased client selection strategies, and quantify how the selection skew affects convergence speed. We reveal that biasing client selection towards clients with higher local loss achieves faster error convergence. Using this insight, we propose POWER-OF-CHOICE, a communication- and computation-efficient client selection framework that can flexibly span the trade-off between convergence speed and solution bias. We also propose an extension of POWER-OF-CHOICE that is able to maintain the convergence speed improvement while diminishing the selection skew. Our experiments demonstrate that POWER-OF-CHOICE strategies can converge up to 3× faster and give 10% higher test accuracy than the baseline random selection.
[]
[ { "authors": [ "Debraj Basu", "Deepesh Data", "Can Karakus", "Suhas Diggavi" ], "title": "Qsparse-local-sgd: Distributed sgd with quantization, sparsification, and local computations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Keith Bonawitz", "Hubert Eichner", "Wolfgang Grieskamp", "Dzmitry Huba", "Alex Ingerman", "Vladimir Ivanov", "Chloe Kiddon", "Jakub Konecny", "Stefano Mazzocchi", "H. Brendan McMahan", "Timon Van Overveldt", "David Petrou", "Daniel Ramage", "Jason Roselander" ], "title": "Towards Federated Learning at Scale: System", "venue": "URL https://www.sysml.cc/doc/2019/ 193.pdf", "year": 2019 }, { "authors": [ "Gregory Cohen", "Saeed Afshar", "Jonathan Tapson", "André van Schaik" ], "title": "EMNIST: an extension of MNIST to handwritten letters", "venue": "arXiv preprint arXiv:1702.05373,", "year": 2017 }, { "authors": [ "Jeffrey Dean", "Greg S. Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Quoc V. Le", "Mark Z. Mao", "Marc’Aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang", "Andrew Y. Ng" ], "title": "Large scale distributed deep networks", "venue": "In Proceedings of the International Conference on Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Farzin Haddadpour", "Mehrdad Mahdavi" ], "title": "On the convergence of local descent methods in federated learning", "venue": "arXiv preprint arXiv:1910.14425,", "year": 2019 }, { "authors": [ "Samuel Horváth", "Peter Richtárik" ], "title": "A better alternative to error feedback for communicationefficient distributed learning", "venue": "arXiv preprint arXiv:2006.11077,", "year": 2020 }, { "authors": [ "Tzu-Ming Harry Hsu", "Hang Qi", "Matthew Brown" ], "title": "Measuring the effects of non-identical data distribution for federated visual classification", "venue": null, "year": 2019 }, { "authors": [ "Zhouyuan Huo", "Qian Yang", "Bin Gu", "Lawrence Carin", "Heng Huang" ], "title": "Faster on-device training using new federated momentum algorithm", "venue": "arXiv preprint arXiv:2002.02090,", "year": 2020 }, { "authors": [ "Angela H. Jiang", "Daniel L.K. Wong", "Giulio Zhou", "David G. Andersen", "Jeffrey Dean", "Gregory R. Ganger", "Gauri Joshi", "Michael Kaminksy", "Michael Kozuch", "Zachary C. Lipton", "Padmanabhan Pillai" ], "title": "Accelerating deep learning by focusing on the biggest losers, 2019", "venue": null, "year": 2019 }, { "authors": [ "Sai Praneeth Karimireddy", "Satyen Kale", "Mehryar Mohri", "Sashank J Reddi", "Sebastian U Stich", "Ananda Theertha Suresh" ], "title": "SCAFFOLD: Stochastic controlled averaging for on-device federated learning", "venue": null, "year": 1910 }, { "authors": [ "A. Katharopoulos", "F. 
Fleuret" ], "title": "Not all samples are created equal: Deep learning with importance sampling", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "A Khaled", "K Mishchenko", "P Richtárik" ], "title": "Tighter theory for local SGD on identical and heterogeneous data", "venue": "In The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS", "year": 2020 }, { "authors": [ "Anastasia Koloskova", "Nicolas Loizou", "Sadra Boreiri", "Martin Jaggi", "Sebastian U Stich" ], "title": "A unified theory of decentralized SGD with changing topology and local updates", "venue": null, "year": 2003 }, { "authors": [ "Yassine Laguel", "Krishna Pillutla", "Jérôme Malick", "Zaid Harchaoui" ], "title": "Device heterogeneity in federated learning: A superquantile approach", "venue": "ArXiv,", "year": 2020 }, { "authors": [ "Tian Li", "Maziar Sanjabi", "Virginia Smith" ], "title": "Fair resource allocation in federated learning", "venue": "arXiv preprint arXiv:1905.10497,", "year": 2019 }, { "authors": [ "Xiang Li", "Kaixuan Huang", "Wenhao Yang", "Shusen Wang", "Zhihua Zhang" ], "title": "On the convergence of fedavg on non-iid data", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Lingjuan Lyu", "Jiangshan Yu", "Karthik Nandakumar", "Yitong Li", "Xingjun Ma", "Jiong Jin", "Han Yu", "Kee Siong Ng" ], "title": "Towards Fair and Privacy-Preserving Federated Deep Models", "venue": "IEEE Transactions on Parallel and Distributed Systems,", "year": 2020 }, { "authors": [ "Grigory Malinovsky", "Dmitry Kovalev", "Elnur Gasanov", "Laurent Condat", "Peter Richtárik" ], "title": "From local SGD to local fixed point methods for federated learning", "venue": "arXiv preprint arXiv:2004.01442,", "year": 2020 }, { "authors": [ "H. Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Agøura y Arcas" ], "title": "Communication-Efficient Learning of Deep Networks from Decentralized Data", "venue": "International Conference on Artificial Intelligenece and Statistics (AISTATS),", "year": 2017 }, { "authors": [ "M. Mitzenmacher" ], "title": "The power of two choices in randomized load balancing", "venue": "PhD thesis, University of California Berkeley,", "year": 1996 }, { "authors": [ "Mehryar Mohri", "Gary Sivek", "Ananda Theertha Suresh" ], "title": "Agnostic federated learning", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Takayuki Nishio", "Ryo Yonetani" ], "title": "Client selection for federated learning with heterogeneous resources in mobile edge", "venue": "In IEEE International Conference on Communications,", "year": 2019 }, { "authors": [ "Reese Pathak", "Martin J Wainwright" ], "title": "FedSplit: An algorithmic framework for fast federated optimization", "venue": "arXiv preprint arXiv:2005.05238,", "year": 2020 }, { "authors": [ "Krishna Pillutla", "Sham M. 
Kakade", "Zaid Harchaoui" ], "title": "Robust aggregation for federated learning", "venue": "arXiv preprint 1912.13445,", "year": 2019 }, { "authors": [ "Sashank Reddi", "Zachary Charles", "Manzil Zaheer", "Zachary Garrett", "Keith Rush", "Jakub Konečnỳ", "Sanjiv Kumar", "H Brendan McMahan" ], "title": "Adaptive federated optimization", "venue": "arXiv preprint arXiv:2003.00295,", "year": 2020 }, { "authors": [ "Mónica Ribero", "Haris Vikalo" ], "title": "Communication-efficient federated learning via optimal client sampling", "venue": "ArXiv,", "year": 2020 }, { "authors": [ "Yichen Ruan", "Xiaoxi Zhang", "Shu-Che Liang", "Carlee Joe-Wong" ], "title": "Towards flexible device participation in federated learning for non-iid data", "venue": "ArXiv,", "year": 2020 }, { "authors": [ "Anit Kumar Sahu", "Tian Li", "Maziar Sanjabi", "Manzil Zaheer", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated optimization for heterogeneous networks. https://arxiv.org/abs/1812.06127", "venue": null, "year": 2019 }, { "authors": [ "Farnood Salehi", "Patrick Thiran", "Elisa Celis" ], "title": "Coordinate descent with bandit sampling, 2018", "venue": null, "year": 2018 }, { "authors": [ "Vatsal Shah", "Xiaoxia Wu", "Sujay Sanghavi" ], "title": "Choosing the sample with lowest loss makes sgd robust", "venue": "In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020),", "year": 2020 }, { "authors": [ "Sebastian U Stich" ], "title": "Local SGD converges fast and communicates little", "venue": "arXiv preprint arXiv:1805.09767,", "year": 2018 }, { "authors": [ "Sebastian U Stich", "Sai Praneeth Karimireddy" ], "title": "The error-feedback framework: Better rates for SGD with delayed gradients and compressed communication", "venue": null, "year": 1909 }, { "authors": [ "Jianyu Wang", "Gauri Joshi" ], "title": "Cooperative SGD: A unified framework for the design and analysis of communication-efficient SGD algorithms", "venue": "arXiv preprint arXiv:1808.07576,", "year": 2018 }, { "authors": [ "Jianyu Wang", "Gauri Joshi" ], "title": "Adaptive Communication Strategies for Best Error-Runtime Trade-offs in Communication-Efficient Distributed SGD", "venue": "In Proceedings of the SysML Conference,", "year": 2019 }, { "authors": [ "Jianyu Wang", "Qinghua Liu", "Hao Liang", "Gauri Joshi", "H. Vincent Poor" ], "title": "Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization", "venue": "URL https://arxiv.org/abs/2007.07481", "year": 2020 }, { "authors": [ "Blake Woodworth", "Kumar Kshitij Patel", "Sebastian U Stich", "Zhen Dai", "Brian Bullins", "H Brendan McMahan", "Ohad Shamir", "Nathan Srebro" ], "title": "Is local SGD better than minibatch SGD", "venue": null, "year": 2002 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. 
https://arxiv.org/abs/1708.07747, aug 2017", "venue": null, "year": 2017 }, { "authors": [ "Han Yu", "Zelei Liu", "Yang Liu", "Tianjian Chen", "Mingshu Cong", "Xi Weng", "Dusit Niyato", "Qiang Yang" ], "title": "A fairness-aware incentive scheme for federated learning", "venue": "Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society,", "year": 2020 }, { "authors": [ "Hao Yu", "Sen Yang", "Shenghuo Zhu" ], "title": "Parallel restarted SGD for non-convex optimization with faster convergence and less communication", "venue": "arXiv preprint arXiv:1807.06629,", "year": 2018 }, { "authors": [ "Xinwei Zhang", "Mingyi Hong", "Sairaj Dhople", "Wotao Yin", "Yang Liu" ], "title": "FedPD: A federated learning framework with optimal rates and adaptivity to non-IID data", "venue": null, "year": 2005 } ]
[ { "heading": "1 INTRODUCTION", "text": "Until recently, machine learning models were largely trained in the data center setting (Dean et al., 2012) using powerful computing nodes, fast inter-node communication links, and large centrally available training datasets. The future of machine learning lies in moving both data collection as well as model training to the edge. The emerging paradigm of federated learning (McMahan et al., 2017; Kairouz et al., 2019; Bonawitz et al., 2019) considers a large number of resource-constrained mobile devices that collect training data from their environment. Due to limited communication capabilities and privacy concerns, these data cannot be directly sent over to the cloud. Instead, the nodes locally perform a few iterations of training using local-update stochastic gradient descent (SGD) (Yu et al., 2018; Stich, 2018; Wang & Joshi, 2018; 2019), and only send model updates periodically to the aggregating cloud server. Besides communication limitations, the key scalability challenge faced by the federated learning framework is that the client nodes can have highly heterogeneous local datasets and computation speeds. The effect of data heterogeneity on the convergence of local-update SGD is analyzed in several recent works (Reddi et al., 2020; Haddadpour & Mahdavi, 2019; Khaled et al., 2020; Stich & Karimireddy, 2019; Woodworth et al., 2020; Koloskova et al., 2020; Huo et al., 2020; Zhang et al., 2020; Pathak & Wainwright, 2020; Malinovsky et al., 2020; Sahu et al., 2019) and methods to overcome the adverse effects of data and computational heterogeneity are proposed in (Sahu et al., 2019; Wang et al., 2020; Karimireddy et al., 2019), among others.\nPartial Client Participation. Most of the recent works described above assume full client participation, that is, all nodes participate in every training round. In practice, only a small fraction of client nodes participate in each training round, which can exacerbate the adverse effects of data heterogeneity. While some existing convergence guarantees for full client participation and methods to tackle heterogeneity can be generalized to partial client participation (Li et al., 2020), these generalizations are limited to unbiased client participation, where each client’s contribution to the expected global objective optimized in each round is proportional to its dataset size. In Ruan et al. (2020), the authors analyze the convergence with flexible device participation, where devices can freely join or leave the\ntraining process or send incomplete updates to the server. However, adaptive client selection that is cognizant of the training progress at each client has not been understood yet.\nIt is important to analyze and understand biased client selection strategies since they can sharply accelerate error convergence, and hence boost communication efficiency in heterogeneous environments by preferentially selecting clients with higher local loss values, as we show in our paper. This idea has been explored in recent empirical studies (Goetz et al., 2019; Laguel et al., 2020; Ribero & Vikalo, 2020). Nishio & Yonetani (2019) proposed grouping clients based on hardware and wireless resources in order to save communication resources. Goetz et al. (2019) (which we include as a benchmark in our experiments) proposed client selection with local loss, and Ribero & Vikalo (2020) proposed utilizing the progression of clients’ weights. 
But these schemes are limited to empirical demonstrations without a rigorous analysis of how selection skew affects convergence speed.
Another relevant line of work (Jiang et al., 2019; Katharopoulos & Fleuret, 2018; Shah et al., 2020; Salehi et al., 2018) employs biased selection or importance sampling of data to speed up the convergence of classic centralized SGD – they propose preferentially selecting samples with the highest loss or highest gradient norm to perform the next SGD iteration. In contrast, Shah et al. (2020) propose biased selection of lower-loss samples to improve robustness to outliers. Generalizing such strategies to the federated learning setting is a non-trivial and open problem because of the large-scale distributed and heterogeneous nature of the training data.
Our Contributions. In this paper, we present the first (to the best of our knowledge) convergence analysis of federated learning with biased client selection that is cognizant of the training progress at each client. We discover that biasing the client selection towards clients with higher local losses increases the rate of convergence compared to unbiased client selection. Using this insight, we propose the POWER-OF-CHOICE client selection strategy and show by extensive experiments that POWER-OF-CHOICE yields up to 3× faster convergence with 10% higher test performance than standard federated averaging with random selection. POWER-OF-CHOICE is designed to incur minimal communication and computation overhead, enhancing resource efficiency in federated learning. In fact, we show that even with 3× fewer clients participating in each round as compared to random selection, POWER-OF-CHOICE gives 2× faster convergence and 5% higher test accuracy." }, { "heading": "2 PROBLEM FORMULATION", "text": "Consider a cross-device federated learning setup with K clients in total, where client k has a local dataset $B_k$ consisting of $|B_k| = D_k$ data samples. The clients are connected via a central aggregating server, and seek to collectively find the model parameter w that minimizes the empirical risk:
$$F(w) = \frac{1}{\sum_{k=1}^{K} D_k} \sum_{k=1}^{K} \sum_{\xi \in B_k} f(w, \xi) = \sum_{k=1}^{K} p_k F_k(w) \qquad (1)$$
where $f(w, \xi)$ is the composite loss function for sample ξ and parameter vector w. The term $p_k = D_k / \sum_{k=1}^{K} D_k$ is the fraction of data at the k-th client, and $F_k(w) = \frac{1}{|B_k|} \sum_{\xi \in B_k} f(w, \xi)$ is the local objective function of client k. In federated learning, the vectors $w^*$ and $w^*_k$ for $k = 1, \dots, K$ that minimize $F(w)$ and $F_k(w)$, respectively, can be very different from each other. We define $F^* = \min_w F(w) = F(w^*)$ and $F^*_k = \min_w F_k(w) = F_k(w^*_k)$.
Federated Averaging with Partial Client Participation. The most common algorithm to solve (1) is federated averaging (FedAvg) proposed in McMahan et al. (2017). The algorithm divides the training into communication rounds. At each round, to save communication cost at the central server, the global server selects only a fraction C of the clients, i.e., m = CK clients, to participate in the training. Each selected/active client performs τ iterations of local SGD (Stich, 2018; Wang & Joshi, 2018; Yu et al., 2018) and sends its locally updated model back to the server. Then, the server updates the global model using the local models and broadcasts the global model to a new set of active clients.
Formally, we index the local SGD iterations with t ≥ 0. The set of active clients at iteration t is denoted by $S^{(t)}$. Since active clients perform τ steps of local updates, the active set $S^{(t)}$ also remains constant for every τ iterations.
That is, if $(t+1) \bmod \tau = 0$, then $S^{(t+1)} = S^{(t+2)} = \cdots = S^{(t+\tau)}$.
Accordingly, the update rule of FedAvg can be written as follows:
$$w_k^{(t+1)} = \begin{cases} w_k^{(t)} - \eta_t g_k(w_k^{(t)}, \xi_k^{(t)}) & \text{for } (t+1) \bmod \tau \neq 0 \\ \frac{1}{m} \sum_{j \in S^{(t)}} \left( w_j^{(t)} - \eta_t g_j(w_j^{(t)}, \xi_j^{(t)}) \right) \triangleq w^{(t+1)} & \text{for } (t+1) \bmod \tau = 0 \end{cases} \qquad (2)$$
where $w_k^{(t+1)}$ denotes the local model parameters of client k at iteration t + 1, $\eta_t$ is the learning rate, and $g_k(w_k^{(t)}, \xi_k^{(t)}) = \frac{1}{b} \sum_{\xi \in \xi_k^{(t)}} \nabla f(w_k^{(t)}, \xi)$ is the stochastic gradient over the mini-batch $\xi_k^{(t)}$ of size b that is randomly sampled from client k's local dataset $B_k$. Moreover, $w^{(t+1)}$ denotes the global model at the server. Although $w^{(t)}$ is only updated after every τ iterations, for the purpose of convergence analysis we consider a virtual sequence of $w^{(t)}$ that is updated at each iteration as follows:
$$w^{(t+1)} = w^{(t)} - \eta_t g^{(t)} = w^{(t)} - \eta_t \frac{1}{m} \sum_{k \in S^{(t)}} g_k(w_k^{(t)}, \xi_k^{(t)}) \qquad (3)$$
with $g^{(t)} = \frac{1}{m} \sum_{k \in S^{(t)}} g_k(w_k^{(t)}, \xi_k^{(t)})$. Note that in (2) and (3) we do not weight the client models by their dataset fractions $p_k$, because $p_k$ is considered in the client selection scheme used to decide the set $S^{(t)}$. Our convergence analysis can be generalized to the case where the global model is a weighted average instead of a simple average of client models, and we show in Appendix E that our convergence analysis also covers the scheme of Li et al. (2020) that samples clients uniformly at random without replacement. The set $S^{(t)}$ can be sampled either with or without replacement. For sampling with replacement, we assume that multiple copies of the same client in the set $S^{(t)}$ behave as different clients, that is, they perform local updates independently.
Client Selection Strategy. To guarantee that FedAvg converges to the stationary points of the objective function (1), most current analysis frameworks (Li et al., 2020; Karimireddy et al., 2019; Wang et al., 2020) consider a strategy that selects the set $S^{(t)}$ by sampling m clients at random (with replacement) such that client k is selected with probability $p_k$, the fraction of data at that client. This sampling scheme is unbiased since it ensures that, in expectation, the update rule (3) is the same as under full client participation. Hence, it enjoys the same convergence properties as local-update SGD methods (Stich, 2018; Wang & Joshi, 2018). We denote this unbiased random client selection strategy as πrand.
In this paper, we consider a class of biased client selection strategies that are cognizant of the global training progress, which (to the best of our knowledge) has not been studied before. Note that for any aggregation scheme and sampling scheme with partial client participation, if the expectation of the global model's update rule over the sampling scheme is equal to the update rule under full client participation, we refer to this as an unbiased client participation scheme. For example, in Horváth & Richtárik (2020), the update rule is unbiased even with a biased sampling scheme, owing to the normalizing aggregation. Hence, our analysis encompasses both biased and unbiased update rules. In the two-client example in Figure 1, we set $S^{(t+1)} = \arg\max_{k \in [K]} F_k(w^{(t)})$, the single client with the highest local loss at the current global model. In this toy example, the selection strategy cannot guarantee that the update (3) equals the full-client-participation update in expectation. Nevertheless, it gives faster convergence to the global minimum than the random strategy.
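To make the update rule (2) and the role of the selection strategy concrete, the following is a minimal NumPy sketch of the training loop with a pluggable selection function, using quadratic local objectives in the spirit of the quadratic experiments in Appendix F. It is an illustrative sketch rather than the authors' implementation: the constants, the uniform data fractions, and the function names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, m, tau, eta, v = 30, 3, 2, 2e-5, 5      # clients, selected per round, local steps, lr, dim
H = rng.uniform(1, 20, size=K)             # curvature h_k of each local objective F_k
E = rng.normal(size=(K, v))                # linear term e_k of each local objective F_k
p = np.full(K, 1.0 / K)                    # data fractions p_k (uniform for simplicity)

def local_loss(k, w):
    # F_k(w) = 0.5 * h_k * ||w - w_k*||^2, whose minimum value is 0 at w_k* = e_k / h_k
    return 0.5 * H[k] * np.sum((w - E[k] / H[k]) ** 2)

def select_rand(w):
    # Unbiased pi_rand: sample m clients with replacement, in proportion to p_k
    return rng.choice(K, size=m, replace=True, p=p)

def select_argmax_loss(w):
    # The toy biased strategy of Figure 1: the single client with the highest local loss
    return np.array([max(range(K), key=lambda k: local_loss(k, w))])

def run_round(w, select):
    chosen = select(w)
    updated = []
    for k in chosen:
        w_k = w.copy()
        for _ in range(tau):                  # tau local (full-gradient) steps
            w_k -= eta * (H[k] * w_k - E[k])  # grad F_k(w) = h_k * w - e_k
        updated.append(w_k)
    return np.mean(updated, axis=0)           # simple average of local models, as in (2)

w = np.zeros(v)
for _ in range(2000):
    w = run_round(w, select_rand)             # swap in select_argmax_loss to compare
```

Running the loop with select_argmax_loss in place of select_rand reproduces the qualitative behavior of the toy example above: faster initial progress toward the global minimum, at the risk of converging to a biased point.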
Motivated by this observation, we define a client selection strategy π as a function that maps the current global model w to a selected set of clients S(π,w)." }, { "heading": "3 CONVERGENCE ANALYSIS", "text": "In this section we analyze the convergence of federated averaging with partial device participation for any client selection strategy π as defined above. This analysis reveals that biased client selection can give faster convergence, albeit at the risk of having a non-vanishing gap between the true optimum $w^* = \arg\min_w F(w)$ and $\lim_{t \to \infty} w^{(t)}$. We use this insight in Section 4 to design client selection strategies that strike a balance between convergence speed and bias." }, { "heading": "3.1 ASSUMPTIONS AND DEFINITIONS", "text": "First we introduce the assumptions and definitions utilized for our convergence analysis.
Assumption 3.1. $F_1, \dots, F_K$ are all L-smooth, i.e., for all v and w, $F_k(v) \leq F_k(w) + (v - w)^T \nabla F_k(w) + \frac{L}{2} \|v - w\|_2^2$.
Assumption 3.2. $F_1, \dots, F_K$ are all µ-strongly convex, i.e., for all v and w, $F_k(v) \geq F_k(w) + (v - w)^T \nabla F_k(w) + \frac{\mu}{2} \|v - w\|_2^2$.
Assumption 3.3. For the mini-batch $\xi_k$ sampled uniformly at random from $B_k$ of user k, the resulting stochastic gradient is unbiased, that is, $\mathbb{E}[g_k(w_k, \xi_k)] = \nabla F_k(w_k)$. Also, the variance of stochastic gradients is bounded: $\mathbb{E}\|g_k(w_k, \xi_k) - \nabla F_k(w_k)\|^2 \leq \sigma^2$ for all $k = 1, \dots, K$.
Assumption 3.4. The stochastic gradient's expected squared norm is uniformly bounded, i.e., $\mathbb{E}\|g_k(w_k, \xi_k)\|^2 \leq G^2$ for $k = 1, \dots, K$.
The above assumptions are common in the related literature, see (Stich, 2018; Basu et al., 2019; Li et al., 2020; Ruan et al., 2020). Next, we introduce two metrics, the local-global objective gap and the selection skew, which feature prominently in the convergence analysis presented in Theorem 3.1.
Definition 3.1 (Local-Global Objective Gap). For the global optimum $w^* = \arg\min_w F(w)$ and local optima $w^*_k = \arg\min_w F_k(w)$, we define the local-global objective gap as
$$\Gamma \triangleq F^* - \sum_{k=1}^{K} p_k F^*_k = \sum_{k=1}^{K} p_k \left( F_k(w^*) - F_k(w^*_k) \right) \geq 0. \qquad (4)$$
Note that Γ is an inherent property of the local and global objective functions, and it is independent of the client selection strategy. This definition was introduced in previous literature by Li et al. (2020). A larger Γ implies higher data heterogeneity. If Γ = 0, then the local and global optimal values are consistent, and there is no solution bias due to the client selection strategy (see Theorem 3.1). Next, we define another metric called the selection skew, which captures the effect of the client selection strategy on the local-global objective gap.
Definition 3.2 (Selection Skew). For any $k \in S(\pi, w)$ we define
$$\rho(S(\pi, w), w') = \frac{\mathbb{E}_{S(\pi, w)}\left[\frac{1}{m} \sum_{k \in S(\pi, w)} \left( F_k(w') - F^*_k \right)\right]}{F(w') - \sum_{k=1}^{K} p_k F^*_k} \geq 0, \qquad (5)$$
which reflects the skew of a client selection strategy π. The first w in ρ(S(π,w),w′) is the parameter vector that governs the client selection, and w′ is the point at which $F_k$ and F in the numerator and denominator, respectively, are evaluated. Note that $\mathbb{E}_{S(\pi,w)}[\cdot]$ is the expectation over the randomness from the selection strategy π, since there can be multiple sets S that π can map a specific w to.
Since ρ(S(π,w),w′) is a function of versions of the global model w and w′, which change during training, we define two related metrics that are independent of w and w′. These metrics enable us to obtain a conservative error bound in the convergence analysis:
$$\rho \triangleq \min_{w, w'} \rho(S(\pi, w), w'), \qquad \tilde{\rho} \triangleq \max_{w} \rho(S(\pi, w), w^*) \qquad (6)$$
where $w^* = \arg\min_w F(w)$.
From (6), we have $\rho \leq \tilde{\rho}$ for any client selection strategy π.
Effect of the Client Selection Strategy on ρ and ρ̃. For the unbiased client selection strategy πrand, we have $\rho(S(\pi_{\mathrm{rand}}, w), w') = 1$ for all w and w′, since the numerator and denominator of (5) become equal, and hence $\rho = \tilde{\rho} = 1$. For a client selection strategy π that chooses clients with higher $F_k(w)$ more often, ρ and ρ̃ will be larger (and ≥ 1). In the convergence analysis we show that a larger ρ implies faster convergence, albeit with a potential error gap, which is proportional to $(\tilde{\rho}/\rho - 1)$. Motivated by this, in Section 4 we present an adaptive client selection strategy that prefers selecting clients with higher loss $F_k(w)$ and achieves faster convergence with low solution bias." }, { "heading": "3.2 MAIN CONVERGENCE RESULT", "text": "Here, we present the convergence results for any client selection strategy π for federated averaging with partial device participation, in terms of the local-global objective gap Γ and the selection skew ρ, ρ̃.
Theorem 3.1 (Convergence with Decaying Learning Rate). Under Assumptions 3.1 to 3.4, for learning rate $\eta_t = \frac{1}{\mu(t+\gamma)}$ with $\gamma = \frac{4L}{\mu}$, and any client selection strategy π, the error after T iterations of federated averaging with partial device participation satisfies
$$\mathbb{E}[F(w^{(T)})] - F^* \leq \underbrace{\frac{1}{T+\gamma}\left[\frac{4L(32\tau^2 G^2 + \sigma^2/m)}{3\mu^2 \rho} + \frac{8L^2 \Gamma}{\mu^2} + \frac{L\gamma \|w^{(0)} - w^*\|^2}{2}\right]}_{\text{Vanishing Error Term}} + \underbrace{\frac{8L\Gamma}{3\mu}\left(\frac{\tilde{\rho}}{\rho} - 1\right)}_{\text{Non-vanishing bias, } Q(\rho, \tilde{\rho})} \qquad (7)$$
To the best of our knowledge, Theorem 3.1 provides the first convergence analysis of federated averaging with a biased client selection strategy π. We also show the result for a fixed learning rate in Appendix A. The proof is presented in Appendix C. The first part of our proof follows techniques presented by Li et al. (2020). Then we introduce the novel concept of selection skew to the proof, and analyze the effect of biased client selection strategies, which has not been seen before in previous literature. We highlight that our convergence result is a general analysis that is applicable to any selection strategy π that is cognizant of the training progress. In the following paragraphs, we discuss the effects of the two terms in (7) in detail.
Large ρ and Faster Convergence. A key insight from Theorem 3.1 is that a larger selection skew ρ results in faster convergence at the rate $O(\frac{1}{T\rho})$. Note that since we obtain ρ (defined in (6)) by taking a minimum of the selection skew ρ(S(π,w),w′) over w, w′, this is a conservative bound on the true convergence rate. In practice, since the selection skew ρ(S(π,w),w′) changes during training depending on the current global model w and the local models w′, the true convergence rate can be improved by a factor at least as large as ρ.
Non-vanishing Bias Term. The second term $Q(\rho, \tilde{\rho}) = \frac{8L\Gamma}{3\mu}\left(\frac{\tilde{\rho}}{\rho} - 1\right)$ in (7) denotes the solution bias, which is dependent on the selection strategy. By the definitions of ρ and ρ̃, it follows that $\tilde{\rho} \geq \rho$, which implies that $Q(\rho, \tilde{\rho}) \geq 0$. For an unbiased selection strategy, we have $\rho = \tilde{\rho} = 1$ and $Q(\rho, \tilde{\rho}) = 0$, and hence (7) recovers the previous bound for the unbiased selection strategy of Li et al. (2020). For ρ > 1, while we gain a faster convergence rate by a factor of ρ, we cannot guarantee $Q(\rho, \tilde{\rho}) = 0$. Thus, there is a trade-off between the convergence speed and the solution bias.
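Since ρ(S(π,w),w′) in (5) is an expectation over the randomness of the selection strategy, it can be estimated numerically by repeated sampling; this is essentially how the ρ and ρ̃ estimates in Appendix F are produced. A minimal sketch follows (illustrative only; the `select` callable is assumed to be already conditioned on the model w that governs selection):

```python
import numpy as np

def estimate_skew(select, F_eval, F_star, p, n_samples=10_000):
    """Monte Carlo estimate of rho(S(pi, w), w') as defined in (5).

    select    : callable returning an index array of the m selected clients
                (already conditioned on the model w governing the selection)
    F_eval[k] : F_k(w') evaluated at the evaluation point w'
    F_star[k] : local optimum value F_k^*
    p[k]      : data fraction p_k
    """
    draws = np.empty(n_samples)
    for i in range(n_samples):
        S = select()
        draws[i] = np.mean(F_eval[S] - F_star[S])        # (1/m) * sum over k in S
    numerator = draws.mean()                             # E_S[(1/m) sum (F_k(w') - F_k^*)]
    denominator = np.dot(p, F_eval) - np.dot(p, F_star)  # F(w') - sum_k p_k F_k^*
    return numerator / denominator
```

The conservative quantities ρ and ρ̃ in (6) additionally require extremizing this estimate over w and w′ (respectively, fixing w′ = w∗), which can be approximated by a grid search as described in Appendix F.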
Later in the experimental results, we show that even with biased selection strategies, the term $\tilde{\rho}/\rho - 1$ in $Q(\rho, \tilde{\rho})$ can be close to 0, and hence $Q(\rho, \tilde{\rho})$ has a negligible effect on the final error floor." }, { "heading": "4 PROPOSED POWER-OF-CHOICE CLIENT SELECTION STRATEGY", "text": "From (5) and (6) we discover that a selection strategy π that prefers clients with larger $F_k(w) - F^*_k$ will result in a larger ρ, yielding faster convergence. Using this insight, a naive client selection strategy would be to choose the clients with the highest local loss $F_k(w)$. However, a larger selection skew ρ may result in a larger $\tilde{\rho}/\rho$, i.e., a larger non-vanishing error term. This naive selection strategy has another drawback – to find the current local loss $F_k(w)$, it requires sending the current global model to all K clients and having them evaluate $F_k$ and send it back. This additional communication and computation cost can be prohibitively high because the number of clients K is typically very large, and these clients have limited communication and computation capabilities.
In this section, we use these insights regarding the trade-off between convergence speed, solution bias, and communication/computation overhead to propose the POWER-OF-CHOICE client selection strategy. POWER-OF-CHOICE is based on the power-of-d-choices load balancing strategy (Mitzenmacher, 1996), which is extensively used in queueing systems. In the POWER-OF-CHOICE client selection strategy (denoted by πpow-d), the central server chooses the active client set $S^{(t)}$ as follows:
1. Sample the Candidate Client Set. The central server samples a candidate set A of d (m ≤ d ≤ K) clients without replacement such that client k is chosen with probability $p_k$, the fraction of data at the k-th client, for k = 1, . . . , K.
2. Estimate Local Losses. The server sends the current global model $w^{(t)}$ to the clients in set A, and these clients compute and send back to the central server their local losses $F_k(w^{(t)})$.
3. Select Highest Loss Clients. From the candidate set A, the central server constructs the active client set $S^{(t)}$ by selecting the m = max(CK, 1) clients with the largest values $F_k(w^{(t)})$, with ties broken at random. These m clients participate in the training during the next round, consisting of iterations t + 1, t + 2, . . . , t + τ.
Variations of πpow-d. The three steps of πpow-d can be flexibly modified to take practical considerations into account. For example, intermittent client availability can be accounted for in step 1 by constructing the set A only from the set of available clients in that round. We demonstrate the performance of πpow-d with intermittent client availability in Appendix G.3. The local computation cost and server-client communication cost in step 2 can be reduced or eliminated by the following proposed variants of πpow-d (see Appendix F for their pseudo-codes).
• Computation-efficient Variant πcpow-d: To save local computation cost, instead of evaluating $F_k(w)$ by going through the entire local dataset $B_k$, we use the estimate $\sum_{\xi \in \hat{\xi}_k} f(w, \xi) / |\hat{\xi}_k|$, where $\hat{\xi}_k$ is a mini-batch of b samples sampled uniformly at random from $B_k$.
• Communication- and Computation-efficient Variant πrpow-d: To save both local computation and communication cost, the clients selected in each round send their accumulated averaged loss over the local iterations, i.e., $\frac{1}{\tau |\xi_k^{(l)}|} \sum_{l=t-\tau+1}^{t} \sum_{\xi \in \xi_k^{(l)}} f(w_k^{(l)}, \xi)$, when they send their local models to the server.
The server uses the latest received value from each client as a proxy for $F_k(w)$ to select the clients. For the clients that have not been selected yet, the latest value is set to ∞.
• Adaptive Selection Skew Variant πadapow-d: To minimize the non-vanishing bias term in Theorem 3.1 while simultaneously gaining the convergence speed benefit from ρ, we gradually reduce d until d = m¹. This enables a convergence speed-up in the initial training phase, while eventually diminishing the non-vanishing bias term when d = m. Which d to start with and how gradually we decrease d to m are flexible choices, analogous to setting the environment and hyper-parameters.
Selection Skew of POWER-OF-CHOICE Strategy. The size d of the candidate client set A is an important parameter which controls the trade-off between convergence speed and solution bias. With d = m we have random sampling without replacement in proportion to $p_k$. As d increases, the selection skew ρ increases, giving faster error convergence at the risk of a higher error floor. However, note that the convergence analysis replaces ρ(w,w′) with ρ to get a conservative error bound. In practice, the convergence speed and the solution bias are dictated by $\rho(w^{(\tau \lfloor t/\tau \rfloor)}, w^{(t)})$, which changes during training. With πpow-d, which is biased towards higher local losses, we expect the selection skew ρ(w,w′) to decrease through the course of training. We conjecture that this is why πpow-d gives faster convergence as well as little or no solution bias in our experiments presented in Section 5.
¹ Setting d = m makes our proposed POWER-OF-CHOICE strategy analogous to an unbiased sampling strategy, which has no non-vanishing bias term." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "We evaluate our proposed πpow-d and its practical variants πcpow-d, πrpow-d, and πadapow-d by three sets of experiments: (1) quadratic optimization, (2) logistic regression on a synthetic federated dataset, Synthetic(1,1) (Sahu et al., 2019), and (3) a DNN trained on a non-iid partitioned FMNIST dataset (Xiao et al., 2017). We also benchmark the selection strategy proposed by Goetz et al. (2019), active federated learning, denoted as πafl. Details of the experimental setup are provided in Appendix F, and the code for all experiments is shared in the supplementary material. To validate the consistency of our results, we present additional experiments with a DNN trained on a non-iid partitioned EMNIST (Cohen et al., 2017) dataset sorted by digits with K = 500 clients. We present the results in Appendix G.4.
Quadratic and Synthetic Simulation Results. In Figure 2(a), even with few clients (K = 30), πpow-d converges faster than πrand with nearly negligible solution bias for small d. The convergence speed increases with the increase in d, at the cost of a higher error floor due to the solution bias. For K = 100 in Figure 2(b), πpow-d shows a convergence speed-up as with K = 30, but the bias is smaller. Figure 3 shows the theoretical values ρ and ρ̃/ρ, which represent the convergence speed and the solution bias, respectively, in our convergence analysis. Compared to πrand, πpow-d has higher ρ for all d, implying a higher convergence speed than πrand. By varying d we can span different points on the trade-off between the convergence speed and bias. For d = 15 and K = 100, the ρ̃/ρ of πpow-d and πrand are approximately identical, but πpow-d has higher ρ, implying that πpow-d can yield higher convergence speed with negligible solution bias.
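For reference, the three-step πpow-d selection of Section 4, which experiments like these plug into the FedAvg loop, can be written as a short function. This is a sketch under stated assumptions, not the authors' code: `local_loss` stands for the exact loss evaluation of step 2, and ties are broken implicitly by the random order of the candidate set.

```python
import numpy as np

def pow_d_select(local_loss, p, m, d, rng):
    """One round of pi_pow-d selection (Section 4); names are illustrative."""
    K = len(p)
    # Step 1: candidate set A of d clients (m <= d <= K), sampled without
    # replacement with probability proportional to the data fractions p_k.
    A = rng.choice(K, size=d, replace=False, p=p)
    # Step 2: the server sends the global model to the candidates in A,
    # which evaluate and report their local losses F_k(w).
    losses = np.array([local_loss(k) for k in A])
    # Step 3: keep the m highest-loss candidates as the active set S.
    return A[np.argsort(-losses)[:m]]
```

Setting d = m recovers sampling in proportion to $p_k$ without replacement, while d = K recovers the greedy highest-loss selection; intermediate values of d span the trade-off discussed above.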
In Appendix G.1, we present the frequency with which each client is selected under πpow-d and πrand, which gives novel insights regarding the difference between the two strategies. For the synthetic dataset simulations, we present the global losses in Figure 4 for πrand and πpow-d for different d and m. We show that πpow-d converges to a global loss of ≈ 0.7 approximately 3× faster than πrand when d = 10m, with a slightly higher error floor. Even with d = 2m, we get 2× faster convergence to a global loss of ≈ 0.7 than πrand.
Elimination of Selection Skew with πadapow-d. For πpow-d, the selection skew is the price paid for the convergence speed gain in Figure 2 and Figure 4. For both simulations, πpow-d converges slightly above the global minimum value due to the selection skew. We eliminate this selection skew while maintaining the convergence speed benefit with πadapow-d. In Figure 2(a)-(b), πadapow-d shows a convergence speed similar to πpow-d with d = K, but has no selection skew, converging to the same minimum as πrand (see Figure 2(c)). In Figure 4, πadapow-d again shows a convergence speed similar to πpow-d with d = 10m, but has no adversarial selection skew. As a matter of fact, πadapow-d converges to the minimum global loss value at least 3× faster than πrand. Hence πadapow-d gains the best of both worlds from biased client selection: convergence speed and elimination of the selection skew.
Experiments with Heterogeneously Distributed FMNIST. As elaborated in Appendix F, α determines the data heterogeneity across clients. A smaller α indicates larger data heterogeneity. In Figure 5, we present the test accuracy and training losses for the different sampling strategies from the FMNIST experiments with α = 0.3 and α = 2. Observe that πpow-d achieves approximately 10% and 5% higher test accuracy than πrand and πafl, respectively, for both α = 2 and α = 0.3. For higher α (less data heterogeneity), larger d (more selection skew) performs better than smaller d.
Figure 5(a) shows that this performance improvement due to the increase of d eventually saturates. For smaller α, as in Figure 5(b), the smaller d = 6 performs better than larger d, which shows that too much solution bias hurts performance in the presence of large data heterogeneity. The observations on training loss are consistent with the test accuracy results.
Performance of the Communication- and Computation-Efficient Variants. Next, we evaluate πcpow-d and πrpow-d, which were introduced in Section 4. In Figure 6, for α = 2, πrpow-d and πcpow-d yield approximately 5% and 6% higher accuracy than πrand, respectively, but both yield lower accuracy than πpow-d, which utilizes the most computation and communication resources. For α = 0.3, πcpow-d and πrpow-d perform as well as πpow-d and give a 10% accuracy improvement over πrand. Moreover, πpow-d, πrpow-d, and πcpow-d all have higher accuracy and faster convergence than πafl.
We evaluate the communication and computation efficiency of POWER-OF-CHOICE by comparing different strategies in terms of R60, the number of communication rounds required to reach a test accuracy of 60%, and tcomp, the average computation time (in seconds) spent per round. The computation time includes the time taken by the central server to select the clients (including the computation time for the d clients to compute their local loss values) and the time taken by the selected clients to perform local updates.
In Table 1, with only a fraction C = 0.03 of clients, πpow-d, πcpow-d, and πrpow-d have about 5% higher test accuracy than (πrand, C = 0.1). The R60 values for πpow-d, πcpow-d, and πrpow-d are 0.52, 0.47, and 0.57 times that of (πrand, C = 0.1), respectively. This implies that even for πrpow-d, which does not incur any additional communication cost for client selection, we can get a 2× reduction in the number of communication rounds using 1/3 of the clients compared to (πrand, C = 0.1), and still achieve higher test accuracy. Note that the computation time tcomp for πcpow-d and πrpow-d with C = 0.03 is smaller than that of πrand with C = 0.1. In Appendix G.2, we show that the results for α = 2 are consistent with the α = 0.3 case shown in Table 1. In Appendix G.5, we also show that for C = 0.1, the results are consistent with the C = 0.03 case.
Effect of Mini-batch Size and Local Epochs. We evaluate the effect of the mini-batch size b and the number of local epochs τ on the FMNIST experiments with different sets of hyper-parameters: (b, τ) ∈ {(128, 30), (64, 100)}. Note that (b, τ) = (64, 30) is the default hyper-parameter setting for the previous results. The figures are presented in Appendix G.6. For b = 128, we observe that the performance improvement of πpow-d over πrand and πafl is consistent with b = 64 (see Figure 12). In Figure 14, for τ = 100, with smaller data heterogeneity, the performance gap between πrand and πpow-d is consistent with that of τ = 30. For larger data heterogeneity, however, increasing the local epochs results in πrand and πpow-d performing similarly. This shows that with larger data heterogeneity, larger τ increases the selection skew towards specific clients and weakens generalization." }, { "heading": "6 CONCLUDING REMARKS", "text": "In this work, we present convergence guarantees for federated learning with partial device participation under any biased client selection strategy. We discover that biasing client selection can speed up the convergence at the rate $O(\frac{1}{T\rho})$, where ρ is the selection skew towards clients with higher local losses. Motivated by this insight, we propose the adaptive client selection strategy POWER-OF-CHOICE. Extensive experiments validate that POWER-OF-CHOICE yields 3× faster convergence and 10% higher test accuracy than the baseline federated averaging with random selection. Even when using fewer clients than random selection, POWER-OF-CHOICE converges 2× faster with high test performance. An interesting future direction is to improve the fairness (Li et al., 2019; Yu et al., 2020; Lyu et al., 2020; Mohri et al., 2019) and robustness (Pillutla et al., 2019) of the POWER-OF-CHOICE strategy by modifying step 3 of the POWER-OF-CHOICE algorithm to use a different metric, such as the clipped loss or the q-fair loss proposed by Li et al. (2019), instead of $F_k(w)$." }, { "heading": "A ADDITIONAL THEOREM", "text": "Theorem A.1 (Convergence with Fixed Learning Rate). Under Assumptions 3.1 to 3.4, for a fixed learning rate $\eta \leq \min\{\frac{1}{2\mu B}, \frac{1}{4L}\}$ where $B = 1 + \frac{3\rho}{8}$, and any client selection strategy π as defined above, the error after T iterations of federated averaging with partial device participation satisfies
$$F(w^{(T)}) - F^* \leq \underbrace{\frac{L}{\mu}\left[1 - \eta\mu\left(1 + \frac{3\rho}{8}\right)\right]^T \left[F(w^{(0)}) - F^* - \frac{4\left[\eta\left(32\tau^2 G^2 + \frac{\sigma^2}{m} + 6\rho L\Gamma\right) + 2\Gamma(\tilde{\rho} - \rho)\right]}{8 + 3\rho}\right]}_{\text{Vanishing Term}} + \underbrace{\frac{4L\eta\left(32\tau^2 G^2 + \frac{\sigma^2}{m} + 6\rho L\Gamma\right)}{\mu(8 + 3\rho)} + \frac{8L\Gamma(\tilde{\rho} - \rho)}{\mu(8 + 3\rho)}}_{\text{Non-vanishing bias}} \qquad (8)$$
As T → ∞, the first term in (8) goes to 0 and the second term becomes the bias term for the fixed learning rate case.
For a small η, we have that the bias term for the fixed learning rate case in Theorem A.1 is upper bounded by 8LΓ3µ ( ρ̃ ρ − 1 ) which is identical to the decaying-learning rate case. The proof is presented in Appendix D." }, { "heading": "B PRELIMINARIES FOR PROOF OF THEOREM 3.1 AND THEOREM A.1", "text": "We present the preliminary lemmas used for proof of Theorem 3.1 and Theorem A.1. We will denote the expectation over the sampling random source S(t) as ES(t) and the expectation over all the random sources as E. Lemma B.1. Suppose Fk is L−smooth with global minimum at w∗k, then for any wk in the domain of Fk, we have that\n‖∇Fk(wk)‖2 ≤ 2L(Fk(wk)− Fk(w∗k)) (9)" }, { "heading": "Proof.", "text": "Fk(wk)− Fk(w∗k)− 〈∇Fk(w∗k),wk −w∗k〉 ≥ 1 2L ‖∇Fk(wk)−∇Fk(w∗k)‖2 (10)\nFk(wk)− Fk(w∗k) ≥ 1\n2L ‖∇Fk(wk)‖2 (11)\nLemma B.2 (Expected average discrepancy between w(t) and w(t)k for k ∈ S(t)). 1 m E[ ∑ k∈S(t) ‖w(t) −w(t)k ‖ 2] ≤ 16η2t τ2G2 (12)\nProof.\n1\nm ∑ k∈S(t) ‖w(t) −w(t)k ‖ 2 = 1 m ∑ k∈S(t) ‖ 1 m ∑ k′∈S(t) (w (t) k′ −w (t) k )‖ 2 (13)\n≤ 1 m2 ∑ k∈S(t) ∑ k′∈S(t) ‖w(t)k′ −w (t) k ‖ 2 (14)\n= 1\nm2 ∑ k 6=k′,\nk,k′∈S(t)\n‖w(t)k′ −w (t) k ‖ 2 (15)\nObserve from the update rule that k, k′ are in the same set S(t) and hence the terms where k = k′ in the summation in (14) will be zero resulting in (15). Moreover for any arbitrary t there is a t0 such that 0 ≤ t− t0 < τ that w(t0)k′ = w (t0) k since the selected clients are updated with the global model at every τ . Hence even for an arbitrary t we have that the difference between ‖w(t)k′ −w (t) k ‖2 is upper bounded by τ updates. With non-increasing ηt over t and ηt0 ≤ 2ηt, (15) can be further bounded as,\n1\nm2 ∑ k 6=k′,\nk,k′∈S(t)\n‖w(t)k′ −w (t) k ‖ 2 ≤ 1 m2 ∑ k 6=k′,\nk,k′∈S(t)\n‖ t0+τ−1∑ i=t0 ηi(gk′(w (i) k′ , ξ (i) k′ )− gk(w (i) k , ξ (i) k ))‖ 2 (16)\n≤ η2t0τ\nm2 ∑ k 6=k′,\nk,k′∈S(t)\nt0+τ−1∑ i=t0 ‖(gk′(w(i)k′ , ξ (i) k′ )− gk(w (i) k , ξ (i) k ))‖ 2 (17)\n≤ η2t0τ\nm2 ∑ k 6=k′,\nk,k′∈S(t)\nt0+τ−1∑ i=t0 [2‖gk′(w(i)k′ , ξ (i) k′ )‖ 2 + 2‖gk(w(i)k , ξ (i) k )‖ 2] (18)\nBy taking expectation over (18),\nE[ 1\nm2 ∑ k 6=k′,\nk,k′∈S(t)\n‖w(t)k′ −w (t) k ‖\n2] ≤ 2η2t0τ\nm2 E[ ∑ k 6=k′,\nk,k′∈S(t)\nt0+τ−1∑ i=t0 (‖gk′(w(i)k′ , ξ (i) k′ )‖ 2 + ‖gk(w(i)k , ξ (i) k )‖ 2)]\n(19)\n≤ 2η2t0τ\nm2 ES(t) [ ∑ k 6=k′,\nk,k′∈S(t)\nt0+τ−1∑ i=t0 2G2] (20)\n= 2η2t0τ\nm2 ES(t) [ ∑ k 6=k′,\nk,k′∈S(t)\n2τG2] (21)\n≤ 16η 2 t (m− 1)τ2G2\nm (22)\n≤ 16η2t τ2G2 (23)\nwhere (22) is because there can be at most m(m− 1) pairs such that k 6= k′ in S(t).\nLemma B.3 (Upper bound for expectation over ‖w(t)−w∗‖2 for any selection strategy π). 
With E[·], the total expectation over all random sources including the random source from selection strategy we have the upper bound:\nE[‖w(t) −w∗‖2] ≤ 1 m E[ ∑ k∈S(t) ‖w(t)k −w ∗‖2] (24)" }, { "heading": "Proof.", "text": "E[‖w(t) −w∗‖2] = E[‖ 1 m ∑ k∈S(t) w (t) k −w ∗‖2] = E[‖ 1 m ∑ k∈S(t) (w (t) k −w ∗)‖2] (25)\n≤ 1 m E[ ∑ k∈S(t) ‖w(t)k −w ∗‖2] (26)" }, { "heading": "C PROOF OF THEOREM 3.1", "text": "With g(t) = 1m ∑ k∈S(t) gk(w (t) k , ξ (t) k ) as defined in Section 2, we have that\n‖w(t+1) −w∗‖2 =‖w(t) − ηtg(t) −w∗‖2 (27)\n=‖w(t) − ηtg(t) −w∗ − ηt m ∑ k∈S(t) ∇Fk(w(t)k ) + ηt m ∑ k∈S(t) ∇Fk(w(t)k )‖ 2 (28)\n=‖w(t) −w∗ − ηt m ∑ k∈S(t) ∇Fk(w(t)k )‖ 2 + η2t ‖ 1 m ∑ k∈S(t) ∇Fk(w(t)k )− g (t)‖2\n+ 2ηt〈w(t) −w∗ − ηt m ∑ k∈S(t) ∇Fk(w(t)k ), 1 m ∑ k∈S(t) ∇Fk(w(t)k )− g (t)〉 (29)\n=‖w(t) −w∗‖2−2ηt〈w(t) −w∗, 1\nm ∑ k∈S(t)\n∇Fk(w(t)k )〉︸ ︷︷ ︸ A1\n+ 2ηt〈w(t) −w∗ − ηt m ∑ k∈S(t) ∇Fk(w(t)k ), 1 m ∑ k∈S(t) ∇Fk(w(t)k )− g (t)〉\n︸ ︷︷ ︸ A2\n+ η2t ‖ 1\nm ∑ k∈S(t) ∇Fk(w(t)k )‖ 2\n︸ ︷︷ ︸ A3\n+ η2t ‖ 1\nm ∑ k∈S(t) ∇Fk(w(t)k )− g (t)‖2\n︸ ︷︷ ︸ A4\n(30)\nFirst let’s bound A1.\n− 2ηt〈w(t) −w∗, 1\nm ∑ k∈S(t) ∇Fk(w(t)k )〉 = − 2ηt m ∑ k∈S(t) 〈w(t) −w∗,∇Fk(w(t)k )〉 (31)\n= −2ηt m ∑ k∈S(t) 〈w(t) −w(t)k ,∇Fk(w (t) k )〉 − 2ηt m ∑ k∈S(t) 〈w(t)k −w ∗,∇Fk(w(t)k )〉 (32)\n≤ ηt m ∑ k∈S(t) ( 1 ηt ‖w(t) −w(t)k ‖ 2 + ηt‖∇Fk(w(t)k )‖ 2 ) − 2ηt m ∑ k∈S(t) 〈w(t)k −w ∗,∇Fk(w(t)k )〉\n(33)\n= 1\nm ∑ k∈S(t) ‖w(t) −w(t)k ‖ 2 + η2t m ∑ k∈S(t) ‖∇Fk(w(t)k )‖ 2 − 2ηt m ∑ k∈S(t) 〈w(t)k −w ∗,∇Fk(w(t)k )〉\n(34)\n≤ 1 m ∑ k∈S(t) ‖w(t) −w(t)k ‖ 2 + 2Lη2t m ∑ k∈S(t) (Fk(w (t) k )− F ∗ k )\n−2ηt m ∑ k∈S(t) 〈w(t)k −w ∗,∇Fk(w(t)k )〉\n(35)\n≤ 1 m ∑ k∈S(t) ‖w(t) −w(t)k ‖ 2 + 2Lη2t m ∑ k∈S(t) (Fk(w (t) k )− F ∗ k )\n−2ηt m ∑ k∈S(t) [ (Fk(w (t) k )− Fk(w ∗)) + µ 2 ‖w(t)k −w ∗‖2 ] (36)\n≤ 16η2t τ2G2 − ηtµ\nm ∑ k∈S(t) ‖w(t)k −w ∗‖2 + 2Lη 2 t m ∑ k∈S(t) (Fk(w (t) k )− F ∗ k )\n−2ηt m ∑ k∈S(t) (Fk(w (t) k )− Fk(w ∗)) (37)\nwhere (33) is due to the AM-GM inequality and Cauchy–Schwarz inequality, (35) is due to Lemma B.1, (36) is due to the µ-convexity of Fk, and (37) is due to Lemma B.2. Next, in expectation, E[A2] = 0 due to the unbiased gradient. Next again with Lemma B.1 we bound A3 as follows:\nη2t ‖ 1\nm ∑ k∈S(t) ∇Fk(w(t)k )‖ 2 = η2t m ∑ k∈S(t) ∥∥∥∇Fk(w(t)k )∥∥∥2 (38) ≤ 2Lη 2 t\nm ∑ k∈S(t) (Fk(w (t) k )− F ∗ k ) (39)\nLastly we can bound A4 using the bound of variance of stochastic gradients as,\nE[η2t ‖ 1\nm ∑ k∈S(t) ∇Fk(w(t)k )− g (t)‖2] = η2tE[‖ ∑ k∈S(t) 1 m (gk(w (t) k , ξ (t) k )−∇Fk(w (t) k ))‖ 2] (40)\n= η2t m2 ES(t) [ ∑ k∈S(t) E‖gk(w(t)k , ξ (t) k )−∇Fk(w (t) k )‖ 2] (41)\n≤ η 2 t σ 2\nm (42)\nUsing the bounds ofA1, A2, A3, A4 above we have that the expectation of the LHS of (27) is bounded as\nE[‖w(t+1) −w∗‖2]\n≤E[‖w(t) −w∗‖2]− ηtµ m E[ ∑ k∈S(t) ‖w(t)k −w ∗‖2] + 16η2t τ2G2\n+ η2t σ 2\nm + 4Lη2t m E[ ∑ k∈S(t) (Fk(w (t) k )− F ∗ k )]− 2ηt m E[ ∑ k∈S(t) (Fk(w (t) k )− Fk(w ∗))] (43)\n≤(1− ηtµ)E[‖w(t) −w∗‖2] + 16η2t τ2G2\n+ η2t σ 2\nm + 4Lη2t m E[ ∑ k∈S(t) (Fk(w (t) k )− F ∗ k )]− 2ηt m E[ ∑ k∈S(t) (Fk(w (t) k )− Fk(w ∗))]\n︸ ︷︷ ︸ A5\n(44)\nwhere (44) is due to Lemma B.3. Now we aim to bound A5 in (44). 
First we can represent A5 in a different form as:\nE[ 4Lη2t m ∑ k∈S(t) (Fk(w (t) k )− F ∗ k )− 2ηt m ∑ k∈S(t) (Fk(w (t) k )− Fk(w ∗))]\n=E[ 4Lη2t m ∑ k∈S(t) Fk(w (t) k )− 2ηt m ∑ k∈S(t) Fk(w (t) k )− 2ηt m ∑ k∈S(t) (F ∗k − Fk(w∗))\n+ 2ηt m ∑ k∈S(t) F ∗k − 4Lη2t m ∑ k∈S(t) F ∗k ] (45)\n=E[ 2ηt(2Lηt − 1)\nm\n∑ k∈S(t) (Fk(w (t) k )− F\n∗ k )︸ ︷︷ ︸\nA6\n] + 2ηtE[ 1\nm ∑ k∈S(t) (Fk(w ∗)− F ∗k )] (46)\nNow with ηt < 1/(4L) and νt = 2ηt(1− 2Lηt), we have that A6 can be rewritten and bounded as\n− νt m ∑ k∈S(t) (Fk(w (t) k )− Fk(w (t)) + Fk(w (t))− F ∗k )\n=− νt m ∑ k∈S(t) (Fk(w (t) k )− Fk(w (t)))− νt m ∑ k∈S(t) (Fk(w (t))− F ∗k ) (47) ≤− νt m ∑ k∈S(t) [ 〈∇Fk(w(t)),w(t)k −w (t)〉+ µ 2 ‖w(t)k −w (t)‖2 ] − νt m ∑ k∈S(t) (Fk(w (t))− F ∗k )\n(48)\n≤νt m ∑ k∈S(t) [ ηtL(Fk(w (t))− F ∗k ) + ( 1 2ηt − µ 2 ) ‖w(t)k −w (t)‖2 ] − νt m ∑ k∈S(t) (Fk(w (t))− F ∗k )\n(49)\n=− νt m (1− ηtL) ∑ k∈S(t) (Fk(w (t))− F ∗k ) + ( νt 2ηtm − νtµ 2m ) ∑ k∈S(t) ‖w(t)k −w (t)‖2 (50)\n≤− νt m (1− ηtL) ∑ k∈S(t) (Fk(w (t))− F ∗k ) + 1 m ∑ k∈S(t) ‖w(t)k −w (t)‖2 (51)\nwhere (48) is due to µ−convexity, (49) is due to Lemma B.1 and the AM-GM inequality and Cauchy–Schwarz inequality, and (51) is due to the fact that νt(1−ηtµ)2ηt ≤ 1. Hence using this bound of A6 we can upper bound A5 as\nE[ 4Lη2t m ∑ k∈S(t) (Fk(w (t) k )− F ∗ k )− 2ηt m ∑ k∈S(t) (Fk(w (t) k )− Fk(w ∗))]\n≤ 1 m E[ ∑ k∈S(t) ‖w(t)k −w (t)‖2]− νt m (1− ηtL)E[ ∑ k∈S(t) (Fk(w (t))− F ∗k )]\n+ 2ηt m E[ ∑ k∈S(t) (Fk(w ∗)− F ∗k )] (52)\n≤16η2t τ2G2 − νt m (1− ηtL)E[ ∑ k∈S(t) (Fk(w (t))− F ∗k )] + 2ηt m E[ ∑ k∈S(t) (Fk(w ∗)− F ∗k )] (53)\n=16η2t τ 2G2 − νt(1− ηtL)E[ρ(S(π,w(τbt/τc)),w(t))(F (w(t))− K∑ k=1 pkF ∗ k )]\n+ 2ηtE[ρ(S(π,w(τbt/τc)),w∗)(F ∗ − K∑ k=1 pkF ∗ k )] (54)\n≤16η2t τ2G2−νt(1− ηtL)ρ(E[F (w(t))]− K∑ k=1 pkF ∗ k )︸ ︷︷ ︸\nA7\n+2ηtρ̃Γ (55)\nwhere (54) is due to the definition of ρ(S(π,w),w′) in Definition 3.2 and (55) is due to the definition of Γ in Definition 3.1 and the definitions of ρ, ρ̃ in Definition 3.2. We can expand A7 in (55) as\n− νt(1− ηtL)ρ(E[F (w(t))]− K∑ k=1 pkF ∗ k ) (56)\n=− νt(1− ηtL)ρ K∑ k=1 pk(E[Fk(w(t)]− F ∗ + F ∗ − F ∗k ) (57)\n=− νt(1− ηtL)ρ K∑ k=1 pk(E[Fk(w(t)]− F ∗)− νt(1− ηtL)ρ K∑ k=1 pk(F ∗ − F ∗k ) (58) =− νt(1− ηtL)ρ(E[F (w(t))]− F ∗)− νt(1− ηtL)ρΓ (59)\n≤− νt(1− ηtL)µρ 2 E[‖w(t) −w∗‖2]− νt(1− ηtL)ρΓ (60) ≤− 3ηtµρ 8 E[‖w(t) −w∗‖2]− 2ηt(1− 2Lηt)(1− ηtL)ρΓ (61) ≤− 3ηtµρ 8 E[‖w(t) −w∗‖2]− 2ηtρΓ + 6η2t ρLΓ (62)\nwhere (60) is due to the µ−convexity, (61) is due to −2ηt(1− 2Lηt)(1− ηtL) ≤ − 34ηt, and (62) is due to −(1− 2Lηt)(1− ηtL) ≤ −(1− 3Lηt). 
Hence we can finally bound A5 as\n4Lη2t m E[ ∑ k∈S(t) (Fk(w (t) k )− F ∗ k )− 2ηt m ∑ k∈S(t) (Fk(w (t) k )− Fk(w ∗))]\n≤− 3ηtµρ 8 E[‖w(t) −w∗‖2] + 2ηtΓ(ρ̃− ρ) + η2t (6ρLΓ + 16τ2G2) (63)\nNow we can bound E[‖w(t+1) −w∗‖2] as E[‖w(t+1) −w∗‖2] ≤ [ 1− ηtµ ( 1 + 3ρ\n8\n)] E[‖w(t) −w∗‖2]\n+ η2t\n( 32τ2G2 + σ2\nm + 6ρLΓ\n) + 2ηtΓ(ρ̃− ρ)\n(64)\nBy defining ∆t+1 = E[‖w(t+1)−w∗‖2], B = 1+ 3ρ8 , C = 32τ 2G2 + σ 2\nm +6ρLΓ, D = 2Γ(ρ̃−ρ), we have that\n∆t+1 ≤ (1− ηtµB)∆t + η2tC + ηtD (65)\nBy setting ∆t ≤ ψt+γ , ηt = β t+γ and β > 1 µB , γ > 0 by induction we have that\nψ = max { γ‖w(0) −w∗‖2, 1 βµB − 1 ( β2C +Dβ(t+ γ) )} (66)\nThen by the L-smoothness of F (·), we have that\nE[F (w(t))]− F ∗ ≤ L 2 ∆t ≤ L 2 ψ γ + t (67)" }, { "heading": "D PROOF OF THEOREM A.1", "text": "With fixed learning rate ηt = η, we can rewrite (65) as\n∆t+1 ≤ (1− ηµB)∆t + η2C + ηD (68)\nand with η ≤ min{ 12µB , 1 4L} using recursion of (68) we have that\n∆t ≤ (1− ηµB)t∆0 + η2C + ηD\nηµB (1− (1− ηµB)t) (69)\nUsing ∆t ≤ 2µ (F (w (t))− F ∗) and L-smoothness, we have that\nF (w(t))− F ∗ ≤ L µ (1− ηµB)t(F (w(0))− F ∗) + L(ηC +D) 2µB (1− (1− ηµB)t) (70)\n= L\nµ\n[ 1− ηµ ( 1 + 3ρ\n8\n)]t (F (w(0))− F ∗) + 4L(ηC +D)\nµ(8 + 3ρ)\n[ 1− [ 1− ηµ ( 1 + 3ρ\n8\n)]t] (71)" }, { "heading": "E EXTENSION: GENERALIZATION TO DIFFERENT AVERAGING SCHEMES", "text": "While we considered a simple averaging scheme where w(t+1) = 1m ∑ k∈S(t) ( w (t) k − ηtgk(w (t) k ) )\n, we can extend the averaging scheme to any scheme q such that the averaging weights qk are invariant in time and satisfies ∑ k∈S(t) qk = 1 for any t. Note that q includes the random sampling without replacement scheme introduced by Li et al. (2020) where the clients are sampled uniformly at random without replacement with the averaging coefficients qk = pkK/m. With such averaging scheme q, we denote the global model for the averaging scheme qk as ŵ(t), where ŵ(t+1) ,∑ k∈S(t) qk ( w (t) k − ηtgk(w (t) k ) ) , and the update rule changes to\nŵ(t+1) = ŵ(t) − ηtĝ(t) = ŵ(t) − ηt ∑ k∈S(t) qkgk(w (t) k , ξ (t) k ) (72) where ĝ(t) = ∑ k∈S(t) qkgk(w (t) k , ξ (t) k ). We show that the convergence analysis for the averaging scheme q is consistent with Theorem 3.1. 
In the case of the averaging scheme q, Lemma B.2 and Lemma B.3 shown in Appendix B become, respectively,

\frac{1}{m}\mathbb{E}\Big[\sum_{k\in S^{(t)}}\|\hat w^{(t)}-w_k^{(t)}\|^2\Big]\le 16\eta_t^2m(m-1)\tau^2G^2,  (73)
\mathbb{E}[\|\hat w^{(t)}-w^*\|^2]\le m\,\mathbb{E}\Big[\sum_{k\in S^{(t)}}q_k\|w_k^{(t)}-w^*\|^2\Big].  (74)

Then, using the same method as in the proof of Theorem 3.1, we have that

\mathbb{E}[\|\hat w^{(t+1)}-w^*\|^2]\le\Big(1-\frac{\eta_t\mu}{m}\Big)\mathbb{E}[\|\hat w^{(t)}-w^*\|^2]+\eta_t^2\sigma^2m+16m^2(m-1)\eta_t^2\tau^2G^2
+\underbrace{\mathbb{E}\Big[2L\eta_t^2(1+m)\sum_{k\in S^{(t)}}q_k\big(F_k(w_k^{(t)})-F_k^*\big)-2\eta_t\sum_{k\in S^{(t)}}q_k\big(F_k(w_k^{(t)})-F_k(w^*)\big)\Big]}_{M}.  (75)

By defining the selection skew for the averaging scheme q, analogously to Definition 3.2, as

\rho_q(S(\pi,w),w')=\frac{\mathbb{E}_{S(\pi,w)}\big[\sum_{k\in S(\pi,w)}q_k\big(F_k(w')-F_k^*\big)\big]}{F(w')-\sum_{k=1}^Kp_kF_k^*}\ge 0,  (76)

and

\rho_q\triangleq\min_{w,w'}\rho_q(S(\pi,w),w'),  (77)
\tilde\rho_q\triangleq\max_w\rho_q(S(\pi,w),w^*)=\max_w\frac{\mathbb{E}_{S(\pi,w)}\big[\sum_{k\in S(\pi,w)}q_k\big(F_k(w^*)-F_k^*\big)\big]}{\Gamma},  (78)

with \eta_t<1/(2L(1+m)) and the same methodology as in the proof of Theorem 3.1, M is upper-bounded as

\mathbb{E}\Big[2L\eta_t^2(1+m)\sum_{k\in S^{(t)}}q_k\big(F_k(w_k^{(t)})-F_k^*\big)-2\eta_t\sum_{k\in S^{(t)}}q_k\big(F_k(w_k^{(t)})-F_k(w^*)\big)\Big]  (79)
\le-\frac{\eta_t\mu\rho_q}{2}\,\mathbb{E}[\|\hat w^{(t)}-w^*\|^2]+2\eta_t\Gamma(\tilde\rho_q-\rho_q)+16m^2(m-1)\eta_t^2\tau^2G^2+2L\eta_t^2(2+m)\rho_q\Gamma.  (80)

Finally, we have that

\mathbb{E}[\|\hat w^{(t+1)}-w^*\|^2]\le\Big[1-\eta_t\mu\Big(\frac{1}{m}+\frac{\rho_q}{2}\Big)\Big]\mathbb{E}[\|\hat w^{(t)}-w^*\|^2]+2\eta_t\Gamma(\tilde\rho_q-\rho_q)+\eta_t^2\big[32m^2(m-1)\tau^2G^2+\sigma^2m+2L(2+m)\rho_q\Gamma\big].  (81)

By defining \hat\Delta_{t+1}=\mathbb{E}[\|\hat w^{(t+1)}-w^*\|^2], \hat B=\frac{1}{m}+\frac{\rho_q}{2}, \hat C=32m^2(m-1)\tau^2G^2+\sigma^2m+2L(2+m)\rho_q\Gamma, and \hat D=2\Gamma(\tilde\rho_q-\rho_q), we have that

\hat\Delta_{t+1}\le(1-\eta_t\mu\hat B)\hat\Delta_t+\eta_t^2\hat C+\eta_t\hat D.  (82)

Again, by setting \hat\Delta_t\le\frac{\psi}{t+\gamma} and \eta_t=\frac{\beta}{t+\gamma} with \beta>\frac{1}{\mu\hat B} and \gamma>0, we have by induction that

\psi=\max\Big\{\gamma\|w^{(0)}-w^*\|^2,\ \frac{1}{\beta\mu\hat B-1}\big(\beta^2\hat C+\hat D\beta(t+\gamma)\big)\Big\}.  (83)

Then, by the L-smoothness of F(\cdot), we have that

\mathbb{E}[F(\hat w^{(t)})]-F^*\le\frac{L}{2}\hat\Delta_t\le\frac{L}{2}\,\frac{\psi}{\gamma+t}.  (84)

With \beta=\frac{m}{\mu}, \gamma=\frac{4m(1+m)L}{\mu}, and \eta_t=\frac{\beta}{t+\gamma}, we have that

\mathbb{E}[F(\hat w^{(T)})]-F^*\le\frac{1}{T+\gamma}\Big[\frac{Lm^2\big(32m(m-1)\tau^2G^2+\sigma^2\big)}{\mu^2\rho_q}+\frac{2L^2m(m+2)\Gamma}{\mu^2}+\frac{L\gamma\|w^{(0)}-w^*\|^2}{2}\Big]\ \text{(vanishing error term)}\ +\ \frac{2L\Gamma}{\rho_q\mu}\Big(\frac{\tilde\rho_q}{\rho_q}-1\Big)\ \text{(non-vanishing bias)},  (85)

which is consistent with Theorem 3.1.
" }, { "heading": "F EXPERIMENT DETAILS", "text": "Quadratic Model Optimization. For the quadratic model optimization, we set each local objective function to be strongly convex as follows:

F_k(w)=\frac{1}{2}w^\top H_kw-e_k^\top w+\frac{1}{2}e_k^\top H_k^{-1}e_k.  (86)

Here H_k\in\mathbb{R}^{v\times v} is the diagonal matrix H_k=h_kI with h_k\sim U(1,20), and e_k\in\mathbb{R}^v is an arbitrary vector. We set the global objective function to F(w)=\sum_{k=1}^Kp_kF_k(w), where the data sizes p_k follow the power-law distribution P(x;a)=ax^{a-1}, 0\le x\le 1, a=3. It is easy to show that the optima of F_k(w) and F(w) are w_k^*=H_k^{-1}e_k and w^*=\big(\sum_{k=1}^Kp_kH_k\big)^{-1}\big(\sum_{k=1}^Kp_ke_k\big), respectively. The gradient-descent update rule for the local model of client k in the quadratic model optimization is

w_k^{(t+1)}=w_k^{(t)}-\eta\big(H_kw_k^{(t)}-e_k\big),  (87)

where the global model is defined as w^{(t+1)}=\frac{1}{m}\sum_{k\in S^{(t)}}w_k^{(t+1)}. We sample m = KC clients every round; in each round the selected clients perform τ gradient-descent local iterations with fixed learning rate η, and these local models are then averaged to update the global model. For the implementation of πadapow-d, d was halved, starting from d = K, every 5000 rounds. For all simulations we set τ = 2, v = 5, and η = 2×10⁻⁵.

For the estimation of ρ and ρ̃ for the quadratic model, we estimate the theoretical ρ and ρ̃ values by a grid search over a large range of possible w, w′ for ρ(S(π,w),w′) and ρ(S(π,w),w*), respectively. The distribution of S(π,w) is estimated by simulating 10000 iterations of client sampling for each π and w.
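For concreteness, the quadratic construction above can be sketched in a few lines of Python; the random seed and sampled values are illustrative, and the sketch is not the exact experimental code.

import numpy as np

rng = np.random.default_rng(0)
K, v, eta = 30, 5, 2e-5
h = rng.uniform(1, 20, size=K)                  # h_k ~ U(1, 20), so H_k = h_k I
e = rng.normal(size=(K, v))                     # arbitrary vectors e_k
p = rng.power(3, size=K); p = p / p.sum()       # data sizes from power law, a = 3

def F(k, w):
    # F_k(w) = 0.5 w^T H_k w - e_k^T w + 0.5 e_k^T H_k^{-1} e_k
    return 0.5 * h[k] * w @ w - e[k] @ w + 0.5 * (e[k] @ e[k]) / h[k]

def grad(k, w):
    return h[k] * w - e[k]                      # H_k w - e_k, since H_k = h_k I

# local and global optima in closed form
w_star_k = e / h[:, None]                       # w_k^* = H_k^{-1} e_k
w_star = (p @ e) / (p @ h)                      # w^* = (sum p_k H_k)^{-1} (sum p_k e_k)

# one local gradient-descent step for client k, matching Eq. (87)
k, w = 0, np.zeros(v)
w = w - eta * grad(k, w)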
Logistic Regression on Synthetic Dataset. We conduct simulations on synthetic data, which allows precise manipulation of heterogeneity. Using the methodology of Sahu et al. (2019), we use the dataset with large data heterogeneity, Synthetic(1,1). We assume 30 devices in total, where the local dataset size of each device follows the power law. For the implementation of πadapow-d, d was decreased from d = K to d = m halfway through the communication rounds. We set the mini-batch size to 50 with τ = 30 and η = 0.05, where η is halved at rounds 300 and 600.

DNN on FMNIST Dataset. We train a deep multi-layer perceptron network with two hidden layers on the FMNIST dataset (Xiao et al., 2017). We construct the heterogeneous data partition amongst clients using the Dirichlet distribution DirK(α) (Hsu et al., 2019), where α determines the degree of data heterogeneity across clients (the data-size imbalance and the degree of label skew). Smaller α indicates larger data heterogeneity. For all experiments we use a mini-batch size of b = 64, with τ = 30 and η = 0.005, where η is halved at rounds 150 and 300. We run three different seeds for the randomness in the dataset partition across clients and present the averaged results.

All experiments are conducted on clusters, each equipped with one NVIDIA TitanX GPU. The number of clusters we use varies with C, the fraction of clients we select. The machines communicate amongst each other through Ethernet to transfer the model parameters and the information necessary for client selection. Each machine is regarded as one client in the federated learning setting. The algorithms are implemented in PyTorch.

Pseudo-code of the variants of pow-d: cpow-d and rpow-d. We present the pseudo-code for πcpow-d and πrpow-d below.
Note that the pseudo-code for πcpow-d in Algorithm 1 generalizes to the algorithm for πpow-d by replacing the mini-batch loss estimate (1/|ξ̂_k|) Σ_{ξ∈ξ̂_k} f(w, ξ) with the full local objective F_k(w).

Algorithm 1 Pseudo-code for cpow-d: computation-efficient variant of pow-d
1: Input: m, d, p_k for k ∈ [K], mini-batch size b = |ξ̂_k| for computing (1/|ξ̂_k|) Σ_{ξ∈ξ̂_k} f(w, ξ)
2: Output: S^(t)
3: Initialize: empty sets S^(t) and A
4: Global server do
5: Get A = {d indices sampled without replacement from [K] by p_k}
6: Send the global model w^(t) to the d clients in A
7: Receive (1/|ξ̂_k|) Σ_{ξ∈ξ̂_k} f(w, ξ) from all clients in A
8: Get S^(t) = {m clients with the largest (1/|ξ̂_k|) Σ_{ξ∈ξ̂_k} f(w, ξ) (break ties randomly)}
9: Clients in A in parallel do
10: Create a mini-batch ξ̂_k by sampling b samples uniformly at random from B_k, compute (1/|ξ̂_k|) Σ_{ξ∈ξ̂_k} f(w, ξ), and send it to the server
11: return S^(t)

Algorithm 2 Pseudo-code for rpow-d: computation- and communication-efficient variant of pow-d
1: Input: m, d, p_k for k ∈ [K]
2: Output: S^(t)
3: Initialize: empty sets S^(t) and A, and a list A_tmp with K elements, all equal to inf
4: All clients k ∈ S^(t−1) do
5: If t mod τ = 0, send (1/(τb)) Σ_{l=t−τ+1}^{t} Σ_{ξ∈ξ_k^(l)} f(w_k^(l), ξ) to the server along with the local model
6: Global server do
7: Receive and update A_tmp[k] = (1/(τb)) Σ_{l=t−τ+1}^{t} Σ_{ξ∈ξ_k^(l)} f(w_k^(l), ξ) for k ∈ S^(t−1)
8: Get A = {d indices sampled without replacement from [K] by p_k}
9: Get S^(t) = {m clients with the largest values in [A_tmp[i] for i ∈ A] (break ties randomly)}
10: return S^(t)
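A minimal Python sketch of the selection step shared by Algorithms 1 and 2 is given below. The local-loss oracle is a placeholder standing in for the mini-batch estimate of cpow-d or the cached running loss of rpow-d; shapes and the random seed are illustrative.

import numpy as np

rng = np.random.default_rng(0)
K, m, d = 100, 3, 9
p = rng.dirichlet(np.ones(K))                   # data-size weights p_k (illustrative)

def estimated_local_loss(k, w_global):
    # placeholder for (1/|xi_k|) sum_{xi in xi_k} f(w, xi) computed by client k
    return rng.uniform()

def power_of_choice(w_global):
    # 1) sample a candidate set A of d clients without replacement by p_k
    A = rng.choice(K, size=d, replace=False, p=p)
    # 2) query the candidates' local losses at the current global model
    losses = np.array([estimated_local_loss(k, w_global) for k in A])
    # 3) keep the m candidates with the largest losses (ties broken arbitrarily)
    return A[np.argsort(-losses)[:m]]

S_t = power_of_choice(w_global=None)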
" }, { "heading": "G ADDITIONAL EXPERIMENT RESULTS", "text": "" }, { "heading": "G.1 SELECTED CLIENT PROFILE", "text": "We further visualize the difference between our proposed sampling strategy πpow-d and the baseline scheme πrand by showing the selected-frequency ratio of the clients for K = 30, C = 0.1 in the quadratic simulations in Figure 7. Note that the selected ratio for πrand reflects each client's dataset size. The selected frequencies of clients under πpow-d are not proportional to the clients' data sizes: clients with relatively small datasets, such as clients 6 and 22, are still selected frequently, and the clients with the largest datasets, such as client 26, are not necessarily selected most often. This aligns well with the main motivation of POWER-OF-CHOICE: weighting the clients' importance by their data size alone does not achieve the best performance, and considering their local loss values along with the data size better represents their importance. Note that the selected-frequency profile for πrand is less biased than that of πpow-d." }, { "heading": "G.2 COMMUNICATION AND COMPUTATION EFFICIENCY WITH LARGER DATA HETEROGENEITY", "text": "In Table 2, we show the communication and computation efficiency of POWER-OF-CHOICE for α = 2, as we did for α = 0.3 in Table 1 in Section 5. With a fraction C = 0.03 of clients, πpow-d, πcpow-d, and πrpow-d achieve test accuracy at least approximately 10% higher than (πrand, C = 0.1). R60 for πpow-d, πcpow-d, and πrpow-d is 0.61, 0.66, and 0.73 times that of (πrand, C = 0.1), respectively. This indicates that, using a third of the clients, the number of communication rounds can be reduced to roughly 0.6 to 0.73 times that of (πrand, C = 0.1) while still obtaining higher test accuracy. The computation time tcomp for πcpow-d and πrpow-d with C = 0.03 is also smaller than that of (πrand, C = 0.1).

G.3 INTERMITTENT CLIENT AVAILABILITY

In real-world scenarios, certain clients may not be available due to varying availability of resources such as battery power or wireless connectivity. We therefore experiment with a virtual scenario where, amongst the K clients, each communication round selects clients from one of two fixed groups in alternation, with each group containing 0.5K clients. This alternating selection reflects a more realistic client selection scenario where, for example, clients span different time zones. For each communication round, we choose a 0.1 fraction of the clients in the active group uniformly at random and exclude them from the client selection process. This random exclusion represents the randomness in client availability within the group, due, for example, to low battery power or poor wireless connectivity. In Figure 8 we show that πpow-d and πrpow-d achieve 10% and 5% test accuracy improvements, respectively, compared to πrand for α = 2. For α = 3, both πpow-d and πrpow-d show a 10% improvement. We therefore demonstrate that POWER-OF-CHOICE also performs well in a realistic scenario where clients are available intermittently." }, { "heading": "G.4 RESULTS FOR DNN ON NON-IID PARTITIONED EMNIST DATASET", "text": "To further validate the consistency of our results for πpow-d and its variants on the FMNIST dataset, we present additional experiment results on the EMNIST dataset sorted by digits, with K = 500 and C = 0.03. We train a deep multi-layer perceptron network with two hidden layers on the dataset, partitioned heterogeneously across the clients in the same way as for the FMNIST dataset. For all experiments, we use b = 64, τ = 30, and η = 0.005, where η is halved at round 300.

In Figure 9, we show that πpow-d achieves significantly higher test accuracy than πrand for varying d, for both α = 2 and α = 0.3. For α = 2, πafl is able to follow the performance of πpow-d in the later communication rounds, but is slower to reach the same test accuracy. Moreover, in Figure 10, we show that πcpow-d works as well as πpow-d for both large and small data heterogeneity. The performance of πrpow-d falls behind πpow-d and πcpow-d for smaller data heterogeneity, whereas for larger data heterogeneity, πrpow-d performs similarly to πpow-d and πcpow-d." }, { "heading": "G.5 EFFECT OF THE FRACTION OF SELECTED CLIENTS", "text": "In Figure 11, for the larger C = 0.1 with α = 2, the test accuracy improvement for πpow-d is even higher than in the C = 0.03 case, at approximately 15%. πcpow-d performs slightly below πpow-d in test accuracy but still better than πrand and πafl. πrpow-d performs as well as πafl. For α = 0.3, πpow-d, πcpow-d, and πrpow-d have approximately equal test accuracy, higher than πrand by 5%, and all POWER-OF-CHOICE strategies perform slightly better than πafl. We thus show that POWER-OF-CHOICE also performs well when a larger fraction of clients is selected, i.e., C = 0.1 rather than 0.03." }, { "heading": "G.6 EFFECT OF THE LOCAL EPOCHS AND MINI-BATCH SIZE", "text": "We present the experiment results elaborated in Section 5 for the different hyper-parameter settings (b, τ) ∈ {(128, 30), (64, 100)} in Figures 12, 13, 14, and 15 below." } ]
2020
null
SP:40f435881d361a57f68c000a5cf06d868acbcda8
[ "This paper proposes Generalized Variational Continual Learning (GVCL). It is shown that Online EWC and VCL are special cases of GVCL, along with other theoretical contributions. Further, GVCL is augmented with FiLM to alleviate weaknesses of VCL and GVCL. GVCL and GVCL-F are applied to a number of continual learning tasks and demonstrate competitive performance. " ]
Continual learning deals with training models on new tasks and datasets in an online fashion. One strand of research has used probabilistic regularization for continual learning, with two of the main approaches in this vein being Online Elastic Weight Consolidation (Online EWC) and Variational Continual Learning (VCL). VCL employs variational inference, which in other settings has been improved empirically by applying likelihood-tempering. We show that applying this modification to VCL recovers Online EWC as a limiting case, allowing for interpolation between the two approaches. We term the general algorithm Generalized VCL (GVCL). In order to mitigate the observed overpruning effect of VI, we take inspiration from a common multi-task architecture, neural networks with task-specific FiLM layers, and find that this addition leads to significant performance gains, specifically for variational methods. In the small-data regime, GVCL strongly outperforms existing baselines. In larger datasets, GVCL with FiLM layers outperforms or is competitive with existing baselines in terms of accuracy, whilst also providing significantly better calibration.
[ { "affiliations": [], "name": "CONTINUAL LEARNING" }, { "affiliations": [], "name": "Noel Loo" }, { "affiliations": [], "name": "Siddharth Swaroop" } ]
[ { "authors": [ "Alessandro Achille", "Michael Lam", "Rahul Tewari", "Avinash Ravichandran", "Subhransu Maji", "Charless Fowlkes", "Stefano Soatto", "Pietro Perona" ], "title": "Task2Vec: Task Embedding for MetaLearning. arXiv:1902.03545 [cs, stat], February 2019", "venue": "URL http://arxiv.org/abs/ 1902.03545", "year": 1902 }, { "authors": [ "Alessandro Achille", "Giovanni Paolini", "Stefano Soatto" ], "title": "Where is the information in a deep neural network", "venue": null, "year": 2020 }, { "authors": [ "Tameem Adel", "Han Zhao", "Richard E. Turner" ], "title": "Continual learning with adaptive weights (claw)", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Hongjoon Ahn", "Sungmin Cha", "Donggyu Lee", "Taesup Moon" ], "title": "Uncertainty-based continual learning with adaptive regularization", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Alexander Alemi", "Ben Poole", "Ian Fischer", "Joshua Dillon", "Rif A. Saurous", "Kevin Murphy" ], "title": "Fixing a broken ELBO", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Tarin Clanuwat", "Mikel Bober-Irizar", "A. Kitamoto", "A. Lamb", "Kazuaki Yamamoto", "David Ha" ], "title": "Deep learning for classical japanese", "venue": "literature. ArXiv,", "year": 2018 }, { "authors": [ "Chrisantha Fernando", "Dylan Banarse", "Charles Blundell", "Yori Zwols", "David Ha", "Andrei Rusu", "Alexander Pritzel", "Daan Wierstra" ], "title": "Pathnet: Evolution channels gradient descent in super neural networks", "venue": null, "year": 2017 }, { "authors": [ "Robert M. French" ], "title": "Catastrophic forgetting in connectionist networks", "venue": "Trends in Cognitive Sciences,", "year": 1999 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "β-VAE: LEARNING BASIC VISUAL CONCEPTS WITH A CONSTRAINED VARIATIONAL FRAMEWORK", "venue": null, "year": 2017 }, { "authors": [ "J. Kirkpatrick", "Razvan Pascanu", "Neil C. Rabinowitz", "J. Veness", "G. Desjardins", "Andrei A. Rusu", "K. Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska", "Demis Hassabis", "C. Clopath", "D. Kumaran", "Raia Hadsell" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the National Academy of Sciences,", "year": 2017 }, { "authors": [ "Sang-Woo Lee", "Jin-Hwa Kim", "Jaehyun Jun", "Jung-Woo Ha", "Byoung-Tak Zhang" ], "title": "Overcoming catastrophic forgetting by incremental moment matching", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "David Lopez-Paz", "Marc Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "James Martens" ], "title": "New insights and perspectives on the natural gradient method", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Dmitry Molchanov", "Arsenii Ashukha", "Dmitry Vetrov" ], "title": "Variational dropout sparsifies deep neural networks. volume", "venue": "Proceedings of Machine Learning Research,", "year": 2017 }, { "authors": [ "Cuong V. Nguyen", "Yingzhen Li", "Thang D. Bui", "Richard E. 
Turner" ], "title": "Variational continual learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Manfred Opper", "Cedric Archambeau" ], "title": "The variational gaussian approximation revisited", "venue": "Neural computation, 21:786–92,", "year": 2008 }, { "authors": [ "Kazuki Osawa", "Siddharth Swaroop", "Mohammad Emtiyaz E Khan", "Anirudh Jain", "Runa Eschenhagen", "Richard E Turner", "Rio Yokota" ], "title": "Practical deep learning with bayesian principles", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Pingbo Pan", "Siddharth Swaroop", "Alexander Immer", "Runa Eschenhagen", "Richard E. Turner", "Mohammad Emtiyaz Khan" ], "title": "Continual Deep Learning by Functional Regularisation of Memorable Past", "venue": "URL http://arxiv.org/abs/2004", "year": 2020 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm de Vries", "Vincent Dumoulin", "Aaron C. Courville" ], "title": "Film: Visual reasoning with a general conditioning layer", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Hakan Bilen", "Andrea Vedaldi" ], "title": "Learning multiple visual domains with residual adapters", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "James Requeima", "Jonathan Gordon", "John Bronskill", "Sebastian Nowozin", "Richard E Turner" ], "title": "Fast and flexible multi-task classification using conditional neural adaptive processes", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Hippolyt Ritter", "Aleksandar Botev", "David Barber" ], "title": "Online structured laplace approximations for overcoming catastrophic forgetting", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Andrei A. Rusu", "Neil C. Rabinowitz", "Guillaume Desjardins", "Hubert Soyer", "James Kirkpatrick", "Koray Kavukcuoglu", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progressive Neural Networks", "venue": "[cs],", "year": 2016 }, { "authors": [ "Jonathan Schwarz", "Wojciech Czarnecki", "Jelena Luketina", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & compress: A scalable framework for continual learning", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Joan Serra", "Didac Suris", "Marius Miron", "Alexandros Karatzoglou" ], "title": "Overcoming catastrophic forgetting with hard attention to the task. volume", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Siddharth Swaroop", "Cuong V. Nguyen", "Thang D. Bui", "Richard E. Turner" ], "title": "Improving and Understanding Variational Continual Learning. arXiv:1905.02099 [cs, stat], May 2019", "venue": "URL http://arxiv.org/abs/1905.02099", "year": 2099 }, { "authors": [ "Martin Thoma" ], "title": "The HASYv2 dataset. arXiv:1701.08380 [cs], January 2017", "venue": "URL http:// arxiv.org/abs/1701.08380", "year": 2017 }, { "authors": [ "Brian Trippe", "Richard Turner" ], "title": "Overpruning in Variational Bayesian Neural Networks. arXiv:1801.06230 [stat], January 2018", "venue": "URL http://arxiv.org/abs/1801.06230", "year": 2018 }, { "authors": [ "R. Turner", "M. Sahani" ], "title": "Two problems with variational expectation maximisation for time-series models", "venue": null, "year": 2011 }, { "authors": [ "Florian Wenzel", "Kevin Roth", "Bastiaan S. 
Veeling", "Jakub Swiatkowski", "Linh Tran", "Stephan Mandt", "Jasper Snoek", "Tim Salimans", "Rodolphe Jenatton", "Sebastian Nowozin" ], "title": "How good is the bayes posterior in deep neural networks really", "venue": "URL https: //arxiv.org/abs/2002.02405", "year": 2002 }, { "authors": [ "Dong Yin", "Mehrdad Farajtabar", "Ang Li" ], "title": "SOLA: Continual Learning with Second-Order Loss Approximation", "venue": "URL http://arxiv.org/abs/ 2006.10974", "year": 2020 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": "Proceedings of Machine Learning Research,", "year": 2017 }, { "authors": [ "Guodong Zhang", "Shengyang Sun", "David Duvenaud", "Roger Grosse" ], "title": "Noisy natural gradient as variational inference", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "AS HT" ], "title": "SOLA SOLA approximates the Hessian with a rank-restricted matrix H̃ (Yin et al., 2020). We first consider a relaxation of this problem with full rank, then consider the limit when we reduce this relaxation", "venue": null, "year": 2020 }, { "authors": [ "Lee" ], "title": "IMM, which is an extension to EWC which merges posteriors based on their Fisher information matrices", "venue": null, "year": 2019 }, { "authors": [ "Serra" ], "title": "Mixture. The number of epochs was changed so that the number of gradient steps for each task was roughly equal. For Easy-CHASY, Hard-CHASY and Split-CIFAR, this means that later tasks are run for more epochs, since the largest training sets are at the start. For Mixture, we ran 180 equivalents epochs for Facescrub. For how many epochs this equates to in the other datasets, we refer the reader to Appendix A", "venue": null, "year": 2018 }, { "authors": [ "Serra" ], "title": "For all algorithms on Easy-CHASY, Hard-CHASY, Split-MNIST and Split-CIFAR, hyperparameter selection was done by selecting the combination which produced the best average accuracy on the first 3 tasks. The algorithms were then run on the full number of tasks. For the Mixed Vision tasks, the best hyperparameters for the baselines were taken from the HAT Github repository", "venue": "For GVCL,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Continual learning methods enable learning when a set of tasks changes over time. This topic is of practical interest as many real-world applications require models to be regularly updated as new data is collected or new tasks arise. Standard machine learning models and training procedures fail in these settings (French, 1999), so bespoke architectures and fitting procedures are required.\nThis paper makes two main contributions to continual learning for neural networks. First, we develop a new regularization-based approach to continual learning. Regularization approaches adapt parameters to new tasks while keeping them close to settings that are appropriate for old tasks. Two popular approaches of this type are Variational Continual Learning (VCL) (Nguyen et al., 2018) and Online Elastic Weight Consolidation (Online EWC) (Kirkpatrick et al., 2017; Schwarz et al., 2018). The former is based on a variational approximation of a neural network’s posterior distribution over weights, while the latter uses Laplace’s approximation. In this paper, we propose Generalized Variational Continual Learning (GVCL) of which VCL and Online EWC are two special cases. Under this unified framework, we are able to combine the strengths of both approaches. GVCL is closely related to likelihood-tempered Variational Inference (VI), which has been found to improve performance in standard learning settings (Zhang et al., 2018; Osawa et al., 2019). We also see significant performance improvements in continual learning.\nOur second contribution is to introduce an architectural modification to the neural network that combats the deleterious overpruning effect of VI (Trippe & Turner, 2018; Turner & Sahani, 2011). We analyze pruning in VCL and show how task-specific FiLM layers mitigate it. Combining this architectural change with GVCL results in a hybrid architectural-regularization based algorithm. This additional modification results in performance that exceeds or is within statistical error of strong baselines such as HAT (Serra et al., 2018) and PathNet (Fernando et al., 2017).\nThe paper is organized as follows. Section 2 outlines the derivation of GVCL, shows how it unifies many continual learning algorithms, and describes why it might be expected to perform better than them. Section 3 introduces FiLM layers, first from the perspective of multi-task learning, and then through the lens of variational over-pruning, showing how FiLM layers mitigate this pathology of VCL. Finally, in Section 5 we test GVCL and GVCL with FiLM layers on many standard bench-\nmarks, including ones with few samples, a regime that could benefit more from continual learning. We find that GVCL with FiLM layers outperforms existing baselines on a variety of metrics, including raw accuracy, forwards and backwards transfer, and calibration error. In Section 5.4 we show that FiLM layers provide a disproportionate improvement to variational methods, confirming our hypothesis in Section 31." }, { "heading": "2 GENERALIZED VARIATIONAL CONTINUAL LEARNING", "text": "In this section, we introduce Generalized Variational Continual Learning (GVCL) as a likelihoodtempered version of VCL, with further details in Appendix C. We show how GVCL recovers Online EWC. We also discuss further links between GVCL and the Bayesian cold posterior in Appendix D." }, { "heading": "2.1 LIKELIHOOD-TEMPERING IN VARIATIONAL CONTINUAL LEARNING", "text": "Variational Continual Learning (VCL). 
Bayes' rule calculates a posterior distribution over model parameters θ based on a prior distribution p(θ) and some dataset D_T = {X_T, y_T}. Bayes' rule naturally supports online and continual learning by using the previous posterior p(θ|D_{T−1}) as a new prior when seeing new data (Nguyen et al., 2018). Due to the intractability of Bayes' rule in complicated models such as neural networks, approximations are employed, and VCL (Nguyen et al., 2018) uses one such approximation, Variational Inference (VI). This approximation is based on approximating the posterior p(θ|D_T) with a simpler distribution q_T(θ), such as a Gaussian. This is achieved by optimizing the ELBO for the optimal q_T(θ),

ELBO_{VCL} = \mathbb{E}_{\theta\sim q_T(\theta)}[\log p(D_T|\theta)] - D_{KL}(q_T(\theta)\,\|\,q_{T-1}(\theta)),  (1)

where q_{T−1}(θ) is the approximation to the previous task posterior. Intuitively, this refines a distribution over weight samples that balances good predictive performance (the first expected prediction accuracy term) while remaining close to the prior (the second KL-divergence regularization term).

Likelihood-tempered VCL. Optimizing the ELBO will recover the true posterior if the approximating family is sufficiently rich. However, the simple families used in practice typically lead to poor test-set performance. Practitioners have found that performance can be improved by down-weighting the KL-divergence regularization term by a factor β, with 0 < β < 1. Examples of this are seen in Zhang et al. (2018) and Osawa et al. (2019), where the latter uses a "data augmentation factor" for down-weighting. In a similar vein, sampling from "cold posteriors" in SG-MCMC has also been shown to outperform the standard Bayes posterior, where the cold posterior is given by p_T(θ|D) ∝ p(θ|D)^{1/T}, T < 1 (Wenzel et al., 2020). Values of β > 1 have also been used to improve the disentanglement of representations learned by variational autoencoders (Higgins et al., 2017). We down-weight the KL-divergence term in VCL, optimizing the β-ELBO (we slightly abuse notation by writing the likelihood as p(D_T|θ) instead of p(y_T|θ, X_T)),

β-ELBO = \mathbb{E}_{\theta\sim q_T(\theta)}[\log p(D_T|\theta)] - \beta D_{KL}(q_T(\theta)\,\|\,q_{T-1}(\theta)).

VCL is trivially recovered when β = 1. We will now show that, surprisingly, as β → 0 we recover a special case of Online EWC. Then, by modifying the term further as required to recover the full version of Online EWC, we will arrive at our algorithm, Generalized VCL.
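As a concrete illustration, the β-ELBO above can be sketched as follows for a factorized Gaussian posterior. This is a minimal sketch with a placeholder log-likelihood and a single reparameterized sample, not the full GVCL training loop.

import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), elementwise then summed
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    return 0.5 * ((var_q + (mu_q - mu_p) ** 2) / var_p
                  + logvar_p - logvar_q - 1.0).sum()

def beta_elbo(log_lik, mu_q, logvar_q, mu_prev, logvar_prev, beta):
    # E_q[log p(D|theta)] approximated with one reparameterized sample
    theta = mu_q + (0.5 * logvar_q).exp() * torch.randn_like(mu_q)
    return log_lik(theta) - beta * gaussian_kl(mu_q, logvar_q, mu_prev, logvar_prev)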
" }, { "heading": "2.2 ONLINE EWC IS A SPECIAL CASE OF GVCL", "text": "We analyze the effect of KL-reweighting on VCL in the case where the approximating family is restricted to Gaussian distributions over θ. We will consider training all the tasks with a KL-reweighting factor of β, and then take the limit β → 0, recovering Online EWC. Let the approximate posteriors at the previous and current tasks be denoted as q_{T−1}(θ) = N(θ; µ_{T−1}, Σ_{T−1}) and q_T(θ) = N(θ; µ_T, Σ_T) respectively, where we are learning {µ_T, Σ_T}. The optimal Σ_T under the β-ELBO has the form (see Appendix C),

\Sigma_T^{-1} = \frac{1}{\beta}\nabla_{\mu_T}\nabla_{\mu_T}\mathbb{E}_{q_T(\theta)}[-\log p(D_T|\theta)] + \Sigma_{T-1}^{-1}.  (2)

Now take the limit β → 0. From Equation 2, Σ_T → 0, so q_T(θ) becomes a delta function, and

\Sigma_T^{-1} = -\frac{1}{\beta}\nabla_{\mu_T}\nabla_{\mu_T}\log p(D_T|\theta=\mu_T) + \Sigma_{T-1}^{-1} = \frac{1}{\beta}H_T + \Sigma_{T-1}^{-1} = \frac{1}{\beta}\sum_{t=1}^{T}H_t + \Sigma_0^{-1},  (3)

where H_T is the T-th task Hessian (strictly, a positive semidefinite approximation of the Hessian, since the actual Hessian may not be positive semidefinite while Σ is). Although the learnt distribution q_T(θ) becomes a delta function (and not a full Gaussian distribution as in Laplace's approximation), we will see that a cancellation of β factors in the β-ELBO leads to the eventual equivalence between GVCL and Online EWC. Consider the terms in the β-ELBO that involve only µ_T:

β-ELBO = \mathbb{E}_{\theta\sim q_T(\theta)}[\log p(D_T|\theta)] - \frac{\beta}{2}(\mu_T-\mu_{T-1})^\top\Sigma_{T-1}^{-1}(\mu_T-\mu_{T-1})
= \log p(D_T|\theta=\mu_T) - \frac{1}{2}(\mu_T-\mu_{T-1})^\top\Big(\sum_{t=1}^{T-1}H_t + \beta\Sigma_0^{-1}\Big)(\mu_T-\mu_{T-1}),  (4)

where we have set the form of Σ_{T−1} to be as in Equation 3. Equation 4 is an instance of the objective function used by a number of continual learning methods, most notably Online EWC (Kirkpatrick et al., 2017; Schwarz et al., 2018), Online-Structured Laplace (Ritter et al., 2018), and SOLA (Yin et al., 2020). (EWC uses the Fisher information, but our derivation results in the Hessian; the two matrices coincide when the model has near-zero training loss, as is often the case (Martens, 2020).) These algorithms can be recovered by changing the approximate posterior class Q to Gaussians with diagonal, block-diagonal Kronecker-factored covariance matrices, and low-rank precision matrices, respectively (see Appendices C.4 and C.5).

Based on this analysis, β can be seen as interpolating between VCL, with β = 1, and continual learning algorithms which use point-wise approximations of curvature as β → 0. In Appendix A we explore how β controls the scale of the quadratic curvature approximation, verifying with experiments on a toy dataset. Small β values learn distributions with good local structure, while higher β values learn distributions with a more global structure. We explore this in more detail in Appendices A and B, where we show the convergence of GVCL to Online EWC on a toy experiment.

Inference using GVCL. When performing inference with GVCL at test time, we use samples from the unmodified q(θ) distribution. This means that when β = 1, we recover the VCL predictive, and as β → 0, the posterior collapses as described earlier, meaning that the weight samples are effectively deterministic. This is in line with the inference procedure given by Online EWC and its variants. In practice, we use values of β = 0.05 − 0.2 in Section 5, meaning that some uncertainty is retained, but not all. We can increase the uncertainty at inference time by using an additional tempering step, which we describe, along with further generalizations, in Appendix D.

2.3 REINTERPRETING λ AS COLD POSTERIOR REGULARIZATION

As described above, the β-ELBO recovers instances of a number of existing second-order continual learning algorithms, including Online EWC, as special cases. However, the correspondence does not recover a key hyperparameter λ used by these methods that up-weights the quadratic regularization term. Instead, our derivation produces an implicit value of λ = 1, i.e. equal weight between tasks of equal sample count. In practice it is found that algorithms such as Online EWC perform best when λ > 1, typically 10 − 1000. In this section, we view this λ hyperparameter as a form of cold posterior regularization.

In the previous section, we showed that β controls the length-scale over which we approximate the curvature of the posterior. However, the magnitude of the quadratic regularizer stays the same, because the O(β⁻¹) precision matrix and the β coefficient in front of the KL term cancel out.
Taking inspiration from cold posteriors (Wenzel et al., 2020), which temper both the likelihood and the prior and improve accuracy with Bayesian neural networks, we suggest tempering the prior in GVCL.

Therefore, rather than measuring the KL divergence between the posterior and prior, q_T and q_{T−1} respectively, we suggest regularizing towards a tempered version of the prior, q_{T−1}^λ. However, this form of regularization has a problem: in continual learning, over the course of many tasks, old tasks will be increasingly (exponentially) tempered. In order to combat this, we also use the tempered version of the posterior in the KL divergence, q_T^λ. This should allow us to gain the benefits of tempering the prior while remaining stable over multiple tasks in continual learning.

As we now show, tempering in this way recovers the λ hyperparameter from algorithms such as Online EWC. Note that raising the distributions to the power λ is equivalent to tempering by τ = λ⁻¹. For Gaussians, tempering a distribution by a temperature τ = λ⁻¹ is the same as scaling the covariance by λ⁻¹. We can therefore expand our new KL divergence,

D_{KL}(q_T^\lambda\,\|\,q_{T-1}^\lambda) = \frac{1}{2}\Big((\mu_T-\mu_{T-1})^\top\lambda\Sigma_{T-1}^{-1}(\mu_T-\mu_{T-1}) + \mathrm{Tr}(\lambda\Sigma_{T-1}^{-1}\lambda^{-1}\Sigma_T) + \log\frac{\lambda^{-d}|\Sigma_{T-1}|}{\lambda^{-d}|\Sigma_T|} - d\Big)
= \frac{1}{2}\Big((\mu_T-\mu_{T-1})^\top\lambda\Sigma_{T-1}^{-1}(\mu_T-\mu_{T-1}) + \mathrm{Tr}(\Sigma_{T-1}^{-1}\Sigma_T) + \log\frac{|\Sigma_{T-1}|}{|\Sigma_T|} - d\Big) = D_{KL_\lambda}(q_T\,\|\,q_{T-1}).

In the limit of β → 0, our λ coincides with Online EWC's λ if the tasks have the same number of samples. However, this form of λ has a slight problem: it increases the regularization strength of the initial prior Σ_0 on the mean parameter update. We empirically found that this negatively affects performance. We therefore propose a different version of λ, which only up-weights the "data-dependent" parts of Σ_{T−1}; this can be viewed as likelihood-tempering the previous task posterior, as opposed to tempering both the initial prior and likelihood components. This new version still converges to Online EWC as β → 0, since the O(1) prior becomes negligible compared to the O(β⁻¹) Hessian terms. We define

\tilde\Sigma_{T,\lambda}^{-1} := \frac{\lambda}{\beta}\sum_{t=1}^{T}H_t + \Sigma_0^{-1} = \lambda(\Sigma_T^{-1} - \Sigma_0^{-1}) + \Sigma_0^{-1}.

In practice, it is necessary to clip negative values of Σ_T^{-1} − Σ_0^{-1} to keep \tilde\Sigma_{T,\lambda}^{-1} positive definite; this is only required because of errors during optimization. We then use a modified KL divergence,

D_{KL_{\tilde\lambda}}(q_T\,\|\,q_{T-1}) = \frac{1}{2}\Big((\mu_T-\mu_{T-1})^\top\tilde\Sigma_{T-1,\lambda}^{-1}(\mu_T-\mu_{T-1}) + \mathrm{Tr}(\Sigma_{T-1}^{-1}\Sigma_T) + \log\frac{|\Sigma_{T-1}|}{|\Sigma_T|} - d\Big).

Note that in Online EWC there is another parameter, γ, that down-weights the previous Fisher matrices. As shown in Appendix C, we can introduce this hyperparameter by taking the KL divergence between priors and posteriors at different temperatures, q_{T−1}^λ and q_T^{γλ}; however, we do not find that this approach improves performance. Combining everything, we have our objective for GVCL,

\mathbb{E}_{\theta\sim q_T(\theta)}[\log p(D_T|\theta)] - \beta D_{KL_{\tilde\lambda}}(q_T(\theta)\,\|\,q_{T-1}(\theta)).
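A minimal sketch of the modified KL term above for factorized Gaussians is given below. Storing precisions directly and clipping negative values of the precision difference follow the remarks in the text; the tensor shapes and names are illustrative.

import torch

def gvcl_kl(mu, var, mu_prev, var_prev, var_0, lam):
    prec_prev, prec_0 = 1.0 / var_prev, 1.0 / var_0
    # tilde{Sigma}^{-1}_{T-1,lam} = lam * clip(prec_prev - prec_0, min=0) + prec_0
    prec_tilde = lam * (prec_prev - prec_0).clamp(min=0.0) + prec_0
    quad = (prec_tilde * (mu - mu_prev) ** 2).sum()
    trace = (var / var_prev).sum()
    logdet = (var_prev.log() - var.log()).sum()
    d = mu.numel()
    return 0.5 * (quad + trace + logdet - d)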
}, { "heading": "3.1 BACKGROUND TO FILM LAYERS", "text": "The most common architecture for continual learning is the multi-headed neural network. A shared set of body parameters act as the feature extractor. For every task, features are generated in the same way, before finally being passed to separate head networks for each task. This architecture does not allow for task-specific differentiation in the feature extractor, which is limiting (consider, for example, the different tasks of handwritten digit recognition and image recognition). FiLM layers (Perez et al., 2018) address this limitation by linearly modulating features for each specific task so that useful features can be amplified and inappropriate ones ignored. In fully-connected layers, the transformation is applied element-wise: for a hidden layer with width W and activation values hi, 1 ≤ i ≤ W , FiLM layers perform the transformation h′i = γihi + bi, before being passed on to the remainder of the network. For convolutional layers, transformations are applied filter-wise. Consider a layer withN filters of sizeK×K, resulting in activations hi,j,k, 1 ≤ i ≤ N, 1 ≤ j ≤W, 1 ≤ k ≤ H , where W and H are the dimensions of the resulting feature map. The transformation has the\nform h′i,j,k = γi ∗ hi,j,k + bi. The number of required parameters scales with the number of filters, as opposed to the full activation dimension, making them computationally cheap and parameterefficient. FiLM layers have previously been shown to help with fine-tuning for transfer learning (Rebuffi et al., 2017), multi-task meta-learning (Requeima et al., 2019), and few-shot learning (Perez et al., 2018). In Appendix F, we show how FiLM layer parameters are interpretable, with similarities between FiLM layer parameters for similar tasks in a multi-task setup." }, { "heading": "3.2 COMBINING GVCL AND FILM LAYERS", "text": "It is simple to apply GVCL to models which utilize FiLM layers. Since these layers are specific to each task they do not need a distributional treatment or regularization as was necessary to support continual learning of the shared parameters. Instead, point estimates are found by optimising the GVCL objective function. This has a well-defined optimum unlike joint MAP training when FiLM layers are added (see Appendix E for a discussion). We might expect an improved performance for continual learning by introducing task-specific FiLM layers as this results in a more suitable multi-task model. However, when combined with GVCL, there is an additional benefit.\nWhen applied to multi-head networks, VCL tends to prune out large parts of the network (Trippe & Turner, 2018; Turner & Sahani, 2011) and GVCL inherits this behaviour. This occurs in the following way: First, weights entering a node revert to their prior distribution due to the KL-regularization term in the ELBO. These weights then add noise to the network, affecting the likelihood term of the ELBO. To avoid this, the bias concentrates at a negative value so that the ReLU activation effectively shuts off the node. In the single task setting, this is often relatively benign and can even facilitate compression (Louizos et al., 2017; Molchanov et al., 2017). However, in continual learning the effect is pathological: the bias remains negative due to its low variance, meaning that the node is effectively shut off from that point forward, preventing the node from re-activating. 
" }, { "heading": "3.2 COMBINING GVCL AND FILM LAYERS", "text": "It is simple to apply GVCL to models which utilize FiLM layers. Since these layers are specific to each task, they do not need a distributional treatment or regularization, as was necessary to support continual learning of the shared parameters. Instead, point estimates are found by optimising the GVCL objective function. This has a well-defined optimum, unlike joint MAP training when FiLM layers are added (see Appendix E for a discussion). We might expect improved performance for continual learning from introducing task-specific FiLM layers, as this results in a more suitable multi-task model. However, when combined with GVCL, there is an additional benefit.

When applied to multi-head networks, VCL tends to prune out large parts of the network (Trippe & Turner, 2018; Turner & Sahani, 2011), and GVCL inherits this behaviour. This occurs in the following way: first, weights entering a node revert to their prior distribution due to the KL-regularization term in the ELBO. These weights then add noise to the network, affecting the likelihood term of the ELBO. To avoid this, the bias concentrates at a negative value so that the ReLU activation effectively shuts off the node. In the single-task setting, this is often relatively benign and can even facilitate compression (Louizos et al., 2017; Molchanov et al., 2017). However, in continual learning the effect is pathological: the bias remains negative due to its low variance, meaning that the node is effectively shut off from that point forward, preventing the node from re-activating. Ultimately, large sections of the network can be shut off after the first task and cannot be used for future tasks, which wastes network capacity (see Figure 1a).

In contrast, when using task-specific FiLM layers, pruning can be achieved by either setting the FiLM layer scale to 0 or setting the FiLM layer bias to be negative. Since there is no KL penalty on these parameters, it is optimal to prune in this way. Critically, both the incoming weights and the bias of a pruned node can then return to the prior without adding noise to the network, meaning that the node can be re-activated in later tasks. The increase in the number of unpruned units can be seen in Figure 1b. In Appendix G we provide more evidence of this mechanism." }, { "heading": "4 RELATED WORK", "text": "Regularization-based continual learning. Many algorithms attempt to regularize network parameters based on a metric of importance. Section 2 shows how some methods can be seen as special cases of GVCL. We now focus on other related methods. Lee et al. (2017) proposed IMM, an extension of EWC that merges posteriors based on their Fisher information matrices. Ahn et al. (2019), like us, use regularizers based on the ELBO, but also measure importance on a per-node basis rather than a per-weight one. SI (Zenke et al., 2017) measures importance using "Synaptic Saliency," as opposed to methods based on approximate curvature.

Architectural approaches to continual learning. This family of methods modifies the standard neural architecture by adding components to the network. Progressive Neural Networks (Rusu et al., 2016) adds a parallel column network for every task, growing the model size over time. PathNet (Fernando et al., 2017) fixes the model size while optimizing the paths between layer columns. Architectural approaches are often used in tandem with regularization-based approaches, such as in HAT (Serra et al., 2018), which uses per-task gating parameters alongside a compression-based regularizer. Adel et al. (2020) propose CLAW, which also uses variational inference alongside per-task parameters, but requires a more complex meta-learning based training procedure involving multiple splits of the dataset. GVCL with FiLM layers adds to this list of hybrid architectural-regularization based approaches. See Appendix H for a more comprehensive related works section." }, { "heading": "5 EXPERIMENTS", "text": "We run experiments in the small-data regime (Easy-CHASY and Hard-CHASY) (Section 5.1), on Split-MNIST (Section 5.1), on the larger Split-CIFAR benchmark (Section 5.2), and on a much larger Mixed Vision benchmark consisting of 8 different image classification datasets (Section 5.3). To compare continual learning performance, we report final average accuracy, forward transfer (the improvement on the current task as the number of past tasks increases (Pan et al., 2020)), and backward transfer (the difference in accuracy between when a task is first trained and its accuracy after the final task (Lopez-Paz & Ranzato, 2017)). We compare to many baselines, but due to space constraints, only report the best-performing baselines in the main text. We also compare to two offline methods: an upper-bound "joint" version trained on all tasks jointly, and a lower-bound "separate" version with each task trained separately (no transfer). Further baseline results are in Appendix J.
The combination of GVCL and task-specific FiLM layers (GVCL-F) outperforms baselines on the smaller-scale benchmarks, and outperforms or performs within statistical error of baselines on the larger Mixed Vision benchmark. We also report calibration curves, showing that GVCL-F is well-calibrated. Full experimental protocol and hyperparameters are reported in Appendix I.

5.1 CHASY AND Split-MNIST

For our Split-MNIST experiment, in addition to the standard 5 binary classification tasks for Split-MNIST, we add 5 more binary classification tasks by taking characters from the KMNIST dataset (Clanuwat et al., 2018). For these experiments we used a 2-layer fully-connected network, as is common in the continual learning literature (Nguyen et al., 2018; Zenke et al., 2017).

Figure 2 shows the raw accuracy results. As the CHASY datasets have very few samples per class (16 per class, resulting in the largest task having a training set of 320 samples), it is easy to overfit. This few-sample regime is a key practical use case for continual learning, as it is essential to transfer information between tasks. In this regime, continual learning algorithms based on MAP inference overfit, resulting in poor performance. As GVCL-F is based on a Bayesian framework, it is not as adversely affected by the low sample count, achieving 90.9% accuracy on Easy-CHASY, compared to 82.6% for the best-performing MAP-based CL algorithm, HAT. Hard-CHASY tells a similar story: 69.1% compared to PathNet's 64.8%. Compared to the full joint training baselines, GVCL-F achieves nearly the same accuracy (Figure 3). The gap between GVCL-F and GVCL is larger for Easy-CHASY than for Hard-CHASY, as the task-specific adaptation that FiLM layers provide is more beneficial when tasks require contrasting features, as in Hard-CHASY. With Split-MNIST, GVCL-F also reaches the same performance as joint training; however, it is difficult to distinguish approaches on this benchmark, as many achieve near-maximal accuracy.

5.2 Split-CIFAR

The popular Split-CIFAR dataset, introduced in Zenke et al. (2017), has CIFAR10 as the first task, and then 5 tasks as disjoint 10-way classifications from the first 50 classes of CIFAR100, giving a total of 6 tasks. We use the same architecture as in other papers (Zenke et al., 2017; Pan et al., 2020). As with Easy-CHASY, jointly learning these tasks significantly outperforms networks separately trained on the tasks, indicating potential for forward and backward transfer in a continual learning algorithm. Results are in Figure 4. GVCL-F is able to achieve the same final accuracy as joint training with FiLM layers, achieving 80.0±0.5% and beating all baseline algorithms by at least 2%. This confirms that our algorithm performs well in larger settings as well as the previous smaller-scale benchmarks, with minimal forgetting. While the backwards transfer metric for many of the best-performing continual learning algorithms is near 0, GVCL-F has the highest forward transfer, achieving 8.5%.

GVCL consistently outperforms VCL, but unlike in the CHASY experiments, it does not outperform Online EWC. This also occurs in the Mixed Vision tasks considered next. Theoretically this should not happen, but GVCL's hyperparameter search found β = 0.2, which is far from the Online EWC limit of β → 0. We believe this is because optimizing the GVCL cost for small β is more challenging (see Appendix B). However, since intermediate β settings result in more pruning, FiLM layers then bring significant improvement."
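The transfer metrics reported in these experiments can be computed from an accuracy matrix as in the sketch below, where R[t, i] is the accuracy on task i after training through task t. The backward-transfer formula follows Lopez-Paz & Ranzato (2017); the forward-transfer convention shown is one reasonable reading of the definition in the text (gain of each task, when first learned, over a separately-trained baseline), and the numbers are illustrative.

import numpy as np

def backward_transfer(R):
    T = R.shape[0]
    return np.mean([R[T - 1, i] - R[i, i] for i in range(T - 1)])

def forward_transfer(R, separate_acc):
    # improvement on each task, when first learned, over training it alone
    T = R.shape[0]
    return np.mean([R[i, i] - separate_acc[i] for i in range(1, T)])

R = np.array([[0.95, 0.00, 0.00],
              [0.93, 0.92, 0.00],
              [0.92, 0.90, 0.94]])
print(backward_transfer(R), forward_transfer(R, separate_acc=[0.94, 0.90, 0.91]))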
}, { "heading": "5.3 MIXED VISION TASKS", "text": "We finally test on a set of mixed vision datasets, as in Serra et al. (2018). This benchmark consists of 8 image classification datasets with 10-100 classes and a range of dataset sizes, with the order of tasks randomly permuted between different runs. We use the same AlexNet architecture as in Serra et al. (2018). Average accuracies of the 8 tasks after continual training are shown in Figure 5.\nGVCL-F’s final accuracy matches that of HAT, with similar final performances of 80.0±1.2% and 80.3±1.0% for the two methods, respectively. Figure 5b shows the relative accuracy of the model after training on intermediate tasks compared to its final accuracy. A positive relative accuracy after t tasks means that the method performs better on the tasks seen so far than it does on the same tasks after seeing all 8 tasks (Appendix I contains a precise definition). HAT achieves its continual learning performance by compressing earlier tasks, hindering their performance in order to reserve capacity for later tasks. In contrast, GVCL-F attempts to maximize the performance for early tasks, but allows performance to gradually decay, as shown by the gradually decreasing relative accuracy in Figure 5b. While both strategies result in good final accuracy, one could argue that pre-compressing a network in anticipation of future tasks which may or may not arrive is an impractical real-world strategy, as the number of total tasks may be unknown a priori, and therefore one does not know how much to compress the network. The approach taken by GVCL-F is then more desirable, as it ensures good performance after any number of tasks, and frees capacity by “gracefully forgetting”.\nUncertainty calibration. As GVCL-F is based on a probabilistic framework, we expect it to have good uncertainty calibration compared to other baselines. We show this for the Mixed Vision tasks in Figure 6. Overall, the average Expected Calibration Error for GVCL-F (averaged over tasks) is 0.32%, compared to HAT’s 1.69%, with a better ECE on 7 of the 8 tasks. These results demonstrate that GVCL-F is generally significantly better calibrated than HAT, which can be extremely important in decision critical problems where networks must know when they are likely to be uncertain." }, { "heading": "5.4 RELATIVE GAIN FROM ADDING FILM LAYERS", "text": "In Section 3, we suggested that adding FiLM layers to VCL in particular would result in the largest gains, since it addresses issues specific to VI, and that FiLM parameter values were automatically best allocated based on the prior. In Section 5.4, we compare the relative gain of adding FiLM layers to VI-based approaches and Online EWC. We omitted HAT, since it already has per-task gating mechanisms, so FiLM layers would be redundant. We see that the gains from adding FiLM layers to Online EWC are limited, averaging 2.2% compared to over 10% for both VCL and GVCL. This suggests that the strength of FiLM layers is primarily in how they interact with variational methods for continual learning. As described in Section 3, with VI we do not need any special algorithm to encourage pruning and how to allocate resources, as they are done automatically by VI. This contrasts HAT, where specific regularizers and gradient modifications are necessary to encourage the use of FiLM parameters." 
}, { "heading": "6 CONCLUSIONS", "text": "We have developed a framework, GVCL, that generalizes Online EWC and VCL, and we combined it with task-specific FiLM layers to mitigate the effects of variational pruning. GVCL with FiLM layers outperforms strong baselines on a number of benchmarks, according to several metrics. Future research might combine GVCL with memory replay methods, or find ways to use FiLM layers when task ID information is unavailable." }, { "heading": "A LOCAL VS GLOBAL CURVATURE IN GVCL", "text": "In this section, we look at the effect of β on the approximation of local curvature found from optimizing the β-ELBO by analyzing its effect on a toy dataset. In doing so, we aim to provide intuition why different values of β might outperform β = 1. We start by looking at the equation of the fixed point of Σ.\nΣ−1T = 1\nβ ∇µT∇µTEqT (θ)[− log p(DT |θ)] + Σ −1 T−1. (5)\nWe consider the T = 1 case. We can interpret this as roughly measuring the curvature of log p(DT |θ) at different samples of θ drawn from the distribution qT (θ). Based on this equation, we know Σ−1T increases as β decreases, so samples from qT (θ) are more localized, meaning that the curvature is measured closer to the mean, forming a local approximation of curvature. Conversely, if β is larger, Σ−1T broadens and the approximation of curvature is on a more global scale. For simplicity, we write∇µT∇µTEqT (θ)[− log p(DT |θ)] as H̃T . To test this explanation of β, we performed β-VI on a simple toy dataset.\nWe have a true data generative distribution X ∼ N (0, 1), and we sample 1000 points forming the dataset, D. Our model is a generative model withX ∼ N (f(θ), σ20 = 30), with θ being the model’s only parameter and f(θ) an arbitrary fixed function. With β-VI, we aim to approximate p(θ|D) with q(θ) = N (θ;µ, σ2) with a prior p(θ) = N (θ; 0, 1). We choose three different equations for f(θ):\n1. f1(θ) = |θ|1.6 2. f2(θ) = 4 √ |θ|\n3. f3(θ) = 3 √ (|θ| − 0.5)3 + 0.4\nWe visualize log p(D|θ) for each of these three functions in Figure 7. Here, we see that the data likelihoods have very distinct shapes. f1 results in a likelihood that is flat locally but curves further away from the origin. f2 is the opposite: there is a cusp at 0 then flattens out. f3 is a mix, where at a very small scale it has high curvature, then flattens, then curves again. Now, we perform β-VI to get µ and σ2, for β ∈ {0.1, 1, 10}. We then have values for σ2, which acts as Σ−1T in Equation 5. We want to extract H̃T −1 from these values, so we perform the operation σ̃2 = β1\nσ2 −1 , which represents\nour estimate of the curvature of log p(D|θ) at the mean. This operation also “cancels” the scaling effect of β. We then plot these approximate log-likelihood functions log p̃(D|θ) = N (θ;µ, σ̃2) in Figure 8.\nFrom these figures, we see a clear trend: small values of β cause the approximate curvature to be measured locally while larger values cause it to be measured globally, confirming our hypothesis. Most striking is Figure 8c, where the curvature is not strictly increasing or decreasing further from the origin. Here, we see that the curvature first is high for β = 0.1, then flattens out for β = 1 then becomes high again for β = 10. Now imagine in continual learning our posterior for a parameter whose posterior looks like Figure 8a. Here, the parameter would be under-regularized with β = 1,\nso the parameter will drift far away, significantly affecting performance. 
Equally, if the posterior was like Figure 8b, then β = 1 would cause the parameter to be over-regularized, limiting model capacity that could in practice be freed. In practice we found that β values of 0.05 − 0.2 worked the best. We leave finding better ways of quantifying the posterior's variable curvature, and ways of selecting appropriate values of β, as future work." }, { "heading": "B CONVERGENCE TO ONLINE-EWC ON A TOY EXAMPLE", "text": "Here, we demonstrate convergence of GVCL to Online EWC for small β. In this problem, we deal with 2-d logistic regression on a toy dataset consisting of separated clusters, shown in Figure 9. The first task is separating the red/blue clusters; the second is separating the yellow/green clusters. Blue and green are the first class, and red and yellow are the second. Our model is given by the equation

p(y_i = 1 | w, b, x_i) = \sigma(w^\top x_i + b),

where x_i are our datapoints and w and b are our parameters. y_i = 1 means class 2 (and y_i = 0 means class 1). x is 2-dimensional, so we have a total of 3 parameters.

Next, we ran GVCL with decreasing values of β and compared the resulting values of w and b after the second task to the solution given by Online EWC. For both cases, we set λ = 1. We used a unit normal prior on both w and b, and our approximating distribution was a fully factorized Gaussian. We ran this experiment for 5 random seeds (of the parameters, not the clusters) and plotted the results.

Figure 10 shows the result. Evidently, the values of the parameters approach those of Online EWC as we decrease β, in line with our theory. However, it is worth noting that obtaining this convergent behaviour required very long optimization runs. For the lowest β value, it took 17 minutes to converge, compared to 1.7 minutes for β = 1. A small learning rate of 1e-4 with 100000 iteration steps was necessary for the smallest β = 1e-4. If the optimization process was run for less time, or too large a learning rate was used, we would observe convergent behaviour for the first few values of β, but the smallest values of β would result in completely different parameter values.

This shows that while, in theory, GVCL should approach Online EWC for small β, this limit is extremely hard to achieve in practice. Given that it takes so long to achieve convergent behaviour on a model with 3 parameters, it is unsurprising that we were not able to achieve the same performance as Online EWC for our neural networks, and it explains why GVCL, despite in theory encompassing Online EWC, can sometimes perform worse.
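The toy experiment just described can be sketched as follows. The cluster placement, optimizer settings, and Monte Carlo sample count are illustrative, and the Online EWC comparison is omitted for brevity; the key elements are the factorized Gaussian posterior over (w, b), the β-weighted KL, and the reuse of the task-1 posterior as the task-2 prior.

import torch

def kl_diag(mu, logvar, mu_p, logvar_p):
    vq, vp = logvar.exp(), logvar_p.exp()
    return 0.5 * ((vq + (mu - mu_p) ** 2) / vp + logvar_p - logvar - 1).sum()

def fit_task(X, y, prior, beta, steps=20_000, lr=1e-3, samples=8):
    mu_p, logvar_p = prior
    mu = mu_p.clone().requires_grad_(True)
    logvar = logvar_p.clone().requires_grad_(True)
    opt = torch.optim.Adam([mu, logvar], lr=lr)
    for _ in range(steps):
        eps = torch.randn(samples, mu.numel())
        theta = mu + (0.5 * logvar).exp() * eps            # reparameterized samples
        logits = X @ theta[:, :2].T + theta[:, 2]          # theta = (w1, w2, b)
        ll = -torch.nn.functional.binary_cross_entropy_with_logits(
            logits, y[:, None].expand_as(logits), reduction="sum") / samples
        loss = -(ll - beta * kl_diag(mu, logvar, mu_p, logvar_p))
        opt.zero_grad(); loss.backward(); opt.step()
    return mu.detach(), logvar.detach()

torch.manual_seed(0)
X1 = torch.cat([torch.randn(50, 2) - 2, torch.randn(50, 2) + 2])   # task-1 clusters
y1 = torch.cat([torch.zeros(50), torch.ones(50)])
prior = (torch.zeros(3), torch.zeros(3))                           # unit normal prior
q1 = fit_task(X1, y1, prior, beta=1e-2)
# the second task reuses q1 as its prior: q2 = fit_task(X2, y2, q1, beta=1e-2)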
From Equation 8, we see that as β → 0, the precision grows indefinitely, so qT (θ) approaches a delta function centered at its mean. We give a more precise explanation of this argument in Appendix C.1.\nWe have\nΣ−1T = − 1 β ∇µT∇µT log p(DT |θ = µT ) + Σ−1T−1\nΣ−1T = 1\nβ HT + Σ\n−1 T−1, (9)\nwhere HT is the Hessian of the T th dataset log-likelihood. This recursion of Σ−1T gives\nΣ−1T = 1\nβ T∑ t=1 Ht + Σ −1 0 .\nNow, optimizing the β-ELBO for µT (ignoring terms that do not depend on µT ):\nβ-ELBO = Eθ∼q(θ)[log p(D|θ)]− β 2 (µT − µT−1)>Σ−1T−1(µT − µT−1) (10)\n= log p(D|θ = µT )− 1\n2 (µT − µT−1)> ( T−1∑ t=1 Ht + βΣ −1 0 ) (µT − µT−1). (11)\nWhich is the exact optimization problem for Laplace Propagation (Smola et al., 2003). If we note that HT ≈ NTFT (Martens, 2020), where NT is the number of samples in the T th dataset and FT is the Fisher information matrix, we recover Online EWC with λ = 1 when N1 = N2 = ... = NT (with γ = 1).\nC.1 CLARIFICATION OF THE DELTA-FUNCTION ARGUMENT\nIn C, we argued,\nΣ−1T = 1\nβ ∇µT∇µTEqT (θ)[− log p(DT |θ)] + Σ −1 T−1\n≈ 1 β HT + Σ −1 T−1\nfor small β. We argued that for small β, q(θ) collapsed to its mean and it is safe to treat the expectation as sampling only from the mean. In this section, we show that this argument is justified. Lemma 1. If q(θ) has mean and covariance parameters µ and Σ, and Σ−1 = 1β∇µ∇µEθ∼q(θ)[f(θ)] + C, C = O( 1 β ), then for small β, Σ\n−1 ≈ 1βHµ + C, where Hµ is the Hessian of f(θ) evaluated at µ, assuming Hµ = O(1)\nProof. We first assume that f(θ) admits a Taylor expansion around µ. For notational purposes, we define,\nTk1,...,kn ∣∣∣ θ=µ = ∂f ∂θ(k1) . . . ∂θ(kn) ∣∣∣ θ=µ\nFor our notation, upper indices in brackets indicate vector components (not powers), and lower indices indicate covector components. Note that, Hµ,i,j = Ti,j ∣∣∣ θ=µ . 5\nThen, a Taylor expansion centered at µ has the form\nf(θ) = f(µ) + ∞∑ n=1 1 n! Tk1,...,kn ∣∣∣ θ=µ (θ − µ)(k1) . . . (θ − µ)(kn)\n5In this case, the µ in Hµ,i,j refers to the Hessian evaluated at µ, while i, j refers to the indices\nWhere we use Einstein notation, so\nTk1,...,kn ∣∣∣ θ=µ (θ − µ)(k1) . . . (θ − µ)(kn) = D∑\nk1,...,kn=1\nTk1,...,kn ∣∣∣ θ=µ (θ − µ)(k1) . . . (θ − µ)(kn)\n(12)\nWith D the dimension of θ. To denote the central moments of q(θ), we define µ̃(k1,...,kn) := Eθ∼q(θ) [ (θ − µ)(k1) . . . (θ − µ)(kn) ] These moments can be computed using Isserlis’ theorem. Notably, for a Gaussian, if n is odd, µ̃(k1,...,kn) = 0\nNow, we can compute our expectation as an infinite sum:\n∇µ∇µEθ∼q(θ)[f(θ)] = ∇µ∇µEθ∼q(θ) [ f(µ) +\n∞∑ n=1 1 n! Tk1,...,kn ∣∣∣ θ=µ (θ − µ)(k1) . . . (θ − µ)(kn) ]\n= ∇µ∇µ [ f(µ) +\n∞∑ n=1 1 n! Tk1,...,kn ∣∣∣ θ=µ µ̃(k1,...,kn) ]\n= ∇µ∇µ [ f(µ) +\n∞∑ n=1 1 2n! Tk1,...,k2n ∣∣∣ θ=µ µ̃(k1,...,k2n) ] (odd moments are 0)\n= A for notational simplicity\nWe can look at individual components of A:\nAi,j = ∂ ∂µ(i) ∂ ∂µ(j)\n[ f(µ) +\n∞∑ n=1 1 2n! Tk1,...,k2n ∣∣∣ θ=µ µ̃(k1,...,k2n)\n]\n= Ti,j ∣∣∣ θ=µ + ∞∑ n=1 1 2n! Ti,j,k1,...,k2n ∣∣∣ θ=µ µ̃(k1,...,k2n)\nNow we can insert this into our original equation.\nΣ−1 = 1\nβ ∇µ∇µEθ∼q(θ)[f(θ)] + C\nΣ−1 = 1\nβ A+ C\nΣ−1i,j = 1\nβ Ai,j + Ci,j looking at individual indices\nΣ−1i,j︸︷︷︸ O( 1β ) = 1 β\n( Ti,j ∣∣∣ θ=µ︸ ︷︷ ︸\nO(1)\n+ ∞∑ n=1 1 2n! Ti,j,k1,...,k2n ∣∣∣ θ=µ\nµ̃(k1,...,k2n)︸ ︷︷ ︸ O(β) ) + Ci,j︸︷︷︸ O( 1β )\nNow we assumed thatHµ isO(1) (so Ti,j ∣∣∣ θ=µ is too), which means that Σ−1i,j must be at leastO( 1 β ). If Σ−1 = O( 1β ), then Σ = O(β). 
From Isserlis’ theorem, we know that µ̃ (k1,...,k2n) is composed of\nthe product of n elements of Σ, so µ̃(k1,...,k2n) = O(βn). Ti,j,k1,...,k2n ∣∣∣ θ=µ is constant with respect to β, so is O(1). Hence, the summation is O(β), which for small β is negligible compared to the O(1) term Ti,j ∣∣∣ θ=µ , so can therefore be ignored. Then, keeping only O( 1β ) terms,\nO( 1β )︷︸︸︷ Σ−1i,j = 1\nβ\n( O(1)︷ ︸︸ ︷ Ti,j ∣∣∣ θ=µ + O(β)︷ ︸︸ ︷ ∞∑ n=1 1 2n! Ti,j,k1,...,k2n ∣∣∣ θ=µ µ̃(k1,...,k2n) ) + O( 1β )︷︸︸︷ Ci,j\nO( 1β )︷︸︸︷ Σ−1i,j = O( 1β )︷ ︸︸ ︷ 1\nβ Ti,j ∣∣∣ θ=µ +\nO(1)︷ ︸︸ ︷ 1\nβ ( ∞∑ n=1 1 2n! Ti,j,k1,...,k2n ∣∣∣ θ=µ µ̃(k1,...,k2n) ) + O( 1β )︷︸︸︷ Ci,j\n≈ 1 β Ti,j ∣∣∣ θ=µ + Ci,j\n= 1\nβ Hµ,i,j + Ci,j\nΣ−1 ≈ 1 β Hµ + C\nC.2 CORRESPONDING GVCL’S λ AND ONLINE EWC’S λ\nWe use DKLλ̃ in place of DKL, with DKLλ̃ defined as\nDKLλ̃(qT ‖qT−1) = 1\n2\n( (µT − µT−1)>Σ̃−1T−1,λµT − µT−1) + Tr(Σ −1 T−1ΣT )\n+ log |ΣT−1| − d− log |ΣT | ) ,\nwith\nΣ̃−1T,λ := λ\nβ T∑ t=1 Ht + Σ −1 0 = λ(Σ −1 T − Σ −1 0 ) + Σ −1 0 .\nNow, the fixed point for ΣT is still given by Equation 9, but the β-ELBO for for terms involving µT has the form,\nβ-ELBO = Eθ∼q(θ)[log p(D|θ)]− β 2 (µT − µT−1)>Σ̃−1T−1,λ(µT − µT−1)\n= log p(D|θ = µT )− 1\n2 (µT − µT−1)>\n( λ\nT∑ t=1 Ht + βΣ −1 0\n) (µT − µT−1),\nwhich upweights the quadratic terms dependent on the data (and not the prior), similarly to λ in Online EWC.\nC.3 RECOVERING γ FROM TEMPERING\nIn order to recover λ, we used the KL-divergence between tempered priors and posteriors qλT−1 and qλT . Recovering γ can be done using the same trick, except we temper the posterior to q γλ T :\nDKL(q λ T ‖q γλ T−1) = 1 2 ( (µT − µT−1)>λΣ−1T−1(µT − µT−1)\n+ Tr(γλΣ−1T−1λ −1ΣT ) + log |λ−1ΣT−1| |(γλ)−1ΣT | − d ) = 12 ( (µT − µT−1)>λΣ−1T−1(µT − µT−1) + γTr(Σ −1 T−1ΣT )− log |ΣT | ) + cons. = DKLλ,γ(qT ‖qT−1)\nWe can apply the same λ to λ̃ as before to get DKLλ̃,γ(qT ‖qT−1). Plugging this into the β-ELBO and solving yields the recursion for ΣT to be\nΣ−1T = 1\nβ HT + γΣ\n−1 T−1,\nwhich is exactly that of Online EWC.\nC.4 GVCL RECOVERS THE SAME APPROXIMATION OF FT AS ONLINE EWC\nThe earlier analysis dealt with full rank ΣT . In practice, however, ΣT is rarely full rank and we deal with approximations of ΣT . In this subsection, we consider diagonal ΣT , like Online EWC, which in practice uses a diagonal approximation of FT . The way Online EWC approximates this diagonal is by matching diagonal entries of FT . There are many ways of producing a diagonal approximation of a matrix, for example matching diagonals of the inverse matrix is also valid, depending on the metric we use. Here, we aim to show that that the diagonal approximation of ΣT that is produced when Q is the family of diagonal covariance Gaussians is the same as the way Online EWC approximates FT , that is, diagonals of Σ−1T,approx match diagonals of Σ −1 T,true, i.e. we match the diagonal precision entries, not the diagonal covariance entries.\nLet ΣT,approx = diag(σ21 , σ 2 2 , ..., σ 2 d), with d the dimension of the matrix. Because we are performing VI, we are optimizing the forwards KL divergence, i.e. DKL(qapprox||qtrue). 
Therefore, ignoring terms that do not depend on ΣT,approx,\nDKL(qapprox||qtrue) = 1\n2 Tr(ΣT,approxΣ−1T,true)−\n1 2 log |ΣT,approx|+ (constants wrt ΣT,approx)\n= 1\n2 d∑ i=1 (ΣT,approxΣ −1 T,true)i,i − 1 2 d∑ i=1 log σ2i\n= 1\n2 d∑ i=1 ( σ2i (Σ −1 T,true)i,i)− log σ 2 i ) .\nOptimizing wrt σ2i :\n∂DKL(qapprox||qtrue) ∂σ2i = 0 = 1 2\n( (Σ−1T,true)i,i − 1\nσ2i ) ⇒ σ2i = 1\n(Σ−1T,true)i,i .\nSo we have that diagonals of Σ−1T,approx match diagonals of Σ −1 T,true.\nC.5 GVCL RECOVERS THE SAME APPROXIMATION OF HT AS SOLA\nSOLA approximates the Hessian with a rank-restricted matrix H̃ (Yin et al., 2020). We first consider a relaxation of this problem with full rank, then consider the limit when we reduce this relaxation.\nBecause we are concerned with limiting β → 0, it is sufficient to consider Σ−1true as H , the true Hessian. Because H is symmetric (and assuming it is positive-semi-definite), we can also write H as H = V DV > = ∑p i=1 λixix > i , with D, and V be the diagonal matrix of eigenvalues and a unitary matrix of eigenvectors, respectively. These eigenvalues and eigenvectors are λi and xi, respectively, and p the dimension of H .\nFor H̃ , we first consider full-rank matrix which becomes low-rank as δ → 0:\nH̃ = k∑ i=1 λ̃ix̃ix̃ > i + p∑ j=k+1 δx̃j x̃ > j\nThis matrix has λ̃i, 1 ≤ i ≤ k as its first k eigenvalues and δ as its remaining. We also set x̃>i x̃i = 1 and x̃>i x̃j = 0, i 6= j. With KL minimization, we aim to minimize (up to a constant and scalar factor),\nKL = Tr(ΣapproxΣ−1true)− log |Σapprox|\nIn our case, this is Equation 13, which we can further expand as,\nKL = Tr(H̃−1H)− log |H̃−1| (13)\n= Tr k∑ i=1 1 λ̃i x̃ix̃ > i + p∑ j=k+1 1 δ x̃j x̃ > j H + k∑ i=1 log(λ̃i) + p∑ j=k+1 log δ (14)\n= Tr ( k∑ i=1 1 λi x̃ix̃ > i H ) + Tr p∑ j=k+1 1 δ x̃j x̃ > j H + k∑ i=1 log(λ̃i) + p∑ j=k+1 log δ (15)\n= k∑ i=1 1 λi x̃>i Hx̃i + p∑ j=k+1 1 δ x̃>j Hx̃j + k∑ i=1 log(λ̃i) + p∑ j=k+1 log δ (16)\n(17)\nTaking derivatives wrt λ̃i, we have:\n∂KL ∂λ̃i = 0 = − 1 λ̃2i x̃>i Hx̃i + 1 λ̃i (18)\n⇒ λ̃i = x̃>i Hx̃i (19)\nWhich when put into Equation 16,\nKL = k∑ i=1 1 λi x̃>i Hx̃i + p∑ j=k+1 1 δ x̃>j Hx̃j + k∑ i=1 log(λ̃i) + p∑ j=k+1 log δ (20)\n= k∑ i=1 x̃>i Hx̃i x̃>i Hx̃i + p∑ j=k+1 1 δ x̃>j Hx̃j + k∑ i=1 log(λ̃i) + p∑ j=k+1 log δ (21)\n= k + p∑ j=k+1 1 δ x̃>j Hx̃j + k∑ i=1 log(λ̃i) + p∑ j=k+1 log δ (22)\n= 1\nδ p∑ j=k+1 x̃>j Hx̃j + k∑ i=1 log(λ̃i) (removing constants) (23)\n= 1\nδ p∑ j=k+1 x̃>j Hx̃j + k∑ i=1 log(x̃>i Hx̃i) (24)\nNow we need to consider the constraints x̃>i x̃i = 1 and x̃ > i x̃j = 0, i 6= j by adding Lagrange multipliers to our KL cost,\nL = 1\nδ p∑ j=k+1 x̃>j Hx̃j + k∑ i=1 log(x̃>i Hx̃i)− k∑ i=1 φi,i(x̃ > i x̃i − 1)− ∑ i,j,i 6=j φi,j x̃ > i x̃j (25)\nTaking derivatives wrt x̃i:\n∂L ∂x̃i = 0 = 2Hx̃i x̃>i Hx̃i − 2φi,ix̃i − 2 ∑ i,j 6=i φi,j x̃j (26)\n∑ i,j 6=i φi,j x̃j = ( H x̃>i Hx̃i − φi,iIp ) x̃i (27)\nIn Equation 27, we have x̃i expressed as a linear combination of x̃j , j 6= i, but x̃i and x̃j are orthogonal, so x̃i cannot be expressed as such, so φi,j = 0, i 6= j, and,\nHx̃i x̃>i Hx̃i = φi,ix̃i (28)\nMeaning x̃i are eigenvectors of H for 1 ≤ i ≤ k. We can also use the same Lagrange multipliers to show that x̃i for k + 1 ≤ i ≤ p are also eigenvectors of H . This means that our cost,\nKL = 1\nδ p∑ j=k+1 x̃>j Hx̃j + k∑ i=1 log(x̃>i Hx̃i) (29)\n= 1\nδ p∑ j=k+1 κ̃j + k∑ i=1 log(κ̃i) (30)\nwhere the set (κ̃1, κ̃2, ..., κ̃p) is a permutation of (λ1, λ2, ..., λp) and κ̃i = λ̃i for 1 ≤ i ≤ k. I.e., H̃ shares k eigenvalues with H , and the rest are δ. 
It now remains to determine which eigenvalues are shared and which are excluded.

Consider only two eigenvalues $\lambda_i, \lambda_j$, with $\lambda_i > \lambda_j \ge 0$, and let $r = \frac{\lambda_i}{\lambda_j}$. The relative cost of excluding $\lambda_i$ from the set $\{\tilde{\kappa}_1, \tilde{\kappa}_2, \dots, \tilde{\kappa}_k\}$ compared to including it is

$\text{Relative Cost} = \frac{\lambda_i - \lambda_j}{\delta} - \log\frac{\lambda_i}{\lambda_j} = \frac{\lambda_i\left(1 - \frac{1}{r}\right)}{\delta} - \log r.$

If the relative cost is positive, then including $\lambda_i$ as one of the eigenvalues of $\tilde{H}$ is the better choice. Now, solving the inequality,

$\text{Relative Cost} > 0 \;\Longleftrightarrow\; \frac{\lambda_i\left(1 - \frac{1}{r}\right)}{\delta} - \log r > 0 \;\Longleftrightarrow\; \lambda_i > \frac{\delta \log r}{1 - \frac{1}{r}},$

which, for sufficiently small δ, is always true because r > 1. Thus, it is always better to swap an included and an excluded eigenvalue if the excluded one is larger. This means that $\tilde{H}$ has the k largest eigenvalues of H, and we already showed that it shares the same eigenvectors. This maximum eigenvalue/eigenvector pair selection is exactly the procedure used by SOLA." }, { "heading": "D COLD POSTERIOR VCL AND FURTHER GENERALIZATIONS", "text": "The use of KL-reweighting is closely related to the idea of "cold posteriors," in which $p_T(\theta|\mathcal{D}) \propto p(\theta|\mathcal{D})^{\frac{1}{\tau}}$. Finding this cold posterior is equivalent to finding the optimal q distribution maximizing the τ-ELBO:

$\tau\text{-ELBO} := \mathbb{E}_{\theta \sim q(\theta)}[\log p(\mathcal{D}|\theta) + \log p(\theta) - \tau \log q(\theta)],$

when Q is the set of all possible distributions of θ. This objective is the same as the standard ELBO with only the entropy term reweighted, and contrasts with the β-ELBO, where both the entropy and prior likelihood terms are reweighted. Here, β acts similarly to T (the temperature, not to be confused with the task number). This relationship naturally leads to the transition diagram shown in Figure 11, in which we can see that we can easily transition between posteriors at different temperatures by optimizing either the β-ELBO or the τ-ELBO, or by tempering the posterior.

When Q contains all possible distributions, moving along any path results in the exact same distribution; for example, optimizing the τ-ELBO then tempering is the same as directly optimizing the ELBO. However, in the case where Q is limited, this transition is not exact, and the resulting posterior is path dependent. In fact, each possible path represents a different valid method for performing continual learning. Standard VCL works by traversing the horizontal arrows, directly optimizing the ELBO, while an alternative scheme of VCL would optimize the τ-ELBO to form cold posteriors, then heat the posterior before optimizing the τ-ELBO for a new task. Inference can be done in either the warm or the cold state. Note that for Gaussians, heating the posterior is just a matter of scaling the covariance matrix by a constant factor $\frac{\tau_{\text{after}}}{\tau_{\text{before}}}$.

While warm posteriors generated through this two-step procedure are not optimal under the ELBO, when Q is limited they may perform better for continual learning. Similar to Equation 2, the optimal Σ when optimizing the τ-ELBO is given by

$\Sigma_T^{-1} = \frac{1}{\tau}\sum_{t=1}^{T} \tilde{H}_t + \frac{1}{\tau}\Sigma_0^{-1},$

where $\tilde{H}_t$ is the approximate curvature at a specific value of τ for task t, which coincides with the true Hessian for τ → 0, as with the β-ELBO. Here, both the prior and the data-dependent components are scaled by $\frac{1}{\tau}$, in contrast to Equation 2, where only the data-dependent component is reweighted. As discussed in Section 2.2 and further explored in Appendix A, this leads to a different scale of the quadratic approximation, which may lend itself better to continual learning.
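As a small concrete illustration of the Gaussian tempering step in this two-step procedure (a sketch of our own; the function and variable names are illustrative), heating or cooling only rescales the covariance:

```python
import numpy as np

# Tempering a Gaussian posterior q(theta) = N(mu, Sigma): moving the density
# from temperature tau_before to tau_after leaves the mean unchanged and
# rescales the covariance by tau_after / tau_before, as noted above.
def temper_gaussian(mu, Sigma, tau_before, tau_after):
    return mu, Sigma * (tau_after / tau_before)

# e.g., heat a cold posterior (tau = 0.1) back to the warm state (tau = 1)
mu, Sigma = np.zeros(3), 0.5 * np.eye(3)
mu_warm, Sigma_warm = temper_gaussian(mu, Sigma, tau_before=0.1, tau_after=1.0)
```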
The tempering machinery above also yields a second way to recover γ in Online EWC: first optimize the β-ELBO with β = γ, then temper by a factor of $\frac{1}{\gamma}$ (i.e. increasing the temperature when γ < 1)." }, { "heading": "E MAP DEGENERACY WITH FILM LAYERS", "text": "Here we describe how training FiLM layers with MAP training leads to degenerate values for the weights and scales, whereas with VI training no degeneracy occurs. For simplicity, consider only the weights leading into a single node and let there be d of them, i.e. θ has dimension d. Because we only have one node, our scale parameter γ is a single variable.

For MAP training, we have the loss function $\mathcal{L} = -\log p(\mathcal{D}|\theta, \gamma) + \frac{\lambda}{2}\|\theta\|^2$, with $\mathcal{D}$ the dataset and λ the L2 regularization hyperparameter. Note that $p(\mathcal{D}|\theta, \gamma) = p(\mathcal{D}|c\theta, \frac{1}{c}\gamma)$, hence we can scale θ arbitrarily without affecting the likelihood, so long as γ is scaled inversely. If $c < 1$, then $\frac{\lambda}{2}\|\theta\|^2 < \frac{\lambda}{2}\|\frac{1}{c}\theta\|^2$, so scaling θ down while scaling γ up correspondingly decreases the L2 penalty without changing the likelihood. Therefore the optimal setting of the scale parameter γ is arbitrarily large, while θ shrinks to 0.

At a high level, VI training (with Gaussian posteriors and priors) does not have this issue because the KL-divergence penalizes the variances of the parameters for deviating from the prior, in addition to the mean parameters, whereas MAP training only penalizes the means. Unlike with MAP training, if we downscale the weights, we also downscale their variances, which increases the KL-divergence. The variances cannot revert to the prior either, as when they are up-scaled by the FiLM scale parameter, the noise would increase, affecting the log-likelihood component of the ELBO. Therefore, there exists an optimal amount of scaling which balances the mean-squared penalty component of the KL-divergence and the variance terms.

Mathematically, we can derive this optimal scale. Consider VI training with a Gaussian variational distribution and prior, where our approximate posterior q(θ) has mean and variance µ and Σ, and our prior p(θ) has parameters $\mu_0$ and $\Sigma_0$. First consider the scenario without FiLM layers. We have the loss function $\mathcal{L} = -\mathbb{E}_{\theta \sim q(\theta)}\log p(\mathcal{D}|\theta) + D_{\mathrm{KL}}(q(\theta)\|q_0(\theta))$. For multivariate Gaussians,

$D_{\mathrm{KL}}(q(\theta)\|p(\theta)) = \frac{1}{2}\big(\log|\Sigma_0| - \log|\Sigma| - d + \mathrm{Tr}(\Sigma_0^{-1}\Sigma) + (\mu - \mu_0)^\top\Sigma_0^{-1}(\mu - \mu_0)\big).$

Now consider another distribution q′(θ), with mean and variance parameters cµ and c²Σ. If q′(θ) is paired with FiLM scale parameter γ set to $\frac{1}{c}$, the log-likelihood component is unchanged:

$\mathbb{E}_{\theta \sim q(\theta)}\log p(\mathcal{D}|\theta) = \mathbb{E}_{\theta \sim q'(\theta)}\log p(\mathcal{D}|\theta, \gamma = \tfrac{1}{c}),$

with γ being our FiLM scale parameter and $p(\mathcal{D}|\theta, \gamma)$ representing a model with FiLM scale layers. Now consider $D_{\mathrm{KL}}(q'(\theta)\|q_0(\theta))$, and optimize c with µ and Σ fixed:

$D_{\mathrm{KL}}(q'(\theta)\|p(\theta)) = \frac{1}{2}\big(\log|\Sigma_0| - \log|c^2\Sigma| - d + \mathrm{Tr}(\Sigma_0^{-1}c^2\Sigma) + (c\mu - \mu_0)^\top\Sigma_0^{-1}(c\mu - \mu_0)\big)$
$= \frac{1}{2}\big(\log|\Sigma_0| - \log|\Sigma| - 2d\log c - d + c^2\,\mathrm{Tr}(\Sigma_0^{-1}\Sigma) + (c\mu - \mu_0)^\top\Sigma_0^{-1}(c\mu - \mu_0)\big),$

$\frac{\partial D_{\mathrm{KL}}}{\partial c}\Big|_{c=c^*} = 0 = -\frac{d}{c^*} + c^*\,\mathrm{Tr}(\Sigma_0^{-1}\Sigma) + (c^*\mu - \mu_0)^\top\Sigma_0^{-1}\mu$
$0 = -d + c^{*2}\,\mathrm{Tr}(\Sigma_0^{-1}\Sigma) + c^{*2}\mu^\top\Sigma_0^{-1}\mu - c^*\mu_0^\top\Sigma_0^{-1}\mu$
$0 = c^{*2}\big(\mathrm{Tr}(\Sigma_0^{-1}\Sigma) + \mu^\top\Sigma_0^{-1}\mu\big) - c^*\mu_0^\top\Sigma_0^{-1}\mu - d$
$\Rightarrow\ c^* = \frac{\mu_0^\top\Sigma_0^{-1}\mu \pm \sqrt{(\mu_0^\top\Sigma_0^{-1}\mu)^2 + 4d\big(\mathrm{Tr}(\Sigma_0^{-1}\Sigma) + \mu^\top\Sigma_0^{-1}\mu\big)}}{2\big(\mathrm{Tr}(\Sigma_0^{-1}\Sigma) + \mu^\top\Sigma_0^{-1}\mu\big)}.$

Also note that c = 0 results in an infinitely large KL-divergence, so there is a barrier at c = 0; i.e., if optimized through gradient descent, c should never change sign.
Furthermore, note that

$\frac{\partial^2 D_{\mathrm{KL}}}{\partial c^2} = \frac{d}{c^2} + \mathrm{Tr}(\Sigma_0^{-1}\Sigma) + \mu^\top\Sigma_0^{-1}\mu > 0,$

so the KL-divergence is convex with respect to c, $c^*$ is a minimizer of $D_{\mathrm{KL}}$, and therefore $D_{\mathrm{KL}}(q(\theta)\|p(\theta)) \ge D_{\mathrm{KL}}(q'(\theta)\|p(\theta))\big|_{c=c^*}$, which implies the optimal value of the FiLM scale parameter γ is $\frac{1}{c^*}$. While no formal data was collected, we observed that the scale parameters do in fact reach values very close to this optimal scale after training." }, { "heading": "F CLUSTERING OF FILM PARAMETERS", "text": "In this section, we test the interpretability of the learned FiLM parameters. Such clustering has been done in the past with FiLM parameters, as well as with node-wise uncertainty parameters. One would intuitively expect that tasks from similar domains would find similar features salient, and thus share similar FiLM parameters. To test this hypothesis, we took the 8 mixed vision tasks from Section 5.3 and split each task into multiple 5-way classification tasks, so that there were many tasks from similar domains. For example, CIFAR100, which originally had 100 classes, became 20 5-way classification tasks, Trafficsigns became 8 tasks (7 5-way and 1 8-way), and MNIST became 2 (2 5-way). Next, we trained the same architecture used in Section 5.3, except jointly on all 58 resulting tasks. Joint training was chosen over continual learning to avoid artifacts which would arise from task ordering. Figure 12 shows that the resulting scale and shift parameters can be clustered, and that FiLM parameters which arise from the same base task cluster together. As in Achille et al. (2019), this could likely be used as a means of knowing which tasks to learn continually and which tasks to separate (i.e. tasks from the same cluster would likely benefit from joint training, while tasks from different ones should be trained separately); however, we did not explore this idea further." }, { "heading": "G HOW FILM LAYERS INTERACT WITH PRUNING", "text": "In Section 3, we discussed the problem of pruning in variational continual learning and how it prevents nodes from becoming reactivated. To reiterate, pruning broadly occurs in three steps:

1. Weights incoming to a node begin to revert to the prior distribution

2. Noise from these high-variance weights affects the likelihood term in the ELBO

3. To prevent this noise, the bias concentrates at a negative value so that the node is cut off by the ReLU activation

Later tasks are then initialized with this negative, low-variance bias, meaning that they have a difficult time reactivating the node without incurring a high prior cost. This results in the effect shown in Figure 1, where after the first task, effectively no more nodes are reactivated. The effect is further exacerbated with larger values of β, where the pruning effect is stronger. Increasing λ worsens this as well, as increasing the quadratic cost further prevents already low-variance negative biases from moving.

We verify that this mechanism is indeed the cause of the limited capacity use by visualizing the posteriors over the weights and biases entering a node in the first convolutional layer for a network trained on Easy-CHASY (Figure 13). Here, we see that biases in pruned nodes do indeed concentrate at negative values when there are no FiLM layers. In contrast, biases in models with FiLM layers are able to revert to their prior, because the FiLM parameters perform the pruning." }, { "heading": "H RELATED WORK", "text": "Regularization-based continual learning. Many algorithms attempt to regularize network parameters based on a metric of importance.
The most directly comparable algorithms to GVCL are EWC (Kirkpatrick et al., 2017), Online EWC (Schwarz et al., 2018), and VCL (Nguyen et al., 2018). EWC measures importance based on the Fisher information matrix, while VCL uses an approximate posterior covariance matrix as an importance measure. Online EWC slightly modifies EWC so that there is only a single regularizer based on the cumulative sum of Fisher information matrices. Lee et al. (2017) proposed IMM, an extension of EWC which merges posteriors based on their Fisher information matrices. Ritter et al. (2018) and Yin et al. (2020) both aim to approximate the Hessian using either Kronecker-factored or low-rank forms, using the Laplace approximation to form approximate posteriors of parameters. These methods all use second-order approximations of the loss. Ahn et al. (2019), like us, use regularizers based on the ELBO, but measure importance on a per-node basis rather than a per-weight one. SI (Zenke et al., 2017) measures importance using "Synaptic Saliency," as opposed to methods based on approximate curvature.

Architectural approaches to continual and meta-learning. This family of methods modifies the standard neural architecture by adding either parallel or series components to the network. Progressive Neural Networks adds a parallel column network for every task. PathNet (Fernando et al., 2017) can be interpreted as a parallel-network based algorithm, but rather than growing the model size over time, the model size remains fixed while paths between layer columns are optimized. FiLM parameters can be interpreted as adding series components to a network, and have been a mainstay in the multi-task and meta-learning literature. Requeima et al. (2019) use hypernetworks to amortize FiLM parameter learning, an approach which has been shown to be capable of continual learning. Architectural approaches are often used in tandem with regularization-based approaches, such as in HAT (Serra et al., 2018), which uses per-task gating parameters alongside a compression-based regularizer. Adel et al. (2020) propose CLAW, which also uses variational inference alongside per-task parameters, but requires a more complex meta-learning based training procedure involving multiple splits of the dataset. GVCL with FiLM layers adds to this list of hybrid architectural-regularization based approaches.

Cold posteriors and likelihood-tempering. As mentioned in Section 2, likelihood-tempering (or KL-reweighting) has been empirically found to improve performance when using variational inference for Bayesian neural networks across a wide range of contexts (Osawa et al., 2019; Zhang et al., 2018). Cold posteriors are closely related to likelihood tempering, except they temper the full posterior rather than only the likelihood term, and often empirically outperform Bayesian posteriors when using MCMC sampling (Wenzel et al., 2020). From an information-theoretic perspective, KL-reweighted ELBOs have also been studied as compression (Achille et al., 2020). Achille et al. (2019), like us, consider a limiting case of β and use it to measure parameter saliency, but use this information to create a task embedding rather than for continual learning. Outside of the Bayesian neural network context, values of β > 1 have also been explored (Higgins et al., 2017), and more generally different values of β trace out different points on a rate-distortion curve for VAEs (Alemi et al., 2018)."
}, { "heading": "I EXPERIMENT DETAILS", "text": "I.1 REPORTED METRICS\nAll reported scores and figures present the mean and standard deviation across 5 runs of the algorithm with a different network initialization. For Easy-CHASY and Hard-CHASY, train/test splits are also varied across iterations. For the Mixed Vision tasks, task permutation of the 8 tasks is also randomized between iterations.\nLet the matrix Ri,j represent the performance of jth task after the model was trained on the ith task. Furthermore, let Rindj be the mean performance of the jth for a network trained only on that task and let the total number of tasks be T . Following Lopez-Paz & Ranzato (2017) and Pan et al. (2020), we define\nAverage Accuracy (ACC) = 1\nT T∑ j=1 RT,j ,\nForward Transfer (FWT) = 1\nT T∑ j=1 Rj,j −Rindj ,\nBackward Transfer (BWT) = 1\nT T∑ j=1 RT,j −Rj,j .\nNote that these metrics are not exactly the same as those presented in all other works, as the FWT and BWT metrics are summed over the indices 1 ≤ j ≤ T , whereas Lopez-Paz & Ranzato (2017) and Pan et al. (2020) sum from 2 ≤ j ≤ T and 1 ≤ j ≤ T − 1 for FWT and BWT, respectively. For FWT, this definition does not assumes that R1,1 = Rind1 , and affects algorithms such as HAT and Progressive Neural Networks, which either compress the model, resulting in lower accuracy, or use a smaller architecture for the first task. The modified BWT transfer is equal to the other BWT metrics apart from a constant factor T−1T .\nIntuitively, forward transfer equates to how much continual learning has benefited a task when a task is newly learned, while backwards transfer is the accuracy drop as the network learns more tasks compared to when a task was first learned. Furthermore, in the tables in Appendix J, we also present net performance gain (NET), which quantifies the total gain over separate training, at the end of training continually:\nNET = FWT + BWT = 1\nT T∑ j=1 RT,j −Rindj .\nNote that for computation of Rind, we compare to models trained under the same paradigm, i.e. MAP algorithms (all baselines except for VCL) are compared to a MAP trained model, and VI algorithms (GVCL-F, GVCL and VCL) are compared to KL-reweighted VI models. This does not make a difference for most of the benchmarks where RindMAP ≈ RindVI . However, for Easy and HardCHASY, RindMAP < R ind VI , so we compare VI to VI and MAP to MAP to obtain fair metrics.\nIn Figure 5b, we plot ∆ACCi, which we define as\n∆ACCi = 1\ni i∑ j=1 Ri,j −RT,j .\nThis metric is useful when the tasks have very different accuracies and their permutation is randomized, as is the case with the mixed vision tasks. Note that this means that Ri,j would refer to a different task for each permutation, but we average over the 5 permutations of the runs. Empirically,\nif two algorithms have similar final accuracies, this metric measures how much the network forgets about the first i tasks from that point to the end, and also measures how high the accuracy would have been if training was terminated after i tasks. Plotting this also captures the concept of graceful vs catastrophic forgetting, as graceful forgetting would show up as a smooth downward curve, while catastrophic forgetting would have sudden drops.\nI.2 OPTIMIZER AND TRAINING DETAILS\nThe implementation of all baseline methods was based on the Github repository6 for HAT (Serra et al., 2018), except the implementions of IMM-Mode and EWC were modified due to an error in the computation of the Fisher Information Matrix in the original implementation. 
Baseline MAP algorithms were trained with SGD with a decaying learning rate starting at 5e-2 and a maximum of 200 epochs per task for the Split-MNIST, Split-CIFAR and mixed vision benchmarks. The maximum number of epochs for Easy-CHASY and Hard-CHASY was 1000, due to the small dataset size. Early stopping based on the validation set was used. 10% of the training set was used as validation for these methods, and for Easy- and Hard-CHASY, 8 samples per class form the validation set (which is disjoint from both the training and test samples).

For VI models, we used the Adam optimizer with a learning rate of 1e-4 for Split-MNIST and Mixture, and 1e-3 for Easy-CHASY, Hard-CHASY and Split-CIFAR. We briefly tested running the baseline algorithms using Adam rather than SGD, and performance did not change. Easy-CHASY and Hard-CHASY were run for 1500 epochs per task, Split-MNIST for 100, Split-CIFAR for 60, and Mixture for 180. The number of epochs was chosen so that the number of gradient steps for each task was roughly equal. For Easy-CHASY, Hard-CHASY and Split-CIFAR, this means that later tasks are run for more epochs, since the largest training sets come first. For Mixture, we ran 180 equivalent epochs for Facescrub; for how many epochs this equates to on the other datasets, we refer the reader to Appendix A in Serra et al. (2018). We did not use early stopping for these VI results. While in some cases we trained for many more epochs than the baselines, the baselines used early stopping and therefore all stopped long before the 200-epoch limit was reached, so allocating more time would not change their results. Swaroop et al. (2019) also find that allowing VI to converge is crucial for continual learning performance. We leave the discussion of improving this convergence time for future work.

All experiments (both the baselines and VI methods) use a batch size of 64.

I.3 ARCHITECTURAL DETAILS

Easy and Hard CHASY. We use a convolutional architecture with 2 convolutional layers:

1. 3x3 convolutional layer with 16 filters, padding of 1, ReLU activations

2. 2x2 Max Pooling with stride 2

3. 3x3 convolutional layer with 32 filters, padding of 1, ReLU activations

4. 2x2 Max Pooling with stride 2

5. Flattening layer

6. Fully connected layer with 100 units and ReLU activations

7. Task-specific head layers

Split-MNIST. We use a standard MLP with:

1. Fully connected layer with 256 units and ReLU activations

2. Fully connected layer with 256 units and ReLU activations

3. Task-specific head layers

Split-CIFAR. We use the same architecture as Zenke et al. (2017):

1. 3x3 convolutional layer with 32 filters, padding of 1, ReLU activations

2. 3x3 convolutional layer with 32 filters, padding of 1, ReLU activations

3. 2x2 Max Pooling with stride 2

4. 3x3 convolutional layer with 64 filters, padding of 1, ReLU activations

5. 3x3 convolutional layer with 64 filters, padding of 1, ReLU activations

6. 2x2 Max Pooling with stride 2

7. Flattening

8. Fully connected layer with 512 units and ReLU activations

9. Task-specific head layers

Mixed vision tasks. We use the same AlexNet architecture as Serra et al. (2018):

1. 4x4 convolutional layer with 64 filters, padding of 0, ReLU activations

2. 2x2 Max Pooling with stride 2

3. 3x3 convolutional layer with 128 filters, padding of 0, ReLU activations

4. 2x2 Max Pooling with stride 2

5. 2x2 convolutional layer with 256 filters, padding of 0, ReLU activations

6.
2x2 Max Pooling with stride 2

7. Flattening

8. Fully connected layer with 2048 units and ReLU activations

9. Fully connected layer with 2048 units and ReLU activations

10. Task-specific head layers

For MAP models, dropout layers with probabilities of either 0.2 or 0.5 were added after convolutional or fully-connected layers. For GVCL-F, FiLM layers were inserted after convolutional/hidden layers, but before ReLU activations.

I.4 HYPERPARAMETER SELECTION

For all algorithms on Easy-CHASY, Hard-CHASY, Split-MNIST and Split-CIFAR, hyperparameter selection was done by selecting the combination which produced the best average accuracy on the first 3 tasks. The algorithms were then run on the full number of tasks. For the mixed vision tasks, the best hyperparameters for the baselines were taken from the HAT GitHub repository. For GVCL, we performed hyperparameter selection in the same way as in Serra et al. (2018): we found the best hyperparameters for the average performance on the first random permutation of tasks. Note that in the mixture tasks, we randomly permute the task order for each iteration (with permutations kept consistent between algorithms), whereas for the other 4 benchmarks, the task order is fixed. Hyperparameter searches were performed using a grid search. The best selected hyperparameters are shown in Table 3.

For the Joint and Separate VI baselines, we used the same β. For the mixed vision tasks, we had to use a prior variance of 0.01 (for VCL, GVCL and GVCL-F), but for all other tasks we did not need to tune this." }, { "heading": "J FURTHER EXPERIMENTAL RESULTS", "text": "In the following section we present more quantitative results of the various baselines on our benchmarks. For brevity, in the main text, we only included the best performing baselines and those which are most comparable to GVCL, which consisted of HAT, PathNet, Online EWC and VCL.

J.1 EASY-CHASY ADDITIONAL RESULTS

J.2 HARD-CHASY ADDITIONAL RESULTS

J.3 Split-MNIST ADDITIONAL RESULTS

J.4 Split-CIFAR ADDITIONAL RESULTS

J.5 MIXED VISION TASKS ADDITIONAL RESULTS" }, { "heading": "K CLUSTERED HASYV2 (CHASY)", "text": "The HASYv2 dataset consists of 32x32 black/white handwritten LaTeX characters. There are a total of 369 classes and over 150,000 total samples (Thoma, 2017).

We constructed 10 classification tasks, each with a varying number of classes ranging from 20 down to 11. To construct these tasks, we first trained a mean-field Bayesian neural network on a 200-way classification task on the 200 classes with the most total samples. To get an embedding for each class, we use the activations of the second-last layer. Then, we performed K-means clustering with 20 clusters on the mean embeddings of each class, computed when the samples of the classes were input into the network. Doing this yielded the classes shown in Figure 32. Now, within each cluster are classes which are deemed "similar" by the network. To make the 10 classification tasks, we then took classes from each cluster sequentially (in order of the class whose mean was closest to the cluster's mean), so that each task contains at most 1 symbol from each cluster. Doing this ensures that tasks are similar to one another, since each task consists of classes which are different in similar ways. With the classes selected, the training set is made by selecting 16 samples of each class and using the remainder as the test set.
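A compact sketch of this cluster-then-assign construction is given below (our own illustration; `class_embeddings` is assumed to hold the per-class mean activations described above):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_easy_tasks(class_embeddings, n_clusters=20, n_tasks=10):
    # Cluster the per-class embeddings, as in the construction above.
    km = KMeans(n_clusters=n_clusters).fit(class_embeddings)
    clusters = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        # order classes within a cluster by distance to the cluster mean
        dist = np.linalg.norm(class_embeddings[idx] - km.cluster_centers_[c], axis=1)
        clusters.append(list(idx[np.argsort(dist)]))
    # take classes from each cluster sequentially, at most one per cluster per task
    return [[cl[t] for cl in clusters if len(cl) > t] for t in range(n_tasks)]
```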
This procedure was used to generate the “easy” set of tasks, which should have the maximum amount of similarity between tasks. We also constructed a second set of tasks, the “hard” set, in which each task is individually difficult. This was done by selecting each task to be classification within each cluster, selecting clusters with the most number of symbols first. This corresponds to clusters 1-10 in figure 32. With the classes for each task selected, 16 samples from each class are used in the training set, and the remainder are used as the test set. Excess samples are discarded so that the test set class distribution is also uniform within each task.\nIt was necessary to perform this clustering procedure as we found it difficult to produce sizable transfer gains if we simply constructed tasks by taking the classes with the most samples. While we were able to have gains of up to 3% from joint training on 10 20-way classification tasks with the tasks chosen by class sample count, these gains were significantly diminished when performing MAP estimation as opposed to MLE estimation, and reduced even further when performing VI. Because one of our benchmark continual learning methods is VCL, showing transfer when trained using VI is necessary.\nFigures 33a and 34 show the performance gains of joint training over separate training on this new dataset, for both MAP, and KL-reweighted VI, respectively. Figure 33b shows how relative test set accuracy varies for each specific task for these training procedures." } ]
2021
null
SP:2d435fd5053bd60dd56049e2177e4cc8f5218759
[ "The paper proposed MAFL, a novel approach to conduct Mixup under the federated learning setting whiling preserving data privacy. The proposed FedMix scheme is inspired by Taylor’s expansion of the global Mixup formulation. The effectiveness of MAFL is justified via empirical studies over a simulated federated learning environment, which indicates that Mixup achieves better test accuracies on various machine learning tasks." ]
Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device, thus preserving privacy and eliminating the need to store data globally. While there are promising results under the assumption of independent and identically distributed (iid) local data, current state-of-the-art algorithms suffer from performance degradation as the heterogeneity of local data across clients increases. To resolve this issue, we propose a simple framework, Mean Augmented Federated Learning (MAFL), where clients send and receive averaged local data, subject to the privacy requirements of target applications. Under our framework, we propose a new augmentation algorithm, named FedMix, which is inspired by a phenomenal yet simple data augmentation method, Mixup, but does not require local raw data to be directly shared among devices. Our method shows greatly improved performance in the standard benchmark datasets of FL, under highly non-iid federated settings, compared to conventional algorithms.
[ { "affiliations": [], "name": "Sumin Shin" }, { "affiliations": [], "name": "Sung Ju Hwang" }, { "affiliations": [], "name": "Eunho Yang" } ]
[ { "authors": [ "Sean Augenstein", "H. Brendan McMahan", "Daniel Ramage", "Swaroop Ramaswamy", "Peter Kairouz", "Mingqing Chen", "Rajiv Mathews", "Blaise Aguera y Arcas" ], "title": "Generative models for effective ml on private, decentralized datasets", "venue": "International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Sebastian Caldas", "Sai Meher Karthik Duddu", "Peter Wu", "Tian Li", "Jakub Konečný", "H. Brendan McMahan", "Virginia Smith", "Ameet Talwalkar" ], "title": "Leaf: A benchmark for federated settings", "venue": "International Conference on Machine Learning (ICML) Workshop on Federated Learning for Data Privacy and Confidentiality,", "year": 2019 }, { "authors": [ "Olivier Chapelle", "Jason Weston", "Léon Bottou", "Vladimir Vapnik" ], "title": "Vicinal risk minimization", "venue": "Conference on Neural Information Processing Systems (NIPS),", "year": 2000 }, { "authors": [ "Gregory Cohen", "Saeed Afshar", "Jonathan Tapson", "André van Schaik" ], "title": "Emnist: an extension of mnist to handwritten letters", "venue": "International Joint Conference on Neural Networks (IJCNN),", "year": 2017 }, { "authors": [ "Jeffrey Dean", "Greg Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Mark Mao", "Marc’aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang", "Quoc V. Le", "Andrew Y. Ng" ], "title": "Large scale distributed deep networks", "venue": "Conference on Neural Information Processing Systems (NIPS),", "year": 2012 }, { "authors": [ "Z. Eaton-Rosen", "Felix J.S. Bragman", "Sébastien Ourselin", "M. Cardoso" ], "title": "Improving data augmentation for medical image segmentation", "venue": "Conference on Medical Imaging with Deep Learning (MIDL),", "year": 2020 }, { "authors": [ "Matt Fredrikson", "Somesh Jha", "Thomas Ristenpart" ], "title": "Model inversion attacks that exploit confidence information and basic countermeasures", "venue": "Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security,", "year": 2015 }, { "authors": [ "Hongyu Guo", "Yongyi Mao", "Richong Zhang" ], "title": "Augmenting data with mixup for sentence classification: An empirical study", "venue": null, "year": 1905 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "Conference on Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Kevin Hsieh", "Amar Phanishayee", "Onur Mutlu", "Phillip B. Gibbons" ], "title": "The non-iid data quagmire of decentralized machine learning", "venue": "International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Wonyong Jeong", "Jaehong Yoon", "Eunho Yang", "Sung Ju Hwang" ], "title": "Federated semi-supervised learning with inter-client consistency. International Workshop on Federated Learning for User Privacy and Data Confidentiality in Conjunction with ICML 2020 (FL-ICML’20), 2020", "venue": null, "year": 2020 }, { "authors": [ "Jakub Konečný", "H. Brendan McMahan", "Felix X. 
Yu", "Peter Richtárik", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: Strategies for improving communication efficiency", "venue": "Conference on Neural Information Processing Systems (NIPS) Workshop on Private Multi-Party Machine Learning,", "year": 2016 }, { "authors": [ "Yann Lecun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated learning: Challenges, methods, and future directions", "venue": "IEEE Signal Processing Magazine,", "year": 2020 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Manzil Zaheer", "Maziar Sanjabi", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated optimization in heterogeneous networks", "venue": "Proceedings of Machine Learning and Systems (MLSys),", "year": 2020 }, { "authors": [ "Xiang Li", "Kaixuan Huang", "Wenhao Yang", "Shusen Wang", "Zhihua Zhang" ], "title": "On the convergence of fedavg on non-iid data", "venue": "International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "H. Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Agüera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2017 }, { "authors": [ "H. Brendan McMahan", "Daniel Ramage", "Kunal Talwar", "Li Zhang" ], "title": "Learning differentially private recurrent language models", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Mehryar Mohri", "Gary Sivek", "Ananda Theertha Suresh" ], "title": "Agnostic federated learning", "venue": "International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Seungeun Oh", "Jihong Park", "Eunjeong Jeong", "Hyesung Kim", "Mehdi Bennis", "Seong-Lyun Kim" ], "title": "Mix2fld: Downlink federated learning after uplink federated distillation with two-way mixup", "venue": "IEEE Communication Letters,", "year": 2020 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. Berg", "Li Fei-Fei" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision (IJCV),", "year": 2015 }, { "authors": [ "Felix Sattler", "Simon Wiedemann", "Klaus-Robert Müller", "Wojciech Samek" ], "title": "Robust and communication-efficient federated learning from non-iid data", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2019 }, { "authors": [ "MyungJae Shin", "Chihoon Hwang", "Joongheon Kim", "Jihong Park", "Mehdi Bennis", "Seong-Lyun Kim" ], "title": "Xor mixup: Privacy-preserving data augmentation for one-shot federated learning. International Workshop on Federated Learning for User Privacy and Data Confidentiality in Conjunction with ICML 2020 (FL-ICML’20), 2020", "venue": null, "year": 2020 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Virginia Smith", "Simone Forte", "Chenxin Ma", "Martin Takac", "Michael I. 
Jordan", "Martin Jaggi" ], "title": "Cocoa: A general framework for communication-efficient distributed optimization", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2016 }, { "authors": [ "Virginia Smith", "Chao-Kai Chiang", "Maziar Sanjabi", "Ameet Talwalkar" ], "title": "Federated multi-task learning", "venue": "Conference on Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "Sunil Thulasidasan", "Gopinath Chennupati", "Jeff Bilmes", "Tanmoy Bhattacharya", "Sarah Michalak" ], "title": "On mixup training: Improved calibration and predictive uncertainty for deep neural networks", "venue": "Conference on Neural Information Processing Systems (NIPS),", "year": 2019 }, { "authors": [ "Tiffany Tuor", "Shiqiang Wang", "Bong Jun Ko", "Changchang Liu", "Kin K. Leung" ], "title": "Overcoming noisy and irrelevant data in federated learning", "venue": "International Conference on Pattern Recognition (ICPR),", "year": 2020 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Christopher Beckham", "Amir Najafi", "Ioannis Mitliagkas", "Aaron Courville", "David Lopez-Paz", "Yoshua Bengio" ], "title": "Manifold mixup: Better representations by interpolating hidden states", "venue": "International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Hongyi Wang", "Mikhail Yurochkin", "Yuekai Sun", "Dimitris Papailiopoulos", "Yasaman Khazaeni" ], "title": "Federated learning with matched averaging", "venue": "International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Pete Warden" ], "title": "Speech commands: A dataset for limited-vocabulary speech", "venue": "recognition. ArXiv,", "year": 2018 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Jaehong Yoon", "Wonyong Jeong", "Giwoong Lee", "Eunho Yang", "Sung Ju Hwang" ], "title": "Federated continual learning with adaptive parameter communication", "venue": null, "year": 2003 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": "International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Mikhail Yurochkin", "Mayank Agarwal", "Soumya Ghosh", "Kristjan Greenewald", "Nghia Hoang", "Yasaman Khazaeni" ], "title": "Bayesian nonparametric federated learning of neural networks", "venue": "International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N. 
Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Yue Zhao", "Meng Li", "Liangzhen Lai", "Naveen Suda", "Damon Civin", "Vikas Chandra" ], "title": "Federated learning with non-iid data", "venue": "arXiv preprint arXiv:1806.00582,", "year": 2018 }, { "authors": [ "Caldas" ], "title": "EMNIST is very similar to MNIST", "venue": null, "year": 2019 }, { "authors": [ "Wang" ], "title": "Heterogeneity from skewed label distribution Recent papers (Yurochkin et al., 2019; Wang et al., 2020) suggested an alternative heterogeneous environment, which does not limit the number of classes per client but skews label distribution in local data. We used a Dirichlet distribution of α = 0.2", "venue": null, "year": 2020 } ]
[ { "heading": null, "text": "Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device, thus preserving privacy and eliminating the need to store data globally. While there are promising results under the assumption of independent and identically distributed (iid) local data, current state-of-the-art algorithms suffer from performance degradation as the heterogeneity of local data across clients increases. To resolve this issue, we propose a simple framework, Mean Augmented Federated Learning (MAFL), where clients send and receive averaged local data, subject to the privacy requirements of target applications. Under our framework, we propose a new augmentation algorithm, named FedMix, which is inspired by a phenomenal yet simple data augmentation method, Mixup, but does not require local raw data to be directly shared among devices. Our method shows greatly improved performance in the standard benchmark datasets of FL, under highly non-iid federated settings, compared to conventional algorithms." }, { "heading": "1 INTRODUCTION", "text": "As we enter the era of edge computing, more data is being collected directly from edge devices such as mobile phones, vehicles, facilities, and so on. By decoupling the ability to learn from the delicate process of merging sensitive personal data, Federated learning (FL) proposes a paradigm that allows a global neural network to learn to be trained collaboratively from individual clients without directly accessing the local data of other clients, thus preserving the privacy of each client (Konečný et al., 2016; McMahan et al., 2017). Federated learning lets clients do most of the computation using its local data, with the global server only aggregating and updating the model parameters based on those sent by clients.\nOne of the standard and most widely used algorithm for federated learning is FedAvg (McMahan et al., 2017), which simply averages model parameters trained by each client in an element-wise manner, weighted proportionately by the size of data used by clients. FedProx (Li et al., 2020b) is a variant of FedAvg that adds a proximal term to the objective function of clients, improving statistical stability of the training process. While several other methods have been proposed until recently (Mohri et al., 2019; Yurochkin et al., 2019; Wang et al., 2020), they all build on the idea that updated model parameters from clients are averaged in certain manners.\nAlthough conceptually it provides an ideal learning environment for edge devices, the federated learning still has some practical challenges that prevent the widespread application of it (Li et al., 2020a; Kairouz et al., 2019). Among such challenges, the one that we are interested in this paper is the heterogeneity of the data, as data is distributed non-iid across clients in many real-world settings; in other words, each local client data is not fairly drawn from identical underlying distribution. Since each client will learn from different data distributions, it becomes harder for the model to be trained efficiently, as reported in (McMahan et al., 2017). 
While theoretical evidence on the convergence of FedAvg in the non-iid case has recently been given in (Li et al., 2020c), efficient algorithms suitable for this setting have not yet been developed or systematically examined, despite some efforts (Zhao et al., 2018; Hsieh et al., 2020).

In addition to the non-iid problem, another important issue is that updating model parameters individually trained by each client is very costly, and becomes even more so as the model complexity increases. Some existing works (Smith et al., 2016; Sattler et al., 2019) target this issue, decreasing the amount of communication while maintaining the performance of FedAvg. A more practical approach to reduce communication cost is to selectively update individual models at each round, rather than having all clients participate in parameter updates. This partial participation of clients per round hardly affects test performance in ideal iid settings, but it can exacerbate the heterogeneity of weight updates across clients and, as a result, the non-iid issue (McMahan et al., 2017).

In order to mitigate the heterogeneity across clients while protecting privacy, we provide a novel yet simple framework, mean augmented federated learning (MAFL), in which each client exchanges the updated model parameters as well as its mashed (or averaged) data. The MAFL framework allows a trade-off between the amount of meaningful information exchanged and the privacy across clients, depending on several factors such as the number of data instances used in computing the average. We first introduce a naive approach in our framework that simply applies Mixup (Zhang et al., 2018) between local data and averaged external data from other clients to reduce a myopic bias.

Here, we go further in our framework and ask the following seemingly impossible question: can averaged data alone, which has lost most of its discriminative information, bring a similar effect to a global Mixup in which clients directly access others' private data without privacy considerations? Toward this, we introduce our second and more important approach in our framework, termed Federated Mixup (FedMix), which simply approximates the loss function of global Mixup via Taylor expansion (it turns out that this approximation only involves the averaged data from other clients!). Figure 1 briefly describes the concept of our methods.

We validate our method on standard benchmark datasets for federated learning, and show its effectiveness against the standard federated learning methods, especially in non-iid settings. In particular, we claim that FedMix shows better performance and a smaller drop in accuracy as heterogeneity increases or as fewer clients update per communication round, both of which increase the difficulty of federated learning.

Our contribution is threefold:

• We propose a simple framework for federated learning that averages and exchanges each client's local data. Even a naive approach in this framework, performing Mixup with other clients' mashed data, shows performance improvements over existing baselines in several settings.

• We further develop a novel approximation for the insecure global Mixup that accesses other clients' local data, and find that the Taylor expansion of global Mixup only involves the averaged data from other clients.
Based on this observation, we propose FedMix under our framework, approximating global Mixup without accessing others' raw data.

• We validate FedMix on several FL benchmark datasets, especially focusing on non-iid data settings, where our method significantly outperforms existing baselines while still preserving privacy with minimal increases in communication cost." }, { "heading": "2 RELATED WORK", "text": "Federated learning Federated learning was first proposed in Konečný et al. (2016), where the prevalent asynchronous SGD (Dean et al., 2012) is used to update a global model in a distributed fashion. A pioneering work in this field proposed the currently most widely used algorithm, FedAvg (McMahan et al., 2017), which is also the first synchronous algorithm dedicated to the federated setting. Shortly after, Li et al. (2020b) proposed a variant of FedAvg, named FedProx, where the authors claimed to overcome statistical heterogeneity and increase stability in federated learning. Recent studies attempt to expand federated learning with the aim of providing learning in more diverse and practical environments, such as multi-task learning (Smith et al., 2017), generative models (Augenstein et al., 2020), continual learning (Yoon et al., 2020), semi-supervised learning (Jeong et al., 2020), and data with noisy labels (Tuor et al., 2020). Our paper focuses on general federated settings, but could also be considered in such situations.

However, these algorithms may obtain suboptimal performance when clients participating in FL have non-iid data distributions (Zhao et al., 2018; Hsieh et al., 2020). While the convergence of FedAvg in such settings was initially shown by experiments in McMahan et al. (2017) and later proved in Li et al. (2020c), it does not guarantee performance as good as it would have been in the iid setting. Existing algorithms that pointed out this issue have major limitations, such as privacy violation by partial global sharing of local data (Zhao et al., 2018) or no indication of improvement over baseline algorithms such as FedAvg (Hsieh et al., 2020). Our method aims to improve performance particularly in these non-iid situations, without compromising privacy.

Mixup Mixup (Zhang et al., 2018) is a popular data augmentation technique that generates additional data by linear interpolation between actual data instances. Mixup has usually been applied to image classification tasks and has been shown to improve test accuracy on various datasets such as CIFAR10 and ImageNet-2012 (Russakovsky et al., 2015), on popular architectures such as ResNet (He et al., 2016) and ResNeXt (Xie et al., 2017), across various model complexities. It is also reported in Zhang et al. (2018) that Mixup helps with stability, adversarial robustness (Zhang et al., 2018), calibration, and predictive certainty (Thulasidasan et al., 2019). Mixup has been extended in various directions due to its simplicity and popularity. First, beyond image classification tasks, its effectiveness has been shown in various domains such as image segmentation (Eaton-Rosen et al., 2020), speech recognition (Warden, 2018), and natural language processing (Guo et al., 2019). Also, several extensions have been proposed, such as Manifold Mixup (Verma et al., 2018), which performs Mixup in latent space, or CutMix (Yun et al., 2019), which replaces specific regions with patches from other images.

In most of the previous studies on federated learning, Mixup was partially (or locally) used as a general data augmentation technique.
Some recent studies (Oh et al., 2020; Shin et al., 2020) proposed to send blended data to the server using Mixup, but they require sending locally and linearly mixed data (mostly from two instances) to the server at every round, and are therefore susceptible to privacy issues and huge communication costs. Our work properly modifies Mixup under the restrictions of federated learning and mitigates major challenges of federated learning such as non-iid clients." }, { "heading": "3 MEAN AUGMENTED FEDERATED LEARNING (MAFL) AND FEDMIX", "text": "We now present our framework for exchanging averaged data in federated learning and our main method, which approximates insecure global Mixup under this framework, after briefly introducing the setup." }, { "heading": "3.1 SETUP AND BACKGROUND", "text": "Federated learning and FedAvg Federated Averaging (FedAvg) (McMahan et al., 2017) has been the most popular algorithmic framework for federated learning. For every communication round $t = 0, \dots, T-1$, a client $k \in \{1, \dots, N\}$ selected for local training sends its model $w_t^k$ (or only the difference, to reduce communication cost) back to a global server. In every round, $K$ clients are selected to locally update and send model parameters. The server simply averages the received parameters, so that the global model $w_t$ after $t$ rounds of communication becomes $w_t = \sum_{k=1}^{N} p_k w_t^k$, where $p_k$ is the importance of client $k$ based on its relative number of data among all clients selected at round $t$. The updated global model is sent back to the clients for the next round, where it undergoes the following $E$ local updates via stochastic gradient descent (SGD):\n$w_{t+1,i+1}^k \leftarrow w_{t+1,i}^k - \eta_{t+1} \nabla \ell(f(x_i^k; w_{t+1,i}^k), y_i^k)$ for $i = 0, 1, \dots, E-1$, batch size $B$, and local learning rate $\eta$. Here, $\ell$ is the loss function for learning and $f(x; w_t)$ is the model output for input $x$ given model weights $w_t$.\nMixup Mixup (Zhang et al., 2018) is a simple data augmentation technique using a linear interpolation between two input-label pairs $(x_i, y_i)$ and $(x_j, y_j)$ to augment $\tilde{x} = \lambda x_i + (1-\lambda)x_j$ and $\tilde{y} = \lambda y_i + (1-\lambda)y_j$. The variable $\lambda \in [0, 1]$ is a hyperparameter that is drawn from a beta distribution at each training step." }
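To make this background concrete, the following is a minimal sketch (our illustration, not the authors' released code) of one FedAvg communication round with weighted parameter averaging and the local SGD loop, together with the vanilla Mixup helper; the model, data loaders, and client sampling are assumed toy placeholders, and the model is assumed to have only floating-point parameters (no batch-norm buffers), as in the paper.

```python
import copy
import numpy as np
import torch
import torch.nn.functional as F

def mixup(x, y, alpha=1.0):
    # Vanilla Mixup (Zhang et al., 2018): interpolate a batch with a shuffled
    # copy of itself; lambda is drawn from Beta(alpha, alpha) at each step.
    # y is assumed to be a one-hot / soft label tensor.
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

def local_update(model, loader, epochs=1, lr=0.01):
    # E local epochs of SGD on one client's data (the inner loop of FedAvg).
    # Mixup-based variants would augment (x, y) inside this loop.
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            loss = F.cross_entropy(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

def fedavg_round(global_model, client_loaders, client_sizes, k=10):
    # One communication round: select K clients, run local updates, and
    # average the returned parameters with weights p_k proportional to
    # each selected client's data size.
    selected = np.random.choice(len(client_loaders), size=k, replace=False)
    total = sum(client_sizes[i] for i in selected)
    avg = {n: torch.zeros_like(p) for n, p in global_model.state_dict().items()}
    for i in selected:
        state = local_update(global_model, client_loaders[i])
        for n in avg:
            avg[n] += (client_sizes[i] / total) * state[n]
    global_model.load_state_dict(avg)
    return global_model
```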
, { "heading": "3.2 MAFL: MEAN AUGMENTED FEDERATED LEARNING", "text": "The most obvious and powerful way for local models to receive information about data from other clients is to simply receive raw individual data. However, under the typical federated setting, each client does not have direct access to individual external data due to privacy constraints, leading to overall performance degradation. We propose a federated learning framework that relaxes the limitation of accessing others’ raw data and allows a more granular level of privacy depending on the application. In our new framework, termed mean augmented federated learning (MAFL), clients exchange not only model parameters but also their mashed (or averaged) data.\nIn MAFL, only the part that exchanges the averaged data of each client has been added to the standard FL paradigm (Algorithm 1). Here, the number of data instances used in computing the average, $M_k$, controls the key features of MAFL such as privacy and communication costs. A lower $M_k$ value results in more relevant information being passed over, but at the cost of less secure privacy and a larger communication cost. In one extreme of $M_k = 1$, raw data is exchanged outright and privacy is not protected at all, which is clearly inappropriate for FL. But in the other extreme, all data of each client is averaged, ensuring a considerable degree of privacy. In addition, this extreme also has an advantage in communication cost: each client sends a set of $n_k / M_k$ averaged data entries, where $n_k$ is the local data size of client $k$. The remaining question is whether it is possible to improve performance even when exchanging information that is averaged over all local data and has lost its discriminative characteristics.\nThe most naive way we can consider in our MAFL framework is to directly use the mashed data from other clients, just like regular local data. However, since mashed data has much less usable information than local data, we can instead mix it with local data:\n$\ell_{\text{NaiveMix}} = (1-\lambda)\,\ell\big(f((1-\lambda)x_i + \lambda \bar{x}_j), y_i\big) + \lambda\,\ell\big(f((1-\lambda)x_i + \lambda \bar{x}_j), \bar{y}_j\big)$ (1)\nwhere $(x_i, y_i)$ is an entry from the local data and $(\bar{x}_j, \bar{y}_j)$ corresponds to the means of inputs and labels from another client $j$. Note that Eq. (1) can be understood as a generalization of the loss that directly uses the mashed data mentioned above, in the sense that such a loss is recovered if $\lambda$ in Eq. (1) is set deterministically to 0 or 1.\nAlgorithm 1: Mean Augmented Federated Learning (MAFL)\nInput: $D_k = \{X_k, Y_k\}$ for $k = 1, \dots, N$; $M_k$: number of data instances used for computing the averages $\bar{x}, \bar{y}$\nInitialize $w_0$ for the global server\nfor $t = 0, \dots, T-1$ do\n  for each client $k$ with updated local data do\n    Split the local data into $M_k$-sized batches\n    Compute $\bar{x}, \bar{y}$ for each batch\n    Send all $\bar{x}, \bar{y}$ to the server\n  end\n  $S_t \leftarrow K$ clients selected at random\n  Send $w_t$ to clients $k \in S_t$\n  if updated then\n    Aggregate all $\bar{x}, \bar{y}$ into $X_g, Y_g$\n    Send $X_g, Y_g$ to clients $k \in S_t$\n  end\n  for $k \in S_t$ do\n    $w_{t+1}^k \leftarrow \text{LocalUpdate}(k, w_t; X_g, Y_g)$\n  end\n  $w_{t+1} \leftarrow \frac{1}{K}\sum_{k \in S_t} p_k w_{t+1}^k$\nend\nAlgorithm 2: FedMix\nLocalUpdate$(k, w_t; X_g, Y_g)$ under MAFL (Algorithm 1):\n$w \leftarrow w_t$\nfor $e = 0, \dots, E-1$ do\n  Split $D_k$ into batches of size $B$\n  for each batch $(X, Y)$ do\n    Select an entry $x_g, y_g$ from $X_g, Y_g$\n    $\ell_1 = (1-\lambda)\,\ell\big(f((1-\lambda)X; w), Y\big)$\n    $\ell_2 = \lambda\,\ell\big(f((1-\lambda)X; w), y_g\big)$\n    $\ell_3 = \lambda\,\frac{\partial \ell_1}{\partial x} \cdot x_g$ (derivative calculated at $x = (1-\lambda)x_i$ and $y = y_i$ for each $x_i, y_i$ in $X, Y$)\n    $\ell = \ell_1 + \ell_2 + \ell_3$\n    $w \leftarrow w - \eta_{t+1}\nabla \ell$\n  end\nend\nreturn $w$\nIn the experimental section, we confirm the effectiveness of MAFL using $\ell_{\text{NaiveMix}}$ (a runnable sketch of this objective is given below). However, in the next subsection, we will show how to achieve better performance by approximating the global Mixup in a more systematic way in our MAFL framework." }
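As a concrete reference before the formal treatment, here is a minimal sketch (our assumption of an implementation, not the authors' code) of the NaiveMix objective in Eq. (1):

```python
import numpy as np
import torch
import torch.nn.functional as F

def naivemix_loss(model, x_i, y_i, x_bar_j, y_bar_j, alpha=0.2):
    # Eq. (1): both loss terms share the same mixed input
    # (1 - lam) * x_i + lam * x_bar_j, with label weights (1 - lam) and lam.
    # y_i and y_bar_j are assumed to be one-hot / soft label tensors
    # (soft-label targets for cross_entropy require torch >= 1.10).
    lam = np.random.beta(alpha, alpha)
    x_mix = (1 - lam) * x_i + lam * x_bar_j
    logits = model(x_mix)
    return (1 - lam) * F.cross_entropy(logits, y_i) \
           + lam * F.cross_entropy(logits, y_bar_j)
```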
, { "heading": "3.3 FEDMIX: APPROXIMATING GLOBAL MIXUP VIA INPUT DERIVATIVE", "text": "We now present our main approach in the MAFL framework, which aims to approximate the effect of global Mixup using only averaged data from other clients. Consider some client $i$ with its local data $(x_i, y_i)$. It is not allowed in federated learning, but let us assume that client $i$ has access to client $j$’s local data $(x_j, y_j)$. Then, client $i$ could leverage $(x_j, y_j)$ to improve the performance of its local model, especially in non-iid settings, by augmenting additional data via Mixup:\n$\tilde{x} = (1-\lambda)x_i + \lambda x_j$ and $\tilde{y} = (1-\lambda)y_i + \lambda y_j$. (2)\nIf the Mixup rate $\lambda$ is 1, $(x_j, y_j)$ from client $j$ is again directly used like regular local data, which would be much more efficient than the indirect update of local models through the server.\nThe essence of our method is to approximate the loss function $\ell(f(\tilde{x}), \tilde{y})$ for the augmented data from Eq. (2) with a Taylor expansion in the first argument $x$. Specifically, we derive the following proposition:\nProposition 1 Consider the loss function of the global Mixup modulo the privacy issues,\n$\ell_{\text{GlobalMixup}}(f(\tilde{x}), \tilde{y}) = \ell\big(f((1-\lambda)x_i + \lambda x_j), (1-\lambda)y_i + \lambda y_j\big)$ (3)\nfor the cross-entropy loss $\ell$.¹ Suppose that Eq. (3) is approximated by applying a Taylor series around $\lambda \ll 1$. Then, if we ignore the second-order term (i.e., $O(\lambda^2)$), we obtain the following approximated loss:\n$(1-\lambda)\,\ell\big(f((1-\lambda)x_i), y_i\big) + \lambda\,\ell\big(f((1-\lambda)x_i), y_j\big) + \lambda\,\frac{\partial \ell}{\partial x} \cdot x_j$ (4)\nwhere the derivative $\frac{\partial \ell}{\partial x}$ is evaluated at $x = (1-\lambda)x_i$ and $y = y_i$.\nWhile Eq. (4) still involves $x_j$ and $y_j$, invading the privacy of client $j$, the core value of Proposition 1 becomes clearer when mixing up multiple data instances from other clients. Note that vanilla Mixup does not mix one specific instance with other data, but performs augmentations among several randomly selected data instances. In a non-iid FL environment, we can also expect the effect to be greater as we create Mixup data by accessing as much private data as possible from other clients. From this point of view, let us assume that client $i$ has received a set $J$ of $M$ private instances from client $j$. Then, the global Mixup loss in Eq. (3) is\n$\frac{1}{|J|}\sum_{j \in J} \ell\big(f((1-\lambda)x_i + \lambda x_j), (1-\lambda)y_i + \lambda y_j\big)$,\nand the approximated FedMix loss in Proposition 1 becomes\n$\ell_{\text{FedMix}} = \frac{1}{|J|}\sum_{j \in J}\Big[(1-\lambda)\,\ell\big(f((1-\lambda)x_i), y_i\big) + \lambda\,\ell\big(f((1-\lambda)x_i), y_j\big) + \lambda\,\frac{\partial \ell}{\partial x} \cdot x_j\Big]$\n$= (1-\lambda)\,\ell\big(f((1-\lambda)x_i), y_i\big) + \lambda\,\ell\big(f((1-\lambda)x_i), \bar{y}_j\big) + \lambda\,\frac{\partial \ell}{\partial x} \cdot \bar{x}_j$ (5)\nwhere we utilize the linearity of Eq. (4) in terms of $x_j$ and $y_j$, and $\bar{x}_j$ and $\bar{y}_j$ correspond to the means of the $M$ inputs and labels in $J$, respectively. The algorithmic details are provided in the appendix due to the space constraint (see Algorithm 2 in Appendix A).\n¹Throughout the paper, we implicitly assume classification tasks. For regression tasks, we can consider the squared loss function, and the proposition still holds." }
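The following is a minimal sketch (ours; the released implementation may differ) of the $\ell_{\text{FedMix}}$ objective in Eq. (5). The input-derivative term is obtained with double backpropagation (create_graph=True) so that it remains differentiable with respect to the model parameters; shapes assume $\bar{x}_j$ broadcasts against the local batch.

```python
import torch
import torch.nn.functional as F

def fedmix_loss(model, x_i, y_i, x_bar_j, y_bar_j, lam=0.05):
    # Eq. (5): (1-lam)*l(f((1-lam)x), y_i) + lam*l(f((1-lam)x), y_bar_j)
    #          + lam * (dl/dx) . x_bar_j,
    # with dl/dx evaluated at x = (1-lam)*x_i and y = y_i. Labels are assumed
    # one-hot / soft vectors (soft targets need torch >= 1.10); lam is kept
    # small, matching the lam << 1 assumption of Proposition 1.
    x_m = ((1 - lam) * x_i).detach().requires_grad_(True)
    logits = model(x_m)
    l1_core = F.cross_entropy(logits, y_i)
    l2 = F.cross_entropy(logits, y_bar_j)
    # Gradient w.r.t. the input; keep the graph so l3 can train the model.
    (grad_x,) = torch.autograd.grad(l1_core, x_m, create_graph=True)
    l3 = (grad_x * x_bar_j).sum()
    return (1 - lam) * l1_core + lam * l2 + lam * l3
```

A usage note: calling `fedmix_loss(...)` inside the client loop of Algorithm 2 and backpropagating the returned scalar would update $w$ exactly as the pseudocode's $\ell = \ell_1 + \ell_2 + \ell_3$ step describes, up to the $\lambda$ versus $\lambda(1-\lambda)$ coefficient discussed in Appendix C.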
, { "heading": "3.4 PRIVACY ISSUES AND ADDITIONAL COSTS OF MAFL", "text": "Privacy issues of MAFL MAFL requires exchanging averaged data by construction. Even though MAFL exchanges only the limited information allowed by the application, it may cause new types of privacy issues. The potential privacy risk of FL or MAFL is beyond the main scope of our study, but in this section we briefly discuss some basic privacy issues of MAFL and potential solutions.\n• There is a possibility that the local data distribution can be inferred relatively easily from averaged data. This issue arises when $M_k$ is not large enough, so that individual data could easily be inferred from the averaged data. On the other hand, if $n_k$ is not big enough, each entry in $X_g, Y_g$ could reveal too much about the whole local distribution of the client it came from.\n• It could be easy to infer the ownership of each entry in $X_g, Y_g$ if it contains client-specific information. If clients could identify which client each entry came from, information about that client’s local data could be inferred.\n• Additional concerns involve the identification of data by detecting changes in the exchanged averaged data, in the case of continual learning, where local data changes over time. This issue is exacerbated if the averaged data is updated for every minute change in the local data, which makes it easier for the client receiving $X_g, Y_g$ to infer the changed portion. One simple suggestion to alleviate this issue would be to update $X_g, Y_g$ only when there is enough change in local data across a sufficient number of clients, so that such changes are not easily exploitable.\n• As a way to strengthen privacy protection under MAFL (and possibly to help with the issues mentioned above), the server can further average across entries of $X_g, Y_g$. If this additional averaging is done over every $m$ random entries at the server, it effectively provides data averaged over all local data of $m$ clients, but results in an $m$-fold decrease in the number of averaged data entries. This variant is considered in Appendix J.\n• In the case where the global server is not trusted, the averaged data itself should ensure privacy as it is sent to the server. The most obvious concern arises when $M_k$ is not large enough, so that each averaged entry reveals more information about each individual input. Simply using a sufficiently large value of $M_k$ can alleviate this issue, although this might result in worse performance.\n• However, for clients whose $n_k$ is quite small, there is a limit to how large $M_k$ can be. One way to alleviate this issue is to introduce a cut-off threshold that determines whether a client is allowed to send averaged data to the server. We report the results in Appendix H.\nCommunication cost Since MAFL requires sending averaged input data between the server and clients, additional communication costs are incurred. However, it turns out that this additional cost is very small compared to the communication cost required for exchanging model parameters. This is mainly due to the fact that the input dimension is typically much smaller than the number of model parameters. Specifically, for input dimension $d_i$, the exchange of averaged data among $N$ clients incurs a cost of $2Nd_i$ (the factor of 2 accounts for the server receiving and sending the values). Meanwhile, the cost for the exchange of model parameters is $2Np_m$, where $p_m$ is the number of model parameters. Under typical circumstances, averaged data is only exchanged at the beginning of the first communication round, while model parameters have to be exchanged every round. Thus the ratio between the two costs after $T$ communication rounds is $d_i/(Tp_m)$. Since $d_i \ll p_m$ in general, we consider the extra communication burden to be negligible (even in the worst case where we update the averaged data every round, the ratio is still $d_i/p_m$).\nFedMix also requires the calculation of an input derivative term in its loss function, so extra memory is potentially required. We further provide the additional computation costs of MAFL in Appendix G." }
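As a quick illustration of this cost argument, the following back-of-the-envelope computation plugs in example numbers; the input dimension, parameter count, and round count below are our assumptions for illustration, not values reported in the paper.

```python
# Ratio d_i / (T * p_m) between averaged-data traffic and model-parameter
# traffic, with illustrative (assumed) magnitudes.
d_i = 3 * 32 * 32      # CIFAR10-like input dimension (assumed)
p_m = 5_000_000        # parameter count of a small VGG-style net (assumed)
T = 500                # number of communication rounds (assumed)

once = d_i / (T * p_m)     # averaged data sent only in the first round
every_round = d_i / p_m    # worst case: averaged data re-sent every round

print(f"sent once:        {once:.2e}")        # ~1e-9 of parameter traffic
print(f"sent every round: {every_round:.2e}") # ~6e-4, still negligible
```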
, { "heading": "4 EXPERIMENTS", "text": "We test our methods on various benchmark datasets with NaiveMix (direct Mixup between local data and averaged data) and FedMix, and compare the results with FedAvg (McMahan et al., 2017) and FedProx (Li et al., 2020b), as well as other baseline Mixup scenarios. We create a highly non-iid environment to show that our methods excel in such situations." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Dataset We implement the typical federated setting where clients have their own local data and one centralized server receives and sends information from/to the clients. We use a large number of clients, and only a partial set of clients chosen each round performs local updates. We used three popular image classification benchmark datasets, FEMNIST (Caldas et al., 2019), CIFAR10, and CIFAR100, as well as a popular natural language processing benchmark dataset, Shakespeare. See Appendix B for more details about the datasets, models, and hyperparameters used. We introduce data size heterogeneity for the FEMNIST dataset: each client has a different amount of local data, each from a unique writer. Meanwhile, we introduce label distribution heterogeneity for the CIFAR datasets, with each client having data from only a limited number of classes.\nAlgorithms We study the performance of FedMix and NaiveMix and compare with FedAvg and FedProx. We also compare our method against FedAvg with Mixup within local data (labeled LocalMix; see Figure 1(b)) to show whether Mixup within local data alone is sufficient to allow the model to perform well on external data. To show the effectiveness of FedMix, we also compare our method to the case where we perform direct Mixup with external data (thus violating privacy; labeled Global Mixup, see Figure 1(a))." }, { "heading": "4.2 PERFORMANCE OF FEDMIX AND NAIVEMIX ON NON-IID FEDERATED SETTINGS", "text": "We compare the learning curves of each method under the same federated settings, in terms of the number of communication rounds conducted. Comparing the MAFL-based algorithms, NaiveMix shows slight performance increases over FedAvg, FedProx, and LocalMix, while FedMix outperforms and converges faster than all of FedAvg, FedProx, LocalMix, and NaiveMix for all datasets tested, as shown in Figure 2.\nWhile NaiveMix and FedMix are already superior to FedProx, they are orthogonal to the FedProx modification and can be applied in conjunction with it. We compare performance across FedProx variants of the various Mixup algorithms in Table 1. FedMix outperforms vanilla FedProx for various datasets, although these variants do fall short of the default version of FedMix used for the main experiment.\nTo confirm whether the received information is properly incorporated, we compare FedMix with other possible Mixup scenarios under MAFL. We show the results in Appendix D.\nWhile Mixup is usually performed for image classification tasks, it can also be applied to language models. For language datasets, since Mixup cannot be performed on the discrete inputs, we perform Mixup on embeddings (for a detailed explanation of Mixup between hidden states, see Appendix E). When tested on the Shakespeare dataset, FedMix and NaiveMix both show better performance than the baseline algorithms (Table 2). Note that for this task LocalMix has the lowest performance, and global Mixup does not yield performance superior to the federated algorithms as one might expect. We think Mixup does not provide a performance boost for this specific task, but note that the MAFL algorithms still result in better performance than FedAvg.\nWe also claim that FedMix is superior to other methods under various settings, in terms of varying numbers of clients ($N$) and varying amounts of local data per client. We observe superior performance of FedMix compared to other algorithms in all settings (see Tables 3 and 4). We also vary the number of local epochs ($E$) between global updates, and still observe that FedMix outperforms other methods (see Appendix F).\nFedMix compared to global Mixup with fixed Mixup ratio Since FedMix approximates the loss function of global Mixup for a fixed value of $\lambda \ll 1$, we can evaluate the quality of the approximation by comparing FedMix to a global Mixup scenario with a fixed $\lambda$ value. Table 5 shows the varying performance of global Mixup and FedMix under various values of $\lambda$. As $\lambda$ increases, the Mixup data reflects more of the features of the external data, resulting in better performance in the case of global Mixup.
However, this also results in our approximation being much less accurate, and we indeed observe the performance of FedMix decreasing instead. The result shows that the hyperparameter $\lambda$ should be chosen to balance better Mixup against better approximation. However, a high $\lambda$ results in a significant decrease for both methods, probably because external data (which is out-of-distribution with respect to the local data) is overrepresented during local updates.\nEffect of $M_k$ used to compute the mean In our algorithm, we chose to calculate $X_g, Y_g$ with all local data for each client. To observe a potential effect of $M_k$, we varied the $M_k$ used to compute the averaged data sent from other clients. Inevitably, reducing $M_k$ results in $X_g, Y_g$ having many more rows, imposing an additional computation burden and preserving privacy less well. In general, for both FEMNIST and CIFAR10, there is only a small performance decline as privacy is enhanced, as can be seen in Figure 3. This shows that using all local data to calculate each mean is sufficient to both preserve privacy and maintain good performance.\nMixup between hidden states Manifold Mixup (Verma et al., 2018) was proposed and shown to improve over input Mixup (Zhang et al., 2018) in various image classification tasks such as CIFAR10, CIFAR100, and SVHN. We discuss the possibilities and implications of applying Mixup between hidden states in Appendix E. In summary, we show that the variants using hidden states do not show meaningful advances over FedMix with input Mixup, suggesting that hidden-state Mixup is in general relatively inefficient, since it imposes an additional communication burden.\nEffect of non-iid-ness and client participation We claim that our method is efficient when faced with non-iid federated settings. For example, our setting of CIFAR10 with only data from 2 classes per client is very non-iid, as on average a pair of clients shares only roughly 20% of the data distribution. We test settings for CIFAR10 where clients have data from a greater number of classes, and while there is little difference in the iid (10 classes/client) setting, we observe that FedMix outperforms other methods and suffers less from the increased heterogeneity of highly non-iid settings (Table 6). In addition, we also observe less decline and better performance for MAFL-based algorithms, FedMix in particular, as we train fewer clients per round, reducing the communication burden at the cost of performance (Table 7)." }, { "heading": "5 CONCLUSION", "text": "We proposed MAFL, a novel framework that exchanges averaged local data to gain relevant information while still ensuring privacy. Under the new framework, we first suggested NaiveMix, a naive implementation of Mixup between local and received data. More interestingly, we proposed FedMix, which approximates global Mixup using only averaged data. MAFL, and FedMix in particular, showed improved performance over existing algorithms on various benchmarks, particularly in non-iid environments where each client’s data is distributed heterogeneously. While our method is effective and preserves privacy, future work is needed to deal with more varied non-iid environments, desirably with stronger privacy and beyond image classification tasks."
}, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by the National Research Foundation of Korea (NRF) grants (No.2018R1A5A1059921, No.2019R1C1C1009192) and Institute of Information & Communications Technology Planning & Evaluation (IITP) grants (No.2017-0-01779, XAI, No.2019-0-01371, Development of brain-inspired AI with human-like intelligence, and No.2019-0-00075, Artificial Intelligence Graduate School Program(KAIST)) funded by the Korea government (MSIT)." }, { "heading": "A ALGORITHMS", "text": "We present a brief depiction of FedAvg in Algorithm 3.\nAlgorithm 3: FedAvg\nInput: N,T,K,E,B, pk,Dk = {Xk,Yk}, k = 1, . . . , N, ηt, t = 0, . . . , T − 1 Initialize w0 for global server for t = 0, . . . , T − 1 do\nSt ← Kclients selected at random Send wt to clients k ∈ St for k ∈ St do\nwkt+1 ← LocalUpdate(k,wt) end wt+1 ← 1K ∑ k∈St pkw k t+1\nend\nLocalUpdate(k,wt): w ← wt for e = 0, . . . , E − 1 do\nSplit Dk into batches of size B for batch(X,Y ) do\nw ← w − ηt+1∇`(f(X;w),Y ) end\nend return w" }, { "heading": "B EXPERIMENTAL DETAILS", "text": "FEMNIST FEMNIST is EMNIST (Cohen et al., 2017), a handwritten MNIST dataset, organized into federated setting, as in Caldas et al. (2019). EMNIST is very similar to MNIST, but has several differences. It includes all 26 capital and small letters of alphabet as classes along with numbers, making it 62 classes in total to classify. Also, each image contains information of the writer of the letter. In a realistic non-iid setting, each client has local data consists of only one writer, which is about 200 to 300 samples per client in average, with differing number of samples. We use N = 100 clients and trained only K = 10 clients per communication round.\nWe used LeNet-5 (Lecun et al., 1998) architecture for client training. LeNet is consisted of 2 conv layers followed by 2x2 maxpool layer then 3 fc layers. We used 5x5 conv layers with 6 and 16 channels. Following fc layers have exactly the same hidden dimension of original LeNet-5 model.\nCIFAR10 and CIFAR100 CIFAR10 and CIFAR100 are very popular and simple image classification datasets for federated setting. Both contain 50,000 training data and 10,000 test data. We split the data into each client, N = 60 in case of CIFAR10 and N = 100 in case of CIFAR100. To create an artificial non-iid environment, we allocate data such that each client only has data from 2 (20 for CIFAR100) randomly chosen classes. We train only K = 15 clients per round for CIFAR10 and K = 10 for CIFAR100. No validation data was split and we used all training data for local training.\nWe used modified version of VGG architecture (Simonyan & Zisserman, 2015). Modified VGGnet is consisted of 6 convolutional layers with 3 max pooling layers. 3x3 conv layers are stacked and 2x2 maxpool layer is stacked after every 2 conv layers. Conv layers have channel sizes of 32, 64, 128, 128, 256, 256. Then 3 fc layers are stacked with hidden dimension 512. We use Dropout layer three times with probability 0.1 after the second, third maxpool layers and before the last fc layer. We remove all batch normalization layers since it is reported that they hurt federated learning performance (Hsieh et al., 2020).\nShakespeare We use dataset from The Complete Works of William Shakespeare, which is a popular dataset for next-character prediction task. We partition the dataset so that each client has conversations of one speaking role, as in Caldas et al. (2019), which naturally results in a heterogeneous setting, as in FEMNIST. 
Shakespeare We use the dataset from The Complete Works of William Shakespeare, a popular dataset for the next-character prediction task. We partition the dataset so that each client has the conversations of one speaking role, as in Caldas et al. (2019), which naturally results in a heterogeneous setting, as in FEMNIST. We use $N = K = 60$ clients, each with a different amount of data (the minimum is 200). Since input-level Mixup cannot be performed on discrete character inputs, we performed Mixup on the embedding layer. Additional concerns for this variant are considered in Appendix E.\nWe used a 2-layer LSTM, with a hidden dimension of 256 in both layers. The recurrent network is preceded by an embedding layer. The output of the LSTM is passed to a fully-connected layer, with a softmax output of one node per character. There are 84 characters used in the dataset.\nLocal clients are trained with the SGD optimizer with learning rate 0.01 and a per-round learning rate decay of 0.999. We set the local batch size to 10 for training. The specific hyperparameter settings for each dataset are given in Table 8. Throughout the experiments, $M_k$ is fixed to the local client's dataset size. Changes in these parameters, if made, are stated for each experiment. Note that we use a fixed small value of $\lambda$ for the MAFL-based algorithms, which shows superior performance." }, { "heading": "C PROOF OF PROPOSITION 1", "text": "We give a mathematical proof of Proposition 1 for FedMix.\nStarting from Eq. (3), since the loss function is linear in $y$ for the cross-entropy loss $\ell$,² we have\n$\ell(f(\tilde{x}), \tilde{y}) = (1-\lambda)\,\ell\big(f((1-\lambda)x_i + \lambda x_j), y_i\big) + \lambda\,\ell\big(f((1-\lambda)x_i + \lambda x_j), y_j\big)$. (6)\nUnlike the original paper, if we assume $\lambda \ll 1$, we can treat this loss as an objective for vicinal risk minimization (VRM). Under this assumption, each term in Eq. (6) can be approximated by an independent Taylor expansion in the input argument of $\ell$, so that we have\n$(1-\lambda)\,\ell\big(f((1-\lambda)x_i), y_i\big) + (1-\lambda)\,\frac{\partial \ell}{\partial x}\Big|_{(1-\lambda)x_i,\,y_i} \cdot (\lambda x_j) + \lambda\,\ell\big(f((1-\lambda)x_i), y_j\big) + \lambda\,\frac{\partial \ell}{\partial x}\Big|_{(1-\lambda)x_i,\,y_j} \cdot (\lambda x_j)$. (7)\nSince $\lambda \ll 1$, we can ignore the last term of Eq. (7), which is $O(\lambda^2)$. Simplifying this equation and swapping the order of the two middle terms, we finally obtain\n$\ell(f(\tilde{x}), \tilde{y}) \approx (1-\lambda)\,\ell\big(f((1-\lambda)x_i), y_i\big) + \lambda\,\ell\big(f((1-\lambda)x_i), y_j\big) + \lambda\,\frac{\partial \ell}{\partial x} \cdot x_j$. (8)\nThe derivative $\frac{\partial \ell}{\partial x}$ is calculated at $x = (1-\lambda)x_i$ and $y = y_i$. The coefficient of the last term is changed to $\lambda$ from $\lambda(1-\lambda)$ since we are ignoring $O(\lambda^2)$ terms." }, { "heading": "D COMPARISON OF FEDMIX WITH BASELINE MIXUP SCENARIOS", "text": "Since our MAFL-based algorithms can be viewed as VRM, they can be considered a form of data augmentation (Chapelle et al., 2000). Thus, it is important to confirm that the increased performance of MAFL comes not only from data augmentation but also from the relevant information received from other clients. To check whether this is true, we compare FedMix with algorithms where we use either randomly generated noise as the averaged data for Mixup (labeled Mixup w/ random noise) or only locally averaged data for Mixup (labeled Mixup w/ local means).\nIn Table 9, we observe that if the averaged data in MAFL is substituted with randomly generated noise or locally averaged images, the resulting methods do not reach the level of performance that FedMix is able to show. Thus, we claim that FedMix properly incorporates the relevant information received.\n²We implicitly assume classification tasks. For regression tasks, we can consider the squared loss function and use the equivalent loss that is linear in $y$." }
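As an informal numerical check of the derivation in Appendix C (our addition, under toy assumptions about the model and data), one can verify with automatic differentiation that the gap between the exact loss in Eq. (6) and the approximation in Eq. (8) should shrink roughly quadratically as $\lambda$ decreases:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(20, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 5))
x_i, x_j = torch.randn(20), torch.randn(20)
y_i, y_j = torch.tensor(1), torch.tensor(3)

def nll(x, y):
    # Cross-entropy of the toy model for a single example.
    return F.cross_entropy(model(x).unsqueeze(0), y.unsqueeze(0))

for lam in [0.2, 0.1, 0.05, 0.01]:
    # Eq. (6): exact (y-linearized) global Mixup loss.
    exact = (1 - lam) * nll((1 - lam) * x_i + lam * x_j, y_i) \
            + lam * nll((1 - lam) * x_i + lam * x_j, y_j)
    # Eq. (8): first-order approximation with the input-derivative term.
    x_m = ((1 - lam) * x_i).requires_grad_(True)
    l1 = F.cross_entropy(model(x_m).unsqueeze(0), y_i.unsqueeze(0))
    (g,) = torch.autograd.grad(l1, x_m)
    approx = (1 - lam) * l1.detach() \
             + lam * nll((1 - lam) * x_i, y_j) + lam * (g * x_j).sum()
    print(f"lam={lam:.2f}  |exact - approx| = {abs(exact - approx):.2e}")
```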
, { "heading": "E VARIANT OF FEDMIX AND NAIVEMIX WITH MIXUP BETWEEN HIDDEN STATES", "text": "While input Mixup methods promise significant enhancements, one might expect similar performance from hidden-state Mixup, originally proposed by Verma et al. (2018). The authors of that work suggest that Manifold Mixup demonstrates similar, if not greater, advantages in terms of performance and adversarial robustness. We can therefore consider variants of FedMix and NaiveMix that implement hidden-state Mixup, and test whether these variants outperform the vanilla methods based on input Mixup.\nAlthough the original paper proposed randomizing, for each batch, the layer $k$ whose hidden states undergo Mixup, we propose keeping this layer $k$ constant. This reduces communication cost significantly, since selecting a randomized layer for Mixup would require other clients to send multiple hidden states (which usually have large dimensions), further increasing the communication burden.\nAnother change concerns backpropagation: the original Manifold Mixup (Verma et al., 2018) backpropagates through the entire computational graph of the whole network, including the encoder (the part of the model projecting the input to the designated hidden representation) and the decoder (the rest of the model projecting the hidden representation to the output). This is impossible in a typical federated setting, since the computational graph used to calculate hidden representations of local data and the graph used for other clients' data are separate, and they cannot be updated simultaneously through local updates (doing so would require communicating encoder weights across clients at every local update, which is highly inefficient in terms of communication). Thus, during Mixup between hidden representations, only the decoder weights can be updated: updating only the encoder of the selected local client would desynchronize the encoder weights used to compute hidden states of local data from those used for other clients' data at every local update, so that the model would not learn properly.\nTo compensate for this downside, we propose performing vanilla SGD updates without Mixup after each Manifold Mixup SGD step (which updates weights only in the decoder). The vanilla updates let both the local encoder and decoder weights be updated, driving the model towards better hidden representations for Mixup. However, this modification not only imposes additional computation cost, but also does not guarantee better performance compared to input Mixup methods.\nWhile utilizing hidden representations from other clients sounds like a safe idea, it does not ensure data privacy, primarily because the updating client knows the exact encoder weight values used to calculate the received hidden states, and, with our modification, the received hidden states are treated as constants during Mixup. Model inversion attacks (Fredrikson et al., 2015) have been shown to recover input images from hidden states or outputs given access to weight values. Thus, direct Mixup between hidden states does not guarantee data privacy. The variants of FedMix and NaiveMix can instead be applied during the decoder training phase, so that privacy is ensured while we still successfully approximate Mixup.\nThe performance of the proposed algorithms on CIFAR10 is shown in Figure 4 (the same dataset, model, and training settings as the main experiment are used; see Appendix B). The comparison between methods in Figure 4(a) shows that while the variants of FedMix and NaiveMix improve over existing methods, they still do not outperform our method based on input Mixup (compare with the dotted line). Meanwhile, a comparison between using different layers for Mixup is shown in Figure 4(b).
It shows that $k = 4$ has the fastest learning curve but converges similarly to $k = 2$, both being slightly outperformed by input Mixup.\nConsidering the additional communication burden required to exchange hidden states (which often have larger dimensions than the raw input) and the necessity of communicating the hidden states every communication round (since hidden representations change with the encoder weights), we conclude that FedMix using input Mixup is superior, and use this method for our main analyses." }, { "heading": "F EFFECT OF LOCAL EPOCHS", "text": "Previous works (McMahan et al., 2017; Caldas et al., 2019) show that the number of local epochs, $E$, affects federated learning performance. We tested the effect of $E$ on CIFAR10. In general, we showed that test performance increases as $E$ increases. In addition, we observed that under various values of $E$, FedMix shows the best performance compared to other algorithms (see Table 10), being a close second to NaiveMix for $E = 10$. MAFL-based algorithms outperform existing algorithms for all values of $E$ tested." }, { "heading": "G ADDITIONAL COMPUTATION COST INCURRED BY MAFL", "text": "For FedMix, additional computation and memory are required on edge devices during model training in each communication round, since $\ell_{\text{FedMix}}$ requires additional terms, including the gradient with respect to the input, $\frac{\partial \ell}{\partial x}$, compared to vanilla FedAvg. We claim that FedMix does not result in a significant additional computation burden. Specifically, we trained FedMix on CIFAR10 with the same settings as the main experiment to 70% accuracy in 1.94 hours; FedAvg takes 1.95 hours. FedMix thus spends a comparable amount of time to reach a similar level of performance as FedAvg. Memory-wise, FedMix requires about twice the GPU memory allocation of FedAvg, but this phenomenon is also observed for LocalMix and NaiveMix; the extra memory burden comes from Mixup effectively doubling the input size. For instance, FedAvg requires 46.00MB of allocation, LocalMix 94.00MB, and FedMix 98.00MB. Calculating the input derivative adds only a negligible 2-3MB of memory usage, which is reasonable considering the substantial performance increase from LocalMix to FedMix." }, { "heading": "H INTRODUCTION OF CUT-OFF THRESHOLD IN MAFL", "text": "To better ensure privacy, a cut-off threshold that prevents clients with fewer data from sending averaged data can be introduced. We evaluated this on FEMNIST, since a heterogeneous local data size is necessary for such a procedure to be effective. We test with $N = 300$ clients and introduce different threshold levels to test their effect. In addition, we also test multiple $\lambda$ values, to see whether the threshold level affects the optimal value of $\lambda$ for FedMix.\nWe present the results in Table 11. While the threshold does not hugely affect performance, we observe that a moderately small threshold level of 100 results in the best performance. We suggest that as the threshold level is raised, there is less overfitting to clients with small local datasets, but the number of averaged data entries received by each client also decreases. We indeed find an appropriate threshold value that maximizes performance.\nWhen clients have different amounts of data, the sensitivity to $\lambda$ could also differ from the case where all clients have the same amount of data. Results in Table 13 show that there is little change in performance with $\lambda$, especially compared to Table 5.
In addition, an inspection of the performance of the global model on the individual test data of clients does not reveal any noticeable pattern with respect to the size of local data (see Table 13)." }, { "heading": "I MAFL IN CONJUNCTION WITH GAUSSIAN NOISE", "text": "With the results in Figure 3, we expressed concern that small values of $M_k$ cause privacy issues for only a small performance boost, if any. A common practice for introducing additional privacy is adding Gaussian noise. This is a popular method associated with differential privacy (McMahan et al., 2018), but adding noise alone does not guarantee differential privacy, since the noise level should be explicitly linked to the differential privacy parameters $\epsilon$ and $\delta$. Adding artificial pixel-wise noise enhances privacy but results in a quality drop of the averaged data. While privacy added by noise and privacy from averaging data cannot be directly compared, we can select a noise level which, in conjunction with a small $M_k$, visually provides data privacy similar to that of the maximum $M_k$.\nResults show that the introduction of Gaussian noise does result in a decline in performance (Table 14), although the decline is very small. Interestingly, as the noise gets larger ($\sigma = 0.3$), the random noise acts as data augmentation and results in a performance increase compared to $\sigma = 0$. This experiment is in line with Appendix D. We conclude that introducing noise in the averaged data could provide a reasonable alternative to FedMix with large $M_k$. While our method does not align directly with differential privacy, we leave as future work how FedMix could be smoothly combined with DP-related methods and how its privacy could be quantified in terms of differential privacy." }, { "heading": "J ADDITIONAL EXPERIMENTS: VARIATIONS OF FEDMIX", "text": "Averaging within $X_g, Y_g$ Further averaging between entries of $X_g, Y_g$ effectively extends the range of viable $M_k$ beyond $n_k$, in the sense that each averaged entry comes from multiple clients' data. Such a process also results in fewer entries in $X_g, Y_g$, so we tested the effect of this procedure on model performance. Table 15 shows that for $m$-fold extra averaging we even observe an increase in performance, but it quickly declines as $m$ gets too large. This method improves privacy while possibly even resulting in better performance.\nEffect of same-class split for averaging We perform a random split of local data for averaging, but an unbalanced split, such as averaging only data with the same class label, could result in better performance. We compared a random split with a same-class split while keeping $M_k = 0.5n_k$ equal for both methods. The same-class split resulted in a significant decline in performance, and we conclude that there is no advantage of such a split over the random split that we use for our main results.\nNaiveMix with varying Mixup ratio $\lambda$ We varied the Mixup ratio $\lambda$ for NaiveMix as well. The results in Table 17 show that NaiveMix also has an intermediate optimal value of $\lambda$. The drop in performance for $\lambda = 0.5$ is much more dramatic than for FedMix (see Table 5 for the comparison with global Mixup and FedMix).
We think the NaiveMix loss also suffers as it gives more weight to the averaged data, especially for large $M_k$.\nHeterogeneity from skewed label distribution Recent papers (Yurochkin et al., 2019; Wang et al., 2020) suggested an alternative heterogeneous environment, which does not limit the number of classes per client but skews the label distribution of the local data. We used a Dirichlet distribution with $\alpha = 0.2, 0.5$ as described by Yurochkin et al. (2019) and Wang et al. (2020). Results show that FedMix still outperforms all other algorithms. We think such label skewing introduces less heterogeneity than our practice of limiting the number of classes per client, but FedMix nevertheless remains the strongest method in terms of performance." } ]
2021
null
SP:332c59e494e36b043e48760cb2dfac206cafdcec
[ "The authors proposed that cerebellum in the brain computes synthetic gradients, as used in decoupled neural interfaces (DNI), to enable learning in neural circuits without waiting for gradients to propagate backwards. The authors incorporated several architectural properties of biological cerebellum into their cerebellar model. They showed that a LSTM trained with such synthetic gradients can learn a variety of tasks, from motor reaching to caption generation. The paper is clearly written, and the link between DNI and cerebellum is a novel idea. However, the authors made little attempt to actually compare their model with experimental findings from cerebellum (except for the cerebellar properties built into the network), limiting its scientific impact. Meanwhile, it is not clear whether the cerebellum-inspired DNI provides concrete advantages over DNI proposed in Jaderberg 2017." ]
The brain solves the credit assignment problem remarkably well. For credit to be correctly assigned across multiple cortical areas, a given area should, in principle, wait for others to finish their computation. How the brain deals with this locking problem has remained unclear. Deep learning methods suffer from similar locking constraints, both in the forward and backward phase. Recently, decoupled neural interfaces (DNI) were introduced as a solution to the forward and backward locking problems. Here we propose that a specialised brain region, the cerebellum, helps the cerebral cortex solve the locking problem, closely matching the computations and architecture of DNI. In particular, we propose that classical cerebellar forward and inverse models are equivalent to solving the backward and forward locking problems, respectively. To demonstrate the potential of this framework we focus on modelling a given brain area as a recurrent neural network in which the cerebellum approximates temporal feedback signals as provided by BPTT. We tested the cortico-cerebellar-DNI (CC-DNI) model in a range of sensorimotor and cognitive tasks that have been shown to be cerebellar-dependent. First, we show that the CC-DNI unlocking mechanisms can facilitate learning in a simple target reaching task. Next, by building on the sequential MNIST task we demonstrate that these results generalise to more complex sensorimotor tasks. Our cortico-cerebellar model readily applies to a wider range of modalities; to demonstrate this, we tested the model in a cognitive task, caption generation. Models without the cerebellar-DNI component exhibit deficits similar to those observed in cerebellar patients in both motor and cognitive tasks. Moreover, we used CC-DNI to generate a set of specific neuroscience predictions. Finally, we introduce a CC-DNI model with highly sparse connectivity as observed in the cerebellum, which substantially reduces the number of parameters while improving learning through decorrelation. Overall, our work offers a novel perspective on the cerebellum as a brain-wide decoupling machine for efficient credit assignment and opens a new avenue of research between deep learning and neuroscience.
[]
[ { "authors": [ "David E Rumelhart", "Geoffrey E Hinton", "Ronald J Williams" ], "title": "Learning representations by backpropagating", "venue": "errors. Nature,", "year": 1986 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Networks adjusting networks", "venue": "In Proceedings of\" Distributed Adaptive Neural Information Processing\",", "year": 1990 }, { "authors": [ "Dong-Hyun Lee", "Saizheng Zhang", "Asja Fischer", "Yoshua Bengio" ], "title": "Difference target propagation. In Joint european conference on machine learning and knowledge discovery in databases, pages 498–515", "venue": null, "year": 2015 }, { "authors": [ "Adam H Marblestone", "Greg Wayne", "Konrad P Kording" ], "title": "Toward an integration of deep learning and neuroscience", "venue": "Frontiers in computational neuroscience,", "year": 2016 }, { "authors": [ "Max Jaderberg", "Wojciech Marian Czarnecki", "Simon Osindero", "Oriol Vinyals", "Alex Graves", "David Silver", "Koray Kavukcuoglu" ], "title": "Decoupled neural interfaces using synthetic gradients", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "David Marr" ], "title": "A theory of cerebellar cortex", "venue": "The Journal of Physiology,", "year": 1969 }, { "authors": [ "James S Albus" ], "title": "A theory of cerebellar function", "venue": "Mathematical Biosciences,", "year": 1971 }, { "authors": [ "Jennifer L Raymond", "Javier F Medina" ], "title": "Computational principles of supervised learning in the cerebellum", "venue": "Annual review of neuroscience,", "year": 2018 }, { "authors": [ "Daniel M Wolpert", "R Chris Miall", "Mitsuo Kawato" ], "title": "Internal models in the cerebellum", "venue": "Trends in cognitive sciences,", "year": 1998 }, { "authors": [ "R.C. Miall", "D.J. Weir", "D.M. Wolpert", "J.F. Stein" ], "title": "Is the cerebellum a smith predictor", "venue": "Journal of Motor Behavior,", "year": 1993 }, { "authors": [ "Jeremy D. Schmahmann", "Xavier Guell", "Catherine J. Stoodley", "Mark A. Halko" ], "title": "The Theory and Neuroscience of Cerebellar Cognition", "venue": "Annual Review of Neuroscience,", "year": 2019 }, { "authors": [ "Mark J. Wagner", "Liqun Luo" ], "title": "Neocortex–Cerebellum Circuits for Cognitive Processing", "venue": "Trends in Neurosciences,", "year": 2020 }, { "authors": [ "James A. Brissenden", "David C. Somers" ], "title": "Cortico–cerebellar networks for visual attention and working memory", "venue": "Current Opinion in Psychology,", "year": 2019 }, { "authors": [ "Xavier Guell", "Franziska Hoche", "Jeremy D. Schmahmann" ], "title": "Metalinguistic Deficits in Patients with Cerebellar Dysfunction: Empirical Support for the Dysmetria of Thought", "venue": "Theory. Cerebellum,", "year": 2015 }, { "authors": [ "Xavier Guell", "John D.E. Gabrieli", "Jeremy D. Schmahmann" ], "title": "Triple representation of language, working memory, social and emotion processing in the cerebellum: convergent evidence from task and seed-based resting-state fMRI analyses in a single large cohort", "venue": null, "year": 2018 }, { "authors": [ "Ben Deverett", "Mikhail Kislin", "David W Tank", "S Samuel", "H Wang" ], "title": "Cerebellar disruption impairs working memory during evidence", "venue": "accumulation. 
bioRxiv,", "year": 2019 }, { "authors": [ "S C Baker", "R D Rogers", "Adrian M Owen", "C D Frith", "Raymond J Dolan", "R S J Frackowiak", "Trevor W Robbins" ], "title": "Neural systems engaged by planning: a PET study", "venue": "of the Tower of London task. Neuropsychologia,", "year": 1996 }, { "authors": [ "Julie A Fiez", "Steven E Petersen", "Marshall K Cheney", "Marcus E Raichle" ], "title": "Impaired non-motor learning and error detection associated with cerebellar damage: A single case study", "venue": null, "year": 1992 }, { "authors": [ "Jörn Diedrichsen", "Maedbh King", "Carlos Hernandez-Castillo", "Marty Sereno", "Richard B. Ivry" ], "title": "Universal Transform or Multiple Functionality? Understanding the Contribution of the Human Cerebellum across", "venue": "Task Domains. Neuron,", "year": 2019 }, { "authors": [ "Francois P Chabrol", "Antonin Blot", "Thomas D Mrsic-Flogel" ], "title": "Cerebellar contribution to preparatory activity in motor", "venue": "neocortex. Neuron,", "year": 2019 }, { "authors": [ "Zhenyu Gao", "Courtney Davis", "Alyse M Thomas", "Michael N Economo", "Amada M Abrego", "Karel Svoboda", "Chris I De Zeeuw", "Nuo Li" ], "title": "A cortico-cerebellar loop for motor", "venue": "planning. Nature,", "year": 2018 }, { "authors": [ "Jerome N Sanes", "Bozhidar Dimitrov", "Mark Hallett" ], "title": "Motor learning in patients with cerebellar dysfunction", "venue": null, "year": 1990 }, { "authors": [ "Peter A. Butcher", "Richard B. Ivry", "Sheng Han Kuo", "David Rydz", "John W. Krakauer", "Jordan A. Taylor" ], "title": "The cerebellum does more than sensory prediction error-based learning in sensorimotor adaptation tasks", "venue": "Journal of Neurophysiology,", "year": 2017 }, { "authors": [ "Abdulraheem Nashef", "Oren Cohen", "Ran Harel", "Zvi Israel", "Yifat Prut" ], "title": "Reversible Block of Cerebellar Outflow Reveals Cortical Circuitry for Motor Coordination", "venue": "Cell Reports,", "year": 2019 }, { "authors": [ "Terence D. Sanger", "Okito Yamashita", "Mitsuo Kawato" ], "title": "Expansion coding and computation in the cerebellum: 50 years after the Marr–Albus codon theory", "venue": "Journal of Physiology,", "year": 2020 }, { "authors": [ "N Alex Cayco-Gajic", "Claudia Clopath", "R Angus Silver" ], "title": "Sparse synaptic connectivity is required for decorrelation and pattern separation in feedforward networks", "venue": "Nature communications,", "year": 2017 }, { "authors": [ "Suzana Herculano-Houzel" ], "title": "The human brain in numbers: a linearly scaled-up primate brain", "venue": "Frontiers in Human Neuroscience,", "year": 2009 }, { "authors": [ "J C Eccles", "M Ito", "J Szentagothai" ], "title": "The Cerebellum as a Neuronal Machine", "venue": null, "year": 2013 }, { "authors": [ "N Schweighofer", "K Doya", "F Lay" ], "title": "Unsupervised learning of granule cell sparse codes enhances cerebellar adaptive control", "venue": "Neuroscience, 103(1):35–50,", "year": 2001 }, { "authors": [ "David S Broomhead", "David Lowe" ], "title": "Radial basis functions, multi-variable functional interpolation and adaptive networks. 
Technical report, Royal Signals and Radar Establishment Malvern (United Kingdom)", "venue": null, "year": 1988 }, { "authors": [ "Terence D Sanger", "Okito Yamashita", "Mitsuo Kawato" ], "title": "Expansion coding and computation in the cerebellum: 50 years after the Marr-Albus codon theory", "venue": "The Journal of Physiology,", "year": 2019 }, { "authors": [ "M Ito" ], "title": "Neurophysiological aspects of the cerebellar motor control system", "venue": "International journal of neurology,", "year": 1970 }, { "authors": [ "Masao Ito" ], "title": "Control of mental activities by internal models in the cerebellum", "venue": "Nature Reviews Neuroscience,", "year": 2008 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Rui Ponte Costa", "Ioannis Alexandros Assael", "Brendan Shillingford", "Nando de Freitas", "Tim Vogels" ], "title": "Cortical microcircuits as gated-recurrent neural networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Timothy P Lillicrap", "Adam Santoro" ], "title": "Backpropagation through time and the brain", "venue": "Current opinion in neurobiology,", "year": 2019 }, { "authors": [ "Jordan Guerguiev", "Timothy P Lillicrap", "Blake A Richards" ], "title": "Towards deep learning with segregated dendrites", "venue": "ELife, 6:e22901,", "year": 2017 }, { "authors": [ "João Sacramento", "Rui Ponte Costa", "Yoshua Bengio", "Walter Senn" ], "title": "Dendritic cortical microcircuits approximate the backpropagation algorithm", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Blake A Richards", "Timothy P Lillicrap" ], "title": "Dendritic solutions to the credit assignment problem", "venue": "Current opinion in neurobiology,", "year": 2019 }, { "authors": [ "Alexandre Payeur", "Jordan Guerguiev", "Friedemann Zenke", "Blake Richards", "Richard Naud" ], "title": "Burstdependent synaptic plasticity can coordinate learning in hierarchical circuits. bioRxiv, 2020", "venue": "doi: 10.1101/2020.03.30.015511", "year": 2020 }, { "authors": [ "Nasir Ahmad", "Marcel A J van Gerven", "Luca" ], "title": "Ambrogioni. 
GAIT-prop: A biologically plausible learning rule derived from backpropagation of error", "venue": "arXiv preprint arXiv:2006.06438,", "year": 2020 }, { "authors": [ "Guillaume Bellec", "Franz Scherr", "Elias Hajek", "Darjan Salaj", "Robert Legenstein", "Wolfgang Maass" ], "title": "Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets", "venue": null, "year": 1901 }, { "authors": [ "Jacques Kaiser", "Hesham Mostafa", "Emre Neftci" ], "title": "Synaptic plasticity dynamics for deep continuous local learning", "venue": "arXiv preprint arXiv:1811.10766,", "year": 2018 }, { "authors": [ "John Porrill", "Paul Dean", "Sean R Anderson" ], "title": "Adaptive filters and internal models: Multilevel description of cerebellar function", "venue": "Neural Networks,", "year": 2013 }, { "authors": [ "Quoc V Le", "Navdeep Jaitly", "Geoffrey E Hinton" ], "title": "A simple way to initialize recurrent networks of rectified linear units", "venue": "arXiv preprint arXiv:1504.00941,", "year": 2015 }, { "authors": [ "Matthis Synofzik", "Axel Lindner", "Peter Thier" ], "title": "The Cerebellum Updates Predictions about the Visual Consequences of One’s Behavior", "venue": "Current Biology,", "year": 2008 }, { "authors": [ "Andrea L Gebhart", "Steven E Petersen", "W Thomas Thach" ], "title": "Role of the posterolateral cerebellum in language", "venue": "Annals of the New York Academy of Sciences,", "year": 2002 }, { "authors": [ "Wojciech Marian Czarnecki", "Grzegorz Swirszcz", "Max Jaderberg", "Simon Osindero", "Oriol Vinyals", "Koray Kavukcuoglu" ], "title": "Understanding synthetic gradients and decoupled neural interfaces", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Mark J. Wagner", "Tony Hyun Kim", "Jonathan Kadmon", "Nghia D. Nguyen", "Surya Ganguli", "Mark J. Schnitzer", "Liqun Luo" ], "title": "Shared Cortex-Cerebellum Dynamics in the Execution and Learning of a Motor", "venue": "Task. Cell,", "year": 2019 }, { "authors": [ "Catherine J. Stoodley", "Jeremy D. Schmahmann" ], "title": "Functional topography in the human cerebellum: A meta-analysis of neuroimaging studies", "venue": "ISSN 10538119. doi: 10. 1016/j.neuroimage.2008.08.039. URL http://dx.doi.org/10.1016/j.neuroimage", "year": 2009 }, { "authors": [ "Naveen Sendhilnathan", "Anna E Ipata", "Michael E Goldberg" ], "title": "Neural correlates of reinforcement learning in mid-lateral cerebellum", "venue": null, "year": 2020 }, { "authors": [ "Maedbh King", "Carlos R Hernandez-Castillo", "Russell A Poldrack", "Richard B Ivry", "Jörn Diedrichsen" ], "title": "Functional boundaries in the human cerebellum revealed by a multi-domain task battery", "venue": "Nature Neuroscience,", "year": 2019 }, { "authors": [ "Reiko Ashida", "Nadia L Cerminara", "Richard J Edwards", "Richard Apps", "Jonathan C W Brooks" ], "title": "Sensorimotor, language, and working memory representation within the human cerebellum", "venue": "Human Brain Mapping,", "year": 2019 }, { "authors": [ "Nadia L. Cerminara", "Richard Apps" ], "title": "Behavioural significance of cerebellar", "venue": "modules. Cerebellum,", "year": 2011 }, { "authors": [ "Aparna Suvrathan", "Hannah L. Payne", "Jennifer L. Raymond" ], "title": "Timing Rules for Synaptic Plasticity", "venue": "Matched to Behavioral Function. 
Neuron,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318", "venue": "Association for Computational Linguistics,", "year": 2002 }, { "authors": [ "Chin-Yew Lin" ], "title": "ROUGE}: A Package for Automatic Evaluation of Summaries", "venue": "In Text Summarization Branches Out,", "year": 2004 }, { "authors": [ "Michael Denkowski", "Alon Lavie" ], "title": "Meteor universal: Language specific translation evaluation for any target language", "venue": "In Proceedings of the ninth workshop on statistical machine translation,", "year": 2014 }, { "authors": [ "Ramakrishna Vedantam", "C Lawrence Zitnick", "Devi Parikh" ], "title": "Cider: Consensus-based image description evaluation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Peter Anderson", "Basura Fernando", "Mark Johnson", "Stephen Gould" ], "title": "Spice: Semantic propositional image caption evaluation", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Robert A Barton", "Chris Venditti" ], "title": "Rapid evolution of the cerebellum in humans and other great apes", "venue": "Current Biology,", "year": 2014 }, { "authors": [ "Suzana Herculano-Houzel" ], "title": "Coordinated scaling of cortical and cerebellar numbers of neurons", "venue": "Frontiers in Neuroanatomy,", "year": 2010 }, { "authors": [ "Jaderberg" ], "title": "2017) the cerebellar network first predicts gradients for both the memory cell and output state of the LSTM (i.e. for each neuron in the main network the cerebellar network receives two inputs), so that the ‘cerebellar’ input and output size is two times the number of LSTM units, and this synthetic gradient ĝ is then scaled by a factor of 0.1 before being used by the main model for stability", "venue": null, "year": 2017 }, { "authors": [ "Jaderberg" ], "title": "Schematic of a biologically plausible RNN with eligibility traces that encode information needed to compute the gradients at a later point in time (green traces; cf", "venue": "(Bellec et al.,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Efficient credit assignment in the brain is a critical part of learning. However, how the brain solves the credit assignment problem remains a mystery. One of the central issues of credit assignment across multiple stages of processing is the need to wait for previous stages to finish their computation before others can proceed (Rumelhart et al., 1986; Schmidhuber, 1990; Lee et al., 2015; Marblestone et al., 2016; Jaderberg et al., 2017). In deep artificial neural networks these constraints are explicit. During the forward phase a given layer has to wait for all its previous layers to finish before it can proceed, a constraint known as the forward lock. Similarly, during the backward phase a given layer has to wait for all the layers above to finish computing its gradients – backward lock. Recently, a framework was introduced to decouple artificial neural networks – decoupled neural interfaces (DNI; (Jaderberg et al., 2017))1, effectively breaking forward and/or backward locks.\nHere, we propose that a specialised brain area, the cerebellum, performs a similar role in the brain. In the classical view the cerebellum is key for fine motor control and learning by constructing internal models of behaviour. (Marr, 1969; Albus, 1971; Raymond and Medina, 2018; Wolpert et al., 1998;\n1DNIs are related to earlier work on using network critics to train neural networks (Schmidhuber, 1990).\nMiall et al., 1993). More recently, however, the idea that the cerebellum is also involved in cognition has gained significant traction (Schmahmann et al., 2019; Wagner and Luo, 2020; Brissenden and Somers, 2019). An increasing body of behavioural, anatomical and imaging studies points to a role of the cerebellum in cognition in humans and non-human primates (Schmahmann et al., 2019; Brissenden and Somers, 2019; Guell et al., 2015; 2018). Impairments in cerebellar patients occur across a range of tasks including language (Guell et al., 2015), working memory (Deverett et al., 2019), planning (Baker et al., 1996), and others (Fiez et al., 1992). These observations suggest that the cerebellum implements a universal function across the brain (Marr, 1969; Albus, 1971; Raymond and Medina, 2018; Diedrichsen et al., 2019). Moreover, experimental studies looking at corticocerebellar interactions have demonstrated that cerebellar output is crucial for maintaining neocortical representations in order to drive behaviour (Chabrol et al., 2019; Gao et al., 2018). However, to the best of our knowledge, no theoretical framework has considered what might be the function of such interactions between the cerebellum and cortical areas.\nIn an attempt to reduce the existing gap between experimental observations and existing computational approaches we introduce DNI as a cortico-cerebellar model – cortico-cerebellar DNI (CC-DNI). Consistent with the cerebellar universal role we theorise that the cerebellum serves to break the locks inherent to both feedforward and feedback information processing in the brain, akin to DNI. In particular, we posit that the two classical internal models of the cerebellum, forward and inverse models, are equivalent to DNI-mediated unlocking of feedback (gradients) and feedforward communication, respectively. Following this view the cerebellum not only provides motor or sensory estimates, but also any other modality encoded by a particular brain region. 
Inspired by neuroscientific studies, we test our model on sensorimotor tasks: (i) a target reaching task (Sanes et al., 1990; Butcher et al., 2017; Nashef et al., 2019) and (ii) a set of more complex temporal tasks based on the MNIST dataset, as well as (iii) a cognitive task – caption generation (Guell et al., 2015). Our results support the cortico-cerebellar DNI models we study and show that they generally speed up learning by unlocking the main network, qualitatively consistent with a wide range of behavioural observations (Guell et al., 2015; Sanes et al., 1990; Butcher et al., 2017; Nashef et al., 2019).
Two defining features of the cerebellum are the large expansion at the granule cell input layer, with 50 billion neurons (the most numerous cell type in the brain), and the highly sparse connectivity (each granule cell receives ∼ 4 synapses) (Sanger et al., 2020). These observations have long been suggested to help speed up learning in the cerebellum through decorrelation (Albus, 1971; Sanger et al., 2020; Cayco-Gajic et al., 2017). Building on these studies we introduce a new DNI model, sparse CC-DNI. Consistent with classical cerebellar models (Albus, 1971; Cayco-Gajic et al., 2017) we show that input sparsity can improve learning in the presence of high correlations. We finish with a discussion on the implications and predictions of this new brain-wide model of the cerebellum." }, { "heading": "2 CEREBELLUM AS A DECOUPLING MACHINE", "text": "We first describe DNIs following Jaderberg et al. (2017) and then establish the link to cortico-cerebellar networks. Assume that a feedforward neural network consists of N layers, with the ith layer (1 ≤ i ≤ N) performing a "computational step" fi with parameters θi. Given input x at layer 1, the output of the network at its final layer is therefore given by fN(fN−1(... f2(f1(x)) ...)). We use F_i^j to denote the composition of steps from layer i to layer j (inclusive). Finally, let hi denote the (hidden) activity at layer i, so that hi = fi(hi−1) with h0 = x.
To illustrate the locking constraints of standard artificial neural networks used in deep learning, suppose that a network is in the process of learning via backpropagation, with current input-target pair (x, y_targ). To update the layer parameters θi the gradient ∂L/∂θi is required, where L = L(y, y_targ) is the loss which compares the target value against the model output y = F_1^N(x) under some loss function L; we then apply gradient descent on the parameters, θi ← θi − α ∂L/∂θi, with learning rate α > 0. Suppose however that the network has only recently received the input x and is currently only at module i of the forward computation. In order to update the corresponding parameters of that layer, θi, the layer must first wait for all remaining layers to finish fj (j > i) for the loss to be computed. Only then are the various gradients of the loss backpropagated and ∂L/∂θi finally available. These two characteristics of backpropagation make layer i "backward locked" to F_{i+1}^N, enforcing a strong dependence of the layer's learning on the speed of forward and backward propagation through the rest of the network. Similarly, the network is forward locked during the forward pass. DNI can be used to unlock both backward and forward locks (Jaderberg et al. 
2017).
To illustrate the model, here we focus on backward DNI, whose goal is to break the backward lock by feeding the hidden layer i activity hi to a separate neural network, a backward synthesiser C_i^B, that learns to produce a synthetic gradient ĝi – an estimate of the real gradient expected by layer i, Ci(hi) = ĝi ≈ ∂L/∂hi. This synthetic gradient can then be used to update the weights of layer i as soon as hi is available, following
θi ← θi − α_C ĝi ∂hi/∂θi ; ĝi = Ci(hi) (1)
More specifically, the parameters of C_i^B(hi) are learned by comparing the estimated gradient with a target gradient ḡi so as to minimise L_C = ||ḡi − ĝi||. Ideally we set the target gradient as the true gradient, ḡi = ∂L/∂hi, as can be implemented without difficulty in the case of feedforward networks. However, if we consider the temporal case (see sections 2.1.1 and 3) where we wish to estimate gradients of future losses many timesteps ahead, L = Σ_t Lt, the true backpropagated gradient is computationally expensive to obtain. Jaderberg et al. (2017) counter this potential problem by applying a bootstrapping principle in which a DNI module itself is used to guide learning; explicitly, the target gradient at timestep t is ḡt = Σ_{τ>t}^{T} ∂Lτ/∂ht + C_T(hT) ∂hT/∂ht, where T defines some limited horizon; note the interesting resemblance to the n-step return used in reinforcement learning algorithms." }, { "heading": "2.1 CORTICO-CEREBELLAR DNI MODELS", "text": "Building directly on DNI we introduce a model of cortico-cerebellar computation (CC-DNI). These models use a simple feedforward neural network, consistent with the mostly feedforward architecture of the cerebellum (Marr, 1969; Albus, 1971; Raymond and Medina, 2018). The input layer of the cerebellum module C models the mossy fiber input onto granule cells (GCs; Fig. 1), which are the most numerous neurons in the brain (> 50 billion in humans (Herculano-Houzel, 2009)). Consistent with this cerebellar dimensionality expansion, in our models we use M ≫ N, where M is the number of GCs and N the number of neurons in the main (cortical) network (Fig. 1). In particular, we use ratios M/N ∼ 4, consistent with experimental observations (Herculano-Houzel, 2009). This is different to DNI, in which the synthesiser uses a single hidden layer with the same number of units as the LSTM (for a performance comparison see Fig. S2). In addition, the hidden layers and the output of C approximate the role of GCs and Purkinje cells, and the cerebellar output nuclei, respectively.
Cerebellar granule cells receive sparse input connections (K) with only around 3-7 synapses per GC (Herculano-Houzel, 2009; Eccles et al., 2013). These architectural constraints have led to sparse encoding and decorrelation theories of cerebellar function (Albus, 1971; Cayco-Gajic et al., 2017; Schweighofer et al., 2001; Broomhead and Lowe, 1988; Billings et al., 2014; Litwin-Kumar et al., 2017; Sanger et al., 2019). Inspired by these features, we introduce a new model – sparse CC-DNI (sCC-DNI) – for which we set a small number of incoming input connections K = 4 (Fig. 1).
To measure the decorrelation achieved by sCC-DNI we use the Pearson correlation and a population correlation metric rpop that has been shown to better capture cerebellar effects (Cayco-Gajic et al., 2017): rpop = Z/(Z−1) (max_i{√λi} / Σ_i √λi − 1/Z), where Z is the number of neurons being considered and λi are the eigenvalues of the covariance matrix of the neuronal activity (e.g. hM). 
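For concreteness, a minimal NumPy sketch of this population correlation metric (the function name and the samples-by-neurons input layout are our assumptions, not from the paper):

```python
import numpy as np

def population_correlation(h):
    """r_pop of Cayco-Gajic et al. (2017), as defined above.

    h: array of shape (n_samples, Z), the recorded activity of Z neurons.
    """
    Z = h.shape[1]
    # eigenvalues of the Z x Z covariance matrix of the neuronal activity
    lam = np.linalg.eigvalsh(np.cov(h, rowvar=False))
    sq = np.sqrt(np.clip(lam, 0.0, None))  # guard against tiny negative eigenvalues
    return (Z / (Z - 1)) * (sq.max() / sq.sum() - 1.0 / Z)
```

With this normalisation, rpop is 0 for uncorrelated activity of equal variance and approaches 1 when a single population mode dominates.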
To manipulate the hidden correlations of the model, we adjust the variance of its input and recurrent weights by scaling each by a factor b with b ≠ 1 (see SM). There is a strong similarity between CC-DNI models and the flow of information in standard internal models of the cortico-cerebellar networks (Fig. 1, Table S1). Below we draw a parallel between classical cerebellar internal models and our CC-DNI models. For simplicity, below and in our results we focus on the link between forward internal models and backward DNI (but see S1 for our interpretation of the inverse internal model as forward DNI)." }, { "heading": "2.1.1 FORWARD CEREBELLAR MODELS AS BACKWARD DNI", "text": "We propose that classical forward cerebellar models are equivalent to backward DNI. In the forward model of sensorimotor control, the cerebellum receives an efferent copy of the motor command from the motor cortex (Miall et al., 1993; Ito, 1970), and sensory feedback from motor centres. With these two inputs the forward model learns to predict the sensory consequences of motor commands. We argue that a similar predictive model can be applied to predict cerebral activity in brain regions such as the prefrontal cortex and the temporo-parietal cortex, which are involved in planning of cognitive behaviour and decision making (Schmahmann et al., 2019; Wagner and Luo, 2020; Brissenden and Somers, 2019; Ito, 2008) (Fig. 1a). Similarly to the forward model, backward DNI also receives an efferent copy, which is represented by the hidden activity from a given layer (or brain area) hi. From this input the synthesiser learns to predict the gradient with respect to the activity of layer hi+1 (Fig. 1a,c). In addition, we suggest that the cost function is computed by the inferior olive (e.g. L = ||gM − ĝM||; Fig. 1a), which mediates learning in the cerebellum via climbing fibres, consistent with existing cerebellar theoretical frameworks (Marr, 1969; Albus, 1971; Raymond and Medina, 2018). Finally, the prediction ĝM is sent back to the brain.
Such backward DNIs are of particular importance when using recurrent neural networks (RNNs), which learn to estimate future gradients from timestep t+1 given the current state at time t (Jaderberg et al., 2017) (Fig. 1b; see SM for more details). Our tasks use RNNs with weak (i.e. with low truncations) backpropagation through time (BPTT), as this allows us to more clearly demonstrate the potential of the CC-DNI models. As is the case in DNIs, the cerebellum relies on a bootstrapped cost function (Fig. 1c). We interpret such bootstrapping as being provided by the current or other cerebellar modules, consistent with the highly modular cerebellar architecture (Apps et al., 2018). Here we use LSTMs (Hochreiter and Schmidhuber, 1997) as a model of cortical networks, which can potentially be linked to cortical microcircuits (Costa et al., 2017)." }, { "heading": "2.1.2 ENCODING GRADIENTS IN THE BRAIN", "text": "Recent developments have introduced biologically plausible solutions to how the brain encodes gradients (Lillicrap and Santoro, 2019; Guerguiev et al., 2017; Sacramento et al., 2018; Richards and Lillicrap, 2019; Payeur et al., 2020; Ahmad et al., 2020). Regarding spatial backpropagation of gradients (i.e. across layers), Sacramento et al. (2018) demonstrated that biological networks do not need to explicitly send gradient information, but rather that this can be reconstructed locally in dendritic microcircuits. On the other hand, Bellec et al. 
(2019) showed that temporal gradients as used in BPTT can be approximated by eligibility traces that transmit information forward in time. Both of these solutions can be incorporated into our framework, in which CC-DNI would predict feedback activity originating from upstream brain areas (as in Sacramento et al. (2018)) and/or eligibility traces in RNNs (see S2 for a schematic; Bellec et al. (2019)). However, this is outside of the scope of the current paper and we leave it to future work (see more detailed discussion in SM)." }, { "heading": "2.2 COMPARISON WITH EXISTING CEREBELLAR COMPUTATIONAL MODELS", "text": "Over the past decades the regular circuitry of the cerebellum has driven theorists to model the cerebellar cortex in order to understand its function (Kaiser et al., 2018). The Marr-Albus models are particularly popular, and view the local cerebellar circuitry as multi-layer perceptrons (Albus, 1971). Another modelling framework is based on Kalman filters, which model the local circuitry to generate sensorimotor predictions (Porrill et al., 2013). These models have enabled significant theoretical and experimental advances. However, they do not consider potential interactions with the wider brain, in particular the neocortex, which are known to exist and appear to be critical for cerebellar function (Gao et al., 2018). Moreover, these models have so far only captured relatively simplistic (sensorimotor) tasks (e.g. eye-blink conditioning). In contrast, our framework provides a solution to both of these issues, while being consistent with existing local models of the cerebellum." }, { "heading": "3 RESULTS", "text": "In this paper we focus our experiments on backward-CC-DNI models (cerebellar forward model; Fig. 1a,b), which we test on a range of task domains that have been shown to be cerebellar-dependent. Below we provide a summary of these different tasks and the respective results (see details in SM).
3.1 SIMPLE SENSORIMOTOR TASK: TARGET REACHING
Inspired by classical sensorimotor studies in the cerebellum we first test a simple target reaching task (Sanes et al., 1990; Butcher et al., 2017; Nashef et al., 2019), in which given an input at time t1 the network needs to reach a target (Fig. 2a and S8; see SM for more details). In this task error signals are only available at the end of the task, and must be associated with the initial input. In line with cerebellar experiments (Sanes et al., 1990; Butcher et al., 2017; Nashef et al., 2019), only the CC-DNI models learn to reach the target with good accuracy, while the LSTM still learns, but much more slowly (Fig. 2a,b). Interestingly, sparse CC-DNI not only learns more quickly than CC-DNI (Fig. 2b,d, and S3), but also reaches the target more quickly (i.e. in fewer time steps) than the two other models (Fig. 2c). Consistent with cerebellar theories (Sanger et al., 2020; Cayco-Gajic et al., 2017; Billings et al., 2014), sCC-DNI helps when high correlations are present in the main network activity h (Fig. 2e), due to decorrelation of its estimated gradients ĝ (Fig. 2f; see S6 for pop. corr.). CC-DNI properties are particularly important when learning with low BPTT truncations (Fig. 
2d), predicting that the cerebellum is particularly important for more temporally challenging tasks, consistent with DNI theory (Jaderberg et al., 2017).
3.2 ADVANCED SENSORIMOTOR TASKS
Next, to test whether the target reaching results generalise to more realistic tasks, we explored a range of more advanced sensorimotor tasks building on the classical MNIST dataset. We first test the standard sequential MNIST (seqMNIST), in which the model gradually receives an MNIST image row by row before classifying the image as a number 0–9 (Fig. 3a) (Le et al., 2015). As in the target reaching task, the loss function is only defined at the end of the sequence, making this a challenging temporal task.
With truncated BPTT with modest truncation sizes T ≤ 6, CC-DNI models learn more quickly and achieve better overall performance compared to a standard LSTM (Fig. 3b,c), in line with the previous task. Interestingly, for this task we observe the cerebellar architecture (i.e. high M/N ratio and sparse input connections K) to be the best solution (Figs. S4, S5). As in the target reaching task, sparse CC-DNI helps in the case of high correlations in the main network activity h, likely achieved by the decorrelation effect sCC-DNI has on the estimated gradients ĝ (Figs. 3d, S7).
To see how well the above results generalise to a regression task in which the model has to reach a particular point (as in the target reaching task), we develop a coordinate seqMNIST (c-seqMNIST) task. As before, the model receives the image sequentially, but must now output one out of ten different equally spaced 2D coordinates (Fig. 3a). Note that this is a harder task, as models have to learn to transform a complex input pattern towards a specific 2D point (Fig. 3b). Both DNI models outperformed the LSTM model. There is, however, notably less difference between sCC-DNI and CC-DNI compared to the standard version of seqMNIST (see also the following tasks), which may suggest that the cerebellum is better suited for pattern separation tasks, in line with existing cerebellar pattern separation theories (Albus, 1971; Sanger et al., 2020; Cayco-Gajic et al., 2017; Litwin-Kumar et al., 2017).
Next, we test an explicit drawing task in which we train the models to draw either a template digit (dd-seqMNIST) or a straight line (ld-seqMNIST) given an image of a digit (Fig. 3a). For these tasks, the loss function is computed at every point in time as L = (yi − ŷi)², where yi and ŷi denote the desired and predicted coordinate at timestep i, respectively. Due to the loss being computed at every timestep in both tasks, CC-DNI models are not as important and show a minimal improvement in performance (Figs. 3b, S4). To demonstrate that the temporal resolution of the loss is important for CC-DNI, we varied the loss sparsity, giving the model a loss every γ timesteps for ld-seqMNIST. This setup resembles sensorimotor feedback, which is typically periodic rather than continuous (Sanes et al., 1990; Synofzik et al., 2008). As expected, sparser losses (higher γ) improve learning (Fig. 3b)." }, { "heading": "3.3 COGNITIVE TASK: CAPTION GENERATION", "text": "Our framework not only applies to sensorimotor tasks, but should generalise to virtually any task within the grasp of deep learning systems, or indeed the brain. To demonstrate this, and inspired by cognitive tasks in which cerebellar patients have shown deficits (Gebhart et al., 2002), we test our models on a caption generation task. 
In this task the network needs to generate a textual description for a given image. All models have two components: a pretrained convolutional neural network (CNN) to extract a lower-dimensional representation of the image, and an LSTM on top to generate text (Fig. 4a). For simplicity, and to draw comparison with previous tasks, the DNI models contain one synthesiser at the RNN level, but more could be added to help learning of the CNN.
We use a standard dataset (ILSVRC-2012-CLS (Russakovsky et al., 2015)) and the networks are trained to maximise the likelihood of each target word given an image (see SM for more details). We find that CC-DNI models exhibit faster learning (Fig. 4b) for truncation sizes T ≤ 10 (Fig. 4d) and better generalisation (Fig. S13; see Czarnecki et al. (2017) for discussion of the regularisation effect of synthetic gradients). All models produce reasonable captions for images unseen during training, but CC-DNI models tend to produce captions more semantically accurate than LSTMs (Figs. 4c, S12), consistent with cerebellar deficits (Guell et al., 2015). In addition, we observe that sCC-DNI is more robust to initial conditions (Fig. 4d), consistent with the decorrelation property. Moreover, consistent with the bridging of gradient truncations provided by DNIs, we observe that CC-DNI helps learning in the presence of long captions, whereas sCC-DNI helps mostly for the most common caption lengths (Fig. 4d), suggesting that sCC-DNI better captures the data distribution.
3.4 MODEL PREDICTIONS
Figure 5: Cerebellar sparsity (K) and expansion (M/N) parameters in seqMNIST (top) and c-seqMNIST (bottom). Grey, orange and cyan boxes denote LSTM, CC-DNI and sCC-DNI models, respectively.
Our model makes numerous predictions for experimental neuroscience, some of which can be compared with existing observations. First, we tested the importance of the exact cerebellar expansion (M/N ∼ 5) and sparsity (K = 4) parameters observed experimentally. We find that the parameters found experimentally provide a good combination to facilitate learning in the pattern recognition sequential MNIST task (Fig. 5a). This region of good hyperparameters is much wider in regression tasks such as the coordinate sMNIST (Fig. 5b) or other variants (Fig. S4). This predicts that the cerebellum has evolved to specifically facilitate pattern recognition tasks.
Figure 6: CC-DNI ablation results. (a) (top) Learning curve for the target reaching task with multiple total cerebellar ablations at specific epochs (vertical line). (bottom) Summary of average error normalised to the control model, for both CC-DNI (orange) and sCC-DNI (cyan). * denotes the LSTM case. (b) Same as (a) but for seqMNIST.
A common experimental paradigm is to perform cerebellar ablation experiments. We used our model to generate specific predictions in terms of behavioural outcome with total cerebellar ablations at different points during learning. We find that ablations generally impair learning (Fig. 6a-b), but that this impairment decreases over learning; once the main network no longer needs the cerebellar estimations, their presence can even hinder learning (Fig. 6a-b).
Next, we used our model to make predictions in terms of expected correlations between the neocortex and the cerebellum. 
Because of the complex temporal gradients in the RNN case, here we focus on a simple feedforward network trained on MNIST. In addition, we use this example to demonstrate that when approximating activity directly instead of gradients, as we propose to happen in biology, the cerebellum output converges to some non-zero value (Fig. 7, top). We find that correlations between the main network and the cerebellum increase over learning for neurons with high final correlations, whereas they decrease for neurons with high initial correlations (Fig. 7, bottom), consistent with recent experimental observations (Wagner et al., 2019). Moreover, we also observe that the variance of total correlations decreases over learning (not shown).
4 CONCLUSIONS AND DISCUSSION
We introduced a deep learning model of cortico-cerebellar function, in which the cerebellum acts as a decoupling machine. We demonstrate that this idea is directly related to classical cerebellar models (Marr, 1969; Albus, 1971), but with the key prediction that the cerebellum decouples activity propagation across the brain, speeding up credit assignment across multiple brain areas. Our model is also a step towards solving the temporal credit assignment problem in the brain, as the CC-DNI models reduce the need for strong BPTT (i.e. BPTT without truncations). Temporal credit assignment is directly related to the ability of networks to maintain input information (Bellec et al., 2019). Therefore, our results on truncated BPTT suggest that the cerebellum becomes increasingly important as task difficulty increases, due to the inability of cortical networks to maintain information for longer than hundreds of milliseconds (Deverett et al., 2019).
Our results are largely consistent with observed cognitive deficits in cerebellar patients, such as in language (Guell et al., 2015), which cannot be attributed directly to deficits in motor control (Stoodley and Schmahmann, 2009). Because of the explicit use of labels (or desired targets) we have relied here on supervised learning settings (note, however, that learning of language models, as in caption generation, is often done in an unsupervised, self-supervised setting), but the same framework can be easily applied in reinforcement and other unsupervised learning settings (Jaderberg et al., 2017), which do not require an explicit teacher. Indeed, the cerebellar model we propose here is of particular relevance for reinforcement learning due to the prevalence of sparse and delayed rewards, consistent with recent observations of reward-related signals in the cerebellum (Sendhilnathan et al., 2020).
Our work makes numerous predictions and opens the exciting possibility of comparing CC-DNI models with recent studies across a range of tasks (King et al., 2019; Ashida et al., 2019). We performed three separate experiments to highlight predictions made by the model: (i) we used ablations to show how behavioural performance is shaped by cerebellar predictions, (ii) a study of cerebellar expansion and sparsity that suggests that the cerebellum may have evolved to facilitate learning in specific tasks, and (iii) a study of how correlations between the main network and the cerebellum develop over learning (see Section 3.4). Moreover, the model also predicts which connections should project to GCs (source brain area activity) and the inferior olive (target brain area activity). In addition, as demonstrated by Czarnecki et al. 
(2017), the CC-DNI model also predicts that without a cerebellum the neocortical representations should become less distributed across different brain areas. Furthermore, as shown by Jaderberg et al. (2017), these models enable a network to be trained in a fully decoupled fashion, which means that it can update asynchronously. Given that the brain is asynchronous, this may be a fundamental benefit of having a cerebellar system.
Finally, our framework can also inspire new deep learning DNI systems. For example, a key feature of cerebellar architectures is the many functionally separate modules (Apps et al. (2018); Cerminara and Apps (2011); Suvrathan et al. (2016); Fig. 1c), which could be used to speed up learning, generalising the ideas used by Jaderberg et al. (2017)." }, { "heading": "A EXPERIMENTAL DETAILS", "text": "In each of our tasks we use a long short-term memory network (LSTM; Hochreiter and Schmidhuber, 1997) as the main "cortical" network (Costa et al., 2017), and a simple feedforward network of one (sensorimotor tasks) or two hidden layers (caption generation) as the synthesiser "cerebellar" network. As in Jaderberg et al. (2017), the cerebellar network first predicts gradients for both the memory cell and output state of the LSTM (i.e. for each neuron in the main network the cerebellar network receives two inputs), so that the 'cerebellar' input and output size is two times the number of LSTM units, and this synthetic gradient ĝ is then scaled by a factor of 0.1 before being used by the main model, for stability purposes. The final 'readout' of the model is a (trained) linear sum of the LSTM output states. All networks are optimised using ADAM (Kingma and Ba, 2014) (see learning rates below). In each experiment all initial LSTM parameters are drawn from the uniform distribution U(−1/√n_LSTM, 1/√n_LSTM), where n_LSTM is the number of LSTM units. Other than the final layer, as in Jaderberg et al. (2017), the feedforward weights of the cerebellar network are initialised according to U(−b_k, b_k), where b_k denotes the "kaiming bound" as computed in He et al. (2015) (slope a = √5), and the biases are drawn from U(−1/√n_inp, 1/√n_inp), where n_inp denotes the input size of the layer. As in Jaderberg et al. (2017), the last layer (both weights and bias) of the cerebellar network is zero-initialised, so that the produced synthetic gradients are zero at the start. A range of correlations (here quantified as the Pearson or population correlation coefficient) for the LSTM activity h can be achieved naturally through different random initialisations (accessed through different random seeds), but can also be controlled by scaling the initial parameters of the LSTM with a bias factor b, where b > 1 and b < 1 induce greater and smaller variability in parameter values, respectively. The results demonstrating the effect of correlation (Figs 2f,g, 3d) are obtained across 10 random seeds, for multiple bias factor values b ∈ [0.01, 0.1, 0.5, 1, 2]. During learning, backpropagation through time (BPTT) takes place strictly within distinct truncations, and computed gradients are not propagated between them: truncated BPTT. As soon as the forward computation of a given truncation is completed, assuming a corresponding (synthetic) loss gradient is available, the model parameters are updated; that is, at the resolution of the preset truncation size, there is online learning. 
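To make this update concrete, a compressed PyTorch sketch of one truncation in this regime (sizes follow the seqMNIST setup; the helper name and loss handling are our assumptions, the sCC-DNI input mask and the bootstrapped synthesiser target δ̄ of equation 2 are only indicated in comments):

```python
import torch
import torch.nn as nn

n_hid = 30                      # LSTM units; synthesiser I/O is 2 * n_hid (cell + output state)
lstm = nn.LSTMCell(28, n_hid)   # main "cortical" network
readout = nn.Linear(n_hid, 10)
synth = nn.Sequential(nn.Linear(2 * n_hid, 300), nn.ReLU(), nn.Linear(300, 2 * n_hid))
# for sCC-DNI, a fixed binary mask with K = 4 nonzero entries per row could be
# applied to the first synthesiser layer to sparsify the "mossy fibre" input.
opt = torch.optim.Adam(list(lstm.parameters()) + list(readout.parameters()), lr=1e-4)

def run_truncation(xs, target, state):
    """Forward through one truncation, then update immediately using the
    scaled synthetic gradient at its boundary (cf. equation 1), without
    waiting for future losses."""
    h, c = state
    for x in xs:
        h, c = lstm(x, (h, c))
    loss = nn.functional.cross_entropy(readout(h), target) if target is not None else 0.0
    g = 0.1 * synth(torch.cat([h, c], 1).detach())   # scaled synthetic gradient
    gh, gc = g.split(n_hid, 1)
    opt.zero_grad()
    ((h * gh.detach()).sum() + (c * gc.detach()).sum() + loss).backward()
    opt.step()
    # synth itself is trained separately, regressing g onto the bootstrapped
    # target of equation 2, which becomes available one truncation later.
    return h.detach(), c.detach()
```

The `(h * g.detach()).sum()` term simply injects g as the gradient of the (unknown) future loss with respect to h, so that `backward()` propagates g ∂h/∂θ into the main network's parameters.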
We split the original input into truncations as follows. Given an input sequence of N timesteps x1, x2, . . . , xN and a truncation size T, we divide the sequence into T-sized truncations, with any remainder forming the last truncation. In other words, the sequence is now made up of the truncations (x1, . . . , xT), (xT+1, . . . , x2T), . . . , (x(m−1)T+1, . . . , xmT), (xmT+1, . . . , xN), where N = mT + r for integers m, r with 0 ≤ r < T. Note that, along with the value T, how well the sequence is divided into truncations (i.e. the values m, r) is an important factor for learning. Unless stated otherwise, reported error/loss values are on a held-out validation set at the end of learning. All experiments were conducted using PyTorch." }, { "heading": "A.1 TARGET REACHING TASK", "text": "In the target reaching task, an LSTM network receives a discrete input cue which signals the network to move to a particular target location in 2D space. In the results presented in the main text (Fig. 2) we set 6 distinct non-zero input-target pairs (xi, yi), i = 1, . . . , 6, where each input xi is a (one-dimensional) integer ∈ {±1, ±2, ±3}, and the targets yi lie equidistantly on a circle centred on the origin with radius 10. Once an input cue is received at timestep t0, the model receives no new information (i.e. all future input is zero) and has 10 timesteps to move to the corresponding target location. The model is trained to minimise the mean squared error (MSE) between its final output and the cue-based target. In addition, if no input cue is provided (x = 0), the network is trained to not move at all (y = 0). Note therefore that only the final model output 'matters' in the sense that it defines model performance. The cortical network has one hidden layer of 10 LSTM units (i.e. 20 when including the memory cells); unless stated otherwise, the cerebellar network contains one hidden layer 
Only the final model output matters for the seqMNIST and c-seqMNIST tasks - that is, the loss is solely defined at the end of the sequence - whereas the model output is assessed at every timestep for dd-seqMNIST and ld-seqMNIST. In each case we have one hidden layer of 30 LSTM units in the main model and one hidden layer of 300 (unless stated otherwise) hidden units in the feedforward cerebellar network. Data was presented in batches of 50 with an initial learning rate of 0.0001. Training and validation data was assigned a 4 : 1 split, containing 48000 and 12000 distinct image/number pairs respectively. The truncation values T ∈ [2, 3, 4, 5, 7, 10] are considered. Model performance averaged (with error bars) over 3 random seeds for weight initialisation." }, { "heading": "A.2.1 SEQMNIST", "text": "This is the standard form of seqMNIST, where at the end of the presentation of the image the model must classify in the image as a number between 0 and 9. The output of the model is a vector probabilities of size 10 (one entry for each number), and the model was trained to maximise the likelihood of the correct number." }, { "heading": "A.2.2 C-SEQMNIST", "text": "In this variant each number 0-9 MNIST image is allocated an xy position on the edge of a circle centred at 0, with the position uniquely determined by the digit represented by the image (Fig. 3a). With the model output then a vector of size 2, the training loss is defined at the end by the mean squared error (MSE) between the final output of the model and the target coordinate; since the radius of the circle is length 10 (arbitrary unit), a static model, where the model output remains at 0, would have a constant MSE of 100." }, { "heading": "A.2.3 DD-SEQMNIST", "text": "Like c-seqMNIST, in this variant the model outputs 2D coordinates, but now the model must learn to predict an entire sequence of coordinates {ŷi}28i=1. The target sequence {yi}28i=1 can be considered a drawing, and in this case resembles the shape of the number itself (Fig. 3c; number 0 not shown). The model is then trained at each timestep to minimise the MSE between the model output at that time and corresponding target point, so that the loss at timestep t defined as MSE(yt, ŷt). For each number, the corresponding target drawing lies in [0, 1]2, with the gap between each successive point roughly the same. To prevent the model being too harshly judged at timestep 1, all drawings begin in the top left corner (0, 1) (apart from the drawing of 1 which begins slightly beneath/to the right). MSE scores are reported as 100 times their raw values to ease comparison with c-SEQMNIST/dd-SEQMNIST." }, { "heading": "A.2.4 LD-SEQMNIST", "text": "This variant can be thought of as a mixture between c-seqMNIST and dd-seqMNIST. Like dd-seqMNIST the model is constantly assessed and must produce a desired shape, but now\nthe desired set of points form an equally spaced line where the start point is the origin and the end point is determined by the number-to-coordinate mapping of c-seqMNIST. As with dd-seqMNIST, the loss is defined at each timestep by the difference between the model output and appropriate point on the line." }, { "heading": "A.3 CAPTION GENERATION", "text": "The architecture for the caption generation task consists of a pretrained CNN coupled with an RNN (LSTM in this case). The synthesiser (cerebellar network) only communicates to the LSTM. The LSTM network has one layer of 256 LSTM units and the cerebellar network has two hidden layers of 1024 neurons. 
The dynamics from image to caption is as follows. As part of image preprocessing and data augmentation, a given image is randomly cropped to size 224× 224, flipped horizontally with even chance, and appropriately normalised to be given to a pretrained resnet model (He et al., 2016). A feature vector X of size 256 is thus obtained and is passed to the LSTM at timestep 0. The LSTM is subsequently presented the “gold standard” caption {wi}ni=1 one word per timestep, each time learning to predict the next word; i.e., at timestep t the model learns P (wt|X, {wi}t−1i=1). The network simultaneously learns a word embedding so that each word wi is first transformed to a feature vector of size 256 before being served as input. With a preset vocabulary of 9956 distinct words, the final output of the model (P (wi)) is a probability vector of size 9956. We find that though DNI could help without any explicit method for regularisation (not shown), all models are prone to overfitting. For this reason, we apply dropout (during training) on the input to the LSTM, where a given input element is set to zero with p = 0.5 probability. Once training is complete the models can generate their own unique captions to previously unseen images (Figs. 4, S12). Given an image at timestep 0„ the model applies a ‘greedy search’ where the output of the model at timestep i is the word with the highest probability, and the same word is then provided as input to the model at timestep i+ 1. In this way the model eventually outputs an entire sequence of words which forms a caption. In the (highly) rare case where the model generates a sequence of > 20 words, we consider only the first 20 words as its caption. The coco training set ILSVRC-2012-CLS (Russakovsky et al., 2015) holds 414113 total image-caption pairs with 82783 unique images while the held-out validation set holds 202654 with 40504 unique images; note that each image therefore has ∼ 5 distinct gold standard captions. Training takes place in batches of 100 image/caption pairs, with learning rate of 0.001. Model performance averaged (with error bars) over 5 random seeds for weight initialisation. In order to judge the models beyond their learning curves in BPW, we quantify their ability to generate captions using a variety of language modelling metrics popular in the realm of language evaluation (image captioning, machine translation, etc). In particular, we compare model-generated captions against the gold standard captions using the metrics BLEU, Rouge-L, METEOR, CIDEr, SPICE (Papineni et al., 2002; Lin, 2004; Denkowski and Lavie, 2014; Vedantam et al., 2015; Anderson et al., 2016). We use BLEU_4 (i.e. consider only n-grams for n ≤ 4) as the default BLEU score (see Fig 4), though we also look at BLEU_1, BLEU_2, BLEU_3 (Fig S15." }, { "heading": "B FORWARD DNI: MNIST", "text": "We performed a simple implementation of forward DNI - where now the cerebellar network is trained to predict the activity (not gradients) of the main network - on the MNIST database. With no temporal dimension, the main network is a feedforward neural network which, given a 28× 28 MNIST image, will tune its output (a vector of size 10, where each value corresponds to a possible digit) so as to maximise the probability of the associated number (or equally minimise the cross-entropy loss - Fig. 7, top). In our case the feedforward network has two hidden layers, each of size 256; after each hidden layer a batch normalisation is applied as well as the non-linear ReLU function. 
We apply a feedforward cerebellar network with one hidden layer of size 256 which receives\nthe original MNIST input x and is trained to predict the activity of the second hidden layer of the main network h2; the synthesiser loss used to train the synthesiser parameters then becomes LC := ||h2 − C(x)||. The cerebellar module can then break the “forward lock” by allowing the main model to bypass its forward computations up until its second layer and instead use C(x) to make its prediction. As before, each network is trained using gradient descent with ADAM. Correlations reported are computed using the Pearson correlation. For ease of display reported losses, activity and correlations in Fig. 7 are smoothed over batches using a Savitzky-Golay filter (window size 19, polynomial order 3). Since the output weights of the cerebellar network can be positive or negative, we do not distinguish between positive or anti correlation (i.e. show the absolute correlation value) between the hidden activity of the cerebellar network and its respective target hidden activity of the feedforward network." }, { "heading": "C COMPARISON WITH OTHER MODELS OF THE CEREBELLUM", "text": "In section 2 of the main text we argue that there is a strong similarity between the flow of information in DNI and classical internal models of the cerebellum. In Table S1 we compare explicitly the input and output components of both types of models.\nForward Model Backward DNI Inverse Model Forward DNI Controller Neocortex Main model Cerebellum Synthesiser Input hm hm hs, ds hs Output funk(hm) CB(hm) = ∂̂L∂hm funk(hs,ds) C\nF (hs) Ouput destination Neocortex Main model Controlled object Main model\nTable S1: Internal models versus DNI. The properties of the forward model of the cerebellum can be set against those of backward DNI (blue); similarly, the properties of the inverse model of the cerebellum can be set against those of forward DNI (red). Notation is largely consistent with section 2 of the main text: hm,hs denotes the hidden activity of a motor area and sensory area, respectively; CB , CF denotes the computation of backward DNI and forward DNI, respectively; L denotes the loss function. In addition, the inverse model of the cerebellum traditionally also has access to a desired state ds (in particular, one can consider this a special case of “context” provided to the synthesiser; cf (Jaderberg et al., 2017)). There are no explicit equations for the computational processes of the forward and inverse models, and both are thus represented by the unknown function funk.\nC.0.1 INVERSE AND FORWARD DNI CEREBELLAR MODELS\nThe inverse model of the cerebellum in motor control computes the motor predictions given a target (or desired) response (Ito, 1970). Feedback errors are derived by comparing a motor command from the motor cortex with the predicted motor command in the cerebellum. We propose that the inverse model is equivalent to forward DNI, as it aims to predict the incoming activity at a given layer (e.g. motor command, hM , or associative input, hA) given the input activity observed at a lower layer (e.g. sensory target response, hS ; Fig. 1). Consistent with the modular anatomy of the cerebellum (Apps et al., 2018; Cerminara and Apps, 2011; Ruigrok, 2011) multiple CC-DNI (backward and/or forward) modules can co-exist estimating different pathways components." }, { "heading": "D RECURRENT DNI", "text": "In section 2 of the main text we formally describe the dynamics of DNI in a feedforward setting. 
Here we show how these dynamics easily generalise to the time dependent, recurrent setting. We now take the main model as a recurrent neural network (RNN), where the (recurrent) hidden layer is again connected to a distinct, feedforward synthesiser. By unrolling the network over several timesteps we can keep much of the notation as introduced in section 2, but now hi denotes the hidden (cortical) activity at timestep i (as opposed to layer i) with a unique recurrent weight θ (= θi∀i); furthermore, the loss L is now a sum of individual losses over the timesteps: L = ∑ i Li.\nInferior Olive\n-+ error\nMotor areas\nControlled objects\ninput\nSensory areas\nInverse cerebellum ~ forward DNI\nMotor prediction\na\nTa rg et re sp on se Cerebellum\nMotor modules\nForward unlocking\nGC PC\nInferior Olive\n-+ error\nAssociative areas\nCognitive areas\ninput\nSensory areas\nAssociative pred.\nTa rg et re sp on se\nCerebellum Associative modules\nb\nSensory modules\nCerebellum\nCognitive modules\nCerebellum\nGC PC\nmotor domain\nnon-motor domain\nFigure S1: Equivalent to Fig. 1 for how cerebellar inverse models map onto forward DNI (forwardCC-DNI). (a) In the inverse model the cerebellum generates motor predictions (ĥM ) given desired target responses (hS). This allows the brain to break forward locks (dashed black arrows), for example to go directly from sensory areas to higher areas. The inferior olive plays a similar role to (a,b) but here it allows the cerebellum to learn estimations of forward activities (e.g. L = ||hS − ĥM ||). (b) Example of forward CC-DNI model in a non-motor domain. The cerebellum computes an associative prediction hA given a sensory input hS . Multiple backward and forward DNI-cerebellum modules can co-exist working in parallel. h and g variables represent forward and gradient information as in DNI (see main text).\nDNI is strictly applied to RNNs which learn via truncated BPTT, where BPTT takes place in equally-spaced truncations of time so that any hidden activity will only be attributed to future losses within the truncation window T (i.e. ∂Li+t∂hi = 0 ∀t > T ), effectively ignoring\na b\n...\n... ...\nBackward synthetic gradient w/ eligibility traces and sparse error signalsBackward synthetic temporal gradient\nc d\nFigure S2: Synthetic gradients in RNNs with eligibility traces. (a) An example of backpropagation through time with truncation every T=3 timesteps (dashed black arrows). A DNI can be used to bridge the gap between truncations allowing gradients to be sent further back in time (δ̂t and ˆδt+3). Adapted from Jaderberg et al. (2017). (b) Schematic of a biologically plausible RNN with eligibility traces that encode information needed to compute the gradients at a later point in time (green traces; cf. (Bellec et al., 2019)). The cerebellar-DNI model C can be used to estimate gradients δ̂ derived from general losses L or reward signals R to update synaptic weights at a given point in time. Note that teaching signals/rewards are often sparse. (c) Schematic of information used by synthesiser to learn in a BPTT setting (δt+3 represents the future gradients). (d) Schematic of information used by synthesiser to learn to predict future losses δ from the current activity h. For biological simplicity we assume that the synthesiser has access to the learning signal directly, but in practice its likely to be a function of L.\nfuture losses and potentially disregarding important temporal correlations. 
The key idea behind recurrent DNI is to bridge these truncations using synthetic gradients (Fig. S2a). For simplicity assume that timestep i is the start of a given truncation. By enabling DNI, a (backward) synthesiser C is used to provide an estimated gradient δ̂i+T for all future losses, outside of the truncation, with respect to the last hidden activity of the truncation hi+T ; i.e. δ̂i+T ≈ δi+T = ∑∞ j=i+T+1 ∂Lj ∂hi+T\n. The weight update is then analogous to the feedforward case in equation 1\nθ ← θ − α [ i+T∑ j=i ∂Lj ∂θ + δ̂i+T ∂hi+T ∂θ ] ; δ̂i+T = C(hi+T ) (2)\nIt is not difficult to see that if the synthesiser perfectly predicts the future gradient, δ̂i+T = δi+T , then the weight will be perfectly updated according to the gradient of all future losses, θ ← θ − α ∑∞ j=i ∂Li ∂θ . Allowing the synthesiser access to this gold standard, δi+T , however, requires waiting an arbitrary amount of time and is therefore not practical for training. Instead, δ̂i+T is trained to replicate an itself approximate, bootstrapped version of the future gradient, δ̄i+T , where δ̄i+T = ∑i+2T j=i+T+1 ∂Lj ∂hi+T + δ̂i+2T ∂hi+2T ∂hi+T\n. Note that δ̄i+T is available in one truncation’s timespan. In the brain an approximation of BPTT can be implemented forward in time using a mixture of eligibility traces and sparse learning signals (e.g. reward) ((Bellec et al., 2019); Fig. S2b). The dynamics of this more “biologically plausible” learning regime are subtle and are not described here; we simply mention that the cerebellum might be the source of such (future) learning signals and predict that many of the results observed with BPTT will hold for the forward propagation of gradients.\nE INPUT EXPANSION AND SPARSE CONNECTIVITY IN" }, { "heading": "CEREBELLUM-DNI", "text": "We also study the effects of input expansion and sparse connectivity from the main model to the synthesiser in our (s)CC-DNI model. Input expansion and sparse connectivity are two defining features of the cerebellum. Often human intelligence is linked to its expanded neocortex. However, the cerebellar/neocortical cell ratio in humans 4/1 and the cerebellum has significantly expanded throughout evolution (Barton and Venditti (2014); HerculanoHouzel (2010)). Here we aim to study the effect of this input expansion into the cerebellum by systematically varying the number of hidden units in the synthesiser relative to the number of units in the main network. We show this for both sensorimotor task: target reaching task (Fig. S3) and seq-MNIST tasks (Fig. S4 and S5) . In addition, it is known that one granule cell receives input from four mossy fibres on average. Mossy fibres form the main inputs to the cerebellum. The largest source of mossy fibres is the neocortex which projects inputs to the cerebellum via the pons. To quantify the effect of the sparse input connectivity we quantify next to the Pearson correlation also the population correlation coefficient for the target reaching task (Fig. S6) and the standard seq-MNIST task (Fig. S7)." }, { "heading": "F TARGET REACHING SUPPLEMENTARY DATA", "text": "Here we show that the target reaching task results extrapolate to variants of the task with varied input-output combinations (Fig. S8). Furthermore, in line with Fig. 2b we include the learning curves across different truncation values for the three models (Fig. S9)." 
}, { "heading": "G SEQMNIST SUPPLEMENTARY DATA", "text": "In line with Fig.3c, we include the learning curves across different truncation values T for the standard seqMNIST task as well as its variants (Figs. S10, S11).\nFigure S3: (a) Learning curves for target reaching as in Fig. 2 over different divergence ratios M/N , where M is the number of hidden ‘granular’ units in cerebellar model, N the number of input units, and number of non-zero input connections K. LSTM performance (grey) shown along with fully (orange) and sparsely (cyan) connected CC-DNI models. Note that N is fixed here as 2× 10 = 20 (10 LSTM units with gradients calculated for both cell and output states), hence * denotes full connectivity. The smaller x-axis in each subplot represents the epoch number and the smaller y-axis represents mean squared error. (b) Average performance over last five epochs (295-300) against divergence ratio M/N and input connection sparsity K for target reaching as in Fig. 2. (c) Same as (a) but for complex input (input dimension =28) and seven 2-D target coordinates (30 LSTM units with gradients calculated for both cell and output states). (d) Same as (b) for target reaching with complex input (input dimension = 28) and seven 2-D target coordinates over last 5 epochs (95-100).\nH IMAGE CAPTIONING SUPPLEMENTARY DATA\nIn line with Fig. 4, we include more examples of image-caption pairs (Fig. S12) as well as the learning curves and metric scores across different truncation values (Figs. S14, S15). Moreover, we demonstrate that DNI models for this task often generalise to unseen data better than regular LSTM models (Fig. S13).\n1 4 8 10 30 60 *\nK\n20/1\n10/1\n5/1\n2/1\n1/1\n1/2\nM /N\n5 20\n20% 50%\n5 20 5 20 5 20 5 20 5 20\n20% 50%\n20% 50%\n20% 50%\n20% 50%\n20% 50%\n20% 50%\n20% 50%\n20% 50%\n20% 50%\n5 20 20% 50% 5 20 5 20 5 20 5 20 5 20 20% 50%\n1 4 8 10 30 60 *\nK\n20/1\n10/1\n5/1\n2/1\n1/1\n1/2\nM /N\n5 20\n40 70\n5 20 5 20 5 20 5 20 5 20\n40 70\n40 70\n40 70\n40 70\n40 70\n40 70\n40 70\n40 70\n40 70\n5 20 40 70 5 20 5 20 5 20 5 20 5 20 40 70\n1 4 8 10 30 60 *\nK\n20/1\n10/1\n5/1\n2/1\n1/1\n1/2\nM /N\n5 20\n10 15\n5 20 5 20 5 20 5 20 5 20\n10 15\n10 15\n10 15\n10 15\n10 15\n10 15\n10 15\n10 15\n10 15\n5 20 10 15 5 20 5 20 5 20 5 20 5 20 10 15\n1 4 8 10 30 60 *\nK\n20/1\n10/1\n5/1\n2/1\n1/1\n1/2\nM /N\n5 20\n10\n25 5 20 5 20 5 20 5 20 5 20\n10\n25\n10\n25\n10\n25\n10\n25\n10\n25\n10\n25\n10\n25\n10\n25\n10\n25\n5 20 10\n25\n5 20 5 20 5 20 5 20 5 20 10\n25\nseqMNIST (error) c-seqMNIST (MSE)\ndd-seqMNIST (MSE) ld-seqMNIST (MSE)\nFigure S4: Learning curves for all seqMNIST based tasks over different cerebellum divergence ratios M/N , where M is the number of hidden ‘granular’ units in cerebellar model, N the number of input units, and number of non-zero input connections K. LSTM performance (grey) shown as a reference along with fully (orange) and sparsely (cyan) connected cc-DNI models. Note that N is fixed here as 2× 30 = 60 (30 LSTM units with gradients calculated for both cell and output states), hence * denotes full connectivity. 
The smaller x-axis in each subplot represents the epoch number and the y-axis represents performance over validation data (error for seqMNIST, MSE for the others).
Figure S5: Related to Fig. S4. Average performance over the last five epochs (21-25) against divergence ratio M/N and input connection sparsity K for each seqMNIST-based task. * denotes full connectivity. The divergence ratio and connectivity of the default LSTM, CC-DNI and sCC-DNI models used for the seqMNIST tasks (see Fig. 3) are illustrated by the grey, orange and cyan squares respectively.
Figure S6: Equivalent to Fig. 2f for the population correlation rpop (defined in section 2.1 of the main text). (top) Effect of correlation of hidden (cortical) activity h on performance for CC-DNI (orange) and sCC-DNI (cyan) on the target reaching task, where population correlation and performance are recorded during the first and last (300) epoch respectively. (bottom) Evolution of the population correlation of the synthetic gradient ĝ over the first hundred epochs.
Figure S7: Equivalent to Fig. 3d for the population correlation rpop (defined in section 2.1 of the main text). (top) Effect of correlation of hidden (cortical) activity h on performance for CC-DNI (orange) and sCC-DNI (cyan) on the (standard) seqMNIST task, where population correlation and performance are recorded during the first and fifth epoch respectively. (bottom) Evolution of the population correlation of the synthetic gradient ĝ over the first five epochs.
Figure S8: (a, b) Target reaching task as in Fig. 2 with varying number of targets. (a) Given a one-dimensional input the network has to learn to drive its output towards one of 3 targets over 10 timesteps. For this variant the divergence ratio for CC-DNI is M/N = 4 and K = 4 for sCC-DNI. (b) Given a one-dimensional input the network has to learn to drive its output towards one of 9 targets over 10 timesteps. For this variant the divergence ratio for CC-DNI is M/N = 2 and K = 4 for sCC-DNI. (c, d) Two variants of the target reaching task with varying input dimension, with two-dimensional (i.e. X = {x1, x2}) and 28-dimensional (i.e. X = {x1, ..., x28}) input respectively. (c) Given a two-dimensional input the network has to learn to drive its output towards one of 7 targets over 10 timesteps. For this variant the divergence ratio for CC-DNI is M/N = 4 and K = 4 for sCC-DNI. (d) Given a 28-dimensional input the network has to learn to drive its output towards one of 7 targets over 10 timesteps. 
For this variant the divergence ratio for CC-DNI is M/N = 4 and K = 4 for sCC-DNI.
Figure S9: Learning curves for the target reaching task as in Fig. 2b across different truncation values T, shown for the three models LSTM (grey), CC-DNI (orange) and sCC-DNI (cyan). Parameters are the same as in Fig. 2b, with divergence ratio M/N = 4 and sparse input connectivity K = 4 for sCC-DNI.
Figure S10: Learning curves for all seqMNIST-based tasks across different truncation values. Model colours as in the main text (LSTM grey; CC-DNI orange; sCC-DNI cyan), with the default parameters of a 5/1 divergence (300 GCs) in the DNI models and K = 4 for sCC-DNI.
Figure S11: Related to Fig. S10. Average performance over the last five epochs (21-25) for the c-seqMNIST, dd-seqMNIST and ld-seqMNIST tasks across truncation sizes for LSTM (grey), CC-DNI (orange) and sCC-DNI (cyan). See Fig. 3c for (standard) seqMNIST.
Figure S12: Example images from the validation set with corresponding model captions (LSTM (grey); CC-DNI (orange); sCC-DNI (cyan)) and gold standard captions (black). Here we show a combination of examples of how the models describe the presented image. In some cases all or some models fail to give an accurate description of the image. In other cases all models are able to give an accurate caption describing the image, with each model displaying subtle differences in the generated captions.
Figure S13: Generalisation of CC-DNI (orange) and sCC-DNI (cyan) for truncation values T from 3 to 7. The change in loss is computed with reference to the LSTM (i.e. (s)CC-DNI - LSTM). Training loss is calculated after training (with dropout disabled) to enable a fair comparison with final validation performance.
Figure S14: Validation learning curves in bits per word (BPW) for LSTM (grey), CC-DNI (orange) and sCC-DNI (cyan) across different truncation sizes T (see Fig. 4). The BPW range is restricted to enable comparison between truncation values; full curves are shown in the insets. The surprising performance for T = 10 is likely due to how the sequence is divided into truncations. 
With a strong majority of (gold standard) caption lengths between 11-15 (mean ∼ 13, the sequence will often be divided into two uneven truncations, perhaps making BPTT difficult for the LSTM. However, in the case of two truncations a synthetic gradient will only be required once (in between the two truncations) and is analogous to the “easier” job of a synthesiser predicting gradients for a feedforward network, explaining the particular relative improvement seen for the CC-DNI models in this case.\n0.65\n0.66\n0.67\nBl eu\n_1\n10 2 10 3\n0\n10 3\nB le\nu_ 1\n0.47\n0.48\n0.49\nBl eu\n_2\n10 2 10 3\n0\n10 3\nB le\nu_ 2\n0.32\n0.33\n0.34\nBl eu\n_3\n10 2 10 3\n0\n10 3\nB le\nu_ 3\n0.22\n0.23\n0.24\nBl eu\n_4\n10 2 10 3\n0 10 3 B le u_ 4\n0.475\n0.480\n0.485\nRO UG\nE_ L\n10 2\n10 3\n0\n10 3\nR OU\nGE _L\n0.210\n0.215\nM ET\nEO R\n10 3\n0\n10 3\nM ET\nEO R\n0.675\n0.700\n0.725\nCI DE\nr\n10 2 10 3\n0\n10 3\nC ID\nEr\n0.140\n0.145\nSP IC\nE\n10 3\n0\n10 3\nS PI\nCE\n3 6 9 12 15 truncation\n0.96\n0.98\n1.00\nra tio\n3 6 9 12 15 truncation\n10 3 0\n10 3 10 2\nra tio\nFigure S15: Evaluation of model-generated captions across truncation sizes for metrics (in order as shown) BLEU_1, BLEU_2, BLEU_3, BLEU_4, Rouge-L, METEOR, CIDEr, SPICE. The caption length ratio Cm/Cgs, where Cm is the length of the model-generated caption and Cgs is the length of the corresponding gold standard caption of closest length, is also shown." } ]
2020
null
SP:1292de91b0e7ab81457f925f72022d83ec061cc6
[ "It's already known that embeddings like word2vec and GloVe are biased [1] and need post-processing for better performance. This paper designs a novel approach to embedding normalization. After each round of training, noise is intentionally introduced to perturb the fine-tuned parameters. Afterwards, another round of training starting from the perturbed local optimum could potentially converge to a better one. This method is validated via CNN-based text classification." ]
We introduce a training method for better word representation and performance, which we call GraVeR (Gradual Vector Rumination). The method is to gradually and iteratively add random noises and bias to word embeddings after training a model, and to re-train the model from scratch but initialized with the noised word embeddings. Through the re-training process, some noises can be compensated and other noises can be utilized to learn better representations. As a result, we can get word representations that are further fine-tuned and specialized in the task. On six text classification tasks, our method improves model performance by a large margin. When GraVeR is combined with other regularization techniques, it shows further improvements. Lastly, we investigate the usefulness of GraVeR.
[]
[ { "authors": [ "Piotr Bojanowski", "Edouard Grave", "Armand Joulin", "Tomas Mikolov" ], "title": "Enriching word vectors with subword information", "venue": "arXiv preprint arXiv:1607.04606,", "year": 2016 }, { "authors": [ "Ming-Wei Chang", "Lev-Arie Ratinov", "Dan Roth", "Vivek Srikumar" ], "title": "Importance of semantic representation: Dataless classification", "venue": "In AAAI,", "year": 2008 }, { "authors": [ "Yong Cheng", "Zhaopeng Tu", "Fandong Meng", "Junjie Zhai", "Yang Liu" ], "title": "Towards robust neural machine translation", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Alexis Conneau", "Holger Schwenk", "Loı̈c Barrault", "Yann Lecun" ], "title": "Very deep convolutional networks for text classification", "venue": "In European Chapter of the Association for Computational Linguistics", "year": 2017 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Manaal Faruqui", "Jesse Dodge", "Sujay Kumar Jauhar", "Chris Dyer", "Eduard Hovy", "Noah A Smith" ], "title": "Retrofitting word vectors to semantic lexicons", "venue": "In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2015 }, { "authors": [ "Richard HR Hahnloser", "Rahul Sarpeshkar", "Misha A Mahowald", "Rodney J Douglas", "H Sebastian Seung" ], "title": "Digital selection and analogue amplification coexist in a cortex-inspired silicon", "venue": null, "year": 2000 }, { "authors": [ "Elad Hoffer", "Ron Banner", "Itay Golan", "Daniel Soudry" ], "title": "Norm matters: efficient and accurate normalization schemes in deep networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Hwiyeol Jo", "Stanley Jungkyu Choi" ], "title": "Extrofitting: Enriching word representation and its vector space with semantic lexicons", "venue": "ACL 2018,", "year": 2018 }, { "authors": [ "Yoon Kim" ], "title": "Convolutional neural networks for sentence classification", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Jens Lehmann", "Robert Isele", "Max Jakob", "Anja Jentzsch", "Dimitris Kontokostas", "Pablo N Mendes", "Sebastian Hellmann", "Mohamed Morsey", "Patrick Van Kleef", "Sören Auer" ], "title": "Dbpedia–a largescale, multilingual knowledge base extracted from wikipedia", "venue": "Semantic Web,", "year": 2015 }, { "authors": [ "Ping Luo", "Xinjiang Wang", "Wenqi Shao", "Zhanglin Peng" ], "title": "Towards understanding regularization in batch normalization", "venue": null, "year": 2018 }, { "authors": [ "Andrew L Maas", "Raymond E Daly", "Peter T Pham", "Dan Huang", "Andrew Y Ng", "Christopher Potts" ], "title": "Learning word vectors for sentiment analysis. 
In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies-volume", "venue": "Association for Computational Linguistics,", "year": 2011 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Bryan McCann", "James Bradbury", "Caiming Xiong", "Richard Socher" ], "title": "Learned in translation: Contextualized word vectors", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Oren Melamud", "Jacob Goldberger", "Ido Dagan" ], "title": "context2vec: Learning generic context embedding with bidirectional lstm", "venue": "In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning,", "year": 2016 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "arXiv preprint arXiv:1301.3781,", "year": 2013 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Takeru Miyato", "Andrew M Dai", "Ian Goodfellow" ], "title": "Adversarial training methods for semisupervised text classification", "venue": "arXiv preprint arXiv:1605.07725,", "year": 2016 }, { "authors": [ "Nikola Mrkšić", "Ivan Vulić", "Diarmuid Ó Séaghdha", "Ira Leviant", "Roi Reichart", "Milica Gašić", "Anna Korhonen", "Steve Young" ], "title": "Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers),", "year": 2018 }, { "authors": [ "David C Plaut" ], "title": "Experiments on learning by back propagation", "venue": null, "year": 1986 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Joseph Turian", "Lev Ratinov", "Yoshua Bengio" ], "title": "Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th annual meeting of the association for computational linguistics, pp. 
384–394", "venue": "Association for Computational Linguistics,", "year": 2010 }, { "authors": [ "Twan van Laarhoven" ], "title": "L2 regularization versus batch and weight normalization", "venue": "arXiv preprint arXiv:1706.05350,", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Luke Vilnis", "Andrew McCallum" ], "title": "Word representations via gaussian embedding", "venue": "arXiv preprint arXiv:1412.6623,", "year": 2014 }, { "authors": [ "Ivan Vulić", "Nikola Mrkšić", "Anna Korhonen" ], "title": "Cross-lingual induction and transfer of verb classes based on word vector space specialisation", "venue": "arXiv preprint arXiv:1707.06945,", "year": 2017 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz", "Joe Davison", "Sam Shleifer", "Patrick von Platen", "Clara Ma", "Yacine Jernite", "Julien Plu", "Canwen Xu", "Teven Le Scao", "Sylvain Gugger", "Mariama Drame", "Quentin Lhoest", "Alexander M. Rush" ], "title": "Huggingface’s transformers: Stateof-the-art natural language processing", "venue": "ArXiv, abs/1910.03771,", "year": 2019 }, { "authors": [ "Dongxu Zhang", "Zhichao Yang" ], "title": "Word embedding perturbation for sentence classification", "venue": "arXiv preprint arXiv:1804.08166,", "year": 2018 }, { "authors": [ "Xiang Zhang", "Junbo Zhao", "Yann LeCun" ], "title": "Character-level convolutional networks for text classification", "venue": "In Advances in neural information processing systems,", "year": 2015 } ]
[ { "heading": null, "text": "1 Introduction\nMost machine learning methodologies can be formulated to get computational representations from real-life objects (e.g., images, languages, and sounds) and then get high-level representations using model architectures. Therefore, there have been two main approaches to improving model performance: (1) starting with better representations (Melamud et al., 2016; Peters et al., 2018), and (2) building more sophisticated architectures that can extract important features and generate higher-level representations (Vaswani et al., 2017; Conneau et al., 2017). For better initial representations, many NLP researchers have used pretrained word vectors trained on substantially large corpora through unsupervised algorithms, such as word2vec (Mikolov et al., 2013a), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2016). The pretrained word vectors represent the general meaning of words and increase model performance on most NLP tasks (Turian et al., 2010). In addition to the algorithms, word vector post-processing research (Faruqui et al., 2015; Vulić et al., 2017; Mrkšić et al., 2017; Jo & Choi, 2018) has attempted to enrich the pretrained representations using external resources. These works simply modified the values of the vector representations in some way, and showed improved performance. This implies that we can get further improvement through better initial representations. When training NLP models, we first initialize word representations with pretrained word vectors and then update both the model parameters and the word representations. However, in the training process, the model performance can be limited by the initial word vectors. For example, the pretrained word representations carry general meanings of words, but, in some tasks, the words might not be used in their general meaning. Although the gap between meanings can be learned through the training process, this learning could fail. Since the pretrained representations are trained on a huge dataset, and their objective functions are based on language modeling, the word vectors are naturally biased towards general and frequently used meanings. Besides, the word vectors are updated through gradient descent algorithms, so their values change only slightly. The word vectors are thus prone to converging to local minima. Therefore, our method starts with an idea: using the word representations fine-tuned by a training process as the pretrained word vectors in the next re-training process. Then, the word vectors can be trained to learn representations more appropriate to the task. However, the model would then overfit, and the word representations would be stuck in local minima. Thus, we add random noise and bias to the word representations before the re-training processes, in order to prevent the model from overfitting and to take the word representations far from the local minima.\n1 http://github.com/Sweetblueday/GraVeR\nIn this paper, we propose a simple training framework to find better representations by adding random noise and bias to the word vectors during iterative training processes, which we call GraVeR (Gradual Vector Rumination). We expect that the model makes good use of the re-training processes with noises, both for learning better representations and for model regularization.\n2 Related Works\nThe representations fine-tuned by GraVeR can be considered as pretrained representations from the previous training process.
Also, GraVeR utilizes word-level noises, which are used for model regularization.\n2.1 Pretrained Representations\nPretrained Embedding Vector is also called pretrained word representation. According to the distributional representation hypothesis (Mikolov et al., 2013b), pretrained embedding vectors are composed of pairs of (token, n-dimensional float vector). Unsupervised algorithms (e.g., word2vec (Mikolov et al., 2013a), GloVe (Pennington et al., 2014), fastText (Bojanowski et al., 2016)) learn the word vectors on substantial corpora to represent the general meanings of words. The pretrained embedding vectors are widely used to initialize the word vectors in models. Pretrained Embedding Model is suggested to get a deep representation of each word in its context. Previous research (McCann et al., 2017; Peters et al., 2018; Devlin et al., 2018) trained deep architecture models and then utilized the model weights to represent words by using the outputs of the models. Although recent advanced pretrained representations (Peters et al., 2018; Devlin et al., 2018) show good performance, we take the pretrained embedding vector approach because (1) re-training processes with the pretrained embedding models are very expensive and (2) we use word-level noises, whereas the embedding models use token-level embeddings combined with position embeddings.\n2.2 Word-level Noises\nAdding noises to input data is an old idea (Plaut et al., 1986). However, only a small number of studies have used word-level noises, since noises on words can distort their meaning. Word Dropping. NLP tasks that utilize text in the form of sentences and phrases consider each word as a feature. However, too many features can lead models to overfit the training data due to the curse of dimensionality. Therefore, the easiest way to reduce the number of features is to drop words in the sentence at random. Word Embedding Perturbation. Miyato et al. (2016) tried to perturb word vectors and used them in the adversarial training framework for model regularization. Cheng et al. (2018) utilized the noises to build a robust machine translation model. Also, Zhang & Yang (2018) considered the perturbation as a data augmentation method. These previous works added the noises to all word embeddings. This can regularize the model weights, but it ignores the change of the word representations and their re-usability. On the other hand, our method gradually adds noises to the word embeddings by controlling the amount of noise. Also, the iterative training processes, which re-use the fine-tuned word representations as pretrained word vectors for the next training process, can benefit from the noises and build better word representations.\n2.3 Regularization Techniques\nSome research has explained that normalization can be used for model regularization (van Laarhoven, 2017; Luo et al., 2018; Hoffer et al., 2018). Dropout (Srivastava et al., 2014) is applied to neural network models, masking random neurons with 0. Dropout randomly and temporarily removes neural activations during training, so the masked weights are not updated. As a result, the model is prevented from over-tuning on specific features, which involves regularization. Batch Normalization (BN) (Ioffe & Szegedy, 2015) normalizes the features according to mini-batch statistics. Batch normalization enables the features to avoid covariate shift, in which the weight gradients are highly dependent on the previous layers' gradients.
Besides, batch normalization speeds up the training process by reshaping the loss function. Layer Normalization (LN) (Ba et al., 2016) also normalizes the features, but unlike batch normalization, it normalizes the inputs across the features: the statistics are computed for each sample over its feature dimensions, in the same way for all the feature dimensions.\n3 Proposed Method\n3.1 Overall Process\nThe overall process of GraVeR is illustrated in Figure 1. GraVeR is applied to a conventional training framework, but it needs a meta-level approach that trains the model again. The iterative training process will also be denoted as a meta-epoch. When a training process finishes, we extract the fine-tuned word embeddings (W′) and add a bias (W′ − W) weighted by 1/[MaskingRate]. The additive bias is optional, but it takes the word embeddings away from the minima to which the fine-tuned embeddings converged. Next, we add maskers filled with random values to a portion of W′, and then re-train the model from scratch with the noised word embeddings. Observing the validation performance in the training process, we select the best-performing model and its fine-tuned embeddings. Additional details of GraVeR are described below. In short, GraVeR is a training framework that adds random noises (and bias) to the fine-tuned word embeddings and then repeats the training process with these word embeddings.\n3.2 GraVeR Details\nThe details of GraVeR are based on the idea that moderate noises are required to change the word vector distribution, but the noise scale should not be so large as to distort it. Random Noises and Bias. We have two hyperparameters related to the noise maskers: Masking Rate and Noise Scale. The masking rate denotes the portion of word vectors masked in every (re-)training process. We set the masking rate to 20% of the vocabulary size by default. The maskers are filled with random values in a range, which is denoted as the noise scale. The default is sampled from a uniform distribution U(-1, 1), since word vectors in this range are widely used. Well-defined perturbation methods like Gaussian kernels (Vilnis & McCallum, 2014) can be an extension. Next, we add a bias to the fine-tuned word embeddings (W′) in order to take W′ to the same embedding space with better initial values. Intuitively, since we initialize with the word embeddings fine-tuned in the previous training process, the embedding space in the next training process is the same as in the previous one. The difference between the fine-tuned and the initial embeddings (W′ − W) in this space represents what was learned in a training process, which improved the performance in the previous training process. So, we add the bias to W′, weighted by 1/[MaskingRate] where 0 < MaskingRate ≤ 1, in order to give a large bias when a small number of word vectors are noised: 1/[MaskingRate] × (W′ − W). When the validation performance decreases, the bias is [MaskingRate] × (W − W′) instead, to give a small bias. When selecting the portion of words to noise, we use the word frequency derived from the training data. Intuitively, if the representations of frequently used words are changed, which means the input data are changed a lot, high-level representations such as the sentence representation are largely affected. Gradualness. While following the aforementioned process, we change the number of random maskers according to the validation performance.
If the re-trained model's validation performance increases, we move the maskers to the next frequently used words. Otherwise, we gradually increase the number of random maskers without moving them, so that the maskers make noises on the same words again in the next re-training process. As a result, GraVeR can process both the words noised in the previous training process and the words noised in the current training process. This gradualness lets the re-training processes benefit from newly generated noises again, which makes GraVeR dynamic. In the end, the overall algorithm of GraVeR is summarized as follows (a minimal code sketch of one meta-epoch is given in Section 4.2 below):\nAlgorithm 1 Training Framework with GraVeR\nInput: Train set (Train), Validation set (Val), a classifier (M), word embeddings (W), word frequency information counted from the training data\nInitialize: Accval ← 0, MaxAccval ← 0\n- Train M with W\n- Get the trained M′ and fine-tuned W′\n- Meta-level validation: Accval ← M′(Val; W′)\nif MaxAccval < Accval then\nMaxAccval ← Accval\nMaskedWords ← NextFrequentlyUsedWords\nW ← W′ + 1/MaskingRate × (W′ − W)\nelse\nMaskedWords ← MaskedWords + NextFrequentlyUsedWords\nW ← W − MaskingRate × (W′ − W)\nend if\nW[MaskedWords] ← W[MaskedWords] + U(−1, 1)\n- Repeat the training process until all the words are masked\nGraVeR can be applied to any conventional training framework, so the method is independent of model architectures in that most NLP models use word-level embeddings. The random noises might disturb the representation, but the noises which harm the performance are compensated during the re-training processes, while the other noises are used to update the word vectors beyond their initial values. Besides, the additive bias makes the use of random noises stable. By using this re-training-with-noise process, we expect the model to be prevented from overfitting. Meanwhile, the model is incrementally fitted to the validation set through early-stopping in each training process and meta-level early-stopping over the re-training processes. Therefore, the model keeps being fitted to the validation set with regularization, showing better performance on the test set, since the model performance on the validation set is normally correlated with the performance on the test set.\n4 Experiment\n4.1 Datasets\nWe prepare three topic classification datasets: DBpedia ontology (Lehmann et al., 2015), YahooAnswer2 (Chang et al., 2008), and AGNews. We also prepare two sentiment classification datasets: Yelp reviews (Zhang et al., 2015) and IMDB (Maas et al., 2011). The YahooAnswer dataset is used for two different tasks: classifying upper-level categories and classifying lower-level categories. The data statistics are presented in Appendix. We assign 15% of each train set to the validation set, and each dataset has its own test set. The validation set is used for early-stopping both at every epoch and at every meta-epoch. We use all words tokenized by space and all symbols, with a 300-dimensional embedding space.\n2 https://cogcomp.seas.upenn.edu/page/resource view/89 Note that Chang et al. (2008) said the dataset has 20 top-level categories, but it has three duplicated top-level categories because of typos.\n4.2 Classifier\nWe use TextCNN (Kim, 2014) classifiers. The model consists of 2 convolutional layers with 32 channels and 16 channels, respectively. We also adopt multiple kernel sizes (2, 3, 4, and 5), followed by ReLU activation (Hahnloser et al., 2000) and max-pooling. The kernels are concatenated after every max-pooling layer. Finally, the features are classified into the classes using a fully connected layer.
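As referenced after Algorithm 1, the following is a minimal PyTorch sketch of one GraVeR meta-epoch. It is a sketch under assumptions: train_fn and validate_fn are hypothetical stand-ins for the inner training loop (with per-epoch early-stopping) and for meta-level validation, and the word-id tensors are assumed to be sorted by training-set frequency; only the bias-and-noise update mirrors Algorithm 1.

```python
import torch

def graver_meta_epoch(train_fn, validate_fn, W, masked_ids, next_ids,
                      best_val_acc, masking_rate=0.2, noise_range=1.0):
    """One GraVeR meta-epoch (Algorithm 1).

    train_fn(W) -> fine-tuned embeddings W' after re-training a fresh model
    from scratch (inner early-stopping included); validate_fn() -> validation
    accuracy of that model. Both callables and the default values here are
    illustrative assumptions. W: (vocab_size, dim) embedding tensor;
    masked_ids / next_ids: 1-D LongTensors of word ids sorted by frequency.
    """
    W_ft = train_fn(W.clone())                 # train from scratch, warm embeddings
    val_acc = validate_fn()                    # meta-level validation (Acc_val)

    if val_acc > best_val_acc:                 # performance improved
        best_val_acc = val_acc
        masked_ids = next_ids                  # move maskers to next frequent words
        W_new = W_ft + (1.0 / masking_rate) * (W_ft - W)   # large bias along W' - W
    else:                                      # degraded: keep maskers, add new ones
        masked_ids = torch.cat([masked_ids, next_ids])
        W_new = W - masking_rate * (W_ft - W)  # small bias in the opposite direction
    # add U(-noise_range, noise_range) noise to the masked words only
    noise = torch.empty(masked_ids.numel(), W.size(1)).uniform_(-noise_range, noise_range)
    W_new[masked_ids] += noise
    return W_new, masked_ids, best_val_acc
```

The outer loop feeds the returned embeddings into the next re-training from scratch and repeats until all vocabulary words have been masked, keeping the best model by meta-level early-stopping.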
Although this model has few parameters (136K) compared with recent high-performance models like BERT (Devlin et al., 2018), we use this model so that we can afford multiple re-training processes. Also, we employ simple techniques for model improvement, such as Dropout and Layer Normalization, to get performance similar to recent models and to justify the use of a simple model as the basic classifier. The vanilla classifier and the tuned classifier are denoted as TextCNNbase and TextCNNtune, respectively. An additional comparison with very deep models is described in Appendix. We optimize the model using Adam (Kingma & Ba, 2014) with a 1e-3 learning rate and early-stopping. If the validation accuracy does not increase over five epochs, we stop model training. Initial word embeddings are random unless mentioned otherwise.\n4.3 Baseline Implementation\nWe cannot fully compare our method with Word Embedding Perturbation (Miyato et al., 2016) because we do not use the adversarial training framework. Instead, random noises are added to all word embeddings, as other word embedding perturbation methods did (Cheng et al., 2018; Zhang & Yang, 2018). In order to compare the effect of regularization, we implement five regularization (including normalization) methods. Word dropping is implemented in the pre-processing part, which removes random words in the text. We set the random probability p to 0.1. Dropout (Srivastava et al., 2014) is added to the final fully connected layer with dropout probability 0.1, which performs the best in our experiments. Batch Normalization (Ioffe & Szegedy, 2015) is located between every convolutional layer and an activation function, as used in the original paper. Layer Normalization (Ba et al., 2016) is implemented in the same position. We report the performance averaged over five runs.\n5 Results\n5.1 Performance\nFirst, we present the effect of GraVeR on major pretrained word embeddings, as presented in Table 1. We use three pretrained word embeddings: word2vec (w2v) (Mikolov et al., 2013a) GoogleNews-vectors-negative300.bin, GloVe (glv) (Pennington et al., 2014) glove.42B.300d.txt, fastText (ftt) (Bojanowski et al., 2016) wiki-news-300d-1M-subword.vec, and one random embedding. GraVeR improves the model performance on most of the datasets, and the largest performance gain is in random embeddings. Random embeddings with GraVeR even perform better than pretrained embeddings on a few datasets. The result implies that we can learn better word representations through GraVeR in any embedding space, since we train a model from scratch except for the word embeddings. However, with GloVe, GraVeR is not effective on the model performance because the
Comparisons with a recent model (e.g., BERT) without extra data resources and with word embedding perturbation methods will be discussed in Further Analysis (§6).\n5.2 Word Distributions\nTo analyze GraVeR with respect to word representation, we extract the word representations updated on the DBpedia dataset and present the list of the top-20 nearest words of a cue word in Table 3. In order to reduce the effect of randomness, we use GloVe as the initial embedding. We can see that the word vectors are further fine-tuned and even find other similar words not found by the embedding fine-tuned only once. These results imply that our method can change the word vector distribution to be further fine-tuned to the tasks. We also present the results for another cue word and a visualization of the top-100 nearest word vector distribution using t-SNE (Maaten & Hinton, 2008) in Appendix.\n6 Further Analysis\nFrom this section on, we mainly use TextCNNtune to show the effect of GraVeR on a model whose performance is similar to state-of-the-art models.\n6.1 Random Noise and Bias\nNoises. The range of random values filled in the maskers also affects the performance. We first re-train the model without any modification to observe the effect of the re-training processes. The model shows slightly better performance within a few meta-epochs, but it then overfits, as we expected (see Table 4). The performance when we use Gaussian noise instead of the uniform distribution is also presented. It shows a comparable but slightly worse result than our default setting. When the noise range is 0, the noise only consists of the additive bias, which also shows marginally worse performance. The word perturbation method performs slightly better than just re-training, but there are large gaps between it and the variations of GraVeR.\n6.2 Hyperparameters\nMasking Rate The amount of noise added by GraVeR is an important factor, in that some noises should be small enough to be corrected during the re-training processes, while other noises should be large enough to change the word vector distribution. We first change the masking rate, i.e., how far the random maskers move in every re-training process. The larger the masking rate becomes, the more words are masked in a re-training process, so the noise increases. Conversely, the amount of noise decreases as the masking rate becomes small. The effect of the masking rate is presented in Table 5. Masking Rate=0.1 also shows good performance, but it needs twice as many re-training processes as Masking Rate=0.2. We thus use 0.2 as the default.\nGradualness Policy Our proposed method is to increase the number of maskers when the validation performance decreases, in order to make noises again on the words masked in the previous re-training process. Otherwise, we simply move to the next frequently used words. We try changing the gradualness: (1) no gradualness, (2) gradualness only when the validation performance increases, which is the reverse of our proposed method, and (3) gradualness regardless of the validation performance. The result is presented in Table 6. Among the ablations, our proposed approach performs the best.\n6.3 On Other Model Architectures\nWe apply our method to a transformer-based classifier. The transformer classifier has a 300-dimensional embedding with positional embedding, a batch size of 32, and a sequence length of 512. It also has 10 attention heads but only 1 encoder layer.
Stacking more than 2 encoder layers harms the performance; we suspect that the training set is not large enough to train the model parameters. We apply average-pooling to the encoded sentence vectors and use a linear layer for classification. The other parameters follow the default settings of PyTorch nn.TransformerEncoderLayer. Table 7 shows that GraVeR works well even in other model architectures.\n6.4 Pretraining by Training Data.\nTable 8 shows the classification performance according to the usage of pretrained word embeddings. Gen means a general representation trained on a large corpus, i.e., the pretrained embeddings that we mentioned in §5.1 and bert-base-uncased from huggingface (Wolf et al., 2019). Sp denotes a specialized representation trained using only the training data. We set the training hyperparameters to the default settings used in their API. Despite its small number of parameters, TextCNNtune with GraVeR shows performance comparable to BERT (Sp). GraVeR's improvement is even similar to that of glv (Gen), which means GraVeR can complement the information from external resources, in this case Common Crawl (42B tokens, 1.9M vocab). Although there is a performance gap with pretrained BERT, the advantages of GraVeR are that it (1) does not need any extra data resources, and (2) does not need very deep model architectures consisting of a large number of parameters. Therefore, GraVeR must be an attractive option when using pretrained word embeddings.\n7 Discussion\nRe-training Cost. Although GraVeR shows strong performance, its re-training processes take 1/[MaskingRate] times longer than a conventional training process. However, we showed that a small model with GraVeR even performs on par with recent huge models. Considering the parameter size, for example, TextCNNtune has 136K parameters and needs five re-training processes in our default setting MaskingRate=0.2, while BERT has 110M parameters with one fine-tuning process. Then, the numbers of parameters to be trained are 780K and 110M, respectively, i.e., 141 times cheaper. Furthermore, the training time for the small model must be shorter, since it can be trained with a larger mini-batch size. As representation learning. GraVeR's representations are learned from a given training set only. In Table 8, general word embeddings trained from large data show worse performance than domain- (or data-) specialized word embeddings. That is, in order to solve the task, the word representation should be specialized to the given data. By using GraVeR, we can easily get a specialized (further fine-tuned) representation only with the random noise, including the bias, and the iterative training processes. Besides, we believe that using a sophisticated noising trick instead of simple random noise would bring further improvement to GraVeR.\n8 Conclusion\nWe propose GraVeR, which adds random noises and bias to word embeddings in order to change the word vector distribution and regularize a model. Through the re-training process, we can make use of the noises to learn better representations. In the experiments, as the model incrementally fits the validation set, GraVeR largely improves model performances. We expect that our general training approach can be applied to various models to improve model performance.\nReferences\nJimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.\nPiotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information.
arXiv preprint arXiv:1607.04606, 2016.\nMing-Wei Chang, Lev-Arie Ratinov, Dan Roth, and Vivek Srikumar. Importance of semantic representation: Dataless classification. In AAAI, volume 2, pp. 830–835, 2008.\nYong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. Towards robust neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1756–1766, 2018.\nAlexis Conneau, Holger Schwenk, Loı̈c Barrault, and Yann Lecun. Very deep convolutional networks for text classification. In European Chapter of the Association for Computational Linguistics EACL’17, 2017.\nJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.\nManaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1606–1615, 2015.\nRichard HR Hahnloser, Rahul Sarpeshkar, Misha A Mahowald, Rodney J Douglas, and H Sebastian Seung. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405(6789):947, 2000.\nElad Hoffer, Ron Banner, Itay Golan, and Daniel Soudry. Norm matters: efficient and accurate normalization schemes in deep networks. In Advances in Neural Information Processing Systems, pp. 2164–2174, 2018.\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448–456, 2015.\nHwiyeol Jo and Stanley Jungkyu Choi. Extrofitting: Enriching word representation and its vector space with semantic lexicons. ACL 2018, pp. 24, 2018.\nYoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1746–1751, 2014.\nDiederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.\nJens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. Dbpedia–a largescale, multilingual knowledge base extracted from wikipedia. Semantic Web, 6(2):167–195, 2015.\nPing Luo, Xinjiang Wang, Wenqi Shao, and Zhanglin Peng. Towards understanding regularization in batch normalization. 2018.\nAndrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies-volume 1, pp. 142–150. Association for Computational Linguistics, 2011.\nLaurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605, 2008.\nBryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pp. 6294– 6305, 2017.\nOren Melamud, Jacob Goldberger, and Ido Dagan. context2vec: Learning generic context embedding with bidirectional lstm. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pp. 
51–61, 2016.\nTomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a.\nTomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119, 2013b.\nTakeru Miyato, Andrew M Dai, and Ian Goodfellow. Adversarial training methods for semisupervised text classification. arXiv preprint arXiv:1605.07725, 2016.\nNikola Mrkšić, Ivan Vulić, Diarmuid Ó Séaghdha, Ira Leviant, Roi Reichart, Milica Gašić, Anna Korhonen, and Steve Young. Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the Association for Computational Linguistics, 5:309–324, 2017.\nJeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532–1543, 2014.\nMatthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pp. 2227–2237, 2018.\nDavid C Plaut et al. Experiments on learning by back propagation. 1986.\nNitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.\nJoseph Turian, Lev Ratinov, and Yoshua Bengio. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th annual meeting of the association for computational linguistics, pp. 384–394. Association for Computational Linguistics, 2010.\nTwan van Laarhoven. L2 regularization versus batch and weight normalization. arXiv preprint arXiv:1706.05350, 2017.\nAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008, 2017.\nLuke Vilnis and Andrew McCallum. Word representations via gaussian embedding. arXiv preprint arXiv:1412.6623, 2014.\nIvan Vulić, Nikola Mrkšić, and Anna Korhonen. Cross-lingual induction and transfer of verb classes based on word vector space specialisation. arXiv preprint arXiv:1707.06945, 2017.\nThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Huggingface’s transformers: Stateof-the-art natural language processing. ArXiv, abs/1910.03771, 2019.\nDongxu Zhang and Zhichao Yang. Word embedding perturbation for sentence classification. arXiv preprint arXiv:1804.08166, 2018.\nXiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pp. 649–657, 2015." } ]
null
null
SP:1d5d07627d5218eea719362a51fd1175bc2f841e
[ "This paper proposes a method to train weight-quantized neural networks. The authors propose to directly calculate the endpoints that minimize the quantization error according to the weight distribution of each layer. Empirical results on image classification tasks and object detection tasks show that the proposed method outperforms other compared weight quantization methods under the same number of bits." ]
Crossbar-enabled analog computing-in-memory (CACIM) systems can significantly improve the computation speed and energy efficiency of deep neural networks (DNNs). However, the transition of DNNs from digital systems to CACIM systems usually reduces their accuracy. The major issue is that the weights of a DNN are stored and calculated directly as analog quantities in CACIM systems. The variation and programming overhead of the analog weights limit the precision. Therefore, a suitable quantization algorithm is important when deploying a DNN into CACIM systems to obtain less accuracy loss. Analog weights have unique advantages for quantization. Because there is no encoding and decoding process, the set of quanta will not affect the computing process. Therefore, a generalized quantization method that does not constrain the range of quanta and can obtain less quantization error will be effective in CACIM systems. For the first time, we introduced a generalized quantization method into CACIM systems and showed superior performance on a series of computer vision tasks, such as image classification, object detection, and semantic segmentation. Using the generalized quantization method, a DNN with 8-level analog weights can outperform the 32-bit networks. With fewer levels, the generalized quantization method can obtain less accuracy loss than other uniform quantization methods.
[]
[ { "authors": [ "Ron Banner", "Yury Nahshan", "Daniel Soudry" ], "title": "Post training 4-bit quantization of convolutional networks for rapid-deployment", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Yoojin Choi", "Mostafa El-Khamy", "Jungwon Lee" ], "title": "Towards the limit of network quantization", "venue": "arXiv preprint arXiv:1612.01543,", "year": 2016 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "Binaryconnect: Training deep neural networks with binary weights during propagations", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Steven K Esser", "Jeffrey L McKinstry", "Deepika Bablani", "Rathinakumar Appuswamy", "Dharmendra S Modha" ], "title": "Learned step size quantization", "venue": null, "year": 1902 }, { "authors": [ "Mark Everingham", "Luc Van Gool", "Christopher KI Williams", "John Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes (voc) challenge", "venue": "International journal of computer vision,", "year": 2010 }, { "authors": [ "Alex Graves", "Navdeep Jaitly" ], "title": "Towards end-to-end speech recognition with recurrent neural networks", "venue": "In International conference on machine learning,", "year": 2014 }, { "authors": [ "Alex Graves", "Abdel-rahman Mohamed", "Geoffrey Hinton" ], "title": "Speech recognition with deep recurrent neural networks", "venue": "IEEE international conference on acoustics, speech and signal processing,", "year": 2013 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Bharath Hariharan", "Pablo Arbeláez", "Lubomir Bourdev", "Subhransu Maji", "Jitendra Malik" ], "title": "Semantic contours from inverse detectors", "venue": "In 2011 International Conference on Computer Vision,", "year": 2011 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Zhezhi He", "Deliang Fan" ], "title": "Simultaneously optimizing weight and quantizer of ternary neural network using truncated gaussian approximation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Geoffrey Hinton", "Li Deng", "Dong Yu", "George Dahl", "Abdel-rahman Mohamed", "Navdeep Jaitly", "Andrew Senior", "Vincent Vanhoucke", "Patrick Nguyen", "Brian Kingsbury" ], "title": "Deep neural networks for acoustic modeling in speech recognition", "venue": "IEEE Signal processing magazine,", "year": 2012 }, { "authors": [ "Lu Hou", "James T. 
Kwok" ], "title": "Loss-aware weight quantization of deep networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Miao Hu", "Hai Li", "Qing Wu", "Garrett S Rose" ], "title": "Hardware realization of bsb recall function using memristor crossbar arrays", "venue": "In Proceedings of the 49th Annual Design Automation Conference,", "year": 2012 }, { "authors": [ "Daniele Ielmini", "H-S Philip Wong" ], "title": "In-memory computing with resistive switching devices", "venue": "Nature Electronics,", "year": 2018 }, { "authors": [ "Sangil Jung", "Changyong Son", "Seohyung Lee", "Jinwoo Son", "Jae-Joon Han", "Youngjun Kwak", "Sung Ju Hwang", "Changkyu Choi" ], "title": "Learning to quantize deep networks by optimizing quantization intervals with task loss", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Yoon Kim" ], "title": "Convolutional neural networks for sentence classification", "venue": "Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Siwei Lai", "Liheng Xu", "Kang Liu", "Jun Zhao" ], "title": "Recurrent convolutional neural networks for text classification", "venue": "In Twenty-ninth AAAI conference on artificial intelligence,", "year": 2015 }, { "authors": [ "Cong Leng", "Hao Li", "Shenghuo Zhu", "Rong Jin" ], "title": "Extremely low bit neural network: Squeeze the last bit out with admm", "venue": null, "year": 2018 }, { "authors": [ "Fengfu Li", "Bo Zhang", "Bin Liu" ], "title": "Ternary weight networks", "venue": "arXiv preprint arXiv:1605.04711,", "year": 2016 }, { "authors": [ "Yuhang Li", "Xin Dong", "Wei Wang" ], "title": "Additive powers-of-two quantization: A non-uniform discretization for neural networks", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Xiaofan Lin", "Cong Zhao", "Wei Pan" ], "title": "Towards accurate binary convolutional neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yudeng Lin", "Huaqiang Wu", "Bin Gao", "Peng Yao", "Wei Wu", "Qingtian Zhang", "Xiang Zhang", "Xinyi Li", "Fuhai Li", "Jiwu Lu" ], "title": "Demonstration of generative adversarial network by intrinsic random noises of analog rram devices", "venue": "IEEE International Electron Devices Meeting (IEDM),", "year": 2018 }, { "authors": [ "Yudeng Lin", "Qingtian Zhang", "Jianshi Tang", "Bin Gao", "Chongxuan Li", "Peng Yao", "Zhengwu Liu", "Jun Zhu", "Jiwu Lu", "Xiaobo Sharon Hu" ], "title": "Bayesian neural network realization by exploiting inherent stochastic characteristics of analog rram", "venue": "IEEE International Electron Devices Meeting (IEDM),", "year": 2019 }, { "authors": [ "Wei Liu", "Dragomir Anguelov", "Dumitru Erhan", "Christian Szegedy", "Scott Reed", "Cheng-Yang Fu", "Alexander C Berg" ], "title": "Ssd: Single shot multibox detector", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Stuart Lloyd" ], "title": "Least squares 
quantization in pcm", "venue": "IEEE transactions on information theory,", "year": 1982 }, { "authors": [ "Daisuke Miyashita", "Edward H Lee", "Boris Murmann" ], "title": "Convolutional neural networks using logarithmic data representation", "venue": "arXiv preprint arXiv:1603.01025,", "year": 2016 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Alan F Murray", "Peter J Edwards" ], "title": "Enhanced mlp performance and fault tolerance resulting from synaptic weight noise during training", "venue": "IEEE Transactions on neural networks,", "year": 1994 }, { "authors": [ "Mohammad Rastegari", "Vicente Ordonez", "Joseph Redmon", "Ali Farhadi" ], "title": "Xnor-net: Imagenet classification using binary convolutional neural networks", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Sungho Shin", "Yoonho Boo", "Wonyong Sung" ], "title": "Fixed-point optimization of deep neural networks with adaptive step size retraining", "venue": "IEEE International conference on acoustics, speech and signal processing (ICASSP),", "year": 2017 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. 
nature,", "year": 2016 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings,", "year": 2015 }, { "authors": [ "Xiaoyu Sun", "Xiaochen Peng", "Pai-Yu Chen", "Rui Liu", "Jae-sun Seo", "Shimeng Yu" ], "title": "Fully parallel rram synaptic array for implementing binary neural network with (+ 1,- 1) weights and (+ 1, 0) neurons", "venue": "In Proceedings of the 23rd Asia and South Pacific Design Automation Conference,", "year": 2018 }, { "authors": [ "Vivienne Sze", "Yu-Hsin Chen", "Tien-Ju Yang", "Joel S Emer" ], "title": "Efficient processing of deep neural networks: A tutorial and survey", "venue": "Proceedings of the IEEE,", "year": 2017 }, { "authors": [ "Tianqi Tang", "Lixue Xia", "Boxun Li", "Yu Wang", "Huazhong Yang" ], "title": "Binary convolutional neural network on rram", "venue": "22nd Asia and South Pacific Design Automation Conference (ASPDAC),", "year": 2017 }, { "authors": [ "J Joshua Yang", "Dmitri B Strukov", "Duncan R Stewart" ], "title": "Memristive devices for computing", "venue": "Nature nanotechnology,", "year": 2013 }, { "authors": [ "Zichao Yang", "Diyi Yang", "Chris Dyer", "Xiaodong He", "Alex Smola", "Eduard Hovy" ], "title": "Hierarchical attention networks for document classification", "venue": "In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies,", "year": 2016 }, { "authors": [ "Peng Yao", "Huaqiang Wu", "Bin Gao", "Jianshi Tang", "Qingtian Zhang", "Wenqiang Zhang", "J Joshua", "He Qian" ], "title": "Fully hardware-implemented memristor convolutional neural", "venue": "network. Nature,", "year": 2020 }, { "authors": [ "Dongqing Zhang", "Jiaolong Yang", "Dongqiangzi Ye", "Gang Hua" ], "title": "Lq-nets: Learned quantization for highly accurate and compact deep neural networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hengshuang Zhao", "Jianping Shi", "Xiaojuan Qi", "Xiaogang Wang", "Jiaya Jia" ], "title": "Pyramid scene parsing network", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Ritchie Zhao", "Yuwei Hu", "Jordan Dotzel", "Chris De Sa", "Zhiru Zhang" ], "title": "Improving neural network quantization without retraining using outlier channel splitting", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Aojun Zhou", "Anbang Yao", "Yiwen Guo", "Lin Xu", "Yurong Chen" ], "title": "Incremental network quantization: Towards lossless cnns with low-precision weights", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Chenzhuo Zhu", "Song Han", "Huizi Mao", "William J Dally" ], "title": "Trained ternary quantization", "venue": "International Conference on Learning Representations,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have been widely used in a variety of fields, such as computer vision (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), speech recognition (Graves et al., 2013; Hinton et al., 2012; Graves & Jaitly, 2014), natural language processing (Kim, 2014; Yang et al., 2016; Lai et al., 2015) and so on (Mnih et al., 2015; Silver et al., 2016). However, the high complexity of DNN models makes them hard to apply on edge devices (mobile phones, onboard computers, smart sensors, wearable devices, etc.), which can only provide limited computing speed and power (Sze et al., 2017).\n\nCrossbar-enabled analog computing-in-memory (CACIM) systems are a promising approach to facilitate the application of DNNs on edge devices (Yang et al., 2013). They can carry out some typical operations in situ, exactly where the data are located (Ielmini & Wong, 2018), such as the multiply-accumulate operation (MAC), the most frequently performed operation in DNNs. The cost of transferring data for these operations can therefore be reduced, and both the computation speed and the energy efficiency can be improved significantly (Yao et al., 2020).\n\nThe foundation of CACIM systems for DNNs is the crossbar array of computational memory units (Hu et al., 2012). As shown in Figure 1, taking the memristor device as an example, each weight (Wij) of the connections in one layer of a neural network is stored as the conductance state (Gij) of a memristor. The input data are represented as the voltage (Vi). After applying the voltage (Vi) to each row, the current (Ij) collected at each column is exactly the MAC result according to Kirchhoff's law and Ohm's law, Ij = ∑i ViGij.\n\nBefore applying a DNN in CACIM systems, an essential step is writing the weights of the DNN into the memory units, which is usually called mapping. However, the mapping overhead is directly related to the precision of the weights. Therefore, a weight quantization method that compresses the high-precision weights of a DNN is important for an efficient implementation of DNNs in CACIM systems.\n\nThe most important criterion of a weight quantization method is the accuracy loss, which has a strong correlation with the quantization error. The quantization error is determined by the quantizer used in the quantization method. To the best of our knowledge, the generalized quantizer has not been used to quantize DNN weights in CACIM systems. The previous work used either uniform quantizers (Jung et al., 2019; Yang et al., 2019) or non-uniform quantizers (Tang et al., 2017; Sun et al., 2018). However, an empirical study has shown that the weights in one layer of a DNN usually follow a bell-shaped and long-tailed distribution (Han et al., 2016). Meanwhile, we found that weights in the last fully-connected layer of a DNN for classification tasks usually follow an asymmetric distribution. The generalized density-aware quantizer (GDQ), in which the quantization results are determined entirely by the data without any constraints, can obtain less quantization error than either a uniform or a non-uniform quantizer (Figure 2). Since the weights are stored and operated on as analog quantities in CACIM systems, using GDQ to quantize the weights does not produce extra cost in the inference phase.\n\nIn CACIM systems, the noise of analog weights is inevitable and cannot be ignored. The perturbations of the weights will degrade the performance of networks severely.
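As a minimal numerical illustration of the in-situ MAC described earlier in this introduction, the column currents of an idealized, noise-free crossbar are simply a vector-matrix product of the input voltages and the conductance matrix (the sizes and values below are arbitrary assumptions):

```python
import numpy as np

# Hypothetical 3x2 crossbar: rows carry input voltages, columns collect currents.
V = np.array([0.1, 0.2, 0.3])        # input voltages V_i, one per row
G = np.array([[1.0, 0.5],            # conductances G_ij storing the weights
              [2.0, 1.5],
              [0.5, 2.0]])
I = V @ G                            # Kirchhoff + Ohm: I_j = sum_i V_i * G_ij
print(I)                             # [0.65 0.95] -> one MAC result per column
```

In a real device the conductances also carry the variation just discussed, which is what motivates the noise-aware training of Section 3.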
It is better to quantize the weights and improve the robustness to noise together during training.
In this work, we introduce a generalized density-aware quantization method and a noise-aware training scheme (NATS) for DNNs in CACIM systems, and achieve no degradation of performance using 8-level weights on a series of computer vision tasks. At the same number of weight levels, our proposed method performs better than others." }, { "heading": "2 PRELIMINARY", "text": "A quantization method for DNNs consists of two parts: the quantizer, and the quantization algorithm, which describes how to use the quantizer in a neural network." }, { "heading": "2.1 QUANTIZER", "text": "Formulation: A quantizer is a function f : R → q, where q = {qi ∈ R | i = 1, 2, · · · , v}, and x = {xi ∈ R | i = 1, 2, · · · , d} is the data to be quantized. Each qi has a corresponding domain Qi ⊂ R such that
f(x) = qi, if x ∈ Qi, (1)
where ⋃_{i=1}^{v} Qi = R and Qi ∩ Qj = ∅ when i ≠ j. In most cases, {Qi | i = 1, 2, · · · , v} are v intervals separated by v − 1 endpoints e = {ei ∈ R | i = 1, 2, · · · , v − 1} on the real axis. Without loss of generality, we assume that q1 < q2 < · · · < qv and e1 < e2 < · · · < ev−1, that is,
Q1 = {x : −∞ < x ≤ e1}, Q2 = {x : e1 < x ≤ e2}, ..., Qv−1 = {x : ev−2 < x ≤ ev−1}, Qv = {x : ev−1 < x < ∞}. (2)
We use Θ = {q, e} to denote a quantizer, and call v = |q| the precision of the quantizer. The quantization error z(xi) of a data point is defined as
z(xi) = f(xi) − xi = qα − xi, if xi ∈ Qα. (3)
A quantizer Θ = {q, e} is uniform if q is an arithmetic progression. A quantization method using a uniform quantizer is referred to as a uniform quantization method; examples include BinaryConnect (Courbariaux et al., 2015) and the binary weight network (BWN) (Rastegari et al., 2016), which have two levels, the ternary weight network (TWN) (Li et al., 2016) and trained ternary quantization (TTQ) (Zhu et al., 2016), which have three levels, and other methods with more levels (He & Fan, 2019; Jung et al., 2019; Yang et al., 2019; Shin et al., 2017; Esser et al., 2019).
The q of a non-uniform quantizer is constrained to follow a particular non-uniform pattern, such as a geometric sequence (Li et al., 2020; Miyashita et al., 2016; Zhou et al., 2017). These quantization methods work best when the data to be quantized follow the corresponding exponential distribution. Choi et al. (2016) adopt a k-means-like algorithm to quantize the weights.
Beyond uniform and non-uniform quantization, generalized quantization methods do not constrain the distribution of q; q is determined based on the distribution of the data to be quantized, which makes them robust to all kinds of data distributions." }
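To make the formulation of Section 2.1 concrete, the following is a minimal sketch (our illustration, not code from the paper; NumPy is assumed, and the 3-level quantizer values are hypothetical) of applying a quantizer Θ = {q, e} as in Eq. (1) and computing the quantization error of Eq. (3):

import numpy as np

def quantize(x, q, e):
    # Apply the quantizer Theta = {q, e}: map each x_i to the level q_alpha
    # of the interval Q_alpha it falls into (Eq. (1)).
    q = np.asarray(q)   # v sorted quantization levels
    e = np.asarray(e)   # v - 1 sorted interval endpoints
    idx = np.searchsorted(e, x, side='left')  # interval index alpha of each x_i
    return q[idx]

x = np.random.randn(1000)               # data to be quantized
q, e = [-0.6, 0.0, 0.6], [-0.3, 0.3]    # hypothetical 3-level quantizer
z = quantize(x, q, e) - x               # quantization error of Eq. (3)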
, { "heading": "2.2 QUANTIZATION ALGORITHM", "text": "To accelerate the inference process of a neural network, the quantizer is usually applied to both the weights and the feature maps. In most CACIM systems, the activations are still implemented by digital circuits, which is significantly different from the analog weights, so in this work we focus on the weight quantization problem. There are two main strategies for weight quantization. The first one directly quantizes the weights of a trained neural network without fine-tuning or retraining (Zhao et al., 2019; Banner et al., 2019). This strategy is fast and convenient, but its accuracy loss is usually greater than that of the second strategy, which repeats training and quantization iteratively until the performance is good enough (Courbariaux et al., 2015; Rastegari et al., 2016; Li et al., 2016; Zhu et al., 2016). In the iterative quantization scheme, starting from a pre-trained neural network is more likely to achieve better performance than starting from scratch (Yang et al., 2019)." }, { "heading": "3 GENERAL QUANTIZATION ALGORITHMS FOR DNNS IN CACIM SYSTEMS", "text": "" }, { "heading": "3.1 LLOYD'S QUANTIZER", "text": "We used Lloyd (1982)'s quantizer to quantize the DNN weights in this work. Θ = {q, e} is a quantizer as defined in Section 2.1. The quantization distortion E of a quantizer is defined as
E = ∫_{−∞}^{+∞} z²(x) dF(x) (4)
= ∑_{α=1}^{v} ∫_{Qα} (qα − x)² dF(x), (5)
where F(x) is the cumulative probability distribution function of x. To minimize E, the quantizer iteratively optimizes q and e until the relative change of E between two consecutive iterations is smaller than a given threshold." }, { "heading": "3.2 NOISE-AWARE TRAINING SCHEME", "text": "We used the noise-aware training scheme (Murray & Edwards, 1994) to improve the robustness to weight noise in this work. Gaussian noise with zero mean and standard deviation σ was added to each weight during the forward computation. σ is determined by the product of the maximum absolute quantized weight |W̄|max and a constant ratio δ:
W̃ ∼ N(0, δ · |W̄|max). (6)" }, { "heading": "3.3 TRAIN A WEIGHT QUANTIZED NEURAL NETWORK", "text": "Algorithm 1 Training an L-layer quantized network. Input: a mini-batch of inputs and targets (I, Y), the pretrained full-precision weights W, v distinct quantization levels, learning rate η. Initialize: quantizer Θ by Lloyd (1982), projection threshold T.
for i = 1 to L do
    the quantized weight W̄i is calculated by Θi; the noised weight Ŵi = W̄i + W̃i
end for
compute model output: Ŷ = forward(I, Ŵ); compute loss L
for i = L to 1 do
    compute the weight gradient ∂L/∂Wi = ∂L/∂Ŵi; update the full-precision weight Wi according to ∂L/∂Wi and η
    if | ‖W̄i ⊙ Wi‖1 / ‖W̄i ⊙ W̄i‖1 − 1 | > T then
        re-initialize Θ by Lloyd (1982)
    end if
end for
The training algorithm with quantization and noise awareness is shown in Algorithm 1. ‖X‖p = (∑i ∑j |xij|^p)^{1/p} is the p-norm, and X ⊙ Y denotes element-wise multiplication. There is one quantizer for each layer of the neural network. As in some previous work (Courbariaux et al., 2015; Rastegari et al., 2016), the quantized weights are used in the forward and backward propagations, but the update is applied to the full-precision weights. The straight-through estimator (STE) (Bengio et al., 2013) is used during backward propagation. To reduce the frequency of optimizing the quantizer, we use the projection of the quantized weight vector onto the full-precision weight vector to determine whether the quantizer needs to be updated after each iteration. When the projection exceeds a certain range, the quantizer is updated. We found that, if we start from a pre-trained neural network, the distribution of weights does not change much; the projection then stays within the proper range during the whole training phase, and the quantizer is optimized only once at the beginning." }
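A minimal sketch (our illustration under stated assumptions: NumPy, a quantile-based initialization of our own choosing, and the 2% noise ratio later reported in Section 4) of the Lloyd fitting loop of Section 3.1 and the noise injection of Eq. (6):

import numpy as np

def lloyd_fit(x, v, tol=1e-6, max_iter=100):
    # Fit a v-level quantizer Theta = {q, e} to data x with Lloyd's algorithm,
    # minimizing the empirical distortion of Eqs. (4)-(5).
    q = np.quantile(x, (np.arange(v) + 0.5) / v)   # density-aware initial levels
    prev_E = None
    for _ in range(max_iter):
        e = (q[:-1] + q[1:]) / 2                   # endpoints: midpoints of levels
        idx = np.searchsorted(e, x)                # assign each x_i to an interval
        q = np.array([x[idx == a].mean() if np.any(idx == a) else q[a]
                      for a in range(v)])          # levels: conditional means
        E = np.mean((q[idx] - x) ** 2)             # empirical distortion
        if prev_E is not None and abs(prev_E - E) <= tol * prev_E:
            break                                  # relative change below threshold
        prev_E = E
    e = (q[:-1] + q[1:]) / 2                       # endpoints consistent with final q
    return q, e

def noisy_weights(w_bar, delta=0.02, rng=None):
    # Noise-aware forward pass of Eq. (6): add zero-mean Gaussian noise with
    # standard deviation delta * |W_bar|_max to the quantized weights.
    rng = np.random.default_rng() if rng is None else rng
    sigma = delta * np.abs(w_bar).max()
    return w_bar + rng.normal(0.0, sigma, size=w_bar.shape)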
, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 IMAGE CLASSIFICATION", "text": "In this section, we validate our quantization method on the CIFAR-10 (Krizhevsky et al., 2009) and ImageNet-ILSVRC2012 (Russakovsky et al., 2015) datasets." }, { "heading": "4.1.1 EVALUATION ON CIFAR-10", "text": "ResNet-20: We first evaluated the quantization method with ResNet-20 on the CIFAR-10 dataset, which consists of 50k training images and 10k test images in 10 classes. The data augmentation method is the same as in previous work (He et al., 2016). For fine-tuning, we set the initial learning rate to 0.1, and scaled it by 0.1, 0.1, and 0.5 at epochs 80, 120, and 160. The projection threshold T was set to 1.5. We compared the performance of the quantized model with several previous works: TTQ (Zhu et al., 2016), He (He & Fan, 2019), LQ-Net (Zhang et al., 2018), and Li (Li et al., 2020). As shown in Table 1, our ternary model achieves 91.11% accuracy, which is 0.42% lower than the full-precision model. Our 4-level model achieves 91.40% accuracy, which is only 0.08% lower than the full-precision model.
VGG-like Network: We also quantized a VGG-like network on CIFAR-10 with ternary weights for evaluation. The model architecture and hyper-parameters are the same as in Hou & Kwok (2018). The model has 6 convolutional layers and 2 fully-connected layers. The batch size was set to 50, and the learning rate started at 0.002 and decayed by a factor of 0.5 after every 15 epochs. The Adam optimizer was used, and the maximum number of epochs was set to 200. The projection threshold T was set to 0.1.
We compared the classification accuracy of our method with several related methods (Table 2). The average accuracy over five trials using our method is higher than that of the others. For DNNs without pre-trained weights, training and quantizing with our proposed method from scratch can also obtain good results. However, the quantizer may be updated frequently, since the distribution of the weights changes a lot at the early stage of training. The closer the model gets to the solution, the more stable the weight distribution, and hence the fewer quantizer updates are needed." }, { "heading": "4.1.2 EVALUATION ON IMAGENET", "text": "The ImageNet dataset consists of 1.2M training and 50K validation images. We conducted the experiments with ResNet-18b and ResNet-50b (He et al., 2016). The pre-trained models from the PyTorch model zoo were used1. The data preprocessing is the same as in the original paper (He et al., 2016): a 224 × 224 crop was randomly sampled from an image or its horizontal flip. Stochastic gradient descent (SGD) with momentum of 0.9 was used to optimize the weight parameters. The mini-batch size was set to 64 and the weight decay to 0.0001. The network was trained for up to 90 epochs. The learning rate started from 1e-4, changed to 2e-5 and 4e-6 at epochs 30 and 40 respectively, and was then scaled by 0.5 after every 5 epochs. The projection threshold T was set to 0.1.
For greater generality, we quantized the weights of all layers of the DNN in our experiments, including the first and the last layer. For a fair comparison with some previous work, we also conducted experiments that did not quantize the first and last layers.
When the precision is 8 levels, both the top-1 and top-5 accuracy (70.0%/89.3%) outperform the full-precision model (69.8%/89.1%). Similar results were obtained with the ResNet-50b network. The top-1 accuracy of the ResNet-50b model with 4-level precision is only 0.3% lower than the full-precision model. 
We compared our methods with some previous studies that were also based on the ResNet architecture. The experimental results are listed in Table 3. In most experiments, our results achieved the smallest classification accuracy gap.
We searched over the relative noise δ while training ResNet-18b with ternary weights. No significant difference was observed when δ ranged from 1% to 4%, so we used 2% in all following experiments. Comparisons of the inference performance between our quantized models with weight noise and previous models without weight noise are shown in Figure 3.
1https://pytorch.org/docs/stable/ modules/torchvision/models/resnet.html" }, { "heading": "4.2 OBJECT DETECTION", "text": "In this section, we evaluated our proposed approach on a general object detection task. Our experiments were conducted on the SSD (single shot multibox detector) architecture (Liu et al., 2016). All methods were fine-tuned on the same pre-trained VGG16 network. The models were trained on the PASCAL VOC 2007 and 2012 training datasets and tested on the PASCAL VOC 2007 test dataset. For a fair comparison, except for the final convolutional layers with 1 × 1 kernels and the first convolutional layer, the parameters of all other layers in the backbone VGG16 were quantized.
The input images were resized to 300 × 300. The SGD optimizer with momentum set to 0.9 was used. We used a learning rate of 1e−3 for 80k iterations, then continued training for 20k iterations with 1e−4 and 1e−5. The batch size was set to 16, and the weight decay was 5e−4. The results are shown in Table 4. When our ternary weights were injected with 1% noise, the mAP gap (1.5% ± 0.06%) of our method was still no larger than those of ADMM (Leng et al., 2018) and Yang et al. (2019) without noise. Our 8-level model performed better than the full-precision model." }, { "heading": "4.3 SEMANTIC SEGMENTATION", "text": "We evaluated our method on the PASCAL VOC 2012 segmentation dataset (Everingham et al., 2010) and used the PSPNet (pyramid scene parsing network) architecture. Following the same settings as Zhao et al. (2017), we used augmented data with the annotations of Hariharan et al. (2011), resulting in 10,582, 1,449, and 1,456 images for training, validation, and testing, respectively. The backbone of our model was ResNet50. We evaluated the model with multi-scale inputs and averaged the results. The batch size was set to 8, constrained by limited memory. The learning rate was initialized to 1e−2 and decayed by a power policy with exponent 0.9. The SGD optimizer with momentum set to 0.9 was used. Our 8-level model achieves the same performance as the full-precision model (Table 5)." }, { "heading": "Table 5 (fragment): 3 levels: 75.5, 93.6; 4 levels: 76.3, 93.9", "text": "" }, { "heading": "4.4 PERFORMANCE ON REAL DEVICES", "text": "To further demonstrate the reliability of the results, we map the weights onto real memristors and use the measured conductance to verify our method. The conductance (µS) range of our memristor device is [2.1, 17.85]. To represent negative weights, the differential conductance G of two devices is used, given by G = G+ − G−, where G+ (G−) is the conductance of the positive (negative) device. To reduce the mapping overhead, we used a unified reference conductance (Gref) in all differential pairs. To represent a positive (negative) weight, the negative (positive) device is mapped to Gref. For the quantized weights of one layer in a DNN model, we first normalized them by dividing by the maximum absolute weight. 
Then we mapped the normalized weights to G by multiplying by 15.75 µS, and used 2.1 µS as Gref.
Taking as an example a well-trained ResNet-18b model with 4-level weights, the set of quanta in the first convolutional layer is [-0.25, -0.01, 0.11, 0.38]. After scaling the weight set, we get the G (µS) set, which is [-10.36, -0.57, 4.52, 15.75]. The G+ (µS) set is [2.1, 2.1, 6.62, 17.85], and the G− (µS) set is [12.46, 2.67, 2.1, 2.1]. The standard deviation of the measured G is 0.32 µS, which is approximately equal to 0.02 · |G|max. Figure 4(a) shows the simulated and measured distributions of the differential conductance. The measured and simulated differential conductances have the same statistical properties. As shown in Figure 4(b), we compared the inference performance of three types of weights: A) simulated weights trained with NATS, B) measured weights trained with NATS, and C) measured weights trained without NATS, using 4-level and 8-level ResNet-18b and SSD models. The results of type A and type B are very close. The results of type B are better than those of type C, indicating that NATS improved the robustness of the DNN model." }
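To make the differential-pair mapping of Section 4.4 concrete, here is a minimal sketch (our illustration, not the paper's code; NumPy assumed) that approximately reproduces the worked ResNet-18b example above; the small numerical differences come from rounding of the reported quanta:

import numpy as np

G_MIN, G_MAX = 2.1, 17.85        # device conductance range (uS)
G_REF = G_MIN                    # unified reference conductance
SCALE = G_MAX - G_REF            # 15.75 uS, applied to normalized weights

def map_to_conductance(q):
    # Map one layer's quantized weights to differential pairs G = G+ - G-.
    g = q / np.abs(q).max() * SCALE             # normalize, then scale to uS
    g_pos = np.where(g >= 0, g + G_REF, G_REF)  # negative weights get G+ = Gref
    g_neg = np.where(g < 0, -g + G_REF, G_REF)  # positive weights get G- = Gref
    return g, g_pos, g_neg

quanta = np.array([-0.25, -0.01, 0.11, 0.38])   # first conv layer, 4-level model
g, g_pos, g_neg = map_to_conductance(quanta)
# g ~ [-10.36, -0.41, 4.56, 15.75] uS; g_pos ~ [2.1, 2.1, 6.66, 17.85] uS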
, { "heading": "5 DISCUSSION", "text": "" }, { "heading": "5.1 GENERALIZED QUANTIZATION METHOD IN DIGITAL SYSTEMS", "text": "A generalized quantizer can in theory obtain less quantization error than a uniform one when the data distribution is non-uniform. However, in digital computers, it requires more memory or additional operations to process a set of non-uniform data. As shown in Figure 5, suppose a series of data are quantized to {1, 2, 4, 6}. A digital system will use 3 bits to store each number in binary code. Although we could store these numbers with 2 bits per number, a mapping function, namely '00' = 1, '01' = 2, '10' = 4, '11' = 6, must be stored and invoked whenever these numbers are used. This additional cost limits the application of the generalized method in digital systems to a certain extent. However, in CACIM systems, the data are stored as analog quantities, and the operations are based on the analog computing scheme. No matter what the exact value is, there is no significant difference in the storing or computing operation. This is why the generalized quantization method is more suitable for CACIM systems." }, { "heading": "5.2 PRINCIPLES OF CACIM-ORIENTED ALGORITHM DESIGN", "text": "Different from digital systems, CACIM systems are mainly based on analog computing, which can make a great difference when designing and using neural network algorithms. For quantization methods, the analog computing scheme means that the weights are represented and computed as real quantities, not as binary numbers implemented by high and low levels as in digital systems. No matter what the exact weight is, reading and computing follow the same process: apply a voltage and collect the current. Therefore, we can utilize this characteristic to select a more powerful quantization algorithm, such as GDQ, which may produce non-uniform results. Beyond quantization, the characteristics of analog computing may also play important roles in other scenarios. Since the current is directly accumulated and collected along each column of the crossbar, the behavior of every device in the column may influence the results. If we ignore these behaviors when designing the algorithm, they will degrade the performance of the algorithm. That is why we usually call them non-ideal characteristics. However, if we have a clear understanding of these characteristics, we can overcome them; for example, we used the noise-aware training scheme to improve accuracy in this work. Furthermore, we can even exploit them to achieve better results, such as using device variation to provide the stochasticity needed in an algorithm (Lin et al., 2018; 2019), using the I-V nonlinearity of the device to introduce nonlinearity in place of activation functions, or using the relaxation behavior of the device to efficiently implement the weight decay operation in the training process. As demonstrated in this work, understanding the hardware system helps us design better algorithms." } ]
2020
IMPROVING THE ACCURACY OF NEURAL NETWORKS IN ANALOG COMPUTING-IN-MEMORY SYSTEMS BY A GENERALIZED QUANTIZATION METHOD
SP:1a2c88b471d463d79a172c254483d8c92314fe3b
[ "The manuscript studies the effect of batch size at different sparsity levels (achieved by applying connection sensitivity pruning) on the required number of optimisation steps to reach a certain accuracy. The goal is to understand the interplay between those fundamental parameters. The empirical evaluation is performed for different triples of dataset, network architecture and optimisation scheme. The theoretical analysis is based on established bounds on the expected gradient norm." ]
We study two factors in neural network training: data parallelism and sparsity; here, data parallelism means processing training data in parallel using distributed systems (or equivalently increasing batch size), so that training can be accelerated; for sparsity, we refer to pruning parameters in a neural network model, so as to reduce computational and memory cost. Despite their promising benefits, however, understanding of their effects on neural network training remains elusive. In this work, we first measure these effects rigorously by conducting extensive experiments while tuning all metaparameters involved in the optimization. As a result, we find across various workloads of data set, network model, and optimization algorithm that there exists a general scaling trend between batch size and number of training steps to convergence for the effect of data parallelism, and further, difficulty of training under sparsity. Then, we develop a theoretical analysis based on the convergence properties of stochastic gradient methods and smoothness of the optimization landscape, which illustrates the observed phenomena precisely and generally, establishing a better account of the effects of data parallelism and sparsity on neural network training.
[ { "affiliations": [], "name": "Namhoon Lee" } ]
[ { "authors": [ "Martín Abadi" ], "title": "Tensorflow: A system for large-scale machine learning", "venue": "In 12th USENIX symposium on operating systems design and implementation (OSDI 16),", "year": 2016 }, { "authors": [ "Léon Bottou" ], "title": "Stochastic gradient learning in neural networks", "venue": "Neuro Nîmes,", "year": 1991 }, { "authors": [ "Léon Bottou" ], "title": "Online learning and stochastic approximations", "venue": "On-line learning in neural networks,", "year": 1998 }, { "authors": [ "Léon Bottou", "Frank E Curtis", "Jorge Nocedal" ], "title": "Optimization methods for large-scale machine learning", "venue": "Siam Review,", "year": 2018 }, { "authors": [ "Simon S. Du", "Xiyu Zhai", "Barnabas Poczos", "Aarti Singh" ], "title": "Gradient descent provably optimizes over-parameterized neural networks", "venue": null, "year": 2019 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan", "Hongchao Zhang" ], "title": "Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization", "venue": "Mathematical Programming,", "year": 2016 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "NeurIPS,", "year": 2015 }, { "authors": [ "Elad Hoffer", "Itay Hubara", "Daniel Soudry" ], "title": "Train longer, generalize better: closing the generalization gap in large batch training of neural networks. NeurIPS, 2017", "venue": null, "year": 2017 }, { "authors": [ "Alex Krizhevsky" ], "title": "One weird trick for parallelizing convolutional neural networks", "venue": "arXiv preprint arXiv:1404.5997,", "year": 2014 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip HS Torr" ], "title": "SNIP: Single-shot network pruning based on connection sensitivity", "venue": null, "year": 2019 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Stephen Gould", "Philip H.S. 
Torr" ], "title": "A signal propagation perspective for pruning neural networks at initialization", "venue": null, "year": 2020 }, { "authors": [ "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", "venue": "NeurIPS, pp", "year": 2018 }, { "authors": [ "Tao Lin", "Sebastian U Stich", "Kumar Kshitij Patel", "Martin Jaggi" ], "title": "Don’t use large mini-batches, use local sgd", "venue": null, "year": 2020 }, { "authors": [ "Russell Reed" ], "title": "Pruning algorithms-a survey", "venue": "Neural Networks,", "year": 1993 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "Christopher J Shallue", "Jaehoon Lee", "Joseph Antognini", "Jascha Sohl-Dickstein", "Roy Frostig", "George E Dahl" ], "title": "Measuring the effects of data parallelism on neural network training", "venue": null, "year": 2019 }, { "authors": [ "Samuel L Smith", "Pieter-Jan Kindermans", "Chris Ying", "Quoc V Le" ], "title": "Don’t decay the learning rate, increase the batch size", "venue": null, "year": 2018 }, { "authors": [ "Chaoqi Wang", "Guodong Zhang", "Roger Grosse" ], "title": "Picking winning tickets before training by preserving gradient flow", "venue": null, "year": 2020 }, { "authors": [ "Weiran Wang", "Nathan Srebro" ], "title": "Stochastic nonconvex optimization with large minibatches", "venue": "arXiv preprint arXiv:1709.08728,", "year": 2017 }, { "authors": [ "Guodong Zhang", "Lala Li", "Zachary Nado", "James Martens", "Sushant Sachdeva", "George Dahl", "Chris Shallue", "Roger B Grosse" ], "title": "Which algorithmic choices matter at which batch sizes? insights from a noisy quadratic model", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Jingzhao Zhang", "Tianxing He", "Suvrit Sra", "Ali Jadbabaie" ], "title": "Why gradient clipping accelerates training: A theoretical justification for adaptivity", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Data parallelism is a straightforward and common approach to accelerate neural network training by processing training data in parallel using distributed systems. Being model-agnostic, it is applicable to training any neural network, and the degree of parallelism equates to the mini-batch size in synchronized settings, in contrast to other forms of parallelism such as task or model parallelism. While its utility has attracted much attention in recent years, distributing and updating large network models at distributed communication rounds still remains a bottleneck (Dean et al., 2012; Hoffer et al., 2017; Goyal et al., 2017; Smith et al., 2018; Shallue et al., 2019; Lin et al., 2020).
Meanwhile, diverse approaches to compress such large network models have been developed, and network pruning – the sparsification process that zeros out many parameters of a network to reduce the computations and memory associated with these zero values – has been widely employed (Reed, 1993; Han et al., 2015). In fact, recent studies discovered that pruning can be done at initialization prior to training (Lee et al., 2019; Wang et al., 2020); by separating the training process from pruning entirely, this not only saves tremendous time and effort in finding trainable sparse networks, but also facilitates the analysis of pruned sparse networks in isolation. Nevertheless, there has been little study concerning the subsequent training of these sparse networks, and various aspects of the optimization of sparse networks remain unknown as yet.
In this work, we focus on studying data parallelism and sparsity1, and provide clear explanations for their effects on neural network training. Despite a surge of recent interest in their complementary benefits in modern deep learning, there is a lack of fundamental understanding of their effects. For example, Shallue et al. (2019) provide comprehensive yet empirical evaluations of the effect of data parallelism, while Zhang et al. (2019) use a simple noisy quadratic model to describe the effect; for sparsity, Lee et al. (2020) approach the difficulty of training under sparsity solely from the perspective of initialization.
1For the purpose of this work, we equate data parallelism and sparsity with increasing batch size and pruning model parameters, respectively; we explain these in more detail in Appendix E.
In this regard, we first accurately measure their effects by performing extensive metaparameter search independently for each and every study case of batch size and sparsity level. As a result, we find a general scaling trend as the effect of data parallelism in training sparse neural networks, across varying sparsity levels and workloads of data set, model, and optimization algorithm. Also, the critical batch size turns out to be no smaller for sparse networks, despite the general difficulty of training them. We formalize our observation and theoretically prove the effect of data parallelism based on the convergence properties of generalized stochastic gradient methods, irrespective of sparsity level. We take this result further to understand the effect of sparsity based on Lipschitz smoothness analysis, and find that pruning results in a sparse network whose gradient changes relatively too quickly. 
Notably, this result is developed under standard assumptions used in the optimization literature and applies generally to training with any stochastic gradient method, nonconvex objective, and learning rate schedule. Being precise and general, our results could help understand the effects of data parallelism and sparsity on neural network training." }, { "heading": "2 SETUP", "text": "We closely follow the experiment settings used in Shallue et al. (2019). We describe more details, including the scale of our experiments, in Appendix B, and provide additional results in Appendix D. The code can be found here: https://github.com/namhoonlee/effect-dps-public
Experiment protocol. For a given workload (data set, network model, optimization algorithm) and study (batch size, sparsity level) setting, we measure the number of training steps required to reach a predefined goal error. We repeat this process for a budget of runs while searching for the best metaparameters involved in the optimization (e.g., learning rate, momentum), so as to record the lowest number of steps, namely steps-to-result, as our primary quantity of interest. To this end, we regularly evaluate intermediate models on the entire validation set for each training run.
Workload and study. We consider workloads as combinations of the following: (data set) MNIST, Fashion-MNIST, CIFAR-10; (network model) Simple-CNN, ResNet-8; (optimization algorithm) SGD, Momentum, Nesterov, with either a fixed or a decaying learning rate schedule. For the study setting, we consider batch sizes from 2 up to 16384 and sparsity levels from 0% to 90%.
Metaparameter search. We perform a quasi-random search to tune metaparameters efficiently. More precisely, we first generate Sobol low-discrepancy sequences in a unit hypercube and convert them into the metaparameters of interest, taking into account a predefined search space for each metaparameter (a sketch of this sampling procedure is given at the end of this section). The sequence generated for each metaparameter has length equal to the budget of trials, and the search space is designed based on preliminary experimental results.
Pruning. Sparse networks can be obtained in many different ways, and yet, for the purpose of this work, they must not undergo any training beforehand, so as to measure the effects of data parallelism while training from scratch. Recent pruning-at-initialization approaches satisfy this requirement, and we adopt the connection sensitivity criterion of Lee et al. (2019) to obtain sparse networks." }
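As referenced above, a minimal sketch (our illustration; SciPy's quasi-Monte Carlo module is assumed, and the search ranges are hypothetical placeholders rather than the paper's) of the Sobol-based metaparameter sampling:

import numpy as np
from scipy.stats import qmc

budget = 100                              # trials per (batch size, sparsity) case
sampler = qmc.Sobol(d=2, scramble=True)   # one dimension per metaparameter
u = sampler.random(budget)                # low-discrepancy points in [0, 1)^2
                                          # (powers of two are the preferred sizes)

# Convert unit-cube samples into metaparameters over hypothetical log-scaled ranges:
lr = 10.0 ** (-4.0 + 4.0 * u[:, 0])               # learning rate in [1e-4, 1e0]
momentum = 1.0 - 10.0 ** (-3.0 + 3.0 * u[:, 1])   # momentum in roughly (0, 0.999]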
, { "heading": "3 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "3.1 MEASURING THE EFFECT OF DATA PARALLELISM", "text": "First of all, for each and every sparsity level across different workloads, we observe a general scaling trend in the relationship between batch size and steps-to-result (see the 1st and 2nd columns in Figure 1): initially, there is a period of linear scaling where doubling the batch size reduces the steps to achieve the goal error by half (i.e., the curve aligns closely with the dashed line), followed by a region of diminishing returns where the reduction in the required number of steps from increasing the batch size is less than the inverse proportional amount (i.e., the curve starts to deviate from the linear scaling region), which eventually arrives at a region of maximal data parallelism (i.e., the curve hits a plateau) where increasing the batch size no longer decreases the steps to reach the goal error. The same trend is observed across various workloads of data set, network model, and optimization algorithm, as well as different goal errors (see Appendix D). We note that our observation is consistent with the results on regular network training presented in Shallue et al. (2019) and Zhang et al. (2019).
When we put the results for all sparsity levels together, we observe that training a sparser network takes longer; a data parallelism curve for higher sparsity usually lies above that for lower sparsity (see the 3rd column in Figure 1). For example, compared to the case of 0% sparsity (i.e., the dense, over-parameterized network), a 90% sparse network takes about 2–4 times more training time (or number of training steps), consistently across different batch sizes (see Figure 12 in Appendix D.1 for more precise comparisons). Recall that we tune all metaparameters independently for each and every study case of batch size and sparsity level, without relying on a single predefined training rule, in order to find the best steps-to-result. Therefore, this result on the one hand corroborates the general difficulty of training sparse neural networks, in contrast to the ease of training overly parameterized neural networks.
On the other hand, when we normalize the y-axis of each plot by dividing by the number of steps at the first batch size, we can see the phase transitions more clearly. As a result, we find that the regions of diminishing returns and maximal data parallelism appear no earlier when training sparse networks than when training the dense network (see the 4th column in Figure 1). This is quite surprising, in that one could have easily guessed that the general optimization difficulty incurred by sparsification might influence the data parallelism too, at least to some degree; however, it turns out that the effects of data parallelism on sparse network training remain no worse than in the dense case. Moreover, notice that in many cases the breakdown of the linear scaling regime occurs even later, at larger batch sizes, for higher sparsity; this is especially evident for the Momentum and Nesterov optimizers (e.g., compare training the 90% sparse network using Momentum against the 0% dense network). In other words, for sparse networks, the critical batch size can be larger, and hence, when it comes to training sparse neural networks, one can increase the batch size (or design a parallel computing system for distributed optimization) more effectively, while better exploiting the given resources. We find this result particularly promising since SGD with momentum is often the method of choice in practice.
We further show that the ability of momentum optimizers to exploit large batch sizes holds across different sparsity levels by displaying all plots together in Figure 2. Overall, we believe that it is important to confirm the robustness of data parallelism in sparse neural network training, which has been unknown thus far and difficult to estimate a priori." }, { "heading": "3.2 ANALYZING METAPARAMETER SEARCH", "text": "In this section, we analyze the metaparameter search used to measure the effect of data parallelism. We specifically investigate the workload {MNIST, Simple-CNN, Momentum}, where there are two metaparameters to tune (i.e., learning rate and momentum), so as to visualize all metaparameters easily in a 2D figure (see Appendix D.2 for other results). 
The results are presented in Figure 3, and we summarize our key findings below:
• Our quasi-random search samples metaparameters efficiently, so that they are distributed evenly (without being cluttered in log-space) and flexibly (rather than sitting on a grid with fixed spacing) within the search spaces. Also, the best metaparameters yielding the lowest steps (marked by a gold star) are located in the middle of the search ranges rather than at the search boundaries, across different batch sizes and sparsity levels. This means that our experiments are designed reasonably well, and the results are reliable. • There are two distinct regions (i.e., complete (circles) and incomplete (triangles)) separated by a seemingly linear boundary in the relationship between learning rate and momentum. This indicates that the optimization is driven by an interplay between these two metaparameters; if one metaparameter is not chosen carefully with respect to the other (e.g., increasing the learning rate for a fixed momentum), the optimizer may get stuck in a region, spending time oscillating, and eventually produce incomplete runs. This highlights the importance of performing metaparameter search, although it is expensive, rather than relying on predetermined heuristic training strategies, in order to accurately measure the effect of data parallelism and avoid potentially suboptimal results. • The successful region (filled with blue circles) becomes larger with increasing batch size, showing that large-batch training reaches a given goal error in fewer training iterations than small-batch training, and hence yields more complete runs. Notably, the best learning rate tends to increase with increasing batch size too. This aligns well with the classic result in learning theory that large-batch training allows using bigger learning rates (Robbins & Monro, 1951; Bottou, 1998; Krizhevsky, 2014)." }, { "heading": "4 UNDERSTANDING THE EFFECTS OF DATA PARALLELISM AND SPARSITY", "text": "So far, we have focused on measuring the effects of data parallelism and sparsity on neural network training, and as a result found two distinctive global phenomena across various workloads: a scaling trend between batch size and steps-to-result, and training difficulty under sparsity. While our findings align well with previous observations (Shallue et al., 2019; Zhang et al., 2019; Lee et al., 2020), it remains unclear why these phenomena occur and whether they generalize. To this end, in this section we establish theoretical results that precisely account for these phenomena based on the convergence properties of generalized stochastic gradient methods." }, { "heading": "4.1 CONVERGENCE ANALYSIS FOR THE GENERAL EFFECTS OF DATA PARALLELISM", "text": "Let us begin by reviewing the convergence properties of stochastic gradient methods as the numerical algorithms of choice for solving optimization problems. Consider a generic optimization problem where the objective is to minimize the empirical risk, with an objective function f : R^m → R, a prediction function h : R^{d_x} × R^m → R^{d_y}, and a loss function l : R^{d_y} × R^{d_y} → R which yields the loss l(h(x; w), y) given an input-output pair (x, y), where w ∈ R^m denotes the parameters of the prediction model h, and d_x and d_y denote the dimensions of the input x and output y, respectively. 
A generalized stochastic gradient method to solve this problem can be of the following form:
w_{k+1} := w_k − η_k g(w_k, ξ_k), (1)
where η_k is a scalar learning rate and g(w_k, ξ_k) ∈ R^m is a stochastic vector (e.g., an unbiased estimate of the gradient ∇f), with ξ_k denoting a random variable that realizes data samples, either a single sample as in the prototypical stochastic gradient method (Robbins & Monro, 1951) or a set of samples as in the mini-batch version (Bottou, 1991). Given an initial iterate w_1, it finds a solution by performing the above update iteratively until convergence.
Under the assumptions2 of Lipschitz smoothness of f and bounded variance of g, the convergence rate result states that for such a generic problem with nonconvex objective and an optimization method with a fixed3 learning rate η_k = η̄ for all k satisfying 0 < η̄ ≤ µ/(L M_G), the expected average squared norm of the gradients of the objective function is guaranteed to satisfy the following inequality for all K ∈ N (Bottou et al., 2018):
E[(1/K) ∑_{k=1}^{K} ‖∇f(w_k)‖₂²] ≤ η̄LM/µ + 2(f(w_1) − f_∞)/(Kµη̄). (2)
Here, f(w_1), f_∞, and ∇f(w_k) refer to the objective function's value at w_1, its lower bound, and its gradient at w_k, respectively. Also, L is the Lipschitz constant of ∇f, and µ, M, M_G denote the scalar bounds in the assumption on the second moment of g(w_k, ξ_k). Note here that M is linked to the batch size B as M ∝ 1/B. In addition, if g(w_k, ξ_k) is an unbiased estimate of ∇f(w_k), which is the case for ξ_k being i.i.d. samples as in our experiments, then simply µ = 1 (Bottou et al., 2018). In essence, this result shows that the average squared gradient norm on the left-hand side is bounded above by an asymptotically decreasing quantity in K, indicating a sublinear convergence rate of the method. We note further that the convergence rate of mini-batch stochastic optimization of nonconvex loss functions has been studied previously (Ghadimi et al., 2016; Wang & Srebro, 2017); here we reconsider it to analyze the effects of data parallelism.
We now reformulate this result such that it is translated into a form that matches our experiment settings and reveals the relationship between batch size and steps-to-result. We start by recognizing that the quantity on the left-hand side, the expected average squared norm of ∇f(w_k) during the first K iterations, indicates the degree of convergence; for example, it gets smaller as training proceeds with increasing K. Thus, this quantity is directly related to the goal error to reach in our experiments, which is fixed across different batch sizes for a given workload. This effectively means that training has stopped, and K will no longer contribute to decreasing the bound on the quantity. Also, recall that we select the optimal learning rate η̄*, out of extensive metaparameter search, to record the lowest number of steps to reach the given goal error, i.e., the steps-to-result K*. Next, notice that the only factors that constitute the inequality in Eq. (2) are the Lipschitz constant L and the variance bound M, and if they are assumed to be tight in the worst case, the inequality becomes tight. Now we are ready to provide the relationship between batch size (B) and steps-to-result (K*) as follows:
Proposition 4.1. Let ε = E[(1/K*) ∑_{k=1}^{K*} ‖∇f(w_k)‖₂²] denote the degree of convergence achieved after the first K* iterations and η̄* denote the optimal learning rate used to yield the lowest number of steps K* to reach ε. Then,
K* ≈ c1/B + c2, where c1 = ∆Lβ/(µ²ε²) and c2 = ∆/(η̄*µε), (3)
where ∆ = 2(f(w_1) − f_∞) and β is the initial variance bound at batch size B = 1 for a given workload.
Proof. This result is obtained by recasting Eq. (2) as outlined above. The proof is in Appendix A.
2(i) f is differentiable and satisfies ‖∇f(w) − ∇f(w̄)‖₂ ≤ L‖w − w̄‖₂ for all {w, w̄} ⊂ R^m, and (ii) there exist scalars M ≥ 0, M_G ≥ µ² > 0 such that E_{ξ_k}[‖g(w_k, ξ_k)‖₂²] ≤ M + M_G‖∇f(w_k)‖₂².
3We also consider the general decaying learning rate case and prove the same result in Appendix A.2.
This result precisely illustrates the relationship between batch size and steps-to-result. For example, when B is small, K* ≈ c1/B, fitting the linear scaling regime (e.g., B → 2B makes K* → (1/2)K*), whereas when B is large and the constant term asymptotically dominates the right-hand side, K* ≈ c2, indicating maximal data parallelism as K* remains constant. In general, for moderate batch sizes, scaling B → 2^r B results in K* → (1/2^r)K* + (1 − 1/2^r)c2 (rather than (1/2^r)K*), indicating diminishing returns.
Moreover, we prove the same relationship between B and K* (with different constant terms) for the general decaying learning rate case (see Appendix A.2). Therefore, this result not only accounts well for the scaling trend observed in the experiments, but also describes it more precisely and generally. Notably, the effect of data parallelism, which had only been addressed empirically and thus remained debatable, is now theoretically verified and applicable to general nonconvex objectives. We will further relate our result to sparse networks via smoothness analysis in Section 4.2." }, { "heading": "4.2 LIPSCHITZ SMOOTHNESS FOR THE DIFFICULTY OF TRAINING SPARSE NETWORKS", "text": "Another distinct phenomenon observed in our experiments is that the number of steps required to reach the same goal error is consistently higher for sparse networks than for dense networks regardless of batch size (i.e., the whole data parallelism curve shifts upwards when introducing sparsity). This indicates the general difficulty of training sparse neural networks, and that sparsity degrades the training speed. In this section, we investigate what may cause this difficulty, and find a potential source of the problem by inspecting our theory of the effect of data parallelism.
Let us begin with our result for the effect of data parallelism in Proposition 4.1. Notice that it is the coefficient c1 (= ∆Lβ/(µ²ε²)) that can shift a whole data parallelism curve vertically, by the same factor across different batch sizes. Taking a closer look, we realize that it is the Lipschitz constant L that can vary quite significantly when introducing sparsity and hence affect c1; ε and µ are fixed, and ∆ and β change with sparsity in a relatively minor manner (we explain this in detail in Appendix C). Specifically, L refers to the bound on the rate of change of ∇f and is by definition a function of f. Also, the sparsity introduced by pruning changes the prediction function h, which is linked to f via the loss function l. Therefore, we posit that a sparse neural network obtained by pruning will be less smooth (with a higher Lipschitz constant) than the non-pruned dense network. To verify our hypothesis, we empirically measure the Lipschitz constant for networks with different sparsity levels over the course of the entire training process. 
The results are presented in Figure 4.
As we can see, it turns out that the Lipschitz constant increases with increasing sparsity level, and further, is consistently higher for sparse networks than for the dense network throughout training. This means that pruning results in sparse networks whose gradients change relatively too quickly compared to the dense network; in other words, the prediction function h becomes less smooth after pruning. This is potentially what hinders training progress, and as a result, sparse networks require more time (i.e., steps-to-result) to reach the same goal error.
Further evidence of the increased Lipschitz constant for sparse networks can be found in the metaparameter search results presented in Figure 5. Notice that, for each batch size, the range of successful learning rates η̄ shrinks when switching from the 0% to the 90% sparsity level. This is because the learning rate bound satisfying the convergence rate theory becomes 0 < η̄ ≤ 1/L for a fixed batch size, and the increased L due to sparsity shrinks the range of η̄.
We note that our findings of an increased Lipschitz constant for sparse networks are consistent with the literature on over-parameterized networks, such as Li & Liang (2018), which can be seen as the opposite of sparsity. The more input weights a neuron has, the less likely it is that a single parameter significantly changes the resulting activation pattern, and wide layers exhibit convexity-like properties in the optimization landscape (Du et al., 2019). This even extends to non-smooth networks with ReLU activations, which are still shown to exhibit pseudo-smoothness in the overparameterized regime (Li & Liang, 2018). We further show that our theory precisely explains the difficulty of training sparse networks due to decreased smoothness, based on a quantitative analysis in Appendix C.
In addition, we provide in Figure 6 the training logs of the networks used for the Lipschitz smoothness analysis, in order to show the correlation between the Lipschitz smoothness of a network and its training performance; i.e., sparsity incurs low smoothness of gradients (high L; see Figure 4) and hence poor training performance." }, { "heading": "5 DISCUSSION", "text": "Data parallelism and sparsity could have promising complementary benefits, and yet, little has been studied about their effects on neural network training thus far. In this work, we accurately measured their effects, and established theoretical results that precisely account for the general characteristics of data parallelism and sparsity based on the convergence properties of stochastic gradient methods and Lipschitz smoothness analysis. We believe our results are significant, in that these phenomena, which had only been addressed partially and empirically, are now theoretically verified with more accurate descriptions and apply to general nonconvex settings. While our findings have positive impacts for practitioners and theorists alike, there are remaining challenges. First, our experiments are limited by the available computing resources, and the cost of experiments increases critically for more complex workloads. Also, the lack of convergence guarantees for existing momentum schemes in nonconvex and stochastic settings hinders further theoretical analysis. We hypothesize that an ultimate understanding of the effect of data parallelism should be accompanied by a study of the generalization capability of optimization methods. 
Nonetheless, these are beyond the scope of this work, and we intend to explore these directions as future work." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by the ERC grant ERC-2012-AdG 321162-HELIOS, EPSRC grant Seebibyte EP/M013774/1, EPSRC/MURI grant EP/N019474/1, the Australian Research Council Centre of Excellence for Robotic Vision (project number CE140100016), and the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2020-0-01336, Artificial Intelligence Graduate School Program (UNIST)). We would also like to acknowledge the Royal Academy of Engineering and FiveAI." }, { "heading": "A PROOF OF THE GENERAL EFFECT OF DATA PARALLELISM", "text": "This section provides the missing proofs of Section 4. The goal is to derive the relationship between batch size B and steps-to-result K* from the convergence rates of generalized stochastic gradient methods for both the fixed and the decaying learning rate cases. The result serves to account for the effect of data parallelism as a general phenomenon that must appear naturally in neural network training.
A.1 FIXED LEARNING RATE CASE
We start from the convergence rate result in Eq. (2). We first recognize that the expected average squared gradient norm on the left-hand side indicates the degree of convergence. This quantity is directly related to the concept of a goal error to reach in our experiments, and hence reduces to a small constant ε as soon as it is implemented as a predefined goal error for a given workload. Thus, it follows that
ε ≤ η̄LM/µ + 2(f(w_1) − f_∞)/(Kµη̄). (4)
Notice that by fixing ε = E[(1/K) ∑_{k=1}^{K} ‖∇f(w_k)‖₂²], we effectively assert that the training process has stopped, and therefore K on the right-hand side will no longer contribute to decreasing the bound on the quantity for a particular learning rate η̄ and batch size B.
Also, we select the optimal4 learning rate η̄*, out of extensive metaparameter search, to record the lowest number of steps to reach the given goal error, which we denote as the steps-to-result K*.
4Here, optimal simply refers to the sense of yielding the lowest steps-to-result.
Plugging these in yields the following:
ε ≤ η̄*LM/µ + 2(f(w_1) − f_∞)/(K*µη̄*). (5)
Next, notice that the only factors that constitute the inequality in Eq. (2) come from the assumptions made to derive the convergence rate result, which concern the Lipschitz smoothness L and the variance bound M, and if they are assumed to be tight in the worst case, the inequality becomes tight. Then, after algebraic manipulation and taking the first-order Taylor approximation while substituting M = β/B, since the variance bound M is related to batch size B as M ∝ 1/B (Bottou et al., 2018), we obtain the following result:
K* ≈ ∆/(η̄*µε − (η̄*)²LM) (6)
≈ ∆/(η̄*µε) + ∆LM/(µ²ε²)
= ∆Lβ/(µ²ε²B) + ∆/(η̄*µε)
= c1/B + c2, where c1 = ∆Lβ/(µ²ε²) and c2 = ∆/(η̄*µε).
Here, ∆ = 2(f(w_1) − f_∞) and β is the initial variance bound at batch size B = 1 for a given workload. Notice that ε, ∆, L, β, µ, and η̄* are all constant or become fixed for a given workload. Also, the quality of the metaparameter search is assumed to be the same across different batch sizes, and hence the result can be reliably used to interpret the relationship between batch size and steps-to-result.
A.2 DECAYING LEARNING RATE CASE
We can extend the convergence rate for a fixed learning rate η_k = η̄ in Eq. 
(2) to any sequence of decaying learning rates η_k satisfying ∑_{k=1}^{∞} η_k = ∞ and ∑_{k=1}^{∞} η_k² < ∞, based on Theorem 4.10 in Bottou et al. (2018), as follows:
E[(1/K) ∑_{k=1}^{K} η_k‖∇f(w_k)‖₂²] ≤ (LM/(Kµ)) ∑_{k=1}^{K} η_k² + 2(E[f(w_1)] − f_∞)/(Kµ). (7)
The requirements on the learning rate η_k are the classical conditions (Robbins & Monro, 1951) that are assumed for the convergence of any (sub)gradient method, and cover nearly all standard and/or heuristic learning rate schedules employed in practice.
Now, the relationship between batch size and steps-to-result can be derived similarly as before. First, writing ε̃ = E[(1/K) ∑_{k=1}^{K} η_k‖∇f(w_k)‖₂²] for the degree of convergence, and ∆ = 2(E[f(w_1)] − f_∞) for simplicity, Eq. (7) can be rewritten as follows:
ε̃ ≤ (LM/(Kµ)) ∑_{k=1}^{K} η_k² + ∆/(Kµ). (8)
Plugging in H = ∑_{k=1}^{K} η_k², a finite constant for a decaying learning rate, and further H* and K* for the H selected by metaparameter search and the steps-to-result, respectively, it becomes:
ε̃ ≤ LMH*/(K*µ) + ∆/(K*µ). (9)
Finally, using the worst-case tightness of L and M, substituting M = β/B, and rearranging the terms, the effect of data parallelism for the decaying learning rate case can be written as follows:
K* ≈ LMH*/(µε̃) + ∆/(µε̃) (10)
= LH*β/(µε̃B) + ∆/(µε̃)
= c̃1/B + c̃2, where c̃1 = LH*β/(µε̃) and c̃2 = ∆/(µε̃).
Here, ε̃, ∆, L, H*, β, and µ are all constant or become fixed for a given workload.
This result, along with the fixed learning rate case, establishes the theoretical account of the effect of data parallelism by precisely and generally describing the relationship between batch size B and steps-to-result K*." }, { "heading": "B SCALE OF OUR EXPERIMENTS", "text": "For a given workload of {data set, network model, optimization algorithm} and a study setting of {batch size, sparsity level}, we execute 100 training runs with different metaparameters to measure steps-to-result. In each run, we evaluate the intermediate models every 16 (for MNIST) or 32 (for CIFAR-10) iterations, on the entire validation set, to check whether they have reached a goal error. This means that, in order to plot the results for the workload of {MNIST, Simple-CNN, SGD}, for example, one would need to perform 14 (batch sizes; 2¹ to 2¹⁴) × 4 (sparsity levels; 0, 50, 70, 90%) × 100 (runs) × 40,000 (max training iteration preset) / 16 (evaluation interval) = 14,000,000 evaluations. Assuming that evaluating the Simple-CNN model on the entire MNIST validation set takes only a second on a modern GPU, it would take 14,000,000 (evaluations) × 1 (second per evaluation) / 3600 (seconds per hour) ≈ 3888 hours, or 162 days. Of course, there are multiple ways to reduce this cost; for instance, we may decide to stop as soon as a run hits the goal error instead of running until the max training iteration limit. Or, simply reducing any factor listed above that contributes to the experiment cost (e.g., the number of batch sizes) can help reduce the time, although at the expense of the quality of the experiments. We should also point out that this is only for one workload, where we assumed that an evaluation takes only a second. The cost can increase quite drastically if the workload becomes more complex and requires more time for evaluation and training (e.g., CIFAR-10). Not to mention, we have tested multiple workloads besides the above example, in order to confirm the generality of the findings in this work." }
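As a usage illustration of the scaling relation K* ≈ c1/B + c2 (Eq. (3)) and its decaying-learning-rate analogue (Eq. (10)), a minimal sketch (ours; the (B, K*) measurements below are hypothetical placeholders, not data from the paper) of fitting the two constants by least squares in 1/B:

import numpy as np

# hypothetical steps-to-result measurements across batch sizes
B = np.array([2.0, 8.0, 32.0, 128.0, 512.0, 2048.0, 8192.0])
K = np.array([40000.0, 10000.0, 2600.0, 800.0, 350.0, 270.0, 250.0])

X = np.stack([1.0 / B, np.ones_like(B)], axis=1)   # model: K = c1 * (1/B) + c2
(c1, c2), *_ = np.linalg.lstsq(X, K, rcond=None)

B_crit = c1 / c2   # batch size at which the two terms match: a rough estimate
                   # of where linear scaling gives way to diminishing returns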
, { "heading": "C MORE ON LIPSCHITZ SMOOTHNESS ANALYSIS", "text": "We measure the local Lipschitz constant of ∇f based on a Hessian-free method as used in Zhang et al. (2020). Precisely, the local smoothness (or Lipschitz constant of the gradient) at iteration k is estimated as follows:
L̂(w_k) = max_{γ∈{δ,2δ,...,1}} ‖∇f(w_k + γd) − ∇f(w_k)‖₂ / ‖γd‖₂, (11)
where d = w_{k+1} − w_k and δ ∈ (0, 1), which we set to δ = 0.1. The expected gradient ∇f is computed on the entire training set, and we measure L̂(w_k) every 100 iterations throughout training. This method searches for the maximum bound on the smoothness along the direction between w_{k+1} and w_k, based on the intuition that the degree of deviation of the linearly approximated objective function is bounded by the variation of the gradient between w_{k+1} and w_k. Furthermore, while ReLU networks (e.g., Simple-CNN, ResNet-8) can only be piecewise smooth, the smoothness can still be measured for the same reason that we can measure the gradient (i.e., it only requires differentiability).
We also empirically measure the changes in ∆ and β induced by sparsity. Recall that these are the other elements of c1 that can be affected by sparsity along with the Lipschitz constant L. When we measure these quantities for Simple-CNN, we obtain the following results: ∆s/∆d ≈ 4.68/4.66 ≈ 1.00, and βs/βd ≈ 107.39/197.06 ≈ 0.54; more precisely, ∆ does not change much since neither f(w_1) nor f_∞ changes much, and βs/βd can be measured by the ratio of the variances of the gradients of the sparse and dense networks at B = 1. We have already provided Ls and Ld in Figure 4, which gives Ls/Ld ≈ 1.76/0.57 ≈ 3.09. Here, s and d denote sparse (90%) and dense, respectively. Notice that if we combine all the changes in ∆, β, and L due to sparsity and compute c1,s/c1,d, it becomes 1.00 × 0.54 × 3.09 ≈ 1.67. Importantly, c1,s/c1,d > 1 means that c1 has increased due to sparsity, and since the increase in the Lipschitz constant L played the major role therein, these results indicate that the general difficulty of training sparse networks is indeed caused by reduced smoothness. We further note that this degree of change roughly fits the range of K*s/K*d shown in Figure 12." }
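A minimal sketch (ours; PyTorch is assumed, and full_grad is a hypothetical helper that returns the flattened gradient of the training loss over the entire training set) of the local-smoothness estimate in Eq. (11):

import torch

def local_smoothness(w_k, w_k1, full_grad, delta=0.1):
    # Estimate L_hat(w_k) of Eq. (11): the max over gamma in {delta, 2*delta, ..., 1}
    # of ||grad f(w_k + gamma*d) - grad f(w_k)|| / ||gamma*d||, d = w_{k+1} - w_k.
    d = w_k1 - w_k
    g_k = full_grad(w_k)
    L_hat = 0.0
    steps = int(round(1.0 / delta))
    for i in range(1, steps + 1):
        gamma = i * delta
        num = torch.norm(full_grad(w_k + gamma * d) - g_k)
        den = torch.norm(gamma * d)
        L_hat = max(L_hat, (num / den).item())
    return L_hat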
" }, { "heading": "D ADDITIONAL RESULTS", "text": "In this section, we provide additional experimental results that are not included in the main paper. In Section D.1, we provide further results on the effects of data parallelism and sparsity in Figures 7, 8, 9, 10, and 11. In Figure 12, we present the ratio of steps-to-result between sparse (90%) and dense networks across different batch sizes for all workloads presented in this work; this shows how much increase in steps-to-result is induced by introducing sparsity, and is therefore used to study the general difficulty of training sparse neural networks. In Section D.2, we provide metaparameter search results for a subset of the workloads studied in this work.
D.1 EFFECTS OF DATA PARALLELISM AND SPARSITY
D.2 METAPARAMETER SEARCH RESULTS
[Figures: metaparameter search curves plotting steps-to-result against learning rate (log scale, $10^{-4}$ to $10^0$); panels cover settings such as S: 0% with B ∈ {4, 32, 256, 2048, 8192} and S: 90% with B: 256, with vertical axes (steps) on scales of 1e4 or 1e5.]" }, { "heading": "E IMPLEMENTATION DETAILS", "text": "Data parallelism and sparsity. By data parallelism, we refer to utilizing a parallel computing system where the training data is distributed to multiple processors for gradient computation, so that the training process can be accelerated. For the purpose of this work, we consider the simplest setting of synchronized distributed systems, in which the degree of parallelism equates to the size of the mini-batch used for training on a regular single-node system. This effectively means that the effect of data parallelism can be measured by increasing the batch size. By sparsity, we refer to pruning parameters of a neural network model such that the remaining parameters are distributed sparsely over the network. For the purpose of this work, we employ a recent pruning-at-initialization method to obtain sparse networks, since the networks must not undergo any training beforehand so as to measure the effects of data parallelism while training from scratch.
Software and hardware. We used TensorFlow libraries (Abadi et al., 2016) and a compute cluster with multiple nodes of CPUs (Intel Xeon Gold 5120 CPU @ 2.20GHz with 28 cores; 4 in total) and GPUs (Tesla P100 and V100; 16GB; 28 in total)." } ]
2021
null
SP:e30f87da31dcb7e7ee9dd0abd503731d11d5160a
[ "This paper studies the self-supervised code functional representation learning and proposes a method called ContraCode. ContraCode utilizes some code functionality invariant transformations to generate positive pairs from the same code and negative pairs from different codes. After that, these codes pairs will be used to do the contrastive pre-training. Experiment results based on two tasks are reported." ]
Machine-aided programming tools such as automated type predictors and autocomplete are increasingly learning-based. However, current approaches predominantly rely on supervised learning with task-specific datasets. We propose Contrastive Code Representation Learning (ContraCode), a self-supervised algorithm for learning task-agnostic semantic representations of programs via contrastive learning. Our approach uses no human-provided labels, only the raw text of programs. ContraCode optimizes for a representation that is invariant to semantic-preserving code transformations. We develop an automated source-to-source compiler that generates textually divergent variants of source programs. We then train a neural network to identify variants of anchor programs within a large batch of non-equivalent negatives. To solve this task, the network must extract features representing the functionality, not form, of the program. In experiments, we pre-train ContraCode with 1.8M unannotated JavaScript methods mined from GitHub, then transfer to downstream tasks by fine-tuning. Pre-training with ContraCode consistently improves the F1 score of code summarization baselines and top-1 accuracy of type inference baselines by 2% to 13%. ContraCode achieves 9% higher top-1 accuracy than the current state-of-the-art static type analyzer for TypeScript. Finally, representations learned through a hybrid contrastive and reconstruction objective transfer in zero-shot to code clone detection with +10% AUROC over a static text similarity measure and +5% over reconstruction alone.
[]
[ { "authors": [ "Miltiadis Allamanis" ], "title": "The adverse effects of code duplication in machine learning models of code", "venue": "In Proceedings of the 2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software,", "year": 2019 }, { "authors": [ "Miltiadis Allamanis", "Hao Peng", "Charles Sutton" ], "title": "A convolutional attention network for extreme summarization of source code", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Miltiadis Allamanis", "Earl T Barr", "Premkumar Devanbu", "Charles Sutton" ], "title": "A survey of machine learning for big code and naturalness", "venue": "ACM Computing Surveys (CSUR),", "year": 2018 }, { "authors": [ "Miltiadis Allamanis", "Earl T. Barr", "Soline Ducousso", "Zheng Gao" ], "title": "Typilus: Neural type hints", "venue": "In Programming Language Design and Implementation (PLDI),", "year": 2020 }, { "authors": [ "Uri Alon", "Shaked Brody", "Omer Levy", "Eran Yahav" ], "title": "code2seq: Generating sequences from structured representations of code", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Uri Alon", "Meital Zilberstein", "Omer Levy", "Eran Yahav" ], "title": "code2vec: Learning distributed representations of code", "venue": "Proceedings of the ACM on Programming Languages,", "year": 2019 }, { "authors": [ "Sorav Bansal", "Alex Aiken" ], "title": "Automatic generation of peephole superoptimizers", "venue": "Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS XII,", "year": 2006 }, { "authors": [ "Tal Ben-Nun", "Alice Shoshana Jakobovits", "Torsten Hoefler" ], "title": "Neural code comprehension: A learnable representation of code semantics", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Samuel Benton", "Ali Ghanbari", "Lingming Zhang" ], "title": "Defexts: A curated dataset of reproducible real-world bugs for modern jvm languages", "venue": "IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings (ICSE-Companion),", "year": 2019 }, { "authors": [ "Pavol Bielik", "Martin Vechev" ], "title": "Adversarial robustness for code", "venue": null, "year": 2020 }, { "authors": [ "Gavin Bierman", "Martín Abadi", "Mads Torgersen" ], "title": "Understanding typescript", "venue": "In European Conference on Object-Oriented Programming (ECOOP),", "year": 2014 }, { "authors": [ "Jane Bromley", "Isabelle Guyon", "Yann LeCun", "Eduard Säckinger", "Roopak Shah" ], "title": "Signature verification using a\" siamese\" time delay neural network", "venue": "In Advances in neural information processing systems,", "year": 1994 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In International Conference on Macnine Learning,", "year": 2020 }, { "authors": [ "Xinlei Chen", "Haoqi Fan", "Ross Girshick", "Kaiming He" ], "title": "Improved baselines with momentum contrastive learning", "venue": "arXiv preprint arXiv:2003.04297,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In Annual Conference of the North American Chapter of the Association for Computational Linguistics,", "year": 2018 }, { 
"authors": [ "Dumitru Erhan", "Yoshua Bengio", "Aaron Courville", "Pierre-Antoine Manzagol", "Pascal Vincent", "Samy Bengio" ], "title": "Why does unsupervised pre-training help deep learning", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Hongchao Fang", "Pengtao Xie" ], "title": "CERT: Contrastive self-supervised learning for language understanding", "venue": "arXiv preprint arXiv:2005.12766,", "year": 2020 }, { "authors": [ "Zhangyin Feng", "Daya Guo", "Duyu Tang", "Nan Duan", "Xiaocheng Feng", "Ming Gong", "Linjun Shou", "Bing Qin", "Ting Liu", "Daxin Jiang" ], "title": "CodeBERT: A pre-trained model for programming and natural languages", "venue": "arXiv preprint arXiv:2002.08155,", "year": 2020 }, { "authors": [ "Rudolf Ferenc", "Zoltán Tóth", "Gergely Ladányi", "István Siket", "Tibor Gyimóthy" ], "title": "A public unified bug dataset for Java", "venue": "In Proceedings of the 14th International Conference on Predictive Models and Data Analytics in Software Engineering,", "year": 2018 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "arXiv preprint arXiv:1803.07728,", "year": 2018 }, { "authors": [ "John M. Giorgi", "Osvald Nitski", "Gary D. Bader", "Bo Wang" ], "title": "DeCLUTR: Deep contrastive learning for unsupervised textual representations", "venue": "arXiv preprint arXiv:2006.03659,", "year": 2020 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Raia Hadsell", "Sumit Chopra", "Yann LeCun" ], "title": "Dimensionality reduction by learning an invariant mapping", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06),", "year": 2006 }, { "authors": [ "Yaru Hao", "Li Dong", "Furu Wei", "Ke Xu" ], "title": "Visualizing and understanding the effectiveness of bert", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": null, "year": 1911 }, { "authors": [ "Vincent J Hellendoorn", "Christian Bird", "Earl T Barr", "Miltiadis Allamanis" ], "title": "Deep learning type inference", "venue": "In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering,", "year": 2018 }, { "authors": [ "Olivier J Hénaff", "Aravind Srinivas", "Jeffrey De Fauw", "Ali Razavi", "Carl Doersch", "S.M. 
Ali Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jeremy Howard", "Sebastian Ruder" ], "title": "Universal language model fine-tuning for text classification", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Zhiheng Huang", "Wei Xu", "Kai Yu" ], "title": "Bidirectional lstm-crf models for sequence tagging", "venue": "arXiv preprint arXiv:1508.01991,", "year": 2015 }, { "authors": [ "Hamel Husain", "Ho-Hsiang Wu", "Tiferet Gazit", "Miltiadis Allamanis", "Marc Brockschmidt" ], "title": "CodeSearchNet challenge: Evaluating the state of semantic code search", "venue": null, "year": 1909 }, { "authors": [ "Yasir Hussain", "Zhiqiu Huang", "Yu Zhou", "Senzhang Wang" ], "title": "Deep transfer learning for source code modeling", "venue": "International Journal of Software Engineering and Knowledge Engineering,", "year": 2020 }, { "authors": [ "Srinivasan Iyer", "Ioannis Konstas", "Alvin Cheung", "Luke Zettlemoyer" ], "title": "Summarizing source code using a neural attention model", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2016 }, { "authors": [ "Rajeev Joshi", "Greg Nelson", "Keith Randall" ], "title": "Denali: A goal-directed superoptimizer", "venue": "In Proceedings of the ACM SIGPLAN 2002 Conference on Programming Language Design and Implementation,", "year": 2002 }, { "authors": [ "T. Kamiya", "S. Kusumoto", "K. Inoue" ], "title": "Ccfinder: a multilinguistic token-based code clone detection system for large scale source code", "venue": "IEEE Transactions on Software Engineering,", "year": 2002 }, { "authors": [ "Aditya Kanade", "Petros Maniatis", "Gogul Balakrishnan", "Kensen Shi" ], "title": "Pre-trained contextual embedding of source", "venue": "code. 
ArXiv,", "year": 2020 }, { "authors": [ "Rafael-Michael Karampatsis", "Charles Sutton" ], "title": "SCELMo: Source code embeddings from language models", "venue": "arXiv preprint arXiv:2004.13214,", "year": 2020 }, { "authors": [ "Miryung Kim", "Thomas Zimmermann", "Nachiappan Nagappan" ], "title": "A field study of refactoring challenges and benefits", "venue": "In Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering,", "year": 2012 }, { "authors": [ "Taku Kudo" ], "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Henry Massalin" ], "title": "Superoptimizer: A look at the smallest program", "venue": "Proceedings of the Second International Conference on Architectual Support for Programming Languages and Operating Systems,", "year": 1987 }, { "authors": [ "Sebastian McKenzie" ], "title": "Babel: compiler for writing next generation javascript", "venue": "https://github. com/babel/babel,", "year": 2020 }, { "authors": [ "Charith Mendis", "Alex Renda", "Saman Amarasinghe", "Michael Carbin" ], "title": "Ithemal: Accurate, portable and fast basic block throughput estimation using deep neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Dana Movshovitz-Attias", "William Cohen" ], "title": "Natural language models for predicting programming comments. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume", "venue": null, "year": 2013 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Irene Vlassi Pandi", "Earl T. Barr", "Andrew D. Gordon", "Charles Sutton" ], "title": "Opttyper: Probabilistic type inference by optimising logical and natural constraints, 2020", "venue": null, "year": 2020 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Matthew E. 
Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "In Proc. of NAACL,", "year": 2018 }, { "authors": [ "Michael Pradel", "Koushik Sen" ], "title": "Deepbugs: A learning approach to name-based bug detection", "venue": "Proceedings of the ACM on Programming Languages,", "year": 2018 }, { "authors": [ "Michael Pradel", "Georgios Gousios", "Jason Liu", "Satish Chandra" ], "title": "Typewriter: Neural type prediction with search-based validation", "venue": "arXiv preprint arXiv:1912.03768,", "year": 2019 }, { "authors": [ "Md. Rafiqul Islam Rabin", "Mohammad Amin Alipour" ], "title": "Evaluation of generalizability of neural program analyzers under semantic-preserving transformations, 2020", "venue": null, "year": 2020 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": null, "year": 2018 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": null, "year": 2019 }, { "authors": [ "Henry Gordon Rice" ], "title": "Classes of recursively enumerable sets and their decision problems", "venue": "Transactions of the American Mathematical Society,", "year": 1953 }, { "authors": [ "Fábio Santos" ], "title": "Terser: Javascript parser, mangler and compressor toolkit for es6+", "venue": "https: //github.com/terser/terser,", "year": 2020 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Mike Schuster", "Kuldip Paliwal" ], "title": "Bidirectional recurrent neural networks", "venue": "Signal Processing, IEEE Transactions on,", "year": 1997 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "arXiv preprint arXiv:1508.07909,", "year": 2015 }, { "authors": [ "Richard Shin", "Neel Kant", "Kavi Gupta", "Christopher Bender", "Brandon Trabucco", "Rishabh Singh", "Dawn Song" ], "title": "Synthetic datasets for neural program synthesis", "venue": null, "year": 1912 }, { "authors": [ "Jeffrey Svajlenko", "Judith F. Islam", "Iman Keivanloo", "Chanchal K. 
Roy", "Mohammad Mamun Mia" ], "title": "Towards a big data curated benchmark of inter-project code clones", "venue": "In Proceedings of the 2014 IEEE International Conference on Software Maintenance and Evolution,", "year": 2014 }, { "authors": [ "Wilson L Taylor" ], "title": "Cloze procedure”: A new tool for measuring readability", "venue": "Journalism Quarterly,", "year": 1953 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": null, "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ke Wang", "Mihai Christodorescu" ], "title": "COSET: A benchmark for evaluating neural program embeddings", "venue": "arXiv preprint arXiv:1905.11445,", "year": 2019 }, { "authors": [ "Ke Wang", "Zhendong Su" ], "title": "Learning blended, precise semantic program embeddings", "venue": "ArXiv,", "year": 2019 }, { "authors": [ "Jiayi Wei", "Maruth Goyal", "Greg Durrett", "Isil Dillig" ], "title": "Lambdanet: Probabilistic type inference using graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Mike Wu", "Chengxu Zhuang", "Milan Mosse", "Daniel Yamins", "Noah Goodman" ], "title": "On mutual information in contrastive learning for visual representations, 2020", "venue": null, "year": 2020 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance discrimination", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Colorful image colorization", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "He" ], "title": "ContraCode pretraining The InfoNCE objective (1) is minimized with temperature t = 0.07 following He et al. (2019)", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Programmers increasingly rely on machine-aided programming tools to aid software development (Kim et al., 2012). However, the wide diversity of programs encountered in practice limits the generalization of hand-written rules. Catching semantic bugs such as naming errors requires deeper language understanding, motivating learning-based programming tools. Recent work uses machine learning for bug detection (Pradel & Sen, 2018) and optimization (Mendis et al., 2019). Consider predicting the type of the variable declaration “var median = ...;”. Static analysis fails as the type is underspecified, but the variable name indicates the statement is a float.\nProgramming language datasets suffer from scarce annotations due to the time and expertise required to label. State-of-the-art approaches generally rely on either (1) synthetic supervised datasets or (2) self-supervised pre-training. Synthetic auto-generated labels have been used for method naming (Alon et al., 2019a;b) and bug detection (Ferenc et al., 2018; Benton et al., 2019; Pradel & Sen, 2018). However, synthetic code datasets suffer from duplication issues (Allamanis, 2019) and biases (Shin et al., 2019) which degrade generalization. Moreover, auto-generated data does not cover the diverse program behaviors encountered in the wild.\nIn contrast, self-supervised learning can leverage large open-source repositories such as GitHub with limited or no annotations. Inspired by the success of pre-training in natural language processing, recent work uses self-supervision to learn code representations. Authors have explored context-based token embeddings (Ben-Nun et al., 2018) and masked language modeling, where tokens are corrupted and reconstructed (Feng et al., 2020; Kanade et al., 2020) However, reconstruction focuses on superficial language reasoning and does not explicitly address the underlying program functionality. The resulting models attend to program implementation specifics such as variable names.\nWe hypothesize that programs with the same functionality should have the same underlying representation for downstream code understanding tasks, a principle illustrated in Fig. 1. While it is time\nintensive to identify equivalent programs in a large corpus, it is cheap to leverage static compiler transformations to automatically generate many equivalent versions of a particular source program.\nIn this work, we develop ContraCode, a self-supervised representation learning algorithm that uses source-to-source compiler transformation techniques (e.g., dead code elimination, obfuscation and constant folding) to generate syntactically diverse but functionally equivalent programs. ContraCode uses these equivalent programs to construct a challenging discriminative pretext task that requires the model to identify equivalent programs out of a large dataset of distractors. In doing so, it has to embed the functionality, not the form, of the code. In essence, the domain knowledge from our code transformations induces the knowledge of the structure of programs onto learned representations. The contributions of our work include:\n1. the novel use of compiler-inspired transformations as data augmentations for code,\n2. the concept of program representation learning based on functional equivalence, and\n3. 
3. a detailed analysis of architectures, code transforms, and pre-training strategies, where ContraCode improves static type inference top-1 accuracy by 9%, learned type inference by 2% – 13%, summarization F1 score by up to 8%, and clone detection AUROC by 5% – 10%." }, { "heading": "2 RELATED WORK", "text": "Self-supervised learning (SSL) is a general representation learning strategy where some dimensions or attributes of a datapoint are predicted from its remaining parts. These methods are unsupervised in the sense that they do not rely on labels, but SSL tasks often adapt losses and architectures designed for supervised learning. Self-supervised pre-training has yielded large improvements in both NLP (Howard & Ruder, 2018; Devlin et al., 2018; Radford et al., 2018; 2019) and computer vision (Mahajan et al., 2018) by improving generalization (Erhan et al., 2010; Hao et al., 2019). Weak visual features, such as orientation (Gidaris et al., 2018), color (Zhang et al., 2016), and context (Pathak et al., 2016), are meaningful signals for representation learning (Mahajan et al., 2018).\nContrastive learning unifies many past SSL approaches that compare pairs or collections of similar and dissimilar items (Hadsell et al., 2006). Rather than training the network to predict labels or reconstruct data, contrastive methods minimize the distance between the representations of similar examples (positives) while maximizing the distance between dissimilar examples (negatives). Examples include Siamese networks (Bromley et al., 1994) and triplet losses (Schroff et al., 2015). Contrastive predictive coding (Oord et al., 2018; Hénaff et al., 2019) learns to encode chunks of sequential data to predict future chunks with the InfoNCE loss, a variational lower bound on the mutual information between views of the data (Tian et al., 2019; Wu et al., 2020) inspired by noise-contrastive estimation (Gutmann & Hyvärinen, 2010). In instance discrimination tasks (Wu et al., 2018), views of entire images, rather than patches, are compared. SimCLR (Chen et al., 2020a) and Momentum Contrast (He et al., 2019; Chen et al., 2020b) recently made progress by using many negatives for a dense loss signal. Beyond images, InfoNCE has been applied to NLP (Chuang et al., 2020; Giorgi et al., 2020), but may require supervision (Fang & Xie, 2020).\nCode representation learning There has been substantial work on architectures and tasks for machine learning on code (Allamanis et al., 2018). We adopt the summarization task of Alon et al. (2019a), and the variable type inference task of DeepTyper (Hellendoorn et al., 2018).\nfunction x(maxLine) {\n  const section = { text: '', data };\n  for (; i < maxLine; i += 1) { section.text += `${lines[i]}\\n`; }\n  if (section) { parsingCtx.sections.push(section); }\n}\nOriginal JavaScript method\nfunction x(t) { const n = { 'text': '', 'data': data }; for (; i < t; i += 1) { n.text += lines[i] + '\\n'; } n && parsingCtx.sections.push(n); }\nRenamed variables, explicit object style, explicit concatenation, inline conditional\nfunction x(t){const n={'text':'','data':data};for(;i<t;i+=1)n.text+=lines[i]+'\\n';n&&parsingCtx.sections.push(n)}\nMangled source with compressed whitespace\nFigure 2: A JavaScript method from the unlabeled training set with two automatically generated semantically equivalent programs. The original method is from the StackEdit Markdown editor.
Other authors have explored summarization (Movshovitz-Attias & Cohen, 2013; Allamanis et al., 2016; Iyer et al., 2016) and type inference (Pradel et al., 2019; Pandi et al., 2020; Wei et al., 2020; Allamanis et al., 2020; Bielik & Vechev, 2020) with different languages and datasets. The tree or graph structure of code can be exploited to encode invariances into the representation. Inst2vec (Ben-Nun et al., 2018) locally embeds individual statements in LLVM IR by processing a contextual flow graph with a context prediction objective (Mikolov et al., 2013). Tree-Based CNN embeds the Abstract Syntax Tree (AST) nodes of high-level source code. Code2seq (Alon et al., 2019a) embeds AST paths with an attention-based encoder and LSTM decoder for supervised sequence-to-sequence tasks. Kanade et al. (2020) and Feng et al. (2020) pre-train the Transformer (Vaswani et al., 2017) on code using the masked language modeling objective (Devlin et al., 2018), an instance of the cloze task (Taylor, 1953) in which the model reconstructs corrupted tokens. Recurrent networks have also been pre-trained on code (Hussain et al., 2020) as language models (Peters et al., 2018; Karampatsis & Sutton, 2020). Wang & Christodorescu (2019) and Wang & Su (2019) assess the stability of program analyzers under semi-automated program transformations. Concurrent work by Rabin & Alipour (2020) found that code2vec and code2seq often change their classifications when statements are permuted, variables are renamed, or other semantics-preserving transformations are applied." }, { "heading": "3 METHOD: CONTRASTIVE CODE REPRESENTATION LEARNING", "text": "Understanding program functionality and global structure is important for difficult tasks like summarizing code in natural language. For these problems, learned code representations should be similar for functionally equivalent programs and dissimilar for non-equivalent programs (Figure 1). The principle of contrastive learning offers a simple objective for learning such representations if the data can be organized into pairs of positives and negatives. We use each pair to shape the representation space, drawing positives together and pushing negatives apart. However, a major question remains: given an unlabeled corpus of programs, how do we identify or generate similar programs? We address this question in Sec. 3.1, then introduce our learning framework in Sec. 3.2." }, { "heading": "3.1 COMPILATION AS DATA AUGMENTATION", "text": "Modern programming languages afford great flexibility to software developers, allowing them to implement the same desired functionality in different ways. Crowdsourced datasets mined from developers, such as GitHub repositories, have many near-duplicates in terms of textual similarity (Allamanis, 2019), and are bound to contain even more functional equivalences for common tasks. Satisfiability solvers can identify some of these equivalent programs (Joshi et al., 2002; Bansal & Aiken, 2006), but functional equivalence is undecidable in general (Rice, 1953), and formal documentation of the language semantics is required. Programs can instead be compared approximately using test cases (Massalin, 1987), but this is costly and requires executing untrusted code.
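For intuition, test-case-based comparison amounts to something like the following minimal sketch (helper names are hypothetical; sandboxing and input generation are elided):

import random

def probably_equivalent(f, g, sample_input, trials=1000, seed=0):
    # Monte Carlo check: matching outputs on sampled inputs suggests, but never
    # proves, functional equivalence, and it requires executing both programs.
    rng = random.Random(seed)
    return all(f(x) == g(x) for x in (sample_input(rng) for _ in range(trials)))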
Instead of searching for equivalences, we propose correct-by-construction data augmentation. Our insight is to apply source-to-source compiler transformations to unlabeled code to generate many variants with the same functionality. For example, dead-code elimination (DCE) is a common compiler optimization that removes operations that leave the output of a function unchanged. While DCE preserves program functionality, Wang & Christodorescu (2019) find that up to 12.7% of the predictions of current algorithm classification models change after DCE; supervised datasets were not enough to acquire the domain knowledge that DCE does not matter.
Table 1: We augment programs with 11 automated source-to-source compiler transformations. 10 of the 11 transformations are correct-by-construction and do not modify operational semantics. More details are in Section A.3.
Code compression: Reformatting (R) ✓, Beautification (B) ✓, Compression (C) ✓, Dead-code elimination (DCE) ✓, Type upconversion (T) ✓, Constant folding (CF) ✓
Identifier modification: Variable renaming (VR) ✓, Identifier mangling (IM) ✓
Regularization: Dead-code insertion (DCI) ✓, Subword regularization (SW) ✓, Line subsampling (LS) ✗
(✓ = semantics-preserving transformation, ✗ = lossy transformation)
A particular source code sequence, e.g. “W*x + b”, is parsed unambiguously into a tree-structured representation, “(+ (* W x) b)”. This tree is then transformed by automated traversal algorithms. A rich body of prior programming language work explores parsing and then transforming Abstract Syntax Trees to optimize a program prior to machine code generation. If source code is output rather than machine code, this is called a source-to-source transformation. Source-to-source transformations are common for optimization and obfuscation purposes in dynamic languages like JavaScript. If each transformation preserves code functionality, then any composition of them also preserves code functionality.
We leverage the Babel and Terser compiler infrastructure tools for JavaScript (McKenzie et al., 2020; Santos et al., 2020) to parse code into an Abstract Syntax Tree (AST) and then perform correctness-preserving transformations on method bodies. Table 1 and Appendix A.3 list all transformations, but we broadly group program transformations into three categories. Code compression changes the syntactic structure of code and performs correct-by-construction transformations such as pre-computing constant expressions at compile time. Identifier modification substitutes method and variable names with random tokens, thereby masking part of the semantic information in programs. Finally, transformations for regularization improve model generalization by reducing the number of trivial positive pairs with high text overlap; this group potentially modifies program semantics through the line subsampling pass." }, { "heading": "3.2 CONTRASTIVE PRE-TRAINING", "text": "Representations of semantically equivalent programs (positives) should be closer to each other than to representations of semantically dissimilar programs (negatives). Contrastive learning is a natural framework to induce such invariances into a model by attracting positives while repelling negatives. To adapt recent contrastive learning objectives for images to code representation learning, we leverage the augmentations discussed in Section 3.1.
We extend the Momentum Contrast method (He et al., 2019), which was designed for image representation learning. Our training procedure is depicted in Figure 4. Each transformation is a function τ : P → P, where the space of programs P is composed of both the set of valid ASTs and the set of programs in source form. At the beginning of an iteration, a batch of programs is sampled from a large database.
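Conceptually, the per-program augmentation step (formalized as Algorithm 1 at the end of this section) corresponds to a loop like the following minimal Python sketch; the pass implementations and names are assumptions, and the real pipeline additionally converts between source and AST forms as needed:

import random

def augment(source, passes, probs, rng=random.Random(0)):
    # Apply a random subset of compiler passes in a fixed, pre-specified order.
    # Each pass maps program text to a functionally equivalent (or, for LS,
    # near-equivalent) variant of the program.
    variant = source
    for transform, p in zip(passes, probs):
        if rng.random() < p:
            variant = transform(variant)
    return variant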
Each program x in the batch is transformed twice, using two different random subsets of transformations, to derive textually different query programs and key programs according to Algorithm 1. Unlike computer vision data augmentations such as random cropping, which are stochastic, our compiler-based transformations are deterministic.
To produce a diverse set of transformed programs, we randomly apply a subset of the available compiler passes in a pre-specified order, applying transform τi with probability pi. Intermediate programs are converted between AST and source form as needed. As all augmentations are precomputed, we deduplicate program variants before pre-training. Figure 3 measures this diversity: 89% of the JavaScript functions in our dataset have more than one alternative after applying 20 random sequences of transformations. The remaining programs without syntactically distinct alternatives include one-line functions that are already obfuscated. We apply subword regularization (Kudo, 2018) as a final transformation to derive different tokenizations every batch, so pairs will still differ. All transformations are fast; our compiler transforms 300 functions per second on a single CPU core.
To reduce memory consumption during pre-training, we enqueue past batches to cache activations for negative samples. These cached samples are valid negatives if the queue is smaller than the dataset size. Following He et al. (2019), the query encoder fq is trained via gradient descent while the key encoder fk is trained slowly via an exponential moving average (EMA) of the query encoder parameters. The EMA update stabilizes the pre-computed key embeddings across training iterations. Since keys are only embedded once per epoch, we use a very large set of negatives, over 100K, with minimal additional computational cost and no explicit hard negative mining.
ContraCode supports different encoder architectures. We evaluate contrastive pre-training of Transformer (Vaswani et al., 2017) and BiLSTM (Schuster & Paliwal, 1997; Huang et al., 2015) architectures, with specific details in Section 4.
Pre-training objective The contrastive objective maximizes the similarity of positives without collapsing onto a single representation. Like He et al. (2019), we use InfoNCE (Oord et al., 2018), a tractable objective that frames contrastive learning as a classification task: can the positives be identified among a batch of sampled negatives? InfoNCE computes the probability of classifying the positive (transformed program) by taking the softmax of representation similarities across a batch of negatives. Equation (1) shows the InfoNCE loss for instance discrimination from He et al. (2019), a function whose value is low when q is similar to the positive key embedding k+ and dissimilar to the negative key embeddings k−; t is a temperature hyperparameter proposed by Wu et al. (2018):
\[ \mathcal{L}_{q,k^+,k^-} = -\log \frac{\exp(q \cdot k^+/t)}{\exp(q \cdot k^+/t) + \sum_{k^-} \exp(q \cdot k^-/t)}. \tag{1} \]
The query representation q = fq(xq) is computed by the encoder network fq, where xq is a query program; likewise, k = fk(xk) using the EMA key encoder fk. The views xq and xk depend on the specific domain and pretext task. In our case, the views are tokenized representations of the augmented programs, and the summation over k− in the normalizing denominator is taken over the queue of pre-computed negatives as well as the other non-matching keys in the batch.
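A minimal PyTorch-style sketch of this computation, with batch-first shapes and names of our choosing (the temperature value follows the appendix note of t = 0.07):

import torch
import torch.nn.functional as F

def info_nce(q, k_pos, k_neg, t=0.07):
    # q: (B, D) query embeddings; k_pos: (B, D) positive key embeddings;
    # k_neg: (N, D) queue of cached negative keys. Returns the mean loss.
    l_pos = (q * k_pos).sum(dim=1, keepdim=True) / t   # (B, 1)
    l_neg = q @ k_neg.t() / t                          # (B, N)
    logits = torch.cat([l_pos, l_neg], dim=1)          # the positive is class 0
    labels = torch.zeros(q.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)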
Transfer learning After pre-training converges, the encoder fq is transferred to downstream tasks. As the output space of each task can differ from that of the encoder, we add a task-specific MLP or Transformer decoder after fq, then train the resulting network end-to-end on task data.
Algorithm 1 Stochastic augmentation of programs with two possible encodings (AST or source).
1: Input: Program source x, transformation functions τ1, . . . , τk, transform probabilities p1, . . . , pk
2: V ← {x}, a set of augmented program variants
3: for sample i ← 1 . . . N do
4:   x′ ← x
5:   for transform t ← 1 . . . k do
6:     Sample yt ∼ Bernoulli(pt)
7:     if yt = 1 then
8:       if REQUIRESAST(τt(·)) and ¬ISAST(x′) then x′ ← PARSETOAST(x′)
9:       else if ¬REQUIRESAST(τt(·)) and ISAST(x′) then x′ ← LOWERTOSOURCE(x′)
10:      x′ ← τt(x′)
11:    end if
12:   end for
13:   if ISAST(x′) then x′ ← LOWERTOSOURCE(x′)
14:   V ← V ∪ {x′}
15: end for
16: return V" }, { "heading": "4 EXPERIMENTS", "text": "We evaluate whether self-supervised pre-training with ContraCode improves JavaScript and TypeScript code analysis. We benchmark on (1) extreme code summarization (Allamanis et al., 2016) and (2) TypeScript type inference (Hellendoorn et al., 2018). ContraCode improves accuracy on both tasks. As a baseline self-supervised approach, we pre-train a RoBERTa model with the masked language modeling (MLM) loss on our augmented dataset, then fine-tune it on each downstream task. Contrastive pre-training with our compiler-based augmentations outperforms baseline supervised learning methods as well as MLM self-supervision. To probe the semantic content of representations learned with MLM, ContraCode, and a hybrid model combining both objectives, we evaluate zero-shot performance on code clone detection (Kamiya et al., 2002), a binary classification task that reveals that contrastive and hybrid representations are highly predictive of program functionality in the wild. Further, we find it is better to augment the large set of unlabeled programs during pre-training than to augment the smaller supervised datasets. As ContraCode makes no modifications to model architecture, we find that contrastive pre-training can be applied to diverse baselines while improving accuracy across the board.
We pre-train over a large corpus of methods extracted from popular GitHub repositories. The CodeSearchNet dataset collected by Husain et al. (2019) contains 1,843,099 JavaScript programs; only 81,487 methods have both a documentation string and a method name. The asymmetry between labeled and unlabeled programs stems from JavaScript coding practices, where anonymous functions are widespread. The pre-training dataset described in Section 3.1 is the result of augmenting CodeSearchNet’s 1.8M programs." }, { "heading": "4.1 IMPACT OF CONTRACODE PRE-TRAINING ON TYPE INFERENCE", "text": "JavaScript is a dynamically typed language, where variable types are determined at runtime based on the values they represent. Annotating code with types helps tools flag possible bugs before runtime by statically detecting incompatible types, and the annotations also help programmers document and understand code; however, maintaining type annotations is tedious. Type inference tools automatically predict variable types from context.
To learn to infer types, we use the annotated dataset of TypeScript programs from DeepTyper (Hellendoorn et al., 2018), excluding GitHub repositories that were made private or deleted since publication. The training set consists of 15,570 TypeScript files from 187 projects with 6,902,642 total tokens. Validation and test sets are from held-out repositories. For additional supervision during training, types inferred by static analysis are added to the user-defined types as targets. All type annotations are removed from the input to the model. We evaluate a 2-layer bidirectional LSTM, as used by DeepTyper, and a 6-layer Transformer, modified from RoBERTa to have a comparable parameter count. A 2-layer MLP head predicts types from the model’s embedding of each token. We perform early stopping based on validation set top-1 accuracy.
Benefiting from pre-training is challenging because it requires knowledge transfer across dialects: our models are pre-trained on JavaScript, not TypeScript. TypeScript supports a superset of the JavaScript grammar, adding type annotations and syntactic sugar that must be learned during fine-tuning. Further, the pre-training dataset consists of methods, while the DeepTyper dataset includes
For additional supervision during training, additional types are inferred by static analysis to augment user-defined types as targets. All type annotations are removed from the input to the model. We evaluate a 2-layer Bidirectional LSTM, as used by DeepTyper, and a 6-layer Transformer, modified from RoBERTa to have a comparable parameter count. A 2-layer MLP head predicts types from the model’s embedding of each token. We perform early stopping based on validation set top-1 accuracy.\nBenefiting from pre-training is challenging because it requires knowledge transfer across dialects. Our models are pre-trained on JavaScript, not TypeScript. TypeScript supports a superset of the JavaScript grammar, adding type annotations and syntactic sugar that must be learned during finetuning. Further, the pre-training dataset consists of methods, while the DeepTyper dataset includes\nentire modules. Table 2 summarizes results. Contrastive pre-training outperforms all baseline learned methods, showing meaningful transfer. Our best-performing model (bottom row) achieves +8.3% higher top-1 accuracy than a supervised Transformer model trained from scratch, +13.2% higher than a pre-trained RoBERTa model and +2.3% higher than DeepTyper.\nContraCode can also be applied in a drop-in fashion to each of the baselines without modifying model architecture. Simply pre-training each baseline with our contrastive objective and data augmentations yields absolute accuracy improvements of +1.2%, +6.3%, +2.3% top-1 and +1.8%, +5.7%, +2.8% top-5 over the Transformer, RoBERTa, and DeepTyper, respectively. The RoBERTa baseline may perform poorly since its masked language modeling (MLM) objective focuses on token reconstruction that is overly sensitive to local syntactic structure. To combine the approaches, we minimized our loss in addition to MLM as a hybrid local-global objective during pre-training.\nLearning outperforms static analysis by a large margin. Overall, ContraCode achieves +8.9% higher top-1 accuracy than the best static type inference system, the built-in TypeScript CheckJS system, showing the promise of learned code analysis. Surfacing multiple candidate types can be useful to users. While CheckJS only produces a single prediction which is often incorrect, one of the top-5 predictions of ContraCode is correct for 85.55% of labeled tokens." }, { "heading": "4.2 IMPACT OF CONTRACODE PRE-TRAINING ON EXTREME CODE SUMMARIZATION", "text": "The extreme code summarization task asks a model to predict the name of a method given its body (Allamanis et al., 2016). Tokenized method names often contain a short summary of functionality, such as reverseString(...). Summarization models could explain obfuscated or poorly documented code. We create a JavaScript summarization dataset using the 81,487 labeled methods in the CodeSearchNet dataset. The method name is masked in the declaration of the function and then predicted by a sequence-to-sequence model with an autoregressive decoder trained to maximize log likelihood of the ground-truth name, a form of abstractive summarization. All models overfit, so we use early stopping according to validation loss. As proposed by Allamanis et al. 
(2016), we evaluate model predictions by precision, recall, and F1 scores over the set of method name tokens.
Table 3 shows code summarization results in four settings: (1) supervised training using baseline tree-structured architectures that analyze the AST (code2vec, code2seq), (2) pre-training on all 1.84M programs using masked language modeling followed by fine-tuning on the labeled programs (RoBERTa), (3) supervised training from scratch with a Transformer architecture, and (4) contrastive pre-training on all 1.84M programs followed by fine-tuning with augmentations (ContraCode).
Contrastive pre-training with fine-tuning outperforms the prior code2seq model, a competitive supervised baseline, by 8.2% in test precision, 7.3% in recall, and 7.9% in F1 score. The tree-based code2seq architecture is a way to encode code-specific invariances into the model, while contrastive pre-training induces domain invariances through data augmentation; the reduced inductive biases of the Transformer model architecture lead to better performance. ContraCode outperforms self-supervised pre-training with RoBERTa by 4.8% F1. ContraCode also achieves higher performance than the Transformer learned from scratch with the same network architecture. While this improvement is relatively smaller, code summarization is a difficult task: naming conventions are not consistent between programmers, and the metric measures exact token matches." }, { "heading": "4.3 PROBING REPRESENTATIONS OF FUNCTIONALITY: ZERO-SHOT CODE CLONE DETECTION", "text": "ContraCode learns to match variants of programs with similar functionality. While these transformations produce highly diverse token sequences (Section A.4), they are artificial and do not change the underlying algorithm. Human programmers can solve a problem with many data structures, algorithms, and programming models. Are pre-trained representations consistent across programs written by different people? We benchmark on the code clone detection task, a binary classification task distinguishing pairs of programs that solve the same problem from pairs that solve different ones. This is useful for deduplicating and refactoring code, or for checking approximate code correctness.
Datasets like BigCloneBench (Svajlenko et al., 2014) exist, but to the best of our knowledge, there is no benchmark for the JavaScript programming language. We collected 274 in-the-wild JavaScript programs correctly solving 33 problems from the HackerRank interview preparation website. There are 2,065 pairs solving the same problem and 70K pairs solving different problems, which we randomly subsample to 2,065 to balance the classes. Since we probe zero-shot performance, there is no training set. Traditional code analysis methods for clone detection measure textual similarity. As a baseline heuristic classifier, we threshold the dissimilarity score (Eq. 2), a scaled edit distance between two normalized and tokenized programs (to exclude formatting changes). For continuous representations, we threshold the cosine similarity $u^\top v / (\|u\|\|v\|)$. Table 4 shows results according to the area under the ROC curve (AUROC) and average precision (AP, the area under the precision-recall curve). Continuous representations improve clone detection over the heuristic. However, self-supervision through masked language modeling for nearly 100 epochs of pre-training does not help, indicating that MLM is a poor fit for representing functionality. Contrastive pre-training achieves +6.21% higher AUROC than the baseline.
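Both classifiers reduce to thresholding a scalar score; a minimal sketch of the two scores, where token sequences and embedding vectors are assumed inputs:

import numpy as np

def edit_distance(a, b):
    # Token-level Levenshtein distance via the standard dynamic program.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def dissimilarity(a, b):
    # Scaled edit distance of Eq. (2), thresholded by the heuristic classifier.
    return edit_distance(a, b) / max(len(a), len(b))

def cosine_similarity(u, v):
    # Score thresholded for the learned continuous representations.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))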
A hybrid objective combining the contrastive loss and MLM has the best performance, with +10% AUROC (+5.14% over MLM alone)." }, { "heading": "4.4 UNDERSTANDING THE IMPORTANCE OF DATA AUGMENTATION", "text": "We first analyze the effect of our proposed augmentations on supervised learning without a pre-training phase. We then study the importance of individual augmentations during pre-training.
Supervised learning with data augmentation As a baseline, we re-train models from scratch with compiler transforms applied during supervised learning rather than during pre-training. Data augmentation artificially expands labeled training sets. For sequence-to-sequence summarization, we apply a variety of augmentations; these all preserve the method name label. For type inference, labels are aligned to input tokens, so they must be realigned after transformation; we apply all token-level transformations that track label locations.
Table 5: Compiler data augmentations degrade performance when training supervised models from scratch.
Code summarization (F1): Transformer (Table 3) 16.86; w/ LS,SW,VR,DCI aug. 15.65.
Type inference (Acc@1): Transformer (Table 2) 45.66; w/ SW reg. 43.96; w/ LS,SW aug. 44.14. DeepTyper (Table 2) 51.73; w/ SW reg. 49.93; w/ LS,SW aug. 50.93; w/ stronger LS,SW aug. 50.33.
Table 5 shows the results. Compiler-based data augmentations degrade supervised models, perhaps by creating a training distribution not reflective of evaluation programs. However, as shown in Sections 4.1 – 4.3, augmenting during ContraCode pre-training yields a more robust model. Our contrastive learning framework also allows learning over large numbers of unlabeled programs that supervised learning alone cannot leverage. The ablation indicates that augmentations alone do not suffice, and that contrastive learning is important.
Ablating pre-training data augmentations Some data augmentations may be more valuable than others for learning a representation via instance discrimination. Empirically, pre-training converges faster with a smaller set of augmentations at the same batch size, since the positives are syntactically more similar, but this hurts downstream performance. Table 6 shows that type inference accuracy degrades when different groups of augmentations are removed. Semantics-preserving code compression passes that require code analysis are the most important, improving top-1 accuracy by 1.95% when included. Line subsampling serves as a regularizer but changes program semantics; LS is relatively less important, though it does help accuracy. Identifier modification passes preserve semantics but remove potentially useful naming information; removing these hurts accuracy the least.
Additional results We perform additional ablations in Section A.1 by transferring different parts of the network to downstream tasks, computing the contrastive objective with representations taken from different encoder layers, varying the architecture, and tuning the pre-training procedure. These experiments suggest that as many parameters as possible should be transferred to the downstream task. Details of the pre-training strategy are also important: computing the contrastive objective using a “global” representation q summarizing the whole input sequence xq outperforms a more “local” representation based on aggregating token representations. Further, a large batch size helps stabilize pre-training. Section A.2 includes qualitative results."
}, { "heading": "5 CONCLUSIONS", "text": "Large-scale unannotated repositories of code like GitHub are a powerful resource for learning machine-aided programming tools. However, most current approaches to code representation learning do not leverage unannotated data. We propose ContraCode, a contrastive self-supervised algorithm that learns representations that are invariant to code transformations. Our method optimizes for this invariance via novel compiler-based data augmentations for code. ContraCode significantly improves the accuracy of extreme code summarization baselines (+2.3% to +13.2%), TypeScript type inference models (up to +7.9% F1) and code clone detection (+5 to +10% AUROC). ContraCode outperforms self-supervised RoBERTa pre-training. Moreover, contrastive pre-training outperforms supervised training with our augmentations. As ContraCode makes no modifications to model architecture and simply adds a training phase, it consistently improves accuracies when applied to diverse baselines." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 ADDITIONAL RESULTS AND ABLATIONS", "text": "Code clone detection ROC and PR curves Figure 5 plots true postive rate vs false positive rate and precision vs recall for different zero-shot classifiers on the code clone detection downstream tasks. These classifiers threshold a similarity score given by token-level edit distance for the heuristic approach or cosine similarity for the neural network representations. The hybrid self-supervised model combining ContraCode’s contrastive objective and masked language modeling achieves better tradeoffs than the other approaches.\nWhich part of the model should be transferred? SimCLR (Chen et al., 2020a) proposed using a small MLP head to reduce the dimensionality of the representation used in the InfoNCE loss during pre-training, and did not transfer the MLP to the downstream image-classification task. In contrast, we find it beneficial to transfer part of the contrastive MLP head to type inference, showing a 2% improvement in top-5 accuracy over transferring the encoder only (Table 7). We believe the improvement stems from fine-tuning both the encoder and MLP which allows feature adaptation, while SimCLR trained a linear model on top of frozen features. We only transferred the MLP when contrasting the mean of token embeddings during pre-training, not the terminal hidden states, as the dimensionality of the MLP head differs. These representations are compared next.\nShould we pre-train global or local representations? We compare pre-training DeepTyper with two variants of ContraCode. We either use the mean of token hidden states across the program (averaging local features), or the terminal hidden states as input to the MLP used to extract the contrastive representation q = fq(x) (global features). Token-level features might capture more syntactic details, but averaging pooling ignores order. Table 8 shows the accuracy of a BiLSTM pre-trained with each strategy. Using the global features for pre-training yields significantly improved performance, +2.38% acc@1 after 10K iterations of pre-training (not converged for the purposes of ablation). The global pre-training strategy achieves the best results in Table 2.\nDo pre-trained encoders help more with shallow decoders? For the sequence-to-sequence code summarization task, ContraCode only pre-trains the encoder of the Transformer. 
Do pre-trained encoders help more with shallow decoders? For the sequence-to-sequence code summarization task, ContraCode only pre-trains the encoder of the Transformer. In Table 9, we ablate the depth of the decoder to understand how much shallow decoders benefit from contrastive pre-training of the encoder. Similar experiments were performed in a vision context by Erhan et al. (2010), where different numbers of classifier layers are pre-trained. After 45K pre-training steps, the 4-layer decoder achieves 0.50% higher precision, 0.64% higher recall, and 0.77% higher F1 score than the 1-layer model, so additional decoder depth is helpful for the downstream task. The 1-layer decoder model also benefits significantly from longer pre-training, with a 6.3% increase in F1 from 10K to 45K iterations. Such a large improvement indicates that ContraCode could be more helpful when the number of randomly initialized parameters at the start of fine-tuning is small. For larger decoders, more parameters must be optimized during fine-tuning, and the value of pre-training is diminished.
Contrastive representation learning strategies In Figure 6, we compare two strategies for refreshing the MoCo queue of key embeddings (the dictionary of negative program representations assumed to be non-equivalent to the batch of positives). In the first strategy, we add 8 items from the batch to the queue (1×), while in the second we add 96 items (12×); in addition, the second strategy uses a larger queue (125K versus 65K keys) and a slightly larger batch size (96 versus 64). We observe that for the baseline queue fill rate, the accuracy decreases for the first 8,125 iterations as the queue fills. This decrease in accuracy is expected, as the task becomes more difficult due to the increasing number of negatives during queue warmup. However, it is surprising that accuracy grows so slowly once the queue is filled. We suspect this is because the key encoder changes significantly over thousands of iterations: with a momentum term m = 0.999, the original key encoder parameters are decayed by a factor of $2.9 \times 10^{-4}$ by the moving average. If the queue is rapidly refreshed, queue embeddings are produced by recent key encoders, not old parameters. This also indicates that a large diversity of negative, non-equivalent programs is helpful for rapid convergence of ContraCode pre-training." }, { "heading": "A.2 QUALITATIVE RESULTS", "text": "t-SNE visualization of representations We qualitatively inspect the structure of the learned representation space by visualizing self-supervised representations of variants of 28 programs using t-SNE (Maaten & Hinton, 2008) in Figure 7. Representations of transformed variants of the same program are plotted with the same color. ContraCode (BiLSTM) clusters variants closely together; indeed, contrastive learning learns representations that are invariant to a wide class of automated compiler-based transformations. In comparison, the representations learned by masked language modeling (RoBERTa) show more overlap between different programs, and variants do not cleanly cluster.
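The visualization is straightforward to reproduce; a minimal sketch with scikit-learn, where embeddings is an (N, D) array of encoder outputs and labels gives the source-program index of each variant (both names are ours):

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_variants(embeddings, labels, perplexity=30, seed=0):
    # Project representations to 2-D; variants of the same source program
    # share a label and therefore a color.
    xy = TSNE(n_components=2, perplexity=perplexity,
              random_state=seed).fit_transform(np.asarray(embeddings))
    plt.scatter(xy[:, 0], xy[:, 1], c=labels, cmap="tab20", s=8)
    plt.xlabel("TSNE1")
    plt.ylabel("TSNE2")
    plt.show()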
Transformed variants of the same program share the same color, though colors may be similar across different programs.\nWith a hybrid loss combining masked language modeling and contrastive learning, representations of variants of the same program once again cluster.\nCode summaries Figure 8 shows a qualitative example of predictions for the code summarization task. The JavaScript method is not seen during training. A Transformer pretrained with ContraCode predicts the correct method name as the most likely decoding through beam search. The next four predictions are reasonable, capturing that the method processes an image. The 2nd and 3rd most likely decodings, getImageItem and createImage, use get and create as synonyms for load, though the final two unlikely decodings include terms not mentioned in the method body.\nfunction x(url, callback, error) {\n  var img = new Image();\n  img.src = url;\n  if (img.complete) {\n    return callback(img);\n  }\n  img.onload = function() {\n    img.onload = null;\n    callback(img);\n  };\n  img.onerror = function(e) {\n    img.onerror = null;\n    error(e);\n  };\n}\nGround truth: loadImage Prediction: loadImage\nOther predictions:\n1. getImageItem\n2. createImage\n3. loadImageForBreakpoint\n4. getImageSrcCSS\nFigure 8: A JavaScript program from the CodeSearchNet dataset not seen during training and the predicted method names from a Transformer pre-trained with ContraCode. ContraCode predicts the correct method name as its most likely decoding.\nType inferences We can also visualize outputs of the type inference model. Figure 9 shows two TypeScript programs from the held-out test set. User-provided type annotations are removed from the programs, and the model is provided with a tokenized form without access to dependencies. We visualize predictions from a variant of DeepTyper pretrained with ContraCode, the best-performing model in Table 8. In the first program, our model consistently predicts the correct return and parameter types. While a tool based on static analysis could infer the void return types, the type of the message argument is ambiguous without access to the imported write method signature. Still, the model correctly predicts with high confidence that the variable message is a string. In the second program, ContraCode correctly predicts 4 of 8 types, including the ViewContainerRef and ChangeDetectorRef types, each imported from the AngularJS library. As this sample is held out from the training set, these predictions show generalization from other repositories using AngularJS." }, { "heading": "A.3 PROGRAM TRANSFORMATION DETAILS", "text": "We use the Babel compiler infrastructure (McKenzie et al., 2020) and the terser JavaScript library for AST-based program transformations. We perform variable renaming and dead code insertion (variable declaration insertion) using custom Babel transforms, subword regularization with the sentencepiece Python tokenization library, line subsampling using JavaScript string manipulation primitives, and other transformations with terser. Terser has two high-level transformation modes, mangling and compression, each with finer-grained controls such as formatting, comment and log removal, and dead code elimination. We show an example merge sort with equivalent variants in Figure 11.\nReformatting, beautification, compression (R, B, C): Personal coding conventions do not affect the semantics of code; auto-formatting normalizes according to a style convention.\nDead-code elimination (DCE): In this pass, all unused code with no side effects is removed. Various statements can be inlined or removed as stale or unneeded functionality.\nType upconversion (T): In JavaScript, some types are polymorphic & can be converted between each other. As an example, booleans can be represented as true or as 1.\nConstant folding (CF): During constant folding, all expressions that can be pre-computed at compilation time can be inlined. For example, the expression (2 + 3) * 4 is replaced with 20.\nVariable renaming, identifier mangling (VR, IM): Arguments can be renamed with random word sequences and identifiers can be replaced with short tokens to make the model robust to naming choices. Program behavior is preserved despite obfuscation.\nDead-code insertion (DCI): Commonly used no-ops such as comments and logging are inserted.\nSubword regularization (SW): From Kudo (2018), text is tokenized in several different ways, with a single word (_function) or subtokens (_func tion).\nLine subsampling (LS): We randomly sample (p = 0.9) lines from a method body. While not semantics-preserving, line subsampling serves as a regularizer.\nWhile compilers are generally deterministic, we require a variety of alternatives to each program for contrastive representation learning. Algorithm 1 samples N augmented variants of a source program x using a set of deterministic compiler transformations τ_i. Stochasticity is introduced by randomly toggling each transformation according to Bernoulli samples with probabilities p_i. When adding a program to the set of variants V, uniqueness is determined by string comparison.
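The sampling loop of Algorithm 1 is short enough to sketch directly; in the minimal Python version below, the transformation callables stand in for the Babel/terser passes described above and are assumptions rather than the released implementation:

import random

def sample_variants(source, transforms, probs, n_variants, max_tries=100):
    # Sample up to n_variants unique augmented versions of `source`.
    # transforms: deterministic source-to-source passes tau_i (callables).
    # probs: Bernoulli probabilities p_i toggling each pass on or off.
    variants = set()
    for _ in range(max_tries):
        if len(variants) >= n_variants:
            break
        program = source
        for transform, p in zip(transforms, probs):
            if random.random() < p:   # stochastic toggle of a deterministic pass
                program = transform(program)
        variants.add(program)         # uniqueness by string comparison
    return list(variants)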
" }, { "heading": "A.4 HOW SIMILAR ARE TRANSFORMED PROGRAMS?", "text": "To understand the diversity created by program transformations, we compute the Levenshtein minimum edit distance between positive pairs in the precomputed pre-training dataset, i.e. transformed variants of the same source method. For comparison, we also compute the edit distance between negative pairs, i.e. transformed variants of different programs. The edit distance D(x_q, x_k) computes the minimum number of token insertions, deletions or substitutions needed to transform the tokenized query program x_q into the key program x_k. To normalize by sequence length | · |, let\ndissimilarity_D(x_q, x_k) = D(x_q, x_k) / max(|x_q|, |x_k|) (2)\nDissimilarity ranges from 0% for programs with the same token sequence, such as positives before applying our transformations, to 100% for programs without any shared tokens. Note that whitespace transformations do not affect the metric because the tokenizer collapses repeated whitespace. For the positives, we estimate dissimilarity by sampling one pair per source program in the CodeSearchNet dataset (1.6M source programs with at least one pair). We sample the same number of negative pairs.\nFigure 10 shows a histogram of token dissimilarity. Positive pairs have 65% mean dissimilarity, while negatives have 86% mean dissimilarity. Negatives are more dissimilar on average as source sequences could have different lengths, idioms and functionality. Still, the transformations generated quite different positive sequences, with less than half of their tokens shared. The 25th, median and 75th percentile dissimilarity is 59%, 66% and 73% for positives, and 82%, 87% and 90% for negatives.
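Equation (2) can be computed with a standard dynamic-programming edit distance; a minimal sketch, assuming programs are already tokenized into lists of strings:

def edit_distance(a, b):
    # Levenshtein distance over token sequences via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        cur = [i]
        for j, tb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ta != tb)))   # substitution
        prev = cur
    return prev[-1]

def dissimilarity(tokens_q, tokens_k):
    # Equation (2): normalized by the longer sequence, so the value is in [0, 1].
    return edit_distance(tokens_q, tokens_k) / max(len(tokens_q), len(tokens_k))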
" }, { "heading": "A.5 EXPERIMENTAL SETUP", "text": "Architectures The Transformer has 6 encoder layers (23M parameters) in all experiments, and 4 decoder layers for method name prediction in Table 3. We leverage the default positional embedding function (sin, cos) as used in the original Transformer architecture. The network originally proposed in DeepTyper (Hellendoorn et al., 2018) had 11M parameters with a 300-dimensional hidden state. We increase the hidden state size to 512 to increase model capacity, so our BiLSTM for type prediction has 17.5M parameters. During fine-tuning, across all experiments, we optimize parameters using Adam with linear learning rate warmup and decay. For the Transformer, the learning rate is linearly increased for 5,000 steps from 0 to a maximum of 10⁻⁴. For the bidirectional LSTM, the learning rate is increased for between 2,500 and 10,000 steps to a maximum of 10⁻³. Type inference hyperparameters are selected by validation top-1 accuracy.\nContraCode pretraining The InfoNCE objective (1) is minimized with temperature t = 0.07 following He et al. (2019). Also following He et al. (2019), the key encoder’s parameters are computed with the momentum update equation θ_k ← mθ_k + (1 − m)θ_q, equivalent to an EMA of the query encoder parameters θ_q. To pretrain a Transformer using the ContraCode objective, we first embed each token in the program using the Transformer. However, the InfoNCE objective is defined in terms of a single embedding for the full program. The ContraCode Transformer is pretrained with a batch size of 96. Our model averages the 512-dimensional token embeddings across the sequence, then applies a two-layer MLP with 512 hidden units and a ReLU activation to extract a 128-dimensional program embedding for the loss.\nThe DeepTyper bidirectional LSTM architecture offers two choices for extracting a global program representation. We aggregate a 1024-dimensional global representation of the program by concatenating its four terminal hidden states (from two sequence processing directions and two stacked LSTM layers), then apply the same MLP architecture as before to extract a 128-dimensional program representation. Alternatively, we can average the hidden state concatenated from each direction across the tokens in the sequence before applying the MLP head. We refer to the hidden-state configuration as a global representation and the sequence-averaging configuration as a local representation in Table 8. We pre-train the BiLSTM with a large batch size of 512 and apply weight decay.
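A minimal PyTorch sketch of the two pieces just described, the mean-pooled projection head and the momentum update of the key encoder (dimensions follow the text; the function names are illustrative):

import torch
import torch.nn as nn

# Two-layer MLP head: 512-d pooled features -> 128-d contrastive embedding.
mlp_head = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))

def program_embedding(token_states, head=mlp_head):
    # token_states: (seq_len, batch, 512) per-token encoder outputs.
    return head(token_states.mean(dim=0))   # average tokens, then project

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    # theta_k <- m * theta_k + (1 - m) * theta_q  (EMA of the query encoder).
    for p_q, p_k in zip(query_encoder.parameters(), key_encoder.parameters()):
        p_k.mul_(m).add_(p_q, alpha=1 - m)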
Type prediction Following DeepTyper (Hellendoorn et al., 2018), our regenerated dataset for type prediction has 187 training projects with 15,570 TypeScript files, totaling 6,902,642 tokens. We tune hyperparameters on a validation set of 23 distinct projects with 1,803 files and 490,335 tokens, and evaluate on a held-out test set of 24 projects with 2,206 files and 958,821 tokens. The training set is smaller than originally used in DeepTyper as several projects were made private or deleted from GitHub before May 2020 when we downloaded the data, but we used the same commit hashes for available projects, so our splits are a subset of the original. We have released the data with our open-source code to facilitate further work on a stable benchmark as more repositories are deleted over time. We perform early stopping to select the number of training epochs. We train each model for 100 epochs and select the checkpoint with the maximum accuracy@1 metric (all types, including any) on the validation set. Except for the model learned from scratch, the Transformer architectures are pre-trained for 240K steps. Models with the DeepTyper architecture converge faster on the pre-training tasks and are pre-trained for 20K iterations (unless otherwise noted).\nExtreme code summarization by method name prediction We train method prediction models using the labeled subset of CodeSearchNet. Neither method names nor docstrings are provided as input to the model: the docstring is deleted, and the method name is replaced with the token ‘x’. Thus, the task is to predict the method name using the method body and comments alone. To decode method names from all models except the code2vec and code2seq baselines, which implement their own decoding procedures, we use a beam search with a beam of size 5 and a maximum target sequence length of 20 subword tokens. We detail the cumulative distribution of program lengths in Figure 12. The ContraCode summarization Transformer only needed to be pre-trained for 20K iterations, with substantially faster convergence than RoBERTa (240K iterations). During fine-tuning, we apply the LS, SW, VR, DCI augmentations to ContraCode." }, { "heading": "A.6 BASELINES", "text": "Baselines for code summarization and type prediction trained their models on an inconsistent set of programming languages and datasets. In order to normalize the effect of datasets, we selected several diverse state-of-the-art baselines and reimplemented them on the JavaScript dataset.\nAST-based models The authors of code2vec (Alon et al., 2019b) and code2seq (Alon et al., 2019a), AST-based code understanding models, made both data and code available, but trained their models on the Java programming language. In order to extend the results in their paper to JavaScript for comparison with our approach, we generated an AST path dataset for the CodeSearchNet dataset. The sensitivity of path-mining embeddings to different datasets is documented in prior work, so published F1 scores are not directly comparable; F1 scores for code2vec (Alon et al., 2019b) vary between 19 (Alon et al., 2019a) and 43 (Alon et al., 2019b) depending on the dataset used. Therefore, we use the same dataset generation code as the authors for a fair comparison. We first parse the source functions using the Babel compiler infrastructure. Using the original code on these ASTs, up to 300 token-to-token (leaf-to-leaf) paths are extracted from each function’s AST as a precomputed dataset. Then, we generate a token and AST node vocabulary using the same author-provided code, and train the models for 20 epochs, using early stopping for code2seq. We observed that code2vec overfits after 20 epochs, and longer training was not beneficial.\nDeepTyper (Hellendoorn et al., 2018) DeepTyper uses a two-layer GRU with a projection over possible classes, with an embedding size of 300 and hidden dimension of 650. However, we found improved performance by replacing the GRU with a bidirectional LSTM (BiLSTM). We normalize the LSTM parameter count to match our model, and therefore use a hidden dimension size of 512. We also use subword tokenization rather than space-delimited tokens according to Kudo (2018), as subword tokenization is a key part of state-of-the-art models for NLP (Sennrich et al., 2015).\nRoBERTa We pre-trained an encoder using RoBERTa’s masked language modeling loss on our augmented version of CodeSearchNet, the same data used to pretrain ContraCode. This model is then fine-tuned on downstream datasets.
cuBERT (Kanade et al., 2020) follows the pre-training setup of the original BERT paper; we instead use hyperparameters from RoBERTa, which have been found to produce better results during pre-training. RoBERTa pre-trains using a masked language modeling (MLM) objective, where 15% of tokens in a sentence are masked or replaced and are reconstructed by the model. We did not use the BERT Next Sentence Prediction (NSP) loss, which RoBERTa finds to be unnecessary. We normalize baseline parameter count by reducing the number of Transformer layers from 24 to 6 for a total of 23M parameters." } ]
2,020
null
SP:5682a82e8671bdd5dee966273b981f63b4eebf2d
[ "The submission addresses the problem of representation evaluation from the perspective of efficient learning of downstream predictors. Leveraging the introduced loss-data curve framework, the paper studies and demonstrates the limitations of the existing methods in terms of their implicit dependency on evaluation dataset size. Motivated by practicality and interpretability of the measures for choosing the best representations, the paper introduces two novel methods, $\\epsilon$ sample complexity ($\\epsilon$SC) and surplus description length (SDL), which are well-motivated and supported both theoretically and empirically. The paper also delivers efficient implementation." ]
We consider the problem of evaluating representations of data for use in solving a downstream task. We propose to measure the quality of a representation by the complexity of learning a predictor on top of the representation that achieves low loss on a task of interest. To this end, we introduce two measures: surplus description length (SDL) and ε sample complexity (εSC). To compare our methods to prior work, we also present a framework based on plotting the validation loss versus dataset size (the “loss-data” curve). Existing measures, such as mutual information and minimum description length, correspond to slices and integrals along the data-axis of the loss-data curve, while ours correspond to slices and integrals along the loss-axis. This analysis shows that prior methods measure properties of an evaluation dataset of a specified size, whereas our methods measure properties of a predictor with a specified loss. We conclude with experiments on real data to compare the behavior of these methods over datasets of varying size.
[]
[ { "authors": [ "G. Alain", "Y. Bengio" ], "title": "Understanding intermediate layers using linear classifier probes", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "P. Bachman", "R.D. Hjelm", "W. Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "L. Blier", "Y. Ollivier" ], "title": "The description length of deep learning models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "J. Bradbury", "R. Frostig", "P. Hawkins", "M.J. Johnson", "C. Leary", "D. Maclaurin", "S. WandermanMilne" ], "title": "JAX: composable transformations of Python+NumPy programs, 2018", "venue": "URL http: //github.com/google/jax", "year": 2018 }, { "authors": [ "T.B. Brown", "B. Mann", "N. Ryder", "M. Subbiah", "J. Kaplan", "P. Dhariwal", "A. Neelakantan", "P. Shyam", "G. Sastry", "A. Askell", "S. Agarwal", "A. Herbert-Voss", "G. Krueger", "T. Henighan", "R. Child", "A. Ramesh", "D.M. Ziegler", "J. Wu", "C. Winter", "C. Hesse", "M. Chen", "E. Sigler", "M. Litwin", "S. Gray", "B. Chess", "J. Clark", "C. Berner", "S. McCand lish", "A. Radford", "I. Sutskever", "D. Amodei" ], "title": "Language Models are Few-Shot Learners", "venue": "arXiv preprint arXiv:2005.14165,", "year": 2020 }, { "authors": [ "T. Chen", "S. Kornblith", "M. Norouzi", "G.E. Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "A. Conneau", "G. Kruszewski", "G. Lample", "L. Barrault", "M. Baroni" ], "title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties", "venue": "In Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "J. Devlin", "Chang", "M.-W", "K. Lee", "K. Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In North American Chapter of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "T. Erven", "P. Grünwald", "S. Rooij" ], "title": "Catching up faster by switching sooner: A predictive approach to adaptive stimation with an application to the aic-bic dilemma", "venue": "Journal of the Royal Statistical Society. Series B (Statistical Methodology),", "year": 2012 }, { "authors": [ "A. Ettinger", "A. Elgohary", "P. Resnik" ], "title": "Probing for semantic evidence of composition by means of simple classification tasks", "venue": "In Workshop on Evaluating Vector-Space Representations for NLP,", "year": 2016 }, { "authors": [ "P. Grünwald" ], "title": "A tutorial introduction to the minimum description length principle", "venue": "arXiv preprint math:0406077,", "year": 2004 }, { "authors": [ "K. He", "H. Fan", "Y. Wu", "S. Xie", "R. Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": null, "year": 1911 }, { "authors": [ "O.J. Hénaff", "A. Razavi", "C. Doersch", "S.M.A. Eslami", "A. van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 1905 }, { "authors": [ "J. Hewitt", "P. 
Liang" ], "title": "Designing and interpreting probes with control tasks", "venue": "Empirical Methods in Natural Language Processing and International Joint Conference on Natural Language Processing,", "year": 2019 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "CoRR, abs/1412.6980,", "year": 2015 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-encoding variational Bayes", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Y. LeCun" ], "title": "Deep learning & convolutional networks. URL: https:// www.hotchips.org/wp-content/uploads/hc_archives/hc27/HC27", "venue": "24-Monday-Epub/HC27.24.19-Key1-Neural-Nets-Epub/HC27.24. 190-Convolutional-Neural-LeCun-Facebook.pdf,", "year": 2015 }, { "authors": [ "Y. Liu", "M. Ott", "N. Goyal", "J. Du", "M. Joshi", "D. Chen", "O. Levy", "M. Lewis", "L. Zettlemoyer", "V. Stoyanov" ], "title": "RoBERTa: A robustly optimized BERT pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "D. McAllester", "K. Stratos" ], "title": "Formal limitations on the measurement of mutual information", "venue": "International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "A. Paszke", "S. Gross", "F. Massa", "A. Lerer", "J. Bradbury", "G. Chanan", "T. Killeen", "Z. Lin", "N. Gimelshein", "L. Antiga", "A. Desmaison", "A. Köpf", "E. Yang", "Z. DeVito", "M. Raison", "A. Tejani", "S. Chilamkurthy", "B. Steiner", "L. Fang", "J. Bai", "S. Chintala" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "M. Peters", "M. Neumann", "M. Iyyer", "M. Gardner", "C. Clark", "K. Lee", "L. Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "In North American Chapter of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "T. Pimentel", "J. Valvoda", "R.H. Maudslay", "R. Zmigrod", "A. Williams", "R. Cotterell" ], "title": "Informationtheoretic probing for linguistic structure", "venue": "arXiv preprint arXiv:2004.03061,", "year": 2020 }, { "authors": [ "C. Raffel", "N. Shazeer", "A. Roberts", "K. Lee", "S. Narang", "M. Matena", "Y. Zhou", "W. Li", "P.J. Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683,", "year": 2019 }, { "authors": [ "C. Resnick", "Z. Zhan", "J. Bruna" ], "title": "Probing the state of the art: A critical look at visual representation evaluation", "venue": null, "year": 1912 }, { "authors": [ "D.J. Rezende", "S. Mohamed", "D. Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "J. Rissanen" ], "title": "Modeling by shortest data", "venue": "description. Automatica,", "year": 1978 }, { "authors": [ "C. Shannon" ], "title": "A mathematical theory of communication", "venue": "Bell Syst. Tech. J.,", "year": 1948 }, { "authors": [ "X. Shi", "I. Padhi", "K. Knight" ], "title": "Does string-based neural MT learn source syntax", "venue": "In Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "A. Talmor", "Y. Elazar", "Y. Goldberg", "J. Berant" ], "title": "oLMpics–on what language model pre-training captures", "venue": "arXiv preprint arXiv:1912.13283,", "year": 2019 }, { "authors": [ "A. 
van den Oord", "Y. Li", "O. Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "E. Voita", "I. Titov" ], "title": "Information-theoretic probing with minimum description length", "venue": "arXiv preprint arXiv:2003.12298,", "year": 2020 }, { "authors": [ "Y. Xu", "S. Zhao", "J. Song", "R. Stewart", "S. Ermon" ], "title": "A theory of usable information under computational constraints", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "D. Yogatama", "d’Autume", "C. d. M", "J. Connor", "T. Kocisky", "M. Chrzanowski", "L. Kong", "A. Lazaridou", "W. Ling", "L. Yu", "C Dyer" ], "title": "Learning and evaluating general linguistic intelligence", "venue": "arXiv preprint arXiv:1901.11373,", "year": 2019 }, { "authors": [ "K. Zhang", "S. Bowman" ], "title": "Language modeling teaches you more than translation does: Lessons learned through auxiliary syntactic task analysis", "venue": "In EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP,", "year": 2018 }, { "authors": [ "Rezende" ], "title": "VAE. The VAE (variational autoencoder", "venue": null, "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "One of the first steps in building a machine learning system is selecting a representation of data. Whereas classical machine learning pipelines often begin with feature engineering, the advent of deep learning has led many to argue for pure end-to-end learning where the deep network constructs the features (LeCun et al., 2015). However, huge strides in unsupervised learning (Hénaff et al., 2019; Chen et al., 2020; He et al., 2019; van den Oord et al., 2018; Bachman et al., 2019; Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2019; Brown et al., 2020) have led to a reversal of this trend in the past two years, with common wisdom now recommending that the design of most systems start from a pretrained representation. With this boom in representation learning techniques, practitioners and representation researchers alike have the question: Which representation is best for my task?\nThis question exists as the middle step of the representation learning pipeline. The first step is representation learning, which consists of training a representation function on a training set using an objective which may be supervised or unsupervised. The second step, which this paper considers, is representation evaluation. In this step, one uses a measure of representation quality and a labeled evaluation dataset to see how well the representation performs. The final step is deployment, in which the practitioner or researcher puts the learned representation to use. Deployment could involve using the representation on a stream of user-provided data to solve a variety of end tasks (LeCun, 2015), or simply releasing the trained weights of the representation function for general use. In the same way that BERT (Devlin et al., 2019) representations have been applied to a whole host of problems, the task or amount of data available in deployment might differ from the evaluation phase.\nWe take the position that the best representation is the one which allows for the most efficient learning of a predictor to solve the task. We will measure efficiency in terms of either number of samples or information about the optimal predictor contained in the samples. This position is motivated by practical concerns; the more labels that are needed to solve a task in the deployment phase, the more expensive to use and the less widely applicable a representation will be.\nWe build on a substantial and growing body of literature that attempts to answer the question of which representation is best. Simple, traditional means of evaluating representations, such as the validation accuracy of linear probes (Ettinger et al., 2016; Shi et al., 2016; Alain & Bengio, 2016), have been widely criticized (Hénaff et al., 2019; Resnick et al., 2019). Instead, researchers have taken\nup a variety of alternatives such as the validation accuracy (VA) of nonlinear probes (Conneau et al., 2018; Hénaff et al., 2019), mutual information (MI) between representations and labels (Bachman et al., 2019; Pimentel et al., 2020), and minimum description length (MDL) of the labels conditioned on the representations (Blier & Ollivier, 2018; Yogatama et al., 2019; Voita & Titov, 2020).\nWe find that these methods all have clear limitations. As can be seen in Figure 1, VA and MDL are liable to choose different representations for the same task when given evaluation datasets of different sizes. Instead we want an evaluation measure which depends on the data distribution, not a particular dataset or dataset size. 
Furthermore, VA and MDL lack a predefined notion of success in solving a task. In combination with small evaluation datasets, these measures may lead to premature evaluation by producing a judgement even when there is not enough data to solve the task or meaningfully distinguish one representation from another. Meanwhile, MI measures the lowest loss achievable by any predictor irrespective of the complexity of learning it. We note that while these methods do not correspond to our notion of best representation, they may be correct for different notions of “best”.\nTo eliminate these issues, we propose two measures. In both of our measures, the user must specify a tolerance ε so that a population loss of less than ε qualifies as solving the task. The first measure is the surplus description length (SDL), which modifies the MDL to measure the complexity of learning an ε-loss predictor rather than the complexity of the labels in the evaluation dataset. The second is the ε-sample complexity (εSC), which measures the sample complexity of learning an ε-loss predictor.\nTo facilitate our analysis, we also propose a framework called the loss-data framework, illustrated in Figure 1, that plots the validation loss against the evaluation dataset size (Talmor et al., 2019; Yogatama et al., 2019; Voita & Titov, 2020). This framework simplifies comparisons between measures. Prior work measures integrals (MDL) and slices (VA and MI) along the data-axis. Our work proposes instead measuring integrals (SDL) and slices (εSC) along the loss-axis. This illustrates how prior work makes tacit choices about the function to learn based on the choice of dataset size. Our work instead makes an explicit, interpretable choice of threshold ε and measures the complexity of solving the task to ε error. We experimentally investigate the behavior of these methods, illustrating the sensitivity of VA and MDL, and the robustness of SDL and εSC, to dataset size.\nEfficient implementation. To enable reproducible and efficient representation evaluation for representation researchers, we have developed a highly optimized open source Python package (see supplementary materials). This package enables construction of loss-data curves with arbitrary representations and datasets and is library-agnostic, supporting representations and learning algorithms implemented in any Python ML library. By leveraging the JAX library (Bradbury et al., 2018) to parallelize the training of probes on a single accelerator, our package constructs loss-data curves in around two minutes on one GPU." }, { "heading": "2 THE LOSS-DATA FRAMEWORK FOR REPRESENTATION EVALUATION", "text": "In this section we formally present the representation evaluation problem, define our loss-data framework, and show how prior work fits into the framework.\nNotation. We use bold letters to denote random variables. A supervised learning problem is defined by a joint distribution D over observations and labels (X, Y) in the sample space X × Y with density denoted by p. Let the random variable D_n be a sample of n i.i.d. (X, Y) pairs, realized by D_n = (X^n, Y^n) = {(x_i, y_i)}_{i=1}^n. Let R denote a representation space and φ : X → R a representation function. The methods we consider all use parametric probes, which are neural networks p̂_θ : R → P(Y) parameterized by θ ∈ R^d that are trained on D_n to estimate the conditional distribution p(y | x). We often abstract away the details of learning the probe by simply referring to an algorithm A which returns a predictor: p̂ = A(φ(D_n)). 
Abusing notation, we denote the composition of A with φ by Aφ. Define the population loss and the expected population loss for p̂ = Aφ(D_n), respectively, as\nL(Aφ, D_n) = E_{(X,Y)}[−log p̂(Y | X)], L(Aφ, n) = E_{D_n}[L(Aφ, D_n)]. (1)\nIn this section we will focus on population quantities, but note that any algorithmic implementation must replace these by their empirical counterparts.\nThe representation evaluation problem. The representation evaluation problem asks us to define a real-valued measurement of the quality of a representation φ for solving the task defined by (X, Y). Explicitly, each method defines a real-valued function m(φ, D, A, Ψ) of a representation φ, data distribution D, probing algorithm A, and some method-specific set of hyperparameters Ψ. By convention, smaller values of the measure m correspond to better representations. Defining such a measurement allows us to compare different representations." }, { "heading": "2.1 DEFINING THE LOSS-DATA FRAMEWORK.", "text": "The loss-data framework is a lens through which we contrast different measures of representation quality. The key idea, demonstrated in Figure 1, is to plot the loss L(Aφ, n) against the dataset size n. Explicitly, at each n, we train a probing algorithm A using a representation φ to produce a predictor p̂, and then plot the loss of p̂ against n. Similar analysis has appeared in Voita & Titov (2020); Yogatama et al. (2019); Talmor et al. (2019). We can represent each of the prior measures as points on the curve at fixed x (VA, MI) or integrals of the curve along the x-axis (MDL). Our measures correspond to evaluating points at fixed y (εSC) and integrals along the y-axis (SDL)." }, { "heading": "2.2 EXISTING METHODS IN THE LOSS-DATA FRAMEWORK", "text": "Nonlinear probes with limited data. A simple strategy for evaluating representations is to choose a probe architecture and train it on a limited amount of data from the task and representation of interest (Hénaff et al., 2019; Zhang & Bowman, 2018). On the loss-data curve, this corresponds to evaluation at x = n, so that\nmVA(φ, D, A, n) = L(Aφ, n). (2)\nMutual information. Mutual information (MI) between a representation φ(X) and targets Y is another often-proposed metric for learning and evaluating representations (Pimentel et al., 2020; Bachman et al., 2019). In terms of entropy, mutual information is equivalent to the information gain about Y from knowing φ(X):\nI(φ(X); Y) = H(Y) − H(Y | φ(X)). (3)\nIn general mutual information is intractable to estimate for high-dimensional or continuous-valued variables (McAllester & Stratos, 2020), and a common approach is to use a very expressive model for p̂ and maximize a variational lower bound:\nI(φ(X); Y) ≥ H(Y) + E_{(X,Y)}[log p̂(Y | φ(X))]. (4)\nSince H(Y) is not a function of the parameters, maximizing the lower bound is equivalent to minimizing the negative log-likelihood. Moreover, if we assume that p̂ is expressive enough to represent p and take n → ∞, this inequality becomes tight. As such, MI estimation can be seen as a special case of nonlinear probes as described above, where instead of choosing some particular setting of n we push it to infinity. We formally define the mutual information measure of a representation as\nmMI(φ, D, A) = lim_{n→∞} L(Aφ, n). (5)\nA decrease in this measure reflects an increase in the mutual information. On the loss-data curve, this corresponds to evaluation at x = ∞.
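In practice these population quantities are estimated from finite data by training probes at several dataset sizes. A minimal sketch of estimating the loss-data curve, assuming representations can be precomputed as arrays and the probing algorithm A returns a predictor of class probabilities (the helper names are illustrative):

import numpy as np

def loss_data_curve(A, phi, train, test, ns, seeds=4):
    # Estimate L(A_phi, n) at each n in `ns` by averaging the held-out
    # negative log-likelihood over `seeds` random subsamples of size n.
    (xs, ys), (txs, tys) = train, test
    reps, test_reps = phi(xs), phi(txs)        # representations computed once
    curve = []
    for n in ns:
        losses = []
        for seed in range(seeds):
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(xs), size=n, replace=False)
            p_hat = A(reps[idx], ys[idx])      # probe trained on n points
            probs = p_hat(test_reps)           # (n_test, n_classes) array
            nll = -np.log(probs[np.arange(len(tys)), tys])
            losses.append(float(np.mean(nll)))
        curve.append((n, float(np.mean(losses))))
    return curve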
Minimum description length. Recent studies (Yogatama et al., 2019; Voita & Titov, 2020) propose using the Minimum Description Length (MDL) principle (Rissanen, 1978; Grünwald, 2004) to evaluate representations. These works use an online or prequential code (Blier & Ollivier, 2018) to encode the labels given the representations. The codelength ℓ of Y^n given φ(X^n) is then defined as\nℓ(Y^n | φ(X^n)) = −Σ_{i=1}^n log p̂_i(y_i | φ(x_i)), (6)\nwhere p̂_i is the output of running a pre-specified algorithm A on the dataset up to element i: p̂_i = Aφ(X^n_{1:i}, Y^n_{1:i}). Taking an expectation over the sampled datasets for each i, we define a population variant of the MDL measure (Voita & Titov, 2020) as\nmMDL(φ, D, A, n) = E[ℓ(Y^n | φ(X^n))] = Σ_{i=1}^n L(Aφ, i). (7)\nThus, mMDL measures the area under the loss-data curve on the interval x ∈ [0, n]." }, { "heading": "3 LIMITATIONS OF EXISTING METHODS", "text": "Each of the prior methods, VA, MDL, and MI, has limitations that we attempt to solve with our methods. In this section we present these limitations." }, { "heading": "3.1 SENSITIVITY TO DATASET SIZE IN VA AND MDL", "text": "As seen in Section 2.2, the representation quality measures of VA and MDL both depend on n, the size of the evaluation dataset. Because of this dependence, the ranking of representations given by these evaluation metrics can change as n increases. Choosing to deploy one representation rather than another by comparing these metrics at arbitrary n may lead to premature decisions in the machine learning pipeline since a larger dataset could give a different ordering.\nA theoretical example. Let s ∈ {0, 1}^d be a fixed binary vector and consider a data generation process where the {0, 1} label of a data point is given by the parity on s, i.e., y_i = ⟨x_i, s⟩ mod 2, where y_i ∈ {0, 1} and x_i ∈ {0, 1}^d. Let Y^n = {y_i}_{i=1}^n be the given labels and consider the following two representations: (1) noisy label: z_i = ⟨x_i, s⟩ + e_i mod 2, where e_i ∈ {0, 1} is a random bit with bias α < 1/2, and (2) raw data: x_i.\nFor the noisy label representation, guessing y_i = z_i achieves validation accuracy of 1 − α for any n, which is information-theoretically optimal. On the other hand, the raw data representation will achieve perfect validation accuracy once the evaluation dataset contains d linearly independent x_i’s. In this case, Gaussian elimination will exactly recover s. The probability that a set of n > d random vectors in {0, 1}^d does not contain d linearly independent vectors decreases exponentially in n − d. Hence, the expected validation accuracy for n sufficiently larger than d will be exponentially close to 1. As a result, the representation ranking given by validation accuracy and description length favors the noisy label representation when n ≪ d, but the raw data representation will be much better in these metrics when n ≫ d. This can be misleading. Although this is a concocted example for illustration purposes, our experiments in Section 5 show dependence of representation rankings on n.
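The raw-data representation in this example is learned exactly by Gaussian elimination over GF(2) once d linearly independent inputs have been observed; a minimal NumPy sketch of that step (illustrative, not taken from any released code):

import numpy as np

def recover_parity_vector(X, y):
    # Solve <x_i, s> = y_i (mod 2) by Gaussian elimination over GF(2).
    A = np.concatenate([X, y[:, None]], axis=1) % 2   # augmented system
    n, d = X.shape
    row = 0
    for col in range(d):
        pivot = next((r for r in range(row, n) if A[r, col]), None)
        if pivot is None:
            continue                      # column has no pivot; s[col] unknown
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(n):
            if r != row and A[r, col]:
                A[r] = (A[r] + A[row]) % 2   # eliminate col everywhere else
        row += 1
    s = np.zeros(d, dtype=int)
    for r in range(row):
        cols = np.flatnonzero(A[r, :d])
        if len(cols) == 1:                # coordinate uniquely determined
            s[cols[0]] = A[r, d]
    return s

With n ≫ d random inputs the system is full rank with high probability, so s (and hence perfect accuracy) is recovered, while the noisy-label representation stays at 1 − α regardless of n.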
" }, { "heading": "3.2 INSENSITIVITY TO REPRESENTATION QUALITY & COMPUTATIONAL COMPLEXITY IN MI", "text": "MI considers the lowest validation loss achievable with the given representation and ignores any concerns about statistical or computational complexity of achieving such accuracy. This leads to some counterintuitive properties which make MI an undesirable metric:\n1. MI is insensitive to statistical complexity. Two random variables which are perfectly predictive of one another have maximal MI, though their relationship may be sufficiently complex that it requires exponentially many samples to verify (McAllester & Stratos, 2020).\n2. MI is insensitive to computational complexity. For example, the mutual information between an intercepted encrypted message and the enemy’s plan is high (Shannon, 1948; Xu et al., 2020), despite the extreme computational cost required to break the encryption.\n3. MI is insensitive to representation. By the data processing inequality (Cover & Thomas, 2006), any φ applied to X can only decrease its mutual information with Y; no matter the query, MI always reports that the raw data is at least as good as the best representation." }, { "heading": "3.3 LACK OF A PREDEFINED NOTION OF SUCCESS", "text": "All three prior methods lack a predefined notion of successfully solving a task and will always return some ordering of representations. When the evaluation dataset is too small or all of the representations are poor, it may be that no representation can yet solve the task. Since the order of representations can change as more data is added, any judgement would be premature. Indeed, there is often an implicit minimum requirement for the loss a representation should achieve to be considered meaningful. As we show in the next section, our methods make this requirement explicit.\n4 SURPLUS DESCRIPTION LENGTH & ε SAMPLE COMPLEXITY\nThe methods discussed above measure a property of the data, such as the attainable accuracy on n points, by learning an unspecified function. Instead, we propose to precisely define the function of interest and measure its complexity using data. Fundamentally, we shift from making a statement about the inputs of an algorithm, as VA and MDL do, to a statement about the outputs." }, { "heading": "4.1 SURPLUS DESCRIPTION LENGTH (SDL)", "text": "Imagine trying to efficiently encode a large number of samples of a random variable e which takes values in {1 . . . K} with probability p(e). An optimal code for these events has expected length (in nats) E[ℓ(e)] = E_e[−log p(e)] = H(e). If this data is instead encoded using a probability distribution p̂, the expected length becomes H(e) + D_KL(p ‖ p̂). We call D_KL(p ‖ p̂) the surplus description length (SDL) from encoding according to p̂ instead of p:\nD_KL(p ‖ p̂) = E_{e∼p}[log p(e) − log p̂(e)]. (8)\nWhen the true distribution p is a delta, the entire length of a code under p̂ is surplus since log 1 = 0.\nRecall that the prequential code for estimating MDL computes the description length of the labels given observations in a dataset by iteratively creating tighter approximations p̂_1 . . . p̂_n and integrating the area under the curve. Examining Equation (7), we see that\nmMDL(φ, D, A, n) = Σ_{i=1}^n L(Aφ, i) ≥ Σ_{i=1}^n H(Y | φ(X)). (9)\nIf H(Y | φ(X)) > 0, MDL grows without bound as the size of the evaluation dataset n increases. Instead, we propose to measure the complexity of a learned predictor p(Y | φ(X)) by computing the surplus description length of encoding an infinite stream of data according to the online code instead of the true conditional distribution. Definition 1 (Surplus description length of online codes). Given random variables X, Y ∼ D, a representation function φ, and a learning algorithm A, define\nmSDL(φ, D, A) = Σ_{i=1}^∞ [L(Aφ, i) − H(Y | X)]. (10)
We generalize this definition to measure the complexity of learning an approximating conditional distribution with loss ε, rather than the true conditional distribution only: Definition 2 (Surplus description length of online codes with an arbitrary baseline). Take random variables X, Y ∼ D, a representation function φ, a learning algorithm A, and a loss tolerance ε ≥ H(Y | X). Let [c]_+ denote max(0, c); then we define\nmSDL(φ, D, A, ε) = Σ_{i=1}^∞ [L(Aφ, i) − ε]_+. (11)\nIn our framework, the surplus description length corresponds to computing the area between the loss-data curve and a baseline set by y = ε. Whereas MDL measures the complexity of a sample of n points, SDL measures the complexity of a function which solves the task to ε tolerance.\nEstimating the SDL. Naively computing SDL would require unbounded data and the estimation of L(Aφ, i) for every i. However, if we assume that algorithms are monotonically improving, so that L(Aφ, i + 1) ≤ L(Aφ, i), SDL only depends on i up to the first point where L(Aφ, n) ≤ ε. Approximating this integral can be done efficiently by taking a log-uniform partition of the dataset sizes and computing the Riemann sum as in Voita & Titov (2020). Crucially, if the tolerance ε is set too low or the maximum amount of available data is insufficient, an implementation is able to report that the given complexity estimate is only a lower bound. In Appendix A we provide a detailed algorithm for estimating SDL, along with a theorem proving its data requirements.\n4.2 ε SAMPLE COMPLEXITY (εSC)\nIn addition to surplus description length, we introduce a second, conceptually simpler measure of representation quality: ε sample complexity. Definition 3 (Sample complexity of an ε-loss predictor). Given random variables X, Y ∼ D, a representation function φ, a learning algorithm A, and a loss tolerance ε ≥ H(Y | φ(X)), define\nmεSC(φ, D, A, ε) = min{n ∈ N : L(Aφ, n) ≤ ε}. (12)\nSample complexity measures the complexity of learning an ε-loss predictor by the number of samples it takes to find it. In our framework, sample complexity corresponds to taking a horizontal slice of the loss-data curve at y = ε, analogous to VA. VA makes a statement about the data (by setting n) and reports the accuracy of some function given that data. In contrast, sample complexity specifies the desired function and determines its complexity by how many samples are needed to learn it.\nEstimating the εSC. Given the assumption that algorithms are monotonically improving, so that L(Aφ, n + 1) ≤ L(Aφ, n), εSC can be estimated efficiently. With n finite samples in the dataset, an algorithm may estimate εSC by splitting the data into k uniform-sized bins and estimating L(Aφ, in/k) for i ∈ {1 . . . k}. By recursively performing this search on the interval which contains the transition from L > ε to L < ε, we can rapidly reach a precise estimate or report that mεSC(φ, D, A, ε) > n. A more detailed examination of the algorithmic considerations of estimating εSC is in Appendix B.\nUsing objectives other than negative log-likelihood. Our exposition of εSC uses negative log-likelihood for consistency with other methods, such as MDL, which require it. However, it is straightforward to extend εSC to work with whatever objective function is desired, under the assumption that said objective is monotone with increasing data when using algorithm A.
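Both measures reduce to simple post-processing of an estimated loss-data curve. A minimal sketch under the monotonicity assumption above, where `curve` is a list of (n, loss) pairs sorted by n:

def surplus_description_length(curve, eps):
    # Left-Riemann approximation of sum_n [L(A_phi, n) - eps]_+ over the
    # measured dataset sizes.
    sdl = 0.0
    for (n0, loss0), (n1, _) in zip(curve, curve[1:]):
        sdl += max(loss0 - eps, 0.0) * (n1 - n0)
    tight = curve[-1][1] <= eps   # otherwise the estimate is only a lower bound
    return sdl, tight

def eps_sample_complexity(curve, eps):
    # Smallest measured n with loss at most eps; None flags insufficient data.
    for n, loss in curve:
        if loss <= eps:
            return n
    return None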
4.3 SETTING ε\nA value for the threshold ε corresponds to the set of ε-loss predictors that a representation should make easy to learn. Choices of ε ≥ H(Y | X) represent attainable functions, while selecting ε < H(Y | X) leads to unbounded SDL and εSC for any choice of the algorithm A. For evaluating representation learning methods in the research community, we recommend using SDL and establishing benchmarks which specify (1) a downstream task, in the form of a dataset; (2) a criterion for success, in the form of a setting of ε; (3) a standard probing algorithm A. The setting of ε can be done by training a large model on the raw representation of the full dataset and using its validation loss as ε when evaluating other representations. This guarantees that ε ≥ H(Y | X) and the task is feasible with a good representation; in turn, this ensures that SDL is bounded.\nIn practical applications, ε should be a part of the design specification for a system. As an example, a practitioner might know that an object detection system with 80% per-frame accuracy is sufficient and labels are expensive. For this task, the best representation would be one which enables the most sample-efficient learning of a predictor with error ε = 0.2 using a 0-1 loss." }, { "heading": "5 EXPERIMENTS", "text": "We empirically show the behavior of VA, MDL, SDL, and εSC with two sets of experiments on real data. For the first, shown in Figure 2, we evaluate three representations on MNIST classification: (1) the last hidden layer of a small convolutional network pretrained on CIFAR-10; (2) raw pixels; and (3) a variational autoencoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) trained on MNIST. For the second experiment, shown in Figure 3, we compare the representations given by different layers of a pretrained ELMo model (Peters et al., 2018) using the part-of-speech task introduced by Hewitt & Liang (2019) and implemented by Voita & Titov (2020), with the same probe architecture and other hyperparameters as those works. Note that in each experiment we omit MI since, for any finite amount of data, the MI measure is the same as the validation loss. Details of the experiments, including representation training, probe architectures, and hyperparameters, are available in Appendix C.\nThese experiments demonstrate that the issue of sensitivity to evaluation dataset size in fact occurs in practice, both on small problems (Figure 2) and at scale (Figure 3): VA and MDL both choose different representations when given evaluation sets of different sizes. Because these measures are a function of the dataset size, making a decision about which representation to use with a small evaluation dataset would be premature. By contrast, SDL and εSC are functions only of the data distribution, not a finite sample. Once they measure the complexity of learning an ε-loss function, that measure is invariant to the size of the evaluation dataset. Crucially, since these measures contain a notion of success in solving a task, they are able to avoid the issue of premature evaluation and notify the user if there is insufficient data to evaluate and return a lower bound instead." }, { "heading": "6 RELATED WORK", "text": "Zhang & Bowman (2018) and Hewitt & Liang (2019) propose random baselines for linguistic tasks to provide context for how much linguistic structure is readily accessible in representations. To show separation between the validation accuracy achieved by these random baselines and representations pretrained on genuine linguistic labels, they have to limit the amount of training data or restrict the capacity of probes. 
As an alternative, Voita & Titov (2020) propose using the MDL framework, which measures the description length of the labels given the observations, to demonstrate the separation between pretrained representations and random baselines. An earlier work by Yogatama et al. (2019) also uses prequential codes to evaluate representations for linguistic tasks. Foundational work by Blier & Ollivier (2018) introduces prequential codes as a measure of the complexity of a deep learning model. Talmor et al. (2019) look at the loss-data curve (called the “learning curve” in their work) and use a weighted average of the validation loss at various training set sizes to evaluate representations." }, { "heading": "7 DISCUSSION", "text": "In this work we have introduced the loss-data framework for comparing representation evaluation measures and used it to diagnose the issue of sensitivity to evaluation dataset size in the validation accuracy and minimum description length measures. We proposed two measures, surplus description length and ε sample complexity, which eliminate this issue by measuring the complexity of learning a predictor which solves the task of interest to ε tolerance. Empirically, we showed that sensitivity to evaluation dataset size occurs in practice for VA and MDL, while SDL and εSC are robust to the amount of available data and are able to report when it is insufficient to make a judgment.\nEach of these measures depends on a choice of algorithm A, including hyperparameters such as probe architecture, which could make the evaluation procedure less robust. To alleviate this, future work might consider a set of algorithms A = {A_i}_{i=1}^K and a method of combining them, such as the model-switching technique of Blier & Ollivier (2018); Erven et al. (2012) or a Bayesian prior.\nFinally, while existing measures such as VA, MI, and MDL do not measure our notion of the best representation for a task, under other settings they may be the correct choice. For example, if only a fixed set of data will ever be available, selecting representations using VA might be a reasonable choice; and if unbounded data is available for free, perhaps MI is the most appropriate measure. However, in many cases the robustness and interpretability offered by SDL and εSC make them a practical choice for practitioners and representation researchers alike." }, { "heading": "APPENDIX A ALGORITHMIC DETAILS FOR ESTIMATING SURPLUS DESCRIPTION LENGTH", "text": "Recall that the SDL is defined as\nmSDL(φ, D, A, ε) = Σ_{n=1}^∞ [L(Aφ, n) − ε]_+ (13)\nFor simplicity, we assume that L is bounded in [0, 1]. Note that this can be achieved by truncating the cross-entropy loss.\nAlgorithm 1: Estimate surplus error\nInput: tolerance ε, max iterations M, number of datasets K, representation φ, data distribution D, algorithm A\nOutput: Estimate m̂ of m(φ, D, ε, A) and indicator I of whether this estimate is tight or a lower bound\nSample K datasets D_M^k ∼ D of size M + 1\nfor n = 1 to M do\n  For each k ∈ [K], run A on D_M^k[1 : n] to produce a predictor p̂_n^k\n  Take K test samples (x_k, y_k) = D_M^k[M + 1]\n  Evaluate L̂_n = (1/K) Σ_{k=1}^K ℓ(p̂_n^k, x_k, y_k)\nSet m̂ = Σ_{n=1}^M [L̂_n − ε]_+\nif L̂_M ≤ ε/2 then Set I = tight else Set I = lower bound\nreturn m̂, I\nIn our experiments we replace D_M^k[1 : n] of Algorithm 1 with sampled subsets of size n from a single evaluation dataset. Additionally, we use between 10 and 20 values of n instead of evaluating L(Aφ, n) at every integer between 1 and M. This strategy, also used by Blier & Ollivier (2018) and Voita & Titov (2020), corresponds to the description length under a code which updates only periodically during transmission of the data instead of after every single point.
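A minimal Python rendering of Algorithm 1 follows; `sample_dataset`, `A`, `phi`, and `loss` are placeholders for a concrete task rather than a fixed API, and `phi` is assumed to be vectorized:

import numpy as np

def estimate_surplus_error(sample_dataset, A, phi, loss, eps, M, K):
    # Monte-Carlo version of Algorithm 1: K datasets of size M + 1, with the
    # last point of each dataset held out for evaluation.
    L_hat = np.zeros(M + 1)
    for k in range(K):
        X, Y = sample_dataset(M + 1)
        for n in range(1, M + 1):
            p_hat = A(phi(X[:n]), Y[:n])              # train on first n points
            L_hat[n] += loss(p_hat, phi(X[M]), Y[M]) / K
    m_hat = float(np.maximum(L_hat[1:] - eps, 0.0).sum())
    tight = L_hat[M] <= eps / 2       # otherwise m_hat is only a lower bound
    return m_hat, tight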
Theorem 4. Let the loss function L be bounded in [0, 1] and assume that it is decreasing in n. With (M + 1)K datapoints, if the sample complexity is less than M, the above algorithm returns an estimate m̂ such that with probability at least 1 − δ,\n|m̂ − m(φ, D, ε, A)| ≤ M √(log(2M/δ) / (2K)). (14)\nIf K ≥ log(1/δ)/(2ε²) and the algorithm returns tight, then with probability at least 1 − δ the sample complexity is less than M and the above bound holds.\nProof. First we apply a Hoeffding bound to show that each L̂_n is estimated well. For any n, we have\nP(|L̂_n − L(Aφ, n)| > √(log(2M/δ) / (2K))) ≤ 2 exp(−2K · log(2M/δ) / (2K)) = 2 · δ/(2M) = δ/M, (15)\nsince each ℓ(p̂_n^k, x_k, y_k) is an independent random variable, bounded in [0, 1], with expectation L(Aφ, n).\nNow, when the sample complexity is less than M, we use a union bound to translate this to a high-probability bound on the error of m̂, so that with probability at least 1 − δ:\n|m̂ − m(φ, D, ε, A)| = |Σ_{n=1}^M [L̂_n − ε]_+ − [L(Aφ, n) − ε]_+| (16)\n≤ Σ_{n=1}^M |[L̂_n − ε]_+ − [L(Aφ, n) − ε]_+| (17)\n≤ Σ_{n=1}^M |L̂_n − L(Aφ, n)| (18)\n≤ M √(log(2M/δ) / (2K)). (19)\nThis gives us the first part of the claim.\nWe want to know that when the algorithm returns tight, the estimate can be trusted (i.e. that we set M large enough). Under the assumption of large enough K, and by an application of Hoeffding’s inequality, we have that\nP(L(Aφ, M) − L̂_M > ε/2) ≤ exp(−2Kε²) ≤ exp(−2 · (log(1/δ)/(2ε²)) · ε²) = δ. (20)\nIf L̂_M ≤ ε/2, this means that L(Aφ, M) ≤ ε with probability at least 1 − δ. By the assumption of decreasing loss, this means the sample complexity is less than M, so the bound on the error of m̂ holds." }, { "heading": "APPENDIX B ALGORITHMIC DETAILS FOR ESTIMATING SAMPLE COMPLEXITY", "text": "Recall that ε sample complexity (εSC) is defined as\nmεSC(φ, D, A, ε) = min{n ∈ N : L(Aφ, n) ≤ ε}. (21)\nWe estimate mεSC via recursive grid search. To be more precise, we first define a search interval [1, N], where N is a large enough number such that L(Aφ, N) ≪ ε. Then, we partition the search interval into 10 sub-intervals and estimate the risk of the hypothesis learned from D_n ∼ D^n with high confidence for each sub-interval. We then find the leftmost sub-interval that potentially contains mεSC and proceed recursively. This procedure is formalized in Algorithm 2 and its guarantee is given by Theorem 5.\nAlgorithm 2: Estimate sample complexity via recursive grid search\nInput: Search upper limit N, tolerance ε, confidence parameter δ, data distribution D, and learning algorithm A.\nOutput: Estimate m̂ such that mεSC(φ, D, A, ε) ≤ m̂ with probability 1 − δ.\nLet S = 2 log(20k/δ)/ε², and let [l, u] be the search interval initialized at l = 1, u = N.\nfor r = 1 to k do\n  Partition [l, u] into 10 equispaced bins and let ∆ be the length of each bin.\n  for j = 1 to 10 do\n    Set n = l + j∆.\n    Compute L̂_n = (1/S) Σ_{i=1}^S ℓ(A(D_i^n), x_i, y_i) for S independent draws of D_i^n ∼ D^n and test samples (x_i, y_i) ∼ D.\n    if L̂_n ≤ ε/2 then Set u = n and l = n − ∆; break\nreturn m̂ = u, which satisfies mεSC(φ, D, A, ε) ≤ m̂ with probability 1 − δ, where the randomness is over independent draws of D^n and test samples (x, y).\nTheorem 5. Let the loss function L be bounded in [0, 1] and assume that it is decreasing in n. Then, Algorithm 2 returns an estimate m̂ that satisfies mεSC(φ, D, A, ε) ≤ m̂ with probability at least 1 − δ.\nProof. By Hoeffding’s inequality, the probability that |L̂_n − L(Aφ, n)| ≥ ε/2, where L̂_n is computed with S = 2 log(20k/δ)/ε² independent draws of D_n ∼ D^n and (x, y) ∼ D, is less than δ/(10k). The algorithm terminates after evaluating L̂_n on at most 10k different n’s. By a union bound, the probability that |L̂_n − L(Aφ, n)| ≤ ε/2 for all n used by the algorithm is at least 1 − δ. Hence, L̂_n ≤ ε/2 implies L(Aφ, n) ≤ ε with probability at least 1 − δ.
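Algorithm 2 likewise fits in a few lines; the sketch below assumes a helper L_hat(n) that already averages S independent train/evaluate draws, as in the proof:

def estimate_eps_sc(L_hat, eps, N, k=3, bins=10):
    # Recursive grid search of Algorithm 2. L_hat(n) must estimate
    # L(A_phi, n) to within eps/2 with high confidence.
    lo, hi = 1, N
    for _ in range(k):
        step = max((hi - lo) // bins, 1)
        for n in range(lo + step, hi + 1, step):
            if L_hat(n) <= eps / 2:       # threshold crossed inside this bin
                lo, hi = n - step, n
                break
        else:
            return None    # never crossed: more than N samples are needed
    return hi              # upper bound on the eps sample complexity w.h.p.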
" }, { "heading": "APPENDIX C EXPERIMENTAL DETAILS", "text": "In each experiment we first estimate the loss-data curve using a fixed number of dataset sizes n and multiple random seeds, then compute each measure from that curve. Reported values of SDL correspond to the estimated area between the loss-data curve and the line y = ε using Riemann sums, with the values taken from the left edge of the interval. This is the same as the chunking procedure of Voita & Titov (2020) and is equivalent to the code length of transmitting each chunk of data using a fixed model and switching models between intervals. Reported values of εSC correspond to the first measured n at which the loss is less than ε.\nAll of the experiments were performed on a single server with 4 NVIDIA Titan X GPUs, and on this hardware no experiment took longer than an hour. All of the code for our experiments, as well as that used to generate our plots and tables, is included in the supplement.\nC.1 MNIST EXPERIMENTS\nFor our experiments on MNIST, we implement a highly performant vectorized library in JAX to construct loss-data curves. With this implementation it takes about one minute to estimate the loss-data curve with one sample at each of 20 settings of n. We approximate the loss-data curves at 20 settings of n log-uniformly spaced on the interval [10, 50000] and evaluate loss on the test set to approximate the population loss. At each dataset size n we perform the same number of updates to the model; we experimented with early stopping for smaller n but found that it made no difference on this dataset. In order to obtain lower-variance estimates of the expected risk at each n, we run 8 random seeds for each representation at each dataset size, where each random seed corresponds to a random initialization of the probe network and a random subsample of the evaluation dataset.\nProbes consist of two-hidden-layer MLPs with hidden dimension 512 and ReLU activations. All probes and representations are trained with the Adam optimizer (Kingma & Ba, 2015) with learning rate 10⁻⁴.\nEach representation is normalized to have zero mean and unit variance before probing to ensure that differences in scaling and centering do not disrupt learning. The representations of the data we evaluate are implemented as follows.\nRaw pixels. The raw MNIST pixels are provided by the Pytorch datasets library (Paszke et al., 2019). It has dimension 28 × 28 = 784.\nCIFAR. The CIFAR representation is given by the last hidden layer of a convolutional neural network trained on the CIFAR-10 dataset. This representation has dimension 784 to match the size of the raw pixels. The network architecture is as follows:\nnn.Sequential(\n  nn.Conv2d(1, 32, 3, 1),\n  nn.ReLU(),\n  nn.MaxPool2d(2),\n  nn.Conv2d(32, 64, 3, 1),\n  nn.ReLU(),\n  nn.MaxPool2d(2),
(2014)) representation is given by a variational autoencoder trained to generate the MNIST digits. This VAE's latent variable has dimension 8. We use the mean output of the encoder as the representation of the data. The network architecture is as follows:

self.encoder_layers = nn.Sequential(
    nn.Linear(784, 400),
    nn.ReLU(),
    nn.Linear(400, 400),
    nn.ReLU(),
    nn.Linear(400, 400),
    nn.ReLU(),
)
self.mean = nn.Linear(400, 8)
self.variance = nn.Linear(400, 8)

self.decoder_layers = nn.Sequential(
    nn.Linear(8, 400),
    nn.ReLU(),
    nn.Linear(400, 400),
    nn.ReLU(),
    nn.Linear(400, 784),
)

C.2 PART OF SPEECH EXPERIMENTS

We follow the methodology and use the official code2 of Voita & Titov (2020) for our part of speech experiments using ELMo (Peters et al., 2018) pretrained representations. In order to obtain lower-variance estimates of the expected risk at each n, we run 4 random seeds for each representation at each dataset size, where each random seed corresponds to a random initialization of the probe network and a random subsample of the evaluation dataset. We approximate the loss-data curves at 10 settings of n log-uniformly spaced on the range of the available data, n ∈ [10, 10^6]. To more precisely estimate εSC, we perform one recursive grid search step: we space 10 settings over the range which in the first round saw L(Aφ, n) transition from above to below ε. Probes consist of the MLP-2 model of Hewitt & Liang (2019); Voita & Titov (2020) and all training parameters are the same as in those works.

2https://github.com/lena-voita/description-length-probing" } ]
2020
null
SP:193b21c862d83fd412bd5a07f49ca62e7285f62d
[ "The paper presents a method that attacks existing out-of-distribution (OOD) detection methods. Most of the existing OOD detection methods perform detection using a latent representation. The main motivation of the paper is that the latent representation is much smaller than the input images, which results in mapping both OOD and in-distribution images to the same place in the latent space and diminishes OOD detection performance. With this motivation, the proposed method perturbs input images to obtain an image whose latent representation is similar to the latent representation of an in-distribution image. Since such perturbations can be obtained for any OOD image, existing OOD detection algorithms fail to distinguish such OOD samples. The paper contains experiments on multiple datasets to demonstrate that the proposed method obtains a latent representation similar to the representation of an in-distribution image." ]
Deep neural networks (DNNs), especially convolutional neural networks, have achieved superior performance on image classification tasks. However, such performance is only guaranteed if the input to a trained model is similar to the training samples, i.e., the input follows the probability distribution of the training set. Out-Of-Distribution (OOD) samples do not follow the distribution of the training set, and therefore the predicted class labels on OOD samples become meaningless. Classification-based methods have been proposed for OOD detection; however, in this study we show that this type of method has no theoretical guarantee and is practically breakable by our OOD Attack algorithm because of dimensionality reduction in the DNN models. We also show that Glow likelihood-based OOD detection is breakable as well.
[]
[ { "authors": [ "Alexander A Alemi", "Ian Fischer", "Joshua V Dillon" ], "title": "Uncertainty in the variational information bottleneck", "venue": "UAI 2018 - Uncertainty in Deep Learning Workshop,", "year": 2018 }, { "authors": [ "Alvaro Arcos-Garcia", "Juan A Alvarez-Garcia", "Luis M Soria-Morillo" ], "title": "Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods", "venue": "Neural Networks,", "year": 2018 }, { "authors": [ "Raghavendra Chalapathy", "Sanjay Chawla" ], "title": "Deep learning for anomaly detection: A survey", "venue": "arXiv preprint arXiv:1901.03407,", "year": 2019 }, { "authors": [ "Hyunsun Choi", "Eric Jang", "Alexander A Alemi" ], "title": "Waic, but why? generative ensembles for robust anomaly detection", "venue": "arXiv preprint arXiv:1810.01392,", "year": 2018 }, { "authors": [ "Joseph Paul Cohen", "Paul Bertin", "Vincent Frappier" ], "title": "Chester: A web delivered locally computed chest x-ray disease prediction system", "venue": null, "year": 1901 }, { "authors": [ "Gavin Weiguang Ding", "Yash Sharma", "Kry Yik Chau Lui", "Ruitong Huang" ], "title": "Max-margin adversarial (mma) training: Direct input space margin maximization through adversarial training", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yilun Du", "Igor Mordatch" ], "title": "Implicit generation and generalization in energy-based models", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Kevin Eykholt", "Ivan Evtimov", "Earlence Fernandes", "Bo Li", "Amir Rahmati", "Chaowei Xiao", "Atul Prakash", "Tadayoshi Kohno", "Dawn Song" ], "title": "Robust physical-world attacks on deep learning visual classification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Patrick McDaniel", "Nicolas Papernot" ], "title": "Making machine learning robust against adversarial inputs", "venue": "Communications of the ACM,", "year": 2018 }, { "authors": [ "Will Grathwohl", "Kuan-Chieh Wang", "Jörn-Henrik Jacobsen", "David Duvenaud", "Mohammad Norouzi", "Kevin Swersky" ], "title": "Your classifier is secretly an energy based model and you should treat it like", "venue": "one. International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Daniel S Kermany", "Michael Goldbaum", "Wenjia Cai", "Carolina CS Valentim", "Huiying Liang", "Sally L Baxter", "Alex McKeown", "Ge Yang", "Xiaokang Wu", "Fangbing Yan" ], "title": "Identifying medical diagnoses and treatable diseases", "venue": "by image-based deep learning. 
Cell,", "year": 2018 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Kimin Lee", "Honglak Lee", "Kibok Lee", "Jinwoo Shin" ], "title": "Training confidence-calibrated classifiers for detecting out-of-distribution samples", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Honglak Lee", "Jinwoo Shin" ], "title": "A simple unified framework for detecting out-of-distribution samples and adversarial attacks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Shiyu Liang", "Yixuan Li", "Rayadurgam Srikant" ], "title": "Enhancing the reliability of out-of-distribution image detection in neural networks", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Alexander Meinke", "Matthias Hein" ], "title": "Towards neural networks that provably know when they don’t know", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Eric Nalisnick", "Akihiro Matsukawa", "Yee Whye Teh", "Dilan Gorur", "Balaji Lakshminarayanan" ], "title": "Do deep generative models know what they don’t know", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Anh Nguyen", "Jason Yosinski", "Jeff Clune" ], "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Jie Ren", "Peter J Liu", "Emily Fertig", "Jasper Snoek", "Ryan Poplin", "Mark Depristo", "Joshua Dillon", "Balaji Lakshminarayanan" ], "title": "Likelihood ratios for out-of-distribution detection", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Chandramouli Shama Sastry", "Sageev Oore" ], "title": "Detecting out-of-distribution examples with indistribution examples and gram matrices", "venue": "NeurIPS 2019 Workshop on Safety and Robustness in Decision Making,", "year": 2019 }, { "authors": [ "Joan Serrà", "David Álvarez", "Vicenç Gómez", "Olga Slizovskaia", "José F Núñez", "Jordi Luque" ], "title": "Input complexity and out-of-distribution detection with likelihood-based generative models", "venue": "International Conference on Learning Representations", "year": 2019 }, { "authors": [ "Feng Shi", "Jun Wang", "Jun Shi", "Ziyan Wu", "Qian Wang", "Zhenyu Tang", "Kelei He", "Yinghuan Shi", "Dinggang Shen" ], "title": "Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for 
covid-19", "venue": "IEEE reviews in biomedical engineering,", "year": 2020 }, { "authors": [ "Eduardo Soares", "Plamen Angelov", "Sarah Biaso", "Michele Higa Froes", "Daniel Kanda Abe" ], "title": "Sarscov-2 ct-scan dataset: A large dataset of real patients ct scans for sars-cov-2 identification", "venue": "medRxiv,", "year": 2020 } ]
[ { "heading": null, "text": "Deep neural networks (DNNs), especially convolutional neural networks, have achieved superior performance on image classification tasks. However, such performance is only guaranteed if the input to a trained model is similar to the training samples, i.e., the input follows the probability distribution of the training set. Out-Of-Distribution (OOD) samples do not follow the distribution of the training set, and therefore the predicted class labels on OOD samples become meaningless. Classification-based methods have been proposed for OOD detection; however, in this study we show that this type of method has no theoretical guarantee and is practically breakable by our OOD Attack algorithm because of dimensionality reduction in the DNN models. We also show that Glow likelihood-based OOD detection is breakable as well." }, { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs), especially convolutional neural networks (CNNs), have become the method of choice for image classification. Under the i.i.d. (independent and identically distributed) assumption, a high-performance DNN model can correctly classify an input sample as long as the sample is "generated" from the distribution of the training data. If an input sample is not from this distribution, which is called Out-Of-Distribution (OOD), then the predicted class label from the model is meaningless. It would be great if the model had the ability to distinguish OOD samples from in-distribution samples. OOD detection is needed especially when applying DNN models in life-critical applications, e.g., vision-based self-driving or image-based medical diagnosis.

It was shown by Nguyen et al. (2015) that DNN classifiers can be easily fooled by OOD data, and an evolutionary algorithm was used to generate OOD samples such that DNN classifiers had high output confidence on these samples. Since then, many methods have been proposed for OOD detection using classifiers or encoders (Hendrycks & Gimpel, 2017; Hendrycks et al., 2019; Liang et al., 2018; Lee et al., 2018b;a; Alemi et al., 2018). For instance, Hendrycks et al. (Hendrycks & Gimpel, 2017) show that a classifier's prediction probabilities on OOD examples tend to be more uniform, and therefore the maximum predicted class probability from the softmax layer was used for OOD detection. Regardless of the details of these methods, every method needs a classifier or an encoder, which takes an image x as input and compresses it into a vector z in the latent space; after some further transform, z is converted to an OOD detection score τ. This computing process can be expressed as: z = f(x) and τ = d(z). To perform OOD detection, a detection threshold needs to be specified, and then x is OOD if τ is smaller/larger than the threshold. For the evaluation of OOD detection methods (Hendrycks & Gimpel, 2017), an OOD detector is usually trained on a dataset (e.g. Fashion-MNIST as in-distribution) and then tested on another dataset (e.g. MNIST as OOD).

As will be shown in this study, the above-mentioned classification-based OOD detection methods are practically breakable. As an example (more details in Section 3), we used the Resnet-18 model (He et al., 2016) pre-trained on the ImageNet dataset. Let xin denote a 224×224×3 image (in-distribution sample) in ImageNet and xout denote an OOD sample, which could be any kind of image (even random noise) not belonging to any category in ImageNet.
Let z denote the 512-dimensional feature vector in Resnet-18, which is the input to the last fully-connected linear layer before the softmax operation. Thus, we have zin = f(xin) and zout = f(xout). In Fig. 1, xin is the image of Santa Claus, and xout could be a chest x-ray image or a random-noise image, and "surprisingly", zout ≅ zin, which renders the OOD detection score useless: d(zout) ≅ d(zin). In Section 2, we will introduce an algorithm to generate OOD samples such that zout ≅ zin. In Section 3, we will show the evaluation results on publicly available datasets, including an ImageNet subset, GTSRB, OCT, and COVID-19 CT. Since some generative models (e.g. Glow (Kingma & Dhariwal, 2018)) can approximate the distribution of training samples (i.e., p(xin)), likelihood-based generative models were utilized for OOD detection (Nalisnick et al., 2019). It has been shown that likelihoods derived from generative models may not distinguish between OOD and training samples (Nalisnick et al., 2019; Ren et al., 2019; Choi et al., 2018), and a fix to the problem could be using the likelihood ratio instead of the raw likelihood score (Serrà et al., 2019). Although not the main focus of this study, we will show that the OOD sample's likelihood score from the Glow model (Kingma & Dhariwal, 2018; Serrà et al., 2019) can be arbitrarily manipulated by our algorithm (Section 2.1) such that the output probability p(xin) ≅ p(xout), which further diminishes the effectiveness of any Glow likelihood-based detection methods." }, { "heading": "2 METHODOLOGY", "text": "" }, { "heading": "2.1 OOD ATTACK ON DNN ENCODER", "text": "We introduce an algorithm to perform an OOD attack on a DNN encoder z = f(x), which takes an image x as input and transforms it into a feature vector z in a latent space. Preprocessing on x can be considered as the very first layer inside of the model f(x). The algorithm needs only a weak assumption: that f(x) is sub-differentiable. A CNN classifier can be considered a composition of a feature encoder z = f(x) and a feature classifier p = g(z), where p is the softmax probability distribution over multiple classes.

Let's consider an in-distribution sample xin and an OOD sample x′out, and apply the model: zin = f(xin) and z′out = f(x′out). Usually, z′out ≠ zin. However, if we add a relatively small amount of noise δ to x′out, then it could be possible that f(x′out + δ) = zin while x′out + δ is still OOD. This idea is realized in Algorithm 1, OOD Attack on DNN Encoder.

Algorithm 1 OOD Attack on DNN Encoder
Input: An in-distribution sample xin in a dataset. An OOD sample x′out not similar to any sample in the dataset. f, the neural network feature encoder. ε, the maximum perturbation measured by Lp norm. N, the total number of iterations. α, the learning rate of the optimizer.
Output: an OOD sample xout s.t. f(xout) ≅ f(xin)
Process:
1: Generate a random noise ξ with ||ξ|| ≤ ε
2: Initialize xout = x′out + ξ
3: Set up the loss J(xout) = ||f(xout) − f(xin)||2 (L2 norm)
4: for n from 1 to N do
5:     xout ← clip(xout − α · h(J′(xout))), where J′(x) = ∂J/∂x
6: end for
Note: The clip operation ensures that ||xout − x′out||p ≤ ε. The clip operation also ensures that pixel values stay within the feasible range (e.g. 0 to 1). If the L-inf norm is used, h(J′) is the sign function; and if the L2 norm is used, h(J′) is a function that normalizes J′ by its L2 norm. The Adamax optimizer is used in the implementation.

The clip operation in Algorithm 1 is very important: it limits the difference between xout and x′out so that xout remains OOD. The algorithm is inspired by the method called projected gradient descent (PGD) (Kurakin et al., 2016; Madry et al., 2018), which is used for adversarial attacks. We note that the term "adversarial attack" usually refers to adding a small perturbation to a clean sample x in a dataset such that a classifier will incorrectly classify the noisy sample while being able to correctly classify the original clean sample x. Thus, OOD attack and adversarial attack are completely different things.

In practice, Algorithm 1 can be repeated many times to find the best solution. Random initialization is performed in lines 1 and 2 of the algorithm. By adding the initial random noise ξ to x′out, the algorithm has a better chance to avoid local minima caused by a bad initialization." }, { "heading": "2.2 DIMENSIONALITY REDUCTION AND OOD ATTACK", "text": "Recall that in a classification-based OOD detection approach, a DNN encoder transforms the input to a feature vector, i.e., z = f(x), and an OOD detection score is computed by another transform on z, i.e., τ = d(z). If zout ≅ zin, then d(zout) ≅ d(zin), which breaks the OOD detector regardless of the transform d. Usually, a DNN encoder performs dimensionality reduction: the dimension of z is significantly smaller than the dimension of x. In the example shown in Fig. 1, z is a 512-dimensional feature vector (dim(z) = 512) in Resnet-18, and the dimension of x is 150528 (224 × 224 × 3).

Dimensionality reduction in an encoder provides the opportunity for OOD and in-distribution samples to be mapped to the same locations in the latent space. This is simply because the vectors in a lower-dimensional space cannot represent all of the vectors/objects in a higher-dimensional space, which is the Pigeonhole Principle. Let's do an analysis on the Resnet-18 example in Fig. 1. A pixel of the color image x has 8 bits. In the 150528-dimensional discrete input space, there are 256^(224×224×3) different images/vectors, which defines the size of the input space. The float32 data type is usually used in computation, and a float32 variable can represent roughly 2^32 unique real numbers. Thus, in the 512-dimensional latent space, there are 2^(32×512) unique vectors/objects, which defines the size of the latent space. The ratio is 2^(32×512) / 256^(224×224×3) ≪ 1, and it shows that the latent space is significantly smaller than the input space. Thus, for some sample x in the dataset, we can find another sample x′ such that f(x′) = f(x) as long as dim(z) < dim(x).

A question arises: will the x′ be in-distribution or OOD? To answer this question, let's partition the discrete input space Ω into two disjoint regions (Ω = Ωin ∪ Ωout), Ωin of in-distribution samples and Ωout of OOD samples. |Ω| denotes the size of Ω. Usually, the training set is only a subset of Ωin, and the size of Ωout is significantly larger than the size of Ωin. For example, if Ωin is ImageNet, then Ωout contains medical images, noise images, and other weird images. If Ωin contains human face images, then Ωout contains non-face images, and then |Ωin| ≪ |Ωout|. The latent space (z-space) is denoted by F and partitioned into two subspaces: F = Fin ∪ Fout. An encoder is applied such that Ωin → Fin and Ωout → Fout.
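As a quick sanity check of the counting argument above, one can compare the log2 sizes of the two spaces for the Resnet-18 example (a back-of-the-envelope computation of ours, not code from the paper):

    # log2 of the input-space size for 8-bit 224x224x3 images,
    # vs. log2 of the latent-space size for 512 float32 values.
    input_log2 = 8 * 224 * 224 * 3   # = 1,204,224
    latent_log2 = 32 * 512           # = 16,384
    ratio = 2.0 ** (latent_log2 - input_log2)
    print(input_log2, latent_log2, ratio)   # ratio underflows to 0.0 in float64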
If there is overlap Fin ∩ Fout ≠ ∅, then the encoder is vulnerable to the OOD attack. Usually, the encoder is a part of a classifier trained to classify in-distribution samples into different classes, and therefore the encoder cannot guarantee that there is no overlap between Fin and Fout. What is the size of Fin ∩ Fout, or what is the probability P(|Fin ∩ Fout| ≥ a)? While it is hard to calculate it for an arbitrary encoder and dataset, we can do a worst-case-scenario analysis. Assuming that every OOD sample is i.i.d. mapped to the latent space with a uniform distribution over a number of |F| spots, then the probability of OOD samples covering the entire latent space is P(Fout = F) = |F|! × Stirling(|Ωout|, |F|) / |F|^|Ωout| → 1 as |F|/|Ωout| → 0, where Stirling is the Stirling number of the second kind. Noting that |F|/|Ωout| = 2^(32×512) / (256^(224×224×3) − 1.4×10^7) ≈ 0, with 1.4×10^7 being the number of samples in ImageNet, it could be true that almost (with probability close to 1) the entire latent space of Resnet-18 is covered by the z vectors of OOD samples.

Next, we discuss how to construct OOD samples to fool neural networks. First, let's take a look at a one-layer linear network, z = Wx, with the notation: an in-distribution input x ∈ R^M, latent code z ∈ R^K, and K ≪ M. W is a K × M matrix, and rank(W) ≤ K. The null space of W is Ωnull = {η : Wη = 0}. Now, let's take out the basis vectors of this space, η1, η2, ..., ηM−K, and compute x′ = Σi λi ηi + x, where λi is a non-zero scalar. Obviously, z′ = Wx′ = z. We can set the magnitude of the "noise" Σi λi ηi to be arbitrarily large such that x′ will look like garbage and become OOD, which is another explanation of the existence of OOD samples. Then, we can try to apply this attack method to a multi-layer neural network. If the neural network only uses ReLU activations, then the input-output relationship can be exactly expressed as a piecewise-linear mapping (Ding et al., 2020), and a similar approach can be applied layer by layer. If ReLU is not used, a new method is needed. We note that the filter bank of a convolution layer can be converted to a weight matrix. We have examined the state-of-the-art CNN models that are pre-trained on ImageNet and available in Pytorch, and dimensionality reduction is performed in most of the layers (except 1 or 2 layers near the input), i.e., |F| ≤ |Ωin| ≪ |Ωout|. Instead of constructing an OOD sample by adding perturbations to an in-distribution sample, in Algorithm 1 we construct an OOD sample paired with an in-distribution sample by starting from an initial sample that is OOD.

Could an encoder be made robust to the OOD attack by including OOD samples in the training set for supervised binary classification, in vs. out? Usually |Ωin| ≪ |Ωout|, and we would have to collect and label "enough" samples in Ωout, which is infeasible considering the large size of Ωout ≈ Ω. As a comparison, to enhance DNN classifier robustness against adversarial noises, it is very effective to include noisy samples in the training set, i.e., Ωin = Ωin clean ∪ Ωin noisy. It is known as adversarial training (Goodfellow et al., 2018) and is computationally feasible since |Ωin noisy| ≪ |Ωout|." }, { "heading": "2.3 PROBLEM OF GLOW LIKELIHOOD-BASED OOD DETECTION", "text": "Generative models have been developed to approximate the training data distribution.
Glow (Kingma & Dhariwal, 2018) is one of these models, and it has a very special property: it is bijective and the latent space dimension is the same as the input space dimension, i.e., no dimensionality reduction, which is the reason that we studied this model.

Several studies have found the problem of Glow-based OOD detection: likelihoods derived from Glow may not distinguish between OOD and training samples (Ren et al., 2019; Choi et al., 2018), and a possible fix to the issue could be using a likelihood ratio (Serrà et al., 2019). In this study, we further show that the negative log-likelihood (NLL) from the Glow model can be arbitrarily manipulated by our algorithm, in which f(x) denotes the NLL. The results on the CelebA face image dataset are in Section 3. We think the major reason causing Glow's vulnerability to the OOD attack is that we do not have enough training data in high-dimensional space. Glow is a mapping: xin → zin → p(zin) → p(xin), the probability of xin. For an OOD sample xout, the mapping is xout → zout → p(zout) → p(xout). Since the number of training samples is significantly smaller than the size of the space, there are a huge number of "holes" in the latent space (i.e., regions that no training samples are mapped to), and it is easy to put zout in one of these "holes" close to zin such that p(zout) ≅ p(zin)." }, { "heading": "2.4 RECONSTRUCTION-BASED OOD DETECTION", "text": "Auto-encoder style OOD detection has been developed for anomaly detection (Chalapathy & Chawla, 2019; Cohen et al., 2019) based on reconstruction error. The data flow of an auto-encoder is x → z → x̂, where x̂ is the reconstruction of x. The OOD detection score can be the difference between x and x̂, e.g., the Lp distance ||x − x̂||p or the Mahalanobis distance. This type of method has two known issues. The first issue is that an auto-encoder may reconstruct OOD samples well, i.e., xout ≈ x̂out. Thus, one needs to make sure it has large reconstruction errors on OOD samples, which can be done by limiting the capacity of the auto-encoder or saturating it with in-distribution samples.

The second issue is that pixel-to-pixel distance is not a good measurement of image dissimilarity, especially for medical images. For example, x could be a CT image of a heart and x̂ could be the image of the same heart that deforms a little bit, but the pixel-to-pixel distance between x and x̂ can be very large. Thus, a robust image similarity measurement is needed.

Interestingly, the proposed OOD attack algorithm has no effect on this type of method. Let's consider the data flow: xin → zin → x̂in and xout → zout → x̂out. If zout = zin, then x̂out = x̂in. Then, it is easy to find out that xout is OOD because ||xout − x̂out||p = ||xout − x̂in||p, which is very large. Ironically, in this case, the attack algorithm helps to identify the OOD sample. In future work, we will evaluate the effectiveness of combining the proposed algorithm and an auto-encoder for OOD detection." }, { "heading": "3 EXPERIMENTS", "text": "We applied the proposed algorithm to attack state-of-the-art DNN models on image datasets. For each in-distribution sample xin, an OOD sample xout is generated by the algorithm. To measure attack strength, the mean absolute percentage error is calculated as MAPE(zout) = mean(|zout − zin|) / max(|zin|). Here, zout = f(xout) and zin = f(xin). |zout − zin| is an error vector, and mean(|zout − zin|) is the average error. max(|zin|) is the maximum absolute value in the vector zin.
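Before turning to the individual datasets, here is a minimal PyTorch sketch of Algorithm 1 (L2 version) and of the MAPE computation. It is our reading of the algorithm box, using the plain normalized-gradient step rather than the Adamax optimizer mentioned in the note, and all function and variable names are ours:

    import torch

    def ood_attack(f, x_in, x_out_init, eps=10.0, n_iter=10000, alpha=None):
        alpha = eps / 100 if alpha is None else alpha
        with torch.no_grad():
            z_in = f(x_in)
            xi = torch.randn_like(x_out_init)                 # lines 1-2: random start in the eps-ball
            x = (x_out_init + eps * xi / xi.norm()).clamp(0, 1)
        for _ in range(n_iter):
            x.requires_grad_(True)
            loss = (f(x) - z_in).norm(p=2)                    # J(x) = ||f(x) - f(x_in)||_2
            grad, = torch.autograd.grad(loss, x)
            with torch.no_grad():
                x = x - alpha * grad / (grad.norm() + 1e-12)  # h(J') = J' / ||J'||_2 for L2
                delta = x - x_out_init                        # clip to the eps-ball, then to [0, 1]
                if delta.norm() > eps:
                    delta = delta * (eps / delta.norm())
                x = (x_out_init + delta).clamp(0, 1)
        return x.detach()

    def mape(z_out, z_in):
        return (z_out - z_in).abs().mean() / z_in.abs().max()

In practice this loop would be restarted several times with fresh random initializations, keeping the solution with the lowest final loss, as described in Section 2.1.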
We also applied the algorithm to attack the Glow model on the CelebA dataset. In all of the evaluations, the L2 norm was used in the proposed algorithm. Pytorch was used to implement the algorithm. Nvidia Titan V GPUs were used for model training and testing." }, { "heading": "3.1 EVALUATION ON A SUBSET OF IMAGENET", "text": "ILSVRC2012 ImageNet has over 1 million images in 1000 classes. Given the limited computing power, it is impractical to test the algorithm on the whole dataset. Instead, we used a subset of 1000 images in 200 categories. The size of each image is 224×224×3. Two CNN models pretrained on ImageNet were evaluated, which are Resnet-18 and Densenet-121, available in Pytorch.

The Resnet-18 latent space has 512 dimensions. Since ImageNet covers a variety of natural and artificial objects, we chose medical images and random-noise images to make sure that x′out is indeed OOD. Using each of the three initial OOD samples (a chest x-ray, a lung CT, and random noise as x′out), we generated 1000 OOD samples paired with the 1000 (in-distribution) samples in the dataset and calculated MAPE values. The three MAPE histograms are shown in Fig. 3. Most of the MAPE values are less than 0.1%.

We also evaluated another CNN, named Densenet-121, and obtained similar results. The latent space has 1024 dimensions. Again, using each of the three initial OOD samples, 1000 OOD samples are generated for the samples in the dataset, and then MAPE values are calculated. The three MAPE histograms are shown in Fig. 4. Most of the MAPE values are less than 0.1%, indicating a strong OOD attack.

From the results in Fig. 2 to Fig. 4, it can be seen that each of the two CNN models mapped significantly different OOD samples and in-distribution samples to almost the same locations in the latent space. Dimensionality reduction leads to the existence of such a mapping, and our algorithm can find such OOD samples. In other words, the mapping from the input space to the latent space is many-to-one, not bijective. Therefore, it is almost guaranteed that such OOD samples exist and that they can break any OOD detector d that computes a detection score d(z) only from the latent space (z-space). We tested a classical OOD detection method using the maximum of the softmax output as the detection score (Hendrycks & Gimpel, 2017). The results are shown in Table 1, and the AUROC scores are close to 0.5, showing that the method is unable to tell the difference between the 1000 OOD samples and 1000 in-distribution samples." }, { "heading": "3.2 EVALUATION ON CELEBA DATASET", "text": "We tested the algorithm and the Glow model (Kingma & Dhariwal, 2018) on the CelebA dataset (human face images). The size of each image is 64×64×3. After training, the model was able to generate realistic face images. The model also outputs the negative log-likelihood (NLL) of the input sample, i.e., NLL(x) = −log(p(x)). By setting f(x) = NLL(x), our algorithm can make f(xout) close to 0 or very large to match any f(xin), which renders the NLL score useless for OOD detection. To demonstrate the effectiveness of our algorithm, we randomly selected 160 (in-distribution) samples in the dataset. We used a color spiral image as the initial OOD sample x′out, with NLL(x′out) = 3.5268. The distributions of NLL(xin) from the 160 in-distribution samples and NLL(xout) from the 160 corresponding OOD samples, as well as OOD sample images, are shown in Fig. 5. The two distributions are almost identical. More examples of OOD samples are shown in Fig. 6. In each row of Fig.
6, although the images have different NLL scores, they look like each other.

We have done more evaluations of our algorithm and OOD detection methods; please see the appendices." }, { "heading": "4 DISCUSSION", "text": "We hypothesized that dimensionality reduction in an encoder provides the opportunity for OOD and in-distribution samples to be mapped to the same locations in the latent space. We applied the OOD Attack algorithm to DNN classifiers on various datasets (see Appendices A and B), and the results (i.e., low MAPE values) confirmed our hypothesis. The results imply that classifier/encoder-based OOD detection methods may be vulnerable to the OOD attack.

By using our OOD Attack algorithm, we evaluated nine OOD detection methods (see Appendices C to J). The AUROC scores of these methods are close to 0.5 in our experiments, which means these methods could not distinguish between the in-distribution samples (e.g. CIFAR10) and the OOD samples generated by our algorithms. Our algorithm was unable to break a recent method named Certified Certain Uncertainty (Meinke & Hein, 2020), because this method utilizes Gaussian mixture models (GMMs) in the input space (note: no dimensionality reduction in GMMs). However, it is well known that GMMs have convergence issues for high-dimensional data (e.g. medical images).

Compared to adversarial attacks and defenses, it is much more difficult to defend against OOD attacks. Adversarial attacks and OOD attacks are doing completely different things to neural networks, although the attack algorithms may use similar optimization techniques. For image classification applications, an adversarial attack will add a small amount of noise to the input (clean) image, and the resulting noisy image is still human-recognizable. Therefore, the magnitudes of adversarial noises are constrained. For example, a noisy image of a panda is still an image of the panda. By the judgment of humans, the noisy image and the clean image are images of the same object, and the two images should be classified into the same class. Compared to adversarial samples, OOD samples, which can be generated by our OOD Attack algorithm, have much more freedom (e.g. they can be random noises), as long as they do not look like in-distribution samples. Thus, OOD detection is very challenging.

We would like to point out that it is difficult to evaluate an OOD detector to "prove" that it can detect, say, 90% of the OOD samples by experimentally testing it on Ωout, because Ωout is too large to be tested on: |Ωin| ≪ |Ωout| ≈ |Ω|. For example, if Fashion-MNIST is used as in-distribution, then MNIST and Omniglot are usually used as OOD, which is the "standard" approach in the literature. Clearly, MNIST and Omniglot cannot cover Ωout, the space of OOD samples. If the image size is larger, then |Ωout| becomes much larger. Could we design an evaluation method (experimental or analytical) that does not rely on OOD samples?

Before the OOD detection issue is fully resolved, for life-critical applications, any machine learning system that uses DNN classifiers should not make decisions independently and can only serve as an assistant to humans. The OOD Attack algorithm and the experimental results can serve as a reference for the evaluation of new OOD detection methods.

We will release the code on GitHub when the paper is accepted. All figures are in high resolution; please zoom in." }, { "heading": "A APPENDIX", "text": "The parameters in each evaluation are listed.
1.
Parameters for evaluation on the ImageNet subset
Attacking resnet-18: ε = 5, N = 1e4, α = ε/100 with x-ray and CT; ε = 20, N = 1e4, α = ε/100 with random noise.
Attacking densenet-121: ε = 5, N = 1e4, α = ε/100 with x-ray and CT; ε = 30, N = 1e4, α = ε/100 with random noise.
We inspected the OOD images from random noise: they are not recognizable to human vision.
2. Parameters for evaluation on the OCT dataset
ε = 10, N = 1e4, α = ε/100 with the retinal fundus photography image; ε = 20, N = 1e4, α = ε/100 with random noise.
3. Parameters for evaluation on the COVID-19 CT dataset
ε = 20, N = 1e4, α = ε/100
4. Parameters for evaluation on the GTSRB dataset
ε = 10, N = 1e4, α = ε/100
5. Parameters for evaluation on the CelebA dataset
ε = 10, N = 1e4, α = ε/100" }, { "heading": "B APPENDIX", "text": "" }, { "heading": "B.1 EVALUATION ON OCT DATASET", "text": "We tested our algorithm and Resnet-18 on a retinal optical coherence tomography (OCT) dataset (Kermany et al., 2018), which has four classes. Each image is resized to 224×224. 1000 samples per class were randomly selected to obtain a training set of 4000 samples. The test set has 968 images. We modified Resnet-18 for this four-class classification task. The latent space has 512 dimensions. After training, the Resnet-18 model achieved a classification accuracy > 95% on the test set.

We used two reference images as the initial OOD sample x′out. The first reference image is a grayscale retinal image converted from an RGB color retinal fundus photography image. Compared to this retinal fundus photography image, the OCT images have unique patterns of horizontal "white bands". We selected this OOD image on purpose: there may be a chance that both types of images are needed for retinal diagnosis. The second reference image is generated from random noises. Examples are shown in Fig. 7, and the two MAPE histograms are shown in Fig. 8. The results confirm that the algorithm can generate OOD samples (968) which are mapped by the DNN model to the locations of the in-distribution samples (968) in the latent space, i.e., zout ≅ zin." }, { "heading": "B.2 EVALUATION ON COVID-19 CT DATASET", "text": "We also tested our algorithm and Resnet-18 on a public COVID-19 lung CT (2D) image dataset (Soares et al., 2020). It contains 1252 CT scans (2D images) that are positive for COVID-19 infection and 1230 CT scans (2D images) for patients not infected by COVID-19, 2482 CT scans in total. From the infected cases, we randomly selected 200 samples for testing, 30 for validation, and 1022 for training. From the uninfected cases, we randomly selected 200 for testing, 30 for validation, and 1000 for training. Each image is resized to 224×224.

We modified the last layer of Resnet-18 for this binary classification task, infected vs. uninfected. We also replaced batch normalization with instance normalization because it is known that batch normalization is not stable for small batch sizes (Wu & He, 2018). The latent space still has 512 dimensions. We set the batch size to 32 and the number of training epochs to 100, and used the AdamW optimizer with the default parameters. After training, the model achieved a classification accuracy > 95% on the test set.

We used two reference images as the initial OOD sample x′out: a chest x-ray image and a random-noise image. The two MAPE histograms are shown in Fig. 9; most of the MAPE values are less than 0.1%.
The results also confirm that the algorithm can generate OOD samples (400) which are mapped by the DNN model to the locations of the in-distribution samples (400) in the latent space, i.e., zout ≅ zin. Examples are shown in Fig. 10. As reported in previous studies (Shi et al., 2020), infected regions in the images have a unique pattern called ground-glass opacity. The CT images in the 1st and 3rd rows show COVID-19 infections with ground-glass opacity in the upper-left area. The CT image in the 5th row does not show any signs of infection. It can be seen that the random-noise images and the COVID-19 CT images have the same feature vectors in the latent space, which is astonishing." }, { "heading": "B.3 EVALUATION ON GTSRB TRAFFIC SIGN DATASET", "text": "We tested our algorithm and a state-of-the-art traffic sign classifier on the GTSRB dataset. The classifier is similar to the one in (Arcos-Garcia et al., 2018), which has a spatial-transformer network. The size of each image is 32×32×3. The latent space has 128 dimensions. After training, the classifier achieved over 99% accuracy on the test set. We used a random-noise image as the initial OOD sample x′out to generate 12630 OOD samples paired with the 12630 in-distribution samples in the test set. The MAPE histogram is shown in Fig. 11, in which most of the MAPE values are less than 0.1%. Examples are shown in Fig. 12.

It can be seen that the zout of random-noise images are almost the same as the zin of the stop sign, the speed limit sign, and the turning signs. Not only can the classifier not tell the difference between a real traffic sign and a generated noise image, but any detector based on zin for OOD detection will also fail. We note that the adversarial robustness of traffic sign classifiers has been studied (Eykholt et al., 2018), and after adding adversarial noises to the traffic sign images, the noisy images are still recognizable. OOD noises and adversarial noises are very different (discussed in Sections 2.1 and 2.2). Thus, it would be wise to disable any vision-based auto-pilot in your self-driving cars today until this issue is resolved." }, { "heading": "C APPENDIX", "text": "We applied our OOD Attack algorithm to test the OOD detection method named ODIN (Liang et al., 2018)." }, { "heading": "C.1 SUMMARY OF THE ODIN METHOD", "text": "The method does temperature scaling on the logits (input to softmax) by logits/T, which is Eq.(1) in the ODIN paper. The temperature T could be in the range of 1 to 1000. The ODIN method also does input preprocessing, which is Eq.(2) in the ODIN paper. For preprocessing, the perturbation magnitude (PM) could be in the range of 0 to 0.004. The OOD score is defined to be the maximum of the softmax outputs from a neural network, given the preprocessed input. An OOD sample is expected to have a low OOD score." }, { "heading": "C.2 EVALUATION ON CIFAR10", "text": "A wide residual network with depth 28 and widen factor 10 is used in the ODIN paper. After training for 200 epochs, the model achieved a classification accuracy of 94.71% on the CIFAR10 test set.

In our algorithm, we set f(x) to be the logits output from the model, given the preprocessed input. For the CIFAR10 dataset, the logits output contains 10 elements, which is a significant dimensionality reduction compared to the size of an input color image: 32×32×3. In our algorithm, the parameters are ε = 10, α = ε/100, N = 100. The initial OOD sample is a random noise image.
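For reference, the ODIN score we attack can be sketched as follows (our rendering of Eq.(1)-(2) of the ODIN paper; the exact sign convention of the perturbation is an assumption of this sketch):

    import torch
    import torch.nn.functional as F

    def odin_score(model, x, T=1000.0, pm=0.001):
        x = x.clone().requires_grad_(True)
        log_p = F.log_softmax(model(x) / T, dim=1).max(dim=1).values.sum()
        grad, = torch.autograd.grad(log_p, x)
        x_hat = (x + pm * grad.sign()).detach()   # nudge x to raise the max softmax probability
        with torch.no_grad():
            return F.softmax(model(x_hat) / T, dim=1).max(dim=1).values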
For every sample in the CIFAR10 test set, the algorithm generated an OOD sample to match the logits output. The generated OOD samples look like random noises. The OOD scores of these samples were calculated by the ODIN method.

The results are reported in Table 2. Fig. 13 shows the OOD score histograms of the in-distribution and OOD samples when T = 1000 and PM = 0.001. When T = 1 and PM = 0, ODIN becomes the Baseline method (Hendrycks & Gimpel, 2017)." }, { "heading": "D APPENDIX", "text": "We applied our OOD Attack algorithm to test the OOD detection method named Mahalanobis (Lee et al., 2018b)." }, { "heading": "D.1 SUMMARY OF THE MAHALANOBIS METHOD", "text": "The method extracts the feature maps from multiple layers of a neural network and applies average pooling per channel to reduce each feature map into a 1D feature vector. Then, the Mahalanobis distance is calculated between a feature vector and the corresponding mean vector. The distance values from all of the feature vectors are linearly combined to produce a single distance, i.e., the OOD score. The OOD score of an OOD sample is expected to be large. To further improve the performance, the method does input preprocessing with a given perturbation magnitude (PM), and then the OOD score of the preprocessed input is obtained. The weights to combine the Mahalanobis distances from multiple layers could be determined on a validation set of OOD samples. In practice, it is impossible to obtain such a validation set. In our evaluation, we simply take the average of the distance values, which gives the OOD score.

Although the feature maps from multiple layers are utilized, the method still performs dimensionality reduction on those feature maps (e.g. averaging). Therefore, the method is breakable by our OOD Attack algorithm." }, { "heading": "D.2 EVALUATION ON CIFAR10 AND CIFAR100", "text": "The neural network model is a residual network named Resnet34 in the Mahalanobis paper, and by changing the number of outputs, it can be used for CIFAR10 and CIFAR100. We used the pre-trained models that are available online at https://github.com/pokaxpoka/deep_Mahalanobis_detector/. The layers used for feature extraction are exactly the same as those in the source code of the method.

In our algorithm, we have two different settings for f(x): (1) it can be the OOD score, and (2) it can be the concatenation of the feature vectors, given the original (not preprocessed) input. We did experiments with the two settings. In our algorithm, the parameters are ε = 10, α = ε/100, N = 1000 for all experiments. The initial OOD sample is a random noise image. For every sample in the test set, the algorithm generated an OOD sample to match the corresponding output. The generated OOD samples look like random noises. The OOD scores of these samples were calculated by the Mahalanobis method.

The results on the two datasets are reported in Table 3 and Table 4. Fig. 14 shows the OOD score histograms of the in-distribution and OOD samples when the in-distribution dataset is CIFAR10, PM = 0.01, and f(x) = OOD score.

Fig. 15 shows the OOD score histograms of the in-distribution and OOD samples when the in-distribution dataset is CIFAR10, PM = 0.01, and f(x) = feature concatenation. It can be seen that the OOD samples have smaller distances, which is caused by input preprocessing." }, { "heading": "E APPENDIX", "text": "We applied our OOD Attack algorithm to test the OOD detection method named Outlier Exposure (Hendrycks et al., 2019).
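E.1 below summarizes the method. As a reference point, the score from the released implementation, i.e., the average of the logits minus their logsumexp, can be sketched as follows (our illustration, not the authors' code):

    import torch

    def outlier_exposure_score(logits):
        # Score as described in E.1: mean(logits) - logsumexp(logits), per sample.
        return logits.mean(dim=1) - torch.logsumexp(logits, dim=1)

This mirrors the implementation-level definition described in E.1 below."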
}, { "heading": "E.1 SUMMARY OF THE OUTLIER EXPOSURE METHOD", "text": "The method trains a neural network on not only the standard training set (i.e., in-distribution) but also an auxiliary dataset of outliers (i.e., OOD samples). The paper states that the OOD score is defined to be the cross-entropy between a uniform distribution and the softmax-output distribution.

In the actual implementation (i.e., the source code of the method), the OOD score is defined to be the average of the logits minus the logsumexp of the logits. In the evaluation, we used the actual implementation in the source code." }, { "heading": "E.2 EVALUATION ON SVHN, CIFAR10 AND CIFAR100", "text": "Wide residual networks are used in the Outlier Exposure paper. We downloaded the source code and pre-trained weights from https://github.com/hendrycks/outlier-exposure. The models were trained from scratch using Outlier Exposure, and they were named "oe_scratch" by the authors.

In our algorithm, we set f(x) to be the logits (input to softmax) from each model. The parameters are ε = 10, α = ε/100, N = 1e4 for all experiments. The initial OOD sample is a random noise image. For every sample in the test set, the algorithm generated an OOD sample to match the logits output. The generated OOD samples look like random noises. The OOD scores of these samples were calculated by the Outlier Exposure method.

The results are reported in Table 5. The OOD score histograms of the in-distribution and OOD samples are shown in Fig. 16, where the in-distribution dataset is CIFAR10." }, { "heading": "F APPENDIX", "text": "We applied our OOD Attack algorithm to test the OOD detection method named Deep Ensemble (Lakshminarayanan et al., 2017)." }, { "heading": "F.1 SUMMARY OF THE DEEP ENSEMBLE METHOD", "text": "Deep Ensemble is a collection of neural network models working together for a classification task. The output of a Deep Ensemble is a probability distribution across the classes, which is the average of the probability/softmax outputs of the individual models. In the experiments, the number of models in a Deep Ensemble is 5. To further improve performance, adversarial training is applied to the models. The OOD score is defined to be the entropy of the probability distribution from the Deep Ensemble. The entropy is expected to be large for an OOD sample." }, { "heading": "F.2 EVALUATE ON CIFAR10", "text": "The authors of the Deep Ensemble method did not provide source code and trained models. Therefore, we used pre-trained models from a recent work on adversarial robustness (Ding et al., 2020), which presented a state-of-the-art adversarial training method. Six pre-trained models were downloaded from https://github.com/BorealisAI/mma_training/tree/master/trained_models. The names of the models are cifar10-L2-MMA-1.0-sd0, cifar10-L2-MMA-2.0-sd0, cifar10-L2-OMMA-1.0-sd0, cifar10-L2-OMMA-2.0-sd0, cifar10-Linf-MMA-12-sd0, and cifar10-Linf-OMMA-12-sd0. The models were trained on CIFAR10 to be robust against adversarial noises in a large range. The classification accuracy of the ensemble on the test set is 89.85%.

In our algorithm, we set f(x) to be the concatenation of the logits from each of the six models. The parameters are ε = 10, α = ε/100, N = 1e4 for all experiments. The initial OOD sample is a random noise image. For every sample in the test set, the algorithm generated an OOD sample to match the logits output. The generated OOD samples look like random noises.
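Those OOD scores follow the definition in F.1, the entropy of the averaged softmax over the ensemble members; a minimal sketch (our names, not the original implementation):

    import torch
    import torch.nn.functional as F

    def ensemble_entropy(models, x):
        with torch.no_grad():
            p = torch.stack([F.softmax(m(x), dim=1) for m in models]).mean(dim=0)
        return -(p * p.clamp_min(1e-12).log()).sum(dim=1)   # high entropy is expected for OOD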
The OOD scores of these samples were calculated by the Deep Ensemble method.

The AUROC of the Deep Ensemble method is 0.500 on CIFAR10 vs OOD. The OOD score histograms of the in-distribution and OOD samples are shown in Fig. 17." }, { "heading": "G APPENDIX", "text": "We applied our OOD Attack algorithm to test the OOD detection method that builds Confidence-Calibrated Classifiers (Lee et al., 2018a)." }, { "heading": "G.1 SUMMARY OF THE OOD DETECTION METHOD", "text": "The method jointly trains a classification network and a generative neural network (i.e. GAN) that generates OOD samples for training the classification network. Given an input, the OOD score is defined to be the maximum of the softmax outputs from the classification network. The OOD score is expected to be low for an OOD sample." }, { "heading": "G.2 EVALUATION ON SVHN AND CIFAR10", "text": "The neural network model is VGG13, and the source code of the method is provided by the authors at https://github.com/alinlab/Confident_classifier. We downloaded the code and trained a VGG13 model with a GAN on SVHN and another VGG13 model with a GAN on CIFAR10 by using the parameters in the source code. VGG13 has a feature module and a classifier module.

In our algorithm, we set f(x) to be the vector input to the classifier module of VGG13. The parameters are ε = 10, α = ε/100, N = 1e4 for all experiments. The initial OOD sample is a random noise image. For every sample in the test set, the algorithm generated an OOD sample to match the vector input to the classifier module. The generated OOD samples look like random noises. The OOD scores of these samples were calculated by the OOD detection method.

The results are reported in Table 6. The OOD score histograms of the in-distribution and OOD samples are shown in Fig. 18, where the in-distribution dataset is CIFAR10." }, { "heading": "H APPENDIX", "text": "We applied our OOD Attack algorithm to test the OOD detection method named Gram (Sastry & Oore, 2019)." }, { "heading": "H.1 SUMMARY OF THE GRAM METHOD", "text": "The method extracts the feature map Fl from every layer of a network and then computes the p-th order Gram matrix G^p_l = (F^p_l (F^p_l)^T)^(1/p). Gram matrices with different p values from different layers are then used to compute the OOD score, which is named the Total Deviation of the input sample. An OOD sample is expected to have a high OOD score." }, { "heading": "H.2 RESOLVING A NUMERICAL PROBLEM", "text": "The formula of the p-th order Gram matrix can be written as A = B^(1/p). The Gram matrices caused gradients to become inf or nan during back-propagation in the OOD Attack algorithm. To resolve this problem, we tried three tricks:

(a) use double precision (float64)

(b) rewrite A = exp((1/p) log(B + eps)), where eps = 1e-40

(c) use the equation in (b) to generate images during the OOD attack and use the original equation A = B^(1/p) to compute OOD scores.

The above tricks work for p in the range of 1 to 5. For larger p, we still get numerical problems (inf or nan). As shown in Fig. 2 of the Gram paper, the method has already achieved better performance compared to the Mahalanobis method when the max value of p is 5. Thus, we set the max value of p to 5 in our experiments.
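A sketch of tricks (a) and (b) for one layer (our rendering; the tensor shapes and the assumption of non-negative post-ReLU features are ours, while eps = 1e-40 is taken from above):

    import torch

    def gram_p(feature_map, p, eps=1e-40):
        # feature_map: (batch, channels, H, W); returns the p-th order Gram matrix per sample.
        Fp = feature_map.double().flatten(2) ** p    # trick (a): float64
        B = Fp @ Fp.transpose(1, 2)                  # F_p F_p^T over the spatial dimension
        return torch.exp(torch.log(B + eps) / p)     # trick (b): stable B^(1/p) gradients

Trick (c) then uses this stable form only while generating attack images, computing the reported OOD scores with the original A = B^(1/p)."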
}, { "heading": "H.3 EVALUATION ON CIFAR10 AND CIFAR100", "text": "The source code and pre-trained Resnet models are provided by the authors at https://github.com/VectorInstitute/gram-ood-detection.

Due to the unique process of the method, it is very difficult to do parallel computing with mini-batches, and we had to set the batch size to 1. The computing process is very time-consuming, and therefore we selected the first 500 samples in the CIFAR10 test set and the first 500 samples in the CIFAR100 test set in our experiments.

In our algorithm, we set f(x) to be the OOD score of x, and the parameters are ε = 10, α = ε/100, N = 100. The initial OOD sample is a random noise image. For every in-distribution sample, the algorithm generated an OOD sample to match the OOD score. The generated OOD samples look like random noises. The OOD scores of these samples were calculated by the Gram method.

The results are reported in Table 7. The OOD score histograms of the in-distribution and OOD samples are shown in Fig. 19, where the in-distribution dataset is CIFAR10. The results show that the OOD score from the Gram method can be arbitrarily manipulated by our algorithm." }, { "heading": "I APPENDIX", "text": "We applied our OOD Attack algorithm to test the OOD detection method based on Glow (Serrà et al., 2019)." }, { "heading": "I.1 SUMMARY OF THE OOD DETECTION METHOD", "text": "The method combines the Glow negative log-likelihood (NLL) and input complexity. PNG compression is used to compress the input image. The input complexity, L, is measured by bits per dimension, where the "bits" refers to the number of bits of the compressed image, and the dimension is the total number of pixels per image. The OOD score is NLL − L." }, { "heading": "I.2 EVALUATE ON CELEBA", "text": "The source code of the Glow model is downloaded from https://github.com/rosinality/glow-pytorch, and we trained it from scratch on the CelebA dataset, in which the size of each face image is 64×64×3. After training, the model was able to generate realistic face images. For method evaluation, we randomly selected 160 samples in the dataset because the computation cost is very high.

In our algorithm, we set f(x) to be the OOD score of x, and the parameters are ε = 10, α = ε/100, N = 1e4. The initial OOD sample is a color spiral image. For every in-distribution sample, the algorithm generated an OOD sample to match the OOD score. The generated OOD samples look like color spirals, i.e., not face images. The OOD scores of these samples were calculated by the OOD detection method.

The AUROC of the method is 0.500 on CelebA vs OOD. The OOD score histograms of the in-distribution and OOD samples are shown in Fig. 20. The result indicates that NLL combined with input complexity can still be arbitrarily manipulated." }, { "heading": "J APPENDIX", "text": "We applied our OOD Attack algorithm to test the OOD detection method using an energy-based model named JEM (Grathwohl et al., 2020)." }, { "heading": "J.1 SUMMARY OF THE OOD DETECTION METHOD USING JEM", "text": "In the JEM paper, it was shown that a standard classifier can be trained to be an energy-based model (EBM). Using the EBM, three types of OOD scores can be obtained for an input sample, which are: (1) the log-likelihood log p(x), (2) the maximum of the softmax classification output, i.e., max_y p(y|x), and (3) −||∂ log p(x)/∂x||2. An OOD sample is expected to have a low OOD score.
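The three scores can be sketched as follows (our rendering, using the JEM identity that log p(x) equals the logsumexp over the logits up to the constant log Z; names are ours):

    import torch
    import torch.nn.functional as F

    def jem_scores(model, x):
        x = x.clone().requires_grad_(True)
        logits = model(x)
        log_px = torch.logsumexp(logits, dim=1)                  # (1) up to the constant log Z
        max_py = F.softmax(logits, dim=1).max(dim=1).values      # (2) max softmax probability
        grad, = torch.autograd.grad(log_px.sum(), x)
        grad_score = -grad.flatten(1).norm(p=2, dim=1)           # (3) -||d log p(x)/dx||_2
        return log_px.detach(), max_py.detach(), grad_score

Only relative values matter for detection, so dropping the constant log Z does not change the rankings."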
}, { "heading": "J.2 EVALUATION ON CIFAR10", "text": "A wide residual network pretrained on CIFAR10 is available at https://github.com/wgrathwohl/JEM.

In our algorithm, we set f(x) to be the logits (i.e., the input to softmax for classification), and the parameters are ε = 10, α = ε/100, N = 1e3. The initial OOD sample is a color spiral image. For every in-distribution sample, the algorithm generated an OOD sample to match the logits output. The generated OOD samples look very weird, not in any of the 10 classes of CIFAR10. The OOD scores of these samples were calculated by the OOD detection method.

We note that we tried to use random noises as initial OOD samples, but many generated images look like images in the CIFAR10 dataset, and therefore we used a color spiral image as the initial OOD sample.

The results are reported in Table 8. The OOD score histograms of the in-distribution and OOD samples are shown in Fig. 21, Fig. 22, and Fig. 23. We note that when using −||∂ log p(x)/∂x||2 as the OOD score, the AUROC is 0.203. One may think that if we flip the sign of the OOD score, then the AUROC will increase to 0.797. If we do so, then the AUROC scores in the last row of Table 3 in the JEM paper will be close to 0 for the OOD detection experiments done by the authors.

Fig. 24 shows an example of the loss curve over 1000 iterations in the OOD Attack algorithm.

Fig. 25 shows some of the generated images, which look like images of Frankenstein's monsters: randomly put some parts of objects together, twist/deform them, and then pour some paint on them. It may be difficult for neural networks to learn what an object is (e.g. an airplane) just from images and class labels.

Energy-based models (EBMs), such as JEM, can generate OOD samples during training, which may explain why the OOD attack failed when the initial OOD sample was random noise. If we take a closer look at the sampling procedure (e.g. Langevin dynamics) and the objective function, it is easy to see that the EBM training algorithm is trying to pull down the energy scores of positive (in-distribution) samples and pull up the energy scores of negative (OOD) samples (Du & Mordatch, 2019), which is similar to the basic idea of adversarial training. From this perspective, OOD detection using EBMs could be a promising direction if the computation cost is acceptable, and the challenge is: how do we train a neural network to learn what an object is?" } ]
2020
null
SP:19a28a50180cda10be3344064701fee76f354cf9
[ "This paper extends prior results, namely that VAEs are able to learn the principal components. The novelty is the extension to a new distribution: multinomial logistic-normal distribution. This is achieved by using the Isometric log-ratio (ILR) transform. While prior results were derived analytically, this paper provides (only) empirical evidence for the claim regarding the multinomial logistic-normal distribution. " ]
Covariance estimation on high-dimensional data is a central challenge across multiple scientific disciplines. Sparse high-dimensional count data, frequently encountered in biological applications such as DNA sequencing and proteomics, are often well modeled using multinomial logistic normal models. In many cases, these datasets are also compositional, presented item-wise as fractions of a normalized total, due to measurement and instrument constraints. In compositional settings, three key factors limit the ability of these models to estimate covariance: (1) the computational complexity of inverting high-dimensional covariance matrices, (2) the non-exchangeability introduced from the summation constraint on multinomial parameters, and (3) the irreducibility of the multinomial logistic normal distribution that necessitates the use of parameter augmentation, or similar techniques, during inference. Using real and synthetic data we show that a variational autoencoder augmented with a fast isometric log-ratio (ILR) transform can address these issues and accurately estimate principal components from multinomially logistic normal distributed data. This model can be optimized on GPUs and modified to handle mini-batching, with the ability to scale across thousands of dimensions and thousands of samples.
[]
[ { "authors": [ "John Aitchison" ], "title": "The statistical analysis of compositional data", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1982 }, { "authors": [ "Gregory B Gloor", "Jean M Macklaim", "Vera Pawlowsky-Glahn", "Juan J Egozcue" ], "title": "Microbiome datasets are compositional: and this is not optional", "venue": "Frontiers in microbiology,", "year": 2017 }, { "authors": [ "Michael A Skinnider", "Jordan W Squair", "Leonard J Foster" ], "title": "Evaluating measures of association for single-cell transcriptomics", "venue": "Nature methods,", "year": 2019 }, { "authors": [ "Donald A Jackson" ], "title": "Compositional data in community ecology: the paradigm or peril of proportions? Ecology", "venue": null, "year": 1997 }, { "authors": [ "David M Blei", "Alp Kucukelbir", "Jon D McAuliffe" ], "title": "Variational inference: A review for statisticians", "venue": "Journal of the American statistical Association,", "year": 2017 }, { "authors": [ "Steven C Althoen", "Renate Mclaughlin" ], "title": "Gauss-jordan reduction: A brief history", "venue": "The American mathematical monthly,", "year": 1987 }, { "authors": [ "Aravindh Krishnamoorthy", "Deepak Menon" ], "title": "Matrix inversion using cholesky decomposition. In 2013 signal processing: Algorithms, architectures, arrangements, and applications (SPA)", "venue": null, "year": 2013 }, { "authors": [ "James Lucas", "George Tucker", "Roger Grosse", "Mohammad Norouzi" ], "title": "Don’t Blame the ELBO! A Linear VAE Perspective on Posterior Collapse", "venue": null, "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Michael E Tipping", "Christopher M Bishop" ], "title": "Probabilistic principal component analysis", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 1999 }, { "authors": [ "Romain Lopez", "Jeffrey Regier", "Michael B. Cole", "Michael I. 
Jordan", "Nir Yosef" ], "title": "Deep generative modeling for single-cell transcriptomics", "venue": "Nature Methods,", "year": 2018 }, { "authors": [ "Min Oh", "Liqing Zhang" ], "title": "Deepmicro: deep representation learning for disease prediction based on microbiome data", "venue": "Scientific reports,", "year": 2020 }, { "authors": [ "Sam Sinai", "Eric Kelsic", "George M Church", "Martin A Nowak" ], "title": "Variational auto-encoding of protein sequences", "venue": "arXiv preprint arXiv:1712.03346,", "year": 2017 }, { "authors": [ "Xiaojie Guo", "Sivani Tadepalli", "Liang Zhao", "Amarda Shehu" ], "title": "Generating tertiary protein structures via an interpretative variational autoencoder", "venue": "arXiv preprint arXiv:2004.07119,", "year": 2020 }, { "authors": [ "Alex Hawkins-Hooker", "Florence Depardieu", "Sebastien Baur", "Guillaume Couairon", "Arthur Chen", "David Bikard" ], "title": "Generating functional protein variants with variational autoencoders", "venue": "BioRxiv,", "year": 2020 }, { "authors": [ "Zichao Yang", "Zhiting Hu", "Ruslan Salakhutdinov", "Taylor Berg-Kirkpatrick" ], "title": "Improved variational autoencoders for text modeling using dilated convolutions", "venue": "arXiv preprint arXiv:1702.08139,", "year": 2017 }, { "authors": [ "Daniel Kunin", "Jonathan M Bloom", "Aleksandrina Goeva", "Cotton Seed" ], "title": "Loss landscapes of regularized linear autoencoders", "venue": "arXiv preprint arXiv:1901.08168,", "year": 2019 }, { "authors": [ "Xuchan Bao", "James Lucas", "Sushant Sachdeva", "Roger Grosse" ], "title": "Regularized linear autoencoders recover the principal components, eventually", "venue": "arXiv preprint arXiv:2007.06731,", "year": 2020 }, { "authors": [ "Peter Filzmoser", "Karel Hron" ], "title": "Correlation analysis for compositional data", "venue": "Mathematical Geosciences,", "year": 2009 }, { "authors": [ "John Aitchison" ], "title": "Principal component analysis of compositional data", "venue": "Biometrika, 70(1):57–65,", "year": 1983 }, { "authors": [ "Juan José Egozcue", "Vera Pawlowsky-Glahn", "Glòria Mateu-Figueras", "Carles Barcelo-Vidal" ], "title": "Isometric logratio transformations for compositional data analysis", "venue": "Mathematical Geology,", "year": 2003 }, { "authors": [ "Peter Filzmoser", "Karel Hron", "Clemens Reimann" ], "title": "Principal component analysis for compositional data with outliers", "venue": "Environmetrics: The Official Journal of the International Environmetrics Society,", "year": 2009 }, { "authors": [ "Juan José Egozcue", "Vera Pawlowsky-Glahn", "Gregory B Gloor" ], "title": "Linear association in compositional data analysis", "venue": "Austrian Journal of Statistics,", "year": 2018 }, { "authors": [ "Zachary D Kurtz", "Christian L Müller", "Emily R Miraldi", "Dan R Littman", "Martin J Blaser", "Richard A Bonneau" ], "title": "Sparse and compositionally robust inference of microbial ecological networks", "venue": "PLoS Comput Biol,", "year": 2015 }, { "authors": [ "Jonathan Friedman", "Eric J Alm" ], "title": "Inferring correlation networks from genomic survey data", "venue": "PLoS Comput Biol,", "year": 2012 }, { "authors": [ "Julien Chiquet", "Mahendra Mariadassou", "Stéphane Robin" ], "title": "Variational inference for sparse network reconstruction from count data", "venue": null, "year": 2018 }, { "authors": [ "Huaying Fang", "Chengcheng Huang", "Hongyu Zhao", "Minghua Deng" ], "title": "gcoda: conditional dependence network inference for compositional data", "venue": "Journal of 
Computational Biology,", "year": 2017 }, { "authors": [ "Janko Tackmann", "João Frederico Matias Rodrigues", "Christian von Mering" ], "title": "Rapid inference of direct interactions in large-scale ecological networks from heterogeneous microbial sequencing data", "venue": "Cell systems,", "year": 2019 }, { "authors": [ "Wray L Buntine", "Aleks Jakulin" ], "title": "Applying discrete pca in data analysis", "venue": "arXiv preprint arXiv:1207.4125,", "year": 2012 }, { "authors": [ "Wray Buntine" ], "title": "Variational extensions to em and multinomial pca", "venue": "In European Conference on Machine Learning,", "year": 2002 }, { "authors": [ "David M Blei", "Andrew Y Ng", "Michael I Jordan" ], "title": "Latent dirichlet allocation", "venue": "Journal of machine Learning research,", "year": 2003 }, { "authors": [ "Scott W. Linderman", "Matthew J. Johnson", "Ryan P. Adams" ], "title": "Dependent multinomial models made easy: Stick breaking with the Pólya-gamma augmentation", "venue": "Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Justin D. Silverman", "Kimberly Roche", "Zachary C. Holmes", "Lawrence A. David", "Sayan Mukherjee" ], "title": "Bayesian Multinomial Logistic Normal Models through Marginally Latent Matrix", "venue": "T Processes", "year": 2019 }, { "authors": [ "Travis E Gibson", "Georg K Gerber" ], "title": "Robust and scalable models of microbiome", "venue": "dynamics. arXiv preprint arXiv:1805.04591,", "year": 2021 }, { "authors": [ "Tarmo Äijö", "Christian L Müller", "Richard Bonneau" ], "title": "Temporal probabilistic modeling of bacterial compositions derived from 16s rrna", "venue": "sequencing. Bioinformatics,", "year": 2018 }, { "authors": [ "Tyler A Joseph", "Amey P Pasarkar", "Itsik Pe’er" ], "title": "Efficient and accurate inference of mixed microbial population trajectories from longitudinal count data", "venue": "Cell Systems,", "year": 2020 }, { "authors": [ "J Atchison", "Sheng M Shen" ], "title": "Logistic-normal distributions: Some properties and uses", "venue": null, "year": 1980 }, { "authors": [ "Justin D Silverman", "Heather K Durand", "Rachael J Bloom", "Sayan Mukherjee", "Lawrence A David" ], "title": "Dynamic linear models guide design and analysis of microbiota studies within artificial human", "venue": "guts. Microbiome,", "year": 2018 }, { "authors": [ "Neal S Grantham", "Yawen Guan", "Brian J Reich", "Elizabeth T Borer", "Kevin Gross" ], "title": "Mimix: A bayesian mixed-effects model for microbiome data from designed experiments", "venue": "Journal of the American Statistical Association,", "year": 2020 }, { "authors": [ "Dean Billheimer", "Peter Guttorp", "William F Fagan" ], "title": "Statistical interpretation of species composition", "venue": "Journal of the American statistical Association,", "year": 2001 }, { "authors": [ "David M. Blei", "John D. 
Lafferty" ], "title": "Correlated topic models", "venue": "Advances in Neural Information Processing Systems,", "year": 2005 }, { "authors": [ "Vera Pawlowsky-Glahn", "Juan José Egozcue", "Raimon Tolosana-Delgado" ], "title": "Modeling and analysis of compositional data", "venue": null, "year": 2015 }, { "authors": [ "James T Morton", "Jon Sanders", "Robert A Quinn", "Daniel McDonald", "Antonio Gonzalez", "Yoshiki Vázquez-Baeza", "Jose A Navas-Molina", "Se Jin Song", "Jessica L Metcalf", "Embriette R Hyde" ], "title": "Balance trees reveal microbial niche differentiation", "venue": "MSystems,", "year": 2017 }, { "authors": [ "Justin D Silverman", "Alex D Washburne", "Sayan Mukherjee", "Lawrence A David" ], "title": "A phylogenetic transform enhances analysis of compositional microbiota", "venue": "data. Elife,", "year": 2017 }, { "authors": [ "Alex D Washburne", "Justin D Silverman", "Jonathan W Leff", "Dominic J Bennett", "John L Darcy", "Sayan Mukherjee", "Noah Fierer", "Lawrence A David" ], "title": "Phylogenetic factorization of compositional data yields lineage-level associations in microbiome", "venue": "datasets. PeerJ,", "year": 2017 }, { "authors": [ "Juan José Egozcue", "Vera Pawlowsky-Glahn" ], "title": "Groups of parts and their balances in compositional data analysis", "venue": "Mathematical Geology,", "year": 2005 }, { "authors": [ "George Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "Mathematics of control, signals and systems,", "year": 1989 }, { "authors": [ "Patrick Kidger", "Terry Lyons" ], "title": "Universal approximation with deep narrow networks", "venue": "In Conference on Learning Theory,", "year": 2020 }, { "authors": [ "John C Gower" ], "title": "Generalized procrustes analysis", "venue": "Psychometrika, 40(1):33–51,", "year": 1975 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Ivo Danihelka", "Karol Gregor", "Daan Wierstra" ], "title": "One-shot generalization in deep generative models", "venue": "arXiv preprint arXiv:1603.05106,", "year": 2016 }, { "authors": [ "Anupriya Tripathi", "Alexey V Melnik", "Jin Xue", "Orit Poulsen", "Michael J Meehan", "Gregory Humphrey", "Lingjing Jiang", "Gail Ackermann", "Daniel McDonald", "Dan Zhou" ], "title": "Intermittent hypoxia and hypercapnia, a hallmark of obstructive sleep apnea, alters the gut microbiome", "venue": "and metabolome. 
MSystems,", "year": 2018 }, { "authors": [ "Shabnam Shalapour", "Xue-Jia Lin", "Ingmar N Bastian", "John Brain", "Alastair D Burt", "Alexander A Aksenov", "Alison F Vrbanac", "Weihua Li", "Andres Perkins", "Takaji Matsutani" ], "title": "Inflammationinduced iga+ cells dismantle anti-liver cancer immunity", "venue": "Nature, 551(7680):340–345,", "year": 2021 }, { "authors": [ "David Lovell", "Vera Pawlowsky-Glahn", "Juan José Egozcue", "Samuel Marguerat", "Jürg Bähler" ], "title": "Proportionality: a valid alternative to correlation for relative data", "venue": "PLoS Comput Biol,", "year": 2015 }, { "authors": [ "Ionas Erb", "Cedric Notredame" ], "title": "How should we measure proportionality on relative gene expression data", "venue": "Theory in Biosciences,", "year": 2016 }, { "authors": [ "Thomas P Quinn", "Mark F Richardson", "David Lovell", "Tamsyn M Crowley" ], "title": "propr: an rpackage for identifying proportionally abundant features using compositional data analysis", "venue": "Scientific reports,", "year": 2017 }, { "authors": [ "John Aitchison", "Carles Barceló-Vidal", "José Antonio Martı́n-Fernández", "Vera Pawlowsky- Glahn" ], "title": "Logratio analysis and compositional distance", "venue": "Mathematical Geology,", "year": 2000 }, { "authors": [ "Cameron Martino", "James T Morton", "Clarisse A Marotz", "Luke R Thompson", "Anupriya Tripathi", "Rob Knight", "Karsten Zengler" ], "title": "A novel sparse compositional technique reveals microbial", "venue": "perturbations. MSystems,", "year": 2019 }, { "authors": [ "Cameron Martino", "Liat Shenhav", "Clarisse A Marotz", "George Armstrong", "Daniel McDonald", "Yoshiki Vázquez-Baeza", "James T Morton", "Lingjing Jiang", "Maria Gloria Dominguez-Bello", "Austin D Swafford" ], "title": "Context-aware dimensionality reduction deconvolutes gut microbial community dynamics", "venue": "Nature biotechnology,", "year": 2020 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "WA Falcon" ], "title": "Pytorch lightning", "venue": "GitHub. 
Note: https://github.com/PyTorchLightning/pytorchlightning, 3", "year": 2019 }, { "authors": [ "Daniel McDonald", "Jose C Clemente", "Justin Kuczynski", "Jai Ram Rideout", "Jesse Stombaugh", "Doug Wendel", "Andreas Wilke", "Susan Huse", "John Hufnagle", "Folker Meyer" ], "title": "The biological observation matrix (biom) format or: how i learned to stop worrying and love the ome-ome", "venue": null, "year": 2012 }, { "authors": [ "John D Hunter" ], "title": "Matplotlib: A 2d graphics environment", "venue": "Computing in science & engineering,", "year": 2007 }, { "authors": [ "Pauli Virtanen", "Ralf Gommers", "Travis E Oliphant", "Matt Haberland", "Tyler Reddy", "David Cournapeau", "Evgeni Burovski", "Pearu Peterson", "Warren Weckesser", "Jonathan Bright" ], "title": "Scipy 1.0: fundamental algorithms for scientific computing in python", "venue": "Nature methods,", "year": 2020 }, { "authors": [ "Charles R Harris", "K Jarrod Millman", "Stéfan J van der Walt", "Ralf Gommers", "Pauli Virtanen", "David Cournapeau", "Eric Wieser", "Julian Taylor", "Sebastian Berg", "Nathaniel J Smith" ], "title": "Array programming with numpy", "venue": "arXiv preprint arXiv:2006.10256,", "year": 2020 }, { "authors": [ "William H Press", "Saul A Teukolsky", "Brian P Flannery", "William T Vetterling" ], "title": "Numerical recipes in Fortran 77: volume 1, volume 1 of Fortran numerical recipes: the art of scientific computing", "venue": null, "year": 1992 }, { "authors": [ "Jonathan Chung-Kuan Huang", "Tomasz Malisiewicz" ], "title": "Fitting a hierarchical logistic normal distribution", "venue": "Unpublished Manuscript,", "year": 2021 }, { "authors": [ "Antonio Gonzalez", "Jose A Navas-Molina", "Tomasz Kosciolek", "Daniel McDonald", "Yoshiki Vázquez-Baeza", "Gail Ackermann", "Jeff DeReus", "Stefan Janssen", "Austin D Swafford", "Stephanie B Orchanian" ], "title": "Qiita: rapid, web-enabled microbiome meta-analysis", "venue": "Nature methods,", "year": 2018 }, { "authors": [ "Daniel McDonald", "Benjamin Kaehler", "Antonio Gonzalez", "Jeff DeReus", "Gail Ackermann", "Clarisse Marotz", "Gavin Huttley", "Rob Knight" ], "title": "redbiom: a rapid sample discovery and feature characterization system", "venue": "mSystems,", "year": 2019 } ]
[ { "heading": "INFERRING PRINCIPAL COMPONENTS IN THE SIMPLEX WITH MULTINOMIAL VARIATIONAL AUTOENCODERS", "text": "Anonymous authors Paper under double-blind review" }, { "heading": "1 ABSTRACT", "text": "Covariance estimation on high-dimensional data is a central challenge across multiple scientific disciplines. Sparse high-dimensional count data, frequently encountered in biological applications such as DNA sequencing and proteomics, are often well modeled using multinomial logistic normal models. In many cases, these datasets are also compositional, presented item-wise as fractions of a normalized total, due to measurement and instrument constraints. In compositional settings, three key factors limit the ability of these models to estimate covariance: (1) the computational complexity of inverting high-dimensional covariance matrices, (2) the non-exchangeability introduced from the summation constraint on multinomial parameters, and (3) the irreducibility of the multinomial logistic normal distribution that necessitates the use of parameter augmentation, or similar techniques, during inference. Using real and synthetic data we show that a variational autoencoder augmented with a fast isometric log-ratio (ILR) transform can address these issues and accurately estimate principal components from multinomially logistic normal distributed data. This model can be optimized on GPUs and modified to handle mini-batching, with the ability to scale across thousands of dimensions and thousands of samples." }, { "heading": "2 INTRODUCTION", "text": "Many scientific disciplines that collect survey data, such as economics, psychology, political science and the biological sciences routinely deal with compositional data, where only relative information can be measured. These datasets are often in the form of counts, where the total counts within a sample are only indicative of the confidence of the measured proportions. The resulting proportions lie within a simplex and failing to account for the structure of this simplicial sample space can confound the interpretation of the measurements. As a result, there has been wide discussion across disparate disciplines (1; 2; 3; 4) concerning the reproducibility crisis that has arisen from the misinterpretation of compositional data. One of the obstacles to the appropriate analysis of compositional data is the difficulty of efficiently estimating the latent parameters that lie in the simplex.\nAccurately scaling probabilistic inference across high-dimensional count data is a major outstanding challenge (5). This problem is apparent in the social sciences and is particularly pronounced in biological fields where datasets can obtain observations on tens of thousands of features across hundreds or millions of samples. One major computational bottleneck with Gaussian distributed data is the inversion of a d-dimensional covariance matrix that has a runtime of O(d3) (6; 7). As a result, probabilistic covariance estimation for high-dimensional data is a computationally challenging problem.\nRecent theoretical developments (8) cementing the connection between Variational Autoencoders (VAEs) (9) and Probabilistic Principal Components Analysis (PPCA) (10) holds much promise for enabling accurate, scalable, low-rank approximations of large covariance matrices. 
Variational autoencoders were originally proposed as a generative model (9), but are now commonly deployed across scientific disciplines and have made contributions to single-cell RNA sequencing (11), microbiome modeling (12), protein modeling (13; 14; 15), natural language processing (16) and image processing (9). Following insights that connected regularized linear autoencoders and PCA (17), Lucas et al. (8) showed that carefully designed VAEs can recover the weights that are solved for by PPCA. A computational advantage of VAEs is that they do not require the inversion of a covariance matrix; the resulting runtime is O(ndkT) for n samples, d dimensions, k latent dimensions and T epochs. While it has been noted that VAEs may take tens of thousands of epochs to estimate the principal components (18), VAEs are easily parallelizable and can be accelerated with GPUs, presenting an attractive alternative for estimating principal components (17) and the resulting covariance matrix.
The connection between VAEs and PPCA is currently limited to Gaussian distributed data and is not well-suited to a compositional setting. Showing that VAEs can recover the correct principal components from count data is nontrivial due to the non-conjugacy between the logistic normal distribution and count distributions such as the multinomial distribution. Furthermore, the parameters of the multinomial distribution are compositional; they are constrained within the simplex, and the resulting covariance matrix is singular and non-invertible (1; 19). Aitchison (20) showed that PCA can be adapted to compositional data through the use of the center log-ratio (CLR) transform, which maintains isometry. However, this transformation is not isomorphic, requiring that the resulting log-ratios sum to zero; as a result, CLR-transformed data will produce a singular covariance matrix and rank-deficient principal components. It has been shown that the isometric log-ratio (ILR) transform (21) satisfies both isomorphism and isometry and can handle this singularity issue (22; 23) while enabling the estimation of full-rank principal components. Here, we show that VAEs augmented with the ILR transform can infer the principal components learned by PPCA on multinomially distributed data, beginning to address these critical shortcomings." }, { "heading": "3 RELATED WORK", "text": "In the microbiome literature, there have been a number of methods (24; 25; 26; 27; 28) that attempt to model ecological networks through the estimation of pairwise microbe correlations or pairwise inverse-covariances, where microbes are aggregated at different taxonomic scales or ‘taxa’. Of these tools, only Flashweave can scale to more than a few thousand taxa; however, it does this by avoiding the estimation of the covariance matrix. Methods that attempt to estimate the covariance matrix can only handle on the order of a few thousand dimensions. Although there is no widely accepted consensus definition of Multinomial PPCA in this context, being able to efficiently estimate the parameters of Multinomial PPCA would be highly useful for exploratory biological analysis. A number of studies have proposed using mixture modeling as a proxy for PCA (29; 30; 31); however, these techniques depend either on the Dirichlet distribution, whose covariance matrix is not flexible, or on stick-breaking, which violates permutation invariance (32).
Lucas et al. (8) have previously shown that the following two models obtain the same maximum likelihood estimates of the principal components W:

Probabilistic PCA:
p(x|z) = N(Wz + μ, σ²I_d)
p(z) = N(0, I_k)

Linear VAE:
p(x|z) = N(Wz + μ, σ²I_d)
q(z|x) = N(V(x − μ), D)

Here, p(x|z) denotes the likelihood of observations x ∈ R^d given the latent representation z ∈ R^k, p(z) denotes the prior on z and q(z|x) denotes the estimated variational posterior distribution of z, given by an encoder parameterized by V and diagonal variances D. Both models estimate the same low-dimensional representation of the data through z, and learn the same factorization of the covariance matrix through W. While PPCA parameters are typically estimated through expectation maximization (10), linear VAEs are optimized by maximizing the evidence lower bound (ELBO), given by

log p(x) ≥ E_{q(z|x)}[log p(x|z)] − KL(q(z|x) ‖ p(z))

For linear VAEs with a Gaussian likelihood, the variational posterior distribution q(z|x) can be shown to analytically agree with the posterior distribution p(z|x) learned from PPCA (8). However, deriving this connection for count-based likelihoods such as the multinomial distribution is complicated by non-conjugacy issues (Appendix A). This is a major obstacle for many biological applications; multiple works have shown the merits of incorporating count distributions explicitly into the model (33; 34; 35; 36). Here, we provide directions for overcoming this issue." }, { "heading": "4 METHODS", "text": "First, we will redefine Multinomial PPCA with the ILR transform (21). Then we will make the connection between Multinomial VAEs and Multinomial PPCA by leveraging insights from the Collapse-Uncollapse (CU) sampler (33). We will then derive an algorithm to obtain the maximum a posteriori (MAP) estimate of the VAE parameters." }, { "heading": "4.1 PROBABILISTIC MULTINOMIAL PCA", "text": "PPCA can be extended to multinomially distributed data with the following generative model:

p(x|η) = Mult(φ(Ψη))    (1)
p(η|z) = N(Wz, σ²I_{d−1})    (2)
p(z) = N(0, I_k)    (3)

Here, W ∈ R^{(d−1)×k} represents the PCA loading matrix, σ² is the variance, Ψ ∈ R^{d×(d−1)} is a fixed contrast matrix whose columns sum to zero and φ is the softmax transform. For a single sample, x ∈ N^d are the observed d-dimensional counts, η ∈ R^{d−1} are the latent logits and z ∈ R^k is the latent representation. The term φ(Ψη) is distributed logistic normal, φ(Ψη) ∼ LN(Wz, σ²I), as shown by Aitchison (37). Furthermore, p(x|z) yields a multinomial logistic normal distribution, which is given by marginalizing out η in the following expression:

MLN(x|z) = ∫ p(x|η) p(η|z) dη

This integral is not tractable; as a result, this distribution does not have an analytically defined expectation, variance or probability density function. There have been multiple attempts to estimate the posterior distribution with MCMC (38; 39; 35; 40), but the complexity of this distribution requires a large number of samples, limiting the scalability of these methods. Variational methods have been developed to estimate the logistic normal distribution, but due to conditional non-conjugacy, these methods often rely on approximations to the ELBO, further complicating estimation (41).
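To make the generative process in Equations 1-3 concrete, the following is a minimal NumPy sketch (our own illustration, not the authors' code) that builds a balanced-tree orthonormal basis Ψ following the construction given in Section 4.2 (Equations 6 and 7) below, and then draws counts from the model; simulations of this form mirror the setup described in Appendix C.1.

import numpy as np

def tree_basis(d):
    # Orthonormal contrast matrix Psi (d x d-1) from a balanced binary tree,
    # following the a/b weights of Equations 6-7.
    cols = []
    def split(idx):
        if len(idx) < 2:
            return
        r, s = idx[:len(idx) // 2], idx[len(idx) // 2:]
        a = np.sqrt(len(s) / (len(r) * (len(r) + len(s))))
        b = -np.sqrt(len(r) / (len(s) * (len(r) + len(s))))
        col = np.zeros(d)
        col[r] = a
        col[s] = b
        cols.append(col)
        split(r)
        split(s)
    split(np.arange(d))
    return np.stack(cols, axis=1)             # satisfies Psi^T Psi = I_{d-1}

def sample_multinomial_ppca(n, d, k, sigma=0.1, depth=1000, seed=0):
    # Draw counts from Equations 1-3 with a random loading matrix W.
    rng = np.random.default_rng(seed)
    Psi = tree_basis(d)
    W = rng.normal(size=(d - 1, k))
    Z = rng.normal(size=(n, k))                          # z ~ N(0, I_k)
    Eta = Z @ W.T + sigma * rng.normal(size=(n, d - 1))  # eta ~ N(Wz, sigma^2 I)
    logits = Eta @ Psi.T                                 # inverse ILR, then softmax
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    X = np.stack([rng.multinomial(depth, p) for p in P])
    return X, W, Psi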
Recently, Silverman et al. (33) proposed using a Laplace approximation to estimate the parameters of the multinomial logistic normal posterior distribution. This approach relies on a two-stage optimization procedure over the factorized posterior distribution given by

p(η, z|x) ∝ p(η|x) p(z|η)    (4)

If η can be directly estimated, then conditional independence can be used to factorize the posterior distribution. Since the log-densities of the multinomial distribution and the normal distribution are both concave, a global optimum can be obtained for the multinomial logistic normal distribution (33) (Appendix B.3). Furthermore, the multinomial distribution does not introduce additional local optima for estimating Multinomial PCA. Given this, in addition to recent evidence that a PPCA MAP estimator can be obtained from regularized linear autoencoders (17), we can design a new algorithm to obtain a Multinomial PPCA MAP estimator." }, { "heading": "4.2 THE ILR TRANSFORM ENFORCES IDENTIFIABILITY", "text": "The softmax function is shift-invariant, which introduces an identifiability issue that has been addressed by the compositional data analysis community (1; 42). In order to remove this identifiability issue, an isomorphism between the logits η and the multinomial parameters must be maintained. One commonly used solution is a degenerate softmax, also known as the inverse additive log-ratio (ALR) transform (1) (Appendix B.2). Previous work has suggested that the isometric log-ratio (ILR) transform (21; 22) is more suitable for principal components analysis (Appendix B). The ILR and inverse ILR transforms are given as follows:

ILR(x) = Ψᵀ log x,  ILR⁻¹(x) = φ(Ψx)    (5)

where Ψ ∈ R^{d×(d−1)} is a basis such that ΨᵀΨ = I_{d−1} and ΨΨᵀ = I_d − (1/d)·1_{d×d}. A naive implementation of the ILR transform can be memory- and compute-intensive for large d. However, any orthonormal basis can be used to parameterize the ILR transform, and some of these bases can be represented by binary trees (43; 44; 45). A binary tree can be used to represent Ψ with O(d log d) elements, where the l-th column vector of Ψ is given by

Ψ_{·l} = (0, …, 0, a, …, a, b, …, b, 0, …, 0)    (6)

with runs of lengths k, |r|, |s| and t, respectively, and

a = √|s| / √(|r|(|r| + |s|)),  b = −√|r| / √(|s|(|r| + |s|))    (7)

where l indexes an internal node in the tree with left children r, right children s, k nodes to its left and t nodes to its right (46) (Figure S2). Due to rotation invariance, it does not matter which tree is used to parameterize the ILR basis, but the choice of tree can influence the runtime of the ILR transform. If a balanced binary tree is used, the memory required to represent Ψ can be brought down from O(d²) to O(d log d), and the matrix-vector multiplication runtime can likewise be reduced from O(d²) to O(d log d) (see Appendix B.1). This can speed up the matrix-vector multiplication operations by an order of magnitude for datasets with more than ten thousand dimensions." }, { "heading": "4.3 MULTINOMIAL VARIATIONAL AUTOENCODER ARCHITECTURE", "text": "Our full Multinomial VAE model is given as follows:

p(x|η) = Mult(φ(Ψη))    (8)
p(η|z; θ_dec) = N(Wz + μ, σ²I_{d−1})    (9)
q(z|x; θ_enc) = N(F_L(Ψᵀ(l̃og(x) − μ)), D)    (10)

where θ_dec = {W, σ²} denotes the decoder parameters, θ_enc = {F_L, D} denotes the encoder parameters and μ ∈ R^{d−1} is a bias parameter. Here, q(z|x; θ_enc) denotes the variational posterior distribution of z given by the encoder, represented as an L-layer dense neural network F_L with appropriate activations. This encoder is directly used to evaluate p(η|z; θ_dec). Furthermore, flat priors are assumed for all variables except z.
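The architecture in Equations 8-10 can be expressed compactly; below is a minimal PyTorch sketch (our own illustration with hypothetical names; the released implementation may differ in detail).

import torch
import torch.nn as nn

class MultinomialVAE(nn.Module):
    # Sketch of Equations 8-10: an L-layer encoder over ILR-transformed
    # counts and a linear decoder mapping back through the ILR basis.
    def __init__(self, Psi, k, n_layers=3):
        super().__init__()
        d = Psi.shape[0]
        self.register_buffer("Psi", Psi)               # fixed (d, d-1) ILR basis
        self.mu = nn.Parameter(torch.zeros(d - 1))     # bias parameter mu
        sizes = [d - 1] + [k] * n_layers
        layers = []
        for a, b in zip(sizes[:-1], sizes[1:]):
            layers += [nn.Linear(a, b), nn.Softplus()]
        self.encoder = nn.Sequential(*layers[:-1])     # F_L, no final activation
        self.W = nn.Parameter(0.01 * torch.randn(d - 1, k))
        self.log_sigma = nn.Parameter(torch.zeros(()))
        self.log_D = nn.Parameter(torch.zeros(k))      # diagonal encoder variances

    def encode(self, x):
        ilr_x = torch.log1p(x) @ self.Psi - self.mu    # Psi^T ~log(x) - mu
        return self.encoder(ilr_x)                     # mean of q(z|x)

    def decode(self, z):
        eta = z @ self.W.T + self.mu                   # mean of p(eta|z)
        return torch.log_softmax(eta @ self.Psi.T, -1) # multinomial log-probabilities

Given a batch of counts x, the unnormalized multinomial log-likelihood is then (x * model.decode(z)).sum(-1).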
It is important to note some potentially challenging modeling issues when designing the encoder. The ILR transform is not directly applicable to count data, since log(0) is undefined. A common approach to this problem is to introduce a pseudocount before applying the logarithm, which we will denote as l̃og(x) = log(x + 1). The choice of pseudocount is arbitrary and can introduce biases. To alleviate this issue, we introduce the deep encoder neural network highlighted in Equation 10; we expect that the universal approximation theorem applies here (47; 48) and that the accuracy of estimating the latent representation z will improve with more complex neural networks. This is supported by our simulation benchmarks; more complex encoder architectures can better remove the biases induced by the pseudocounts." }, { "heading": "4.4 ALTERNATING MINI-BATCH OPTIMIZATION PROCEDURE", "text": "Given that our objective is to obtain the MAP estimate of the VAE model, the parameters θ = {F_L, W, D, σ²} can be found by locating the global maximum of the posterior distribution. In the original CU sampler implementation, the parameters (η₁, …, η_n) are optimized across the entire dataset and then fixed in order to estimate the remaining parameters. For large studies, this can be memory-demanding, since (η₁, …, η_n) ∈ R^{n×(d−1)} alone can scale to millions of parameters.

To scale this estimation procedure to large high-dimensional datasets, we devised a mini-batched alternating maximization procedure. For a mini-batch X(i) = (x(1), …, x(b)) of size b with corresponding latent variables Z(i) = (z(1), …, z(b)) and H(i) = (η(1), …, η(b)), the quantities log p(X(i)|H(i)) and log p(Z(i)) are given by Equations 1 and 3.

Algorithm 1: VAE Alternating Maximization
repeat
  for X(i) ∈ X do
    Ĥ ← argmax_H log p(X(i)|H),  Ĥ ∈ R^{b×(d−1)}
    θ ← argmax_θ [ log q(Ĥ|X(i); θ) + log p(Z) ]
  end for
until convergence

The variational posterior distribution of η, given by q(H(i)|X(i); θ) = ∏_{j=1}^{b} q(η(j)|x(j); θ), can be obtained by marginalizing out z in Equations 8, 9 and 10 as follows:

q(η(j)|x(j); θ) = N( W F_L(Ψᵀ(l̃og(x(j)) − μ)) + μ,  W D Wᵀ + σ²I_{d−1} )    (11)

The prior log p(Z(i)) is evaluated as log N(Z(i)|0, I_k) at the latent encoding means Z(i) = F_L(Ψᵀ(l̃og(X(i)) − μ)); note that Z(i) is otherwise integrated out of Equation 11.
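One alternating update of Algorithm 1 can then be sketched as follows (schematic only, assuming the MultinomialVAE sketch from Section 4.3 above and a float tensor X of counts):

import torch
from torch.distributions import LowRankMultivariateNormal, Normal

def alternating_step(model, X, n_inner=100, lr=1e-3):
    # Stage 1: H <- argmax_H log p(X | H), initialized at Psi^T ~log(X).
    H = (torch.log1p(X) @ model.Psi).clone().requires_grad_(True)
    opt_h = torch.optim.Adam([H], lr=lr)
    for _ in range(n_inner):
        log_p = torch.log_softmax(H @ model.Psi.T, dim=-1)
        nll = -(X * log_p).sum()               # negative multinomial log-likelihood
        opt_h.zero_grad()
        nll.backward()
        opt_h.step()
    H = H.detach()

    # Stage 2: one ascent step on log q(H | X; theta) + log p(Z), per Equation 11.
    opt_t = torch.optim.Adam(model.parameters(), lr=lr)
    z = model.encode(X)
    q_eta = LowRankMultivariateNormal(
        loc=z @ model.W.T + model.mu,
        cov_factor=model.W * model.log_D.exp().sqrt(),          # W D W^T term
        cov_diag=model.log_sigma.exp().pow(2).expand(model.mu.shape))
    loss = -(q_eta.log_prob(H).sum() + Normal(0.0, 1.0).log_prob(z).sum())
    opt_t.zero_grad()
    loss.backward()
    opt_t.step()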
" }, { "heading": "5 RESULTS", "text": "" }, { "heading": "5.1 MULTINOMIAL PPCA AND VAES AGREE IN SIMULATION BENCHMARKS", "text": "To determine if the proposed multinomial variational autoencoder can recover principal components, we extended the benchmarks proposed in (18). Here, we benchmarked our proposed analytical VAE against stochastic VAEs (9) with a multinomial likelihood across multiple simulations. These methodologies were compared against the ground truth covariances of the multinomial logistic normal, in addition to the MAP and Hamiltonian Monte Carlo (HMC) estimates obtained from Stan (49). The agreement between the ground truth principal components and the estimated principal components is measured by axis alignment, subspace distance, correlation and Procrustes analysis (50) (see Appendix C.4 for details).

When fitting against multinomial logistic normal distributed data with no zero counts, all of the ILR-based methodologies can accurately estimate the covariance matrix up to scale (Figure 1a-d). The analytical VAE MAP estimate and the posterior samples of HMC all have a correlation close to 1, suggesting strong agreement between the ground truth covariance and the estimated covariances. If the principal components were perfectly estimated, the axis-alignment, subspace distance and Procrustes metrics would all be close to zero. While the subspace distance and Procrustes metrics are notably close to zero, the axis-alignment metric is above 0.5, which would suggest disagreement between the ground-truth and the estimated principal component axes. Given that the posterior distribution of the axis-alignment metrics obtained from HMC overlaps with the analytical VAE, stochastic VAE and MAP estimates, there is evidence that reducing this metric to zero is inherently difficult for count data. However, both the subspace distance and Procrustes metrics approach zero, supporting our claim that principal components can be estimated up to scale and rotation on fully observed count data. Furthermore, our simulations (Figure 1i-k and Figure 2a-h) suggest that among the log-ratio transforms benchmarked, only the ILR transform can recover principal components, corroborating previous findings surrounding compositional PCA (42; 23).

Dealing with sparse data presents additional challenges; the zeros in count data are indicative of missing data, which can further complicate the estimation of principal components. Our hypothesis that boosting the complexity of the encoder architecture would help alleviate issues with missing data is supported by the benchmarks shown in Figure 2i-p. In both of the sparse datasets, none of the methods were able to achieve optimal accuracy across any of the benchmark metrics, but there is a clear advantage to utilizing multilayer encoders compared to single-layer encoders.

Across the simulation benchmarks, the analytical and stochastic VAEs have comparable performance, with discrepancies highlighted by the correlation metric in the dense and sparse benchmarks. On the dense datasets, our proposed analytical VAE has better agreement with the ground truth covariance, whereas on the sparse datasets, the stochastic VAE appears to have better agreement. It is difficult to explain the reason for the performance gap between our proposed analytical VAE and the stochastic VAE, due to the analytical intractability of the Multinomial VAE ELBO. The challenge of accurately estimating an optimal H for each mini-batch could be one limiting factor affecting the performance of our proposed analytical VAE (Appendix B.4)." }, { "heading": "5.2 PRETRAINING MULTINOMIAL VAES ON A VERY LARGE COMPENDIUM OF MOUSE MICROBIOME OBSERVATIONS", "text": "To showcase this on real microbiome datasets, 11k mouse fecal pellet samples with a total of 8.8k features across 45 studies were analyzed. We trained two models to evaluate the differences between the stochastic VAE and the analytical VAE. We trained these models for 10k epochs; the models with the smallest validation error are reported here. The details behind the full training procedure are given in Appendix C.3.
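Orthogonality of a trained decoder can be checked directly; a short sketch (our own illustration, assuming a fitted weight matrix W):

import torch

def orthogonality_gap(W):
    # Relative off-diagonal mass of W^T W; near zero for orthogonal columns.
    G = W.T @ W
    off = G - torch.diag(torch.diagonal(G))
    return (off.abs().sum() / G.abs().sum()).item()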
The orthogonality of the decoder weights is more apparent in our proposed analytical VAE than in the stochastic VAE decoder; there is a larger distinction between the diagonal elements and the off-diagonal elements. Unlike linear VAEs (8), the connection between the eigenvalues and the decoder weights is not as clear due to the asymmetry between the encoder and the decoder; scale identifiability of the decoder weights complicates the process of estimating the eigenvalues. This is apparent in Figure 3; heatmaps of the W inner products are on vastly different scales.\nOne of the advantages of employing a pretraining strategy with VAEs is that the pretrained models can enable one-shot learning (51). By estimating a low-dimensional representation from a large unlabeled high-dimensional dataset, fewer labeled samples for training downstream classifiers. We showcase this property in two classification benchmarks." }, { "heading": "5.3 CLASSIFICATION AND CORRELATION BENCHMARKS", "text": "To construct benchmarks on real biological datasets, we analyzed 16S sequencing data obtained from a study conducted by Tripathi et al. investigating hypoxia that used mouse fecal pellets (52) and a study conducted by Shalapour et al. investigating cancer effects on mice (53).\nDue to the connections between Multinomial VAEs and compositional PCA (20; 22), we expect that, if our VAE is performing well, we will see that the row Euclidean distances of ΨW ∈ Rd×k correspond to Lovell’s un-normalized proportionality metric (54; 23) given by∥∥∥∥(ΨW )i − (ΨW )j∥∥∥∥2\n2\n∝ Var ( log\nxi xj\n) (12)\nwhere x refers to the observed proportions and i and j refer to the two features being compared. The proportionality metric has a simple interpretation; if this variance is small, that implies that the two features are highly co-occurring.\nHere, only a relative relationship between the proportionality metric and the VAE decoder row distances can be stated due to scale identifiability issues inherent in Multinomial PPCA. Proportionality has been shown to be a more reliable metric for compositional data compared to standard correlation metrics such as Pearson and Spearman in single cell RNA sequencing and microbiome studies (3; 55; 56; 2), supporting the notion that our proportional comparison of our VAE decoder to Lovell’s un-normalized proportionality metric will provide a sound perspective, as well as a means for interpreting the weights of the VAE.\nWe compared the estimated VAE embeddings with the proportionality metric to determine if this relationship holds empirically. The pairwise decoder distances are compared to the pairwise proportionality metrics of 300 selected features (Table 1 and Figure S5). The learned VAE representations are benchmarked using K-nearest neighbors (KNN) classification. KNN is applied to the learned VAE encodings across the two microbiome datasets to determine how well the classifiers can identify the mice that were induced with hypoxia (52) and classify mice based on their experimental group (53). These classification models are compared against two baseline models, namely KNN trained on LDA topic proportions (31), and KNN trained on raw counts.\nFrom the classification benchmarks illustrated in Table 1, we can see that there is a large margin between the VAE models and the baseline models across the measured metrics. Both VAEs had a higher F1 score and AUC, suggesting that the learned representations can better separate the experimental groups. 
The learned VAE representations are benchmarked using K-nearest neighbors (KNN) classification. KNN is applied to the learned VAE encodings across the two microbiome datasets to determine how well the classifiers can identify the mice that were induced with hypoxia (52) and classify mice based on their experimental group (53). These classification models are compared against two baseline models, namely KNN trained on LDA topic proportions (31) and KNN trained on raw counts.

From the classification benchmarks illustrated in Table 1, we can see that there is a large margin between the VAE models and the baseline models across the measured metrics. Both VAEs had a higher F1 score and AUC, suggesting that the learned representations can better separate the experimental groups. Part of this performance boost could be attributed to differences between distance metrics. When the ILR transform is utilized, the Euclidean distance between the latent encodings approximates the Aitchison distance between proportions (57; 23), which measures relative differences instead of absolute differences. Our findings corroborate theoretical insights from compositional data analysis and empirical microbiome observations, where the Aitchison distance provided a substantial boost in KNN classification accuracy compared to other distance metrics (58; 59). Furthermore, the average negative log-likelihood (NLL) is lower for both of the VAE models compared to LDA, suggesting that the VAE models generalized better on held-out data than LDA. This improvement in predictive log-likelihood could be attributed to the increased flexibility of the covariance matrix in the logistic normal distribution compared to the Dirichlet distribution.

There are a couple of notable discrepancies between the two Multinomial VAE models. The stochastic VAE appears to have superior classification performance and lower reconstruction error compared to the analytical VAE. The exception to this is the HCC dataset (53), where the analytical VAE marginally outperforms the stochastic VAE in terms of AUC. However, the analytical VAE learns more apparent orthogonal embeddings (Figure 3) and agrees better with Lovell's proportionality metric (Table 1). Only the analytical VAE was able to recover the log-linear relations between Lovell's proportionality and the VAE embedding distances, suggesting that it can more accurately learn biologically relevant correlations (Figure S5)." }, { "heading": "CONCLUSION", "text": "Prior work aiming to build frameworks for the probabilistic estimation of covariance matrices from count data has been largely limited to conjugate priors due to their tractability. However, these choices can lead to models with lower explanatory power due to the rigid structure of the resulting covariances. Further, the correct treatment of compositional data requires additional development in this context. For example, disregarding the simplicial sample space associated with compositional data and computing Pearson correlations on raw count data is common practice in scientific applications, but is a source of reproducibility issues (2). Due to the negative bias that the compositional constraint induces in covariances computed on observed count data, the estimated covariances will not agree with the ground truth covariances in the system of interest, a fact noted by Pearson in 1897 (60).

Adapting the logistic normal distribution in place of conventional conjugate priors provides a means to remedy the issue of inferring correlations on compositional data. The covariance matrix on ILR-transformed data can be interpreted using Lovell's proportionality metric and can serve as a replacement for pairwise correlations. Since log-ratios are scale-invariant, the dependence on the total counts disappears, which is critical for assuring agreement between the relative quantities measured through count data and the absolute quantities in the system that are not directly observable. While there is sound motivation to employ the logistic normal distribution in this context, its application has been limited due to the challenge of estimating these distributions. Here, we have provided a means to accurately and efficiently estimate these distributions and covariance matrices up to scale using the ILR transform.
To this end, we have provided a proof-of-concept that Multinomial VAEs can learn the principal components obtained from Multinomial PPCA.

We have shown that fitting low-rank approximations using Multinomial VAEs can provide more explanatory representations than LDA while providing a means to obtain interpretable correlations. These methods can be used in one-shot learning and transfer learning settings, requiring fewer labeled samples for training and allowing pretrained models to extract features from smaller datasets. Given the vast number of scientific disciplines that collect compositional data, we anticipate that models such as these Multinomial VAEs will have a significant impact on many scientific applications." }, { "heading": "CODE AVAILABILITY", "text": "All of our software and analyses can be found on Zenodo at http://doi.org/10.5281/zenodo.4289004" }, { "heading": "ACKNOWLEDGEMENTS", "text": "We want to acknowledge Pytorch (61), Pytorch-Lightning (62), the Biom format (63), Matplotlib (64), Scipy (65), Numpy (66) and Stan (49) for providing the software foundation that this work was built upon." }, { "heading": "Appendices", "text": "" }, { "heading": "A CHALLENGES IN THE DERIVATION OF AN ANALYTICAL MULTINOMIAL VAE ELBO", "text": "Recall that the generative model for Multinomial PPCA is given as follows:

p(x|η) = Mult(x|φ(Ψη))    (13)
p(η|z) = N(η|Wz + μ, σ²I_{d−1})    (14)
p(z) = N(z|0, I_k)    (15)

With this in mind, we wish to estimate variational distributions q(η, z|x) = q(η|z)q(z|x) to approximate the posterior p(η, z|x). These variational distributions can both be chosen to be normal distributions:

q(z|x) = N(VΨᵀ(l̃og(x) − μ), D)
q(η|z) = p(η|z)

Noting that z ∼ N(VΨᵀ(l̃og(x) − μ), D), q(η|x) can be derived from q(z|x) as follows:

q(η|x) = N( WVΨᵀ(l̃og(x) − μ) + μ,  WDWᵀ + σ²I )

To fine-tune these variational distributions to approximate the posterior distribution, we can minimize the following KL divergence:

argmin_{q(η,z|x)} KL( q(η, z|x) ‖ p(η, z|x) )

Since we cannot optimize this quantity directly, we opt instead to maximize the evidence lower bound (ELBO), given by

log p(x) ≥ E_{q(η,z|x)}[ log( p(x|η) p(η|z) p(z) / q(η, z|x) ) ]

We can partition this lower bound into three parts:

(i) E_{q(η|x)q(z|x)}[log p(x|η)]  +  (ii) E_{q(η|x)}[log( p(η|z) / q(η|x) )]  +  (iii) E_{q(z|x)}[log( p(z) / q(z|x) )]

Since log Mult(x|p) ∝ Σᵢ₌₁ᵈ xᵢ log pᵢ, the first term is given by

E_{q(η|x)q(z|x)}[log p(x|η)] ∝ E_{q(η|x)}[ Σᵢ₌₁ᵈ xᵢ log φ(Ψη)ᵢ ]
∝ ∫ N(η | WVΨᵀ(l̃og(x) − μ) + μ, WDWᵀ + σ²I) · Σᵢ₌₁ᵈ xᵢ log φ(Ψη)ᵢ dη

Estimating the above integral is equivalent to estimating the expectation of a logistic normal distribution, which does not have an analytical solution (37). As a result, the analytical ELBO for the Multinomial VAE is intractable." }, { "heading": "B THE ILR TRANSFORM", "text": "As outlined in Equation 5, any orthogonal basis Ψ can be used to perform the ILR transform. However, a select few of these bases can be represented by a binary tree. To see how an orthonormal basis can be constructed from a binary tree, consider the illustration in Figure S1.

Figure S1: A small example showing how an orthonormal basis can be constructed from a binary tree. x = (x₁, x₂, x₃, x₄) ∈ S⁴ represents species proportions and η = (η₁, η₂, η₃) ∈ R³ represents the log-ratios at the internal nodes. Ψ is a matrix of orthogonal contrasts that can also be represented by a binary tree.
Here, the rows of the matrix in Figure S1 are orthogonal, and the product η = Ψᵀ log x yields

η₁ = log( x₁ / (x₂x₃x₄) ),  η₂ = log( (x₂x₃) / x₄ ),  η₃ = log( x₃ / x₄ )

The contrast matrix Ψ can be forced to be orthonormal, such that ΨᵀΨ = I_{d−1}, as highlighted in Equation 6. Furthermore, this construction can be scaled to large binary trees, as shown in Figure S2. Here, ηₗ represents the log-ratio at the internal node l, given by

ηₗ = √( |r||s| / (|r| + |s|) ) · log( g(x_r) / g(x_s) )    (16)

The runtime of this operation is further discussed in Appendix B.1." }, { "heading": "B.1 THE RUNTIME OF THE ILR TRANSFORM", "text": "The naive runtime of the ILR transform of a single sample x ∈ R^d is O(d²), due to the cost of dense matrix-vector multiplication. As shown in Equation 5, the ILR transform can be represented by a log-linear transformation with a contrast matrix, and this contrast matrix can in turn be represented by a binary tree, as discussed in (46).

If the binary tree is balanced, each row of Ψ will have O(log d) non-zero elements, since the tree has a height of O(log d). Given that there are d rows, the matrix-vector multiplication behind the inverse ILR transform Ψη can be done in O(d log d). For the same reason, the ILR transform itself has a runtime of O(d log d).

Figure S2: An illustration of how the ILR basis can be constructed on large trees. The quantities g(x_r) and g(x_s) are the geometric means over the subsets x_r and x_s of a vector of proportions x. Here, r and s refer to the sets of features in the left and right subtrees of the internal node l. The log-ratios η can be obtained from either Equation 6 or Equation 16." }, { "heading": "B.2 SHIFT INVARIANCE OF THE SOFTMAX TRANSFORM", "text": "There are two different scale identifiability issues. The commonly used softmax transform is shift-invariant, in the sense that

φ(x) = φ(x + a·1_d)  for all a ∈ R

This shift invariance causes an identifiability issue when identifying the decoder matrix W. The inverse ALR transform resolves this issue by enforcing an isomorphism; one of the coordinates is set to zero as follows:

ALR(x) = [ log(x₁/x_d), …, log(x_{d−1}/x_d), 0 ],  ALR⁻¹(x) = φ( (x₁, …, x_{d−1}, 0) )

One implicit consequence of this construction is that the resulting contrast matrix is not orthogonal (42). Furthermore, because the ALR transform is not isometric, the resulting Euclidean distances in z will not approximate the Aitchison distance. The ILR transform provides the best of both worlds, enforcing both isometry and isomorphism. Furthermore, there is a tight connection between the ILR transform and compositional PCA, which is discussed in the next section." }, { "heading": "B.3 GLOBAL LOG-CONCAVITY OF THE MULTINOMIAL LOGISTIC NORMAL DISTRIBUTION", "text": "Since the multinomial logistic normal distribution is difficult to evaluate directly, it is challenging to make statements about its maximum likelihood estimator. With the posterior factorization highlighted in Equation 4, we can make concrete statements about the original posterior factors. The log probability density of the multinomial distribution p(x|η) can be written as

log p(x|η) ∝ Σᵢ₌₁ᵈ xᵢ log φ(Ψη)ᵢ = Σᵢ₌₁ᵈ xᵢ (Ψη)ᵢ − m log( Σⱼ₌₁ᵈ exp(Ψη)ⱼ ) = g(η)·T(x) − A(η)

where m = Σᵢ₌₁ᵈ xᵢ is the total number of counts, and g(η)ᵢ = (Ψη)ᵢ, T(x)ᵢ = xᵢ and A(η) = m log( Σⱼ₌₁ᵈ exp(Ψη)ⱼ ) define the natural parameterization of this exponential family distribution.
The Hessian of log p(x|η) is then

∂² log p(x|η) / ∂ηᵢ∂ηⱼ = − ∂²A(η) / ∂ηᵢ∂ηⱼ

Since A(η) is strictly convex, log p(x|η) is strictly concave. Similarly, the log probability density of the multivariate Gaussian distribution p(η|z) = N(μ, Σ) is strictly concave with respect to η. Since the log density of the multinomial logistic normal can be written as the sum of log p(η|z) and log p(x|η), it is also strictly concave; therefore, there must be a unique optimal estimate." }, { "heading": "B.4 MULTINOMIAL VARIATIONAL AUTOENCODER ESTIMATION", "text": "The covariance matrix WDWᵀ + σ²I_{d−1} ∈ R^{(d−1)×(d−1)} in Equation 11 can be efficiently inverted using the Woodbury identity (67; 68). The prior log p(Z(i)) is evaluated as log N(Z(i)|0, I_k), where Z(i) = F_L(Ψᵀ(l̃og(X(i)) − μ)).

The optimal Ĥ(i) that maximizes log p(X(i)|H(i)) is obtained through gradient descent. Once Ĥ(i) is obtained, the remaining VAE parameters θ can be estimated via gradient descent as well. Like the EM algorithm proposed in (69), this procedure alternates between estimating H(i) and θ, and repeats until convergence. For a single-layer linear encoder, this optimization procedure will eventually reach the global maximum with respect to H(i) and the multivariate normal mean and covariance in q(H(i)|θ, X(i)), due to the log-concavity of the multinomial and normal densities (Appendix B.3). On fully observed data, W and D can be estimated up to rotation and scale (Appendix B.6). This is not guaranteed for sparse count data, but our simulations empirically show that increasing the complexity of the encoder architecture can help. Since Ĥ(i) needs to be accurately estimated before optimizing F_L, W, σ and D, multiple gradient descent steps are required for each mini-batch.

In practice, obtaining the optimal Ĥ(i) for a given mini-batch i may require hundreds of gradient descent steps from a random initialization. Since H(i) approximates the multinomial parameters describing the observed counts, we can instead initialize it with Ĥ(i)₀ = Ψᵀ l̃og(X(i)); in practice, this greatly reduces the number of gradient descent updates needed per mini-batch." }, { "heading": "B.5 THE CONNECTION BETWEEN MULTINOMIAL VAES AND COMPOSITIONAL PCA", "text": "With singular value decomposition, the resulting factors can be used to approximate row and column distances. For a singular value decomposition X = USVᵀ, row and column distances can be approximated as follows:

‖x_{i·} − x_{j·}‖₂ ≈ ‖sᵢuᵢ − sⱼuⱼ‖₂,  ‖x_{·i} − x_{·j}‖₂ ≈ ‖sᵢvᵢ − sⱼvⱼ‖₂

This relationship also yields a connection to the row and column covariances, since XXᵀ = US²Uᵀ and XᵀX = VS²Vᵀ.

A similar relationship applies to compositional PCA, except that the singular value decomposition is applied to the CLR-transformed values of X. The CLR transform is given by

CLR(x) = [ log(x₁/g(x)), …, log(x_d/g(x)) ],  CLR⁻¹(x) = φ(x)

The inverse CLR transform is equivalent to the softmax transform φ(x). The Euclidean distance on CLR- and ILR-transformed values is the Aitchison distance (57); as a result, the row and column distances above become Aitchison distances. The covariance matrix of CLR-transformed values is singular, and as a result, the resulting singular value decomposition has rank d−1. As shown by Egozcue et al. (23), the top d−1 components of the singular value decomposition can all be represented in ILR coordinates. This observation cements the connection between compositional PCA and our proposed Multinomial VAE; since all of the singular value components can be represented in ILR coordinates, this theoretically justifies the use of the ILR transform within our proposed Multinomial VAE.
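Compositional PCA as described here has a direct sketch (our own illustration, assuming a proportion matrix P with no zeros):

import numpy as np

def clr_pca(P, k):
    # PCA on CLR-transformed proportions; returns sample scores and feature loadings.
    logP = np.log(P)
    clr = logP - logP.mean(axis=1, keepdims=True)   # subtract per-sample log geometric mean
    clr -= clr.mean(axis=0, keepdims=True)          # center each feature
    U, S, Vt = np.linalg.svd(clr, full_matrices=False)
    return U[:, :k] * S[:k], Vt[:k].T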
Due to this connection, the distances between the learned VAE representations z across pairs of samples should be proportional to the Aitchison distance. Furthermore, the distances between the rows of ΨW are also given by the Aitchison distance, which is equivalent to Lovell's un-normalized proportionality constant (23)." }, { "heading": "B.6 LACK OF SCALE IDENTIFIABILITY OF PPCA", "text": "As discussed in (8), VAEs have the same identifiability issues as PPCA; namely, for a diagonal matrix A ∈ R^{k×k}, the following equivalences hold:

W ← WA,  V ← A⁻¹V

As a result, the decoder weights W can only be identified up to scale. Furthermore, the Multinomial VAE architecture is asymmetric; as a result, the Transpose Theorem proposed by (17) is not readily applicable. The lack of symmetry between W and V complicates the process of obtaining eigenvalues from W." }, { "heading": "C SIMULATION AND MICROBIOME BENCHMARKS", "text": "" }, { "heading": "C.1 SIMULATION DETAILS", "text": "In the dense simulation highlighted in Figure 1, there were 100 features, 1000 samples and 1 million counts per sample. These counts were drawn from a multinomial logistic normal distribution whose covariance matrix had a rank of 10. The sparse dataset highlighted in Figure 2a-h contained 200 dimensions and 1000 samples, each with 100 counts sampled from a multinomial logistic normal distribution whose covariance matrix had a rank of 10. The sparse dataset highlighted in Figure 2i-p contained 5000 dimensions and 10000 samples, each with only 1000 counts in total; these counts were sampled from a multinomial logistic normal distribution whose covariance matrix had a rank of 128.

The encoder architecture used for the stochastic and analytical VAEs consisted of fully connected dense layers with interwoven softplus activation functions. The intermediate encoder layers all have the same input and output dimension as the number of latent dimensions specified in the model, and the same architecture was used for both the stochastic VAEs and the analytical VAEs. A schematic of the architecture is shown in Figure S3.

Multinomial PCA was fitted using both penalized likelihood estimation and Hamiltonian Monte Carlo in Stan (Appendix C.5). The initialization of these models was determined by the estimated factor loadings from the analytical VAE; this is particularly critical for fitting HMC on high-dimensional datasets. We were not able to run Stan on the sparse datasets." }, { "heading": "C.2 EXPERIMENTAL DATASET DETAILS", "text": "The full mouse pellet dataset (11k samples, 8.8k features) was retrieved from Qiita (70) using redbiom (71).

In the hypoxia study conducted by Tripathi et al., mice were placed in simulated conditions to induce intermittent hypoxic/hypercapnic (IHH) stress, where the mouse's oxygen supply was reduced and its CO2 supply was increased. The goal of this experiment was to detect gut microbiome differences between the IHH mice and the control mice. We use this information as a classification benchmark to determine whether the experimental conditions the mice were placed under can be predicted from 16S sequencing counts.
" }, { "heading": "C.2 EXPERIMENTAL DATASET DETAILS", "text": "The full 8.8k mouse pellet dataset was retrieved from qiita (70) using redbiom (71).

In the hypoxia study conducted by Tripathi et al., mice were placed in simulated conditions to induce intermittent hypoxic/hypercapnic (IHH) stress, where the mouse's oxygen supply was reduced and its CO2 supply was increased. The goal of this experiment was to detect the gut microbiome differences between the IHH mice and the control mice. We use this information as a classification benchmark to determine whether the experimental conditions the mice were placed under can be predicted from 16S sequencing counts. In total, there were 48 mice; 24 of these mice were placed in the IHH conditions and the remaining 24 mice served as controls. Fecal samples were collected twice a week for 6 weeks, resulting in 579 samples in total. There were 5775 microbial taxa detected in this dataset.

The goal of the hepatocellular carcinoma (HCC) study conducted by Shalapour was to investigate the interplay between immunity, diet and carcinoma. In total, there were 478 mice with 52 phenotypes, split according to diet, immunity status and whether or not the mice were induced with HCC. Our classification objective is to predict the mouse phenotype from the 16S sequencing counts obtained from their fecal pellets. One fecal pellet was collected from each mouse, and there were 2794 microbial taxa detected across these samples. This benchmark is more challenging than the IHH benchmark due to the extreme class imbalance; some phenotypes contain upwards of 30 mice while others contain as few as 2 mice (Figure S4). The resulting classification benchmarks are shown in Table 1.

Figure S3: Visualization of VAE architecture" }, { "heading": "C.3 MICROBIOME VAE TRAINING DETAILS", "text": "For the full 11k fecal pellet dataset, we focused on samples that were processed using closed-reference OTU picking with vsearch (72). This dataset was split into an 80/10/10 train/validation/test split in order to measure out-of-distribution generalizability.

We trained our proposed analytical VAE and the stochastic VAE with 128 latent dimensions and a batch size of 1000 samples for 10000 epochs, with a learning rate of 10⁻³ and 100 gradient descent steps per mini-batch. Checkpoints were recorded every epoch; the models with the best validation error are reported. Cosine annealing with warm restarts was used as a learning rate scheduler, with the intention of easily escaping saddle points during optimization." }, { "heading": "C.4 METRICS FOR EVALUATING ESTIMATED DECODER WEIGHTS", "text": "To measure the agreement between the ground truth simulations and the estimated decoder weights W, three different metrics were evaluated, namely axis alignment and subspace distance, as proposed in (18), in addition to Procrustes analysis (50). Given the decoder weights W and the ground truth principal components U, these metrics are defined as follows:" }, { "heading": "Axis alignment", "text": "

d(U, W) = 1 − (1/k) Σ_{i=1}^{k} max_j (U_i^T W_j)² / (‖U_i‖₂² ‖W_j‖₂²)

This metric is ultimately a measure of the average cosine distance between the ground truth eigenvectors and the estimated decoder weights." }, { "heading": "Subspace distance", "text": "

d(U, W) = 1 − Tr(UU^T W_* W_*^T)

where W_* denotes the left singular vectors of W, given by W = W_* Λ V. Since UU^T yields the ground truth correlations and W_* W_*^T yields the estimated correlations, this metric can be interpreted as a measure of agreement between the ground truth correlations and the estimated correlations." }, { "heading": "Procrustes Analysis", "text": "

d(U, W) = min_{R,A} ‖U − WRA‖₂²

for A, R ∈ R^{k×k}, where A is a diagonal matrix and R is a rotation matrix. Prior to evaluating this metric, both U and W are standardized such that Tr(UU^T) = Tr(WW^T) = 1 and W is centered around the origin.

Correlation

d(U, W) = Corr(vec(A_U), vec(A_W))

where A_U(i, j) = ‖u_i − u_j‖₂ and A_W(i, j) = ‖w_i − w_j‖₂ denote the pairwise distances within U and W. The pairwise distances are rotation invariant. Furthermore, the correlation measure is agnostic to scale and will ignore the eigenvalue scale identifiability issues highlighted in the main text.
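For clarity, a small NumPy sketch of the first two metrics is given below; it assumes (our convention) that the components are stored in the columns of U and W and that U has orthonormal columns:

import numpy as np

def axis_alignment(U, W):
    # 1 minus the mean, over ground-truth axes, of the best squared cosine
    # similarity achieved by any estimated decoder column.
    Un = U / np.linalg.norm(U, axis=0, keepdims=True)
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)
    cos2 = (Un.T @ Wn) ** 2          # k x k matrix of squared cosines
    return 1.0 - cos2.max(axis=1).mean()

def subspace_distance(U, W):
    # 1 - Tr(U U^T W* W*^T), with W* the left singular vectors of W.
    W_star, _, _ = np.linalg.svd(W, full_matrices=False)
    return 1.0 - np.trace(U @ U.T @ W_star @ W_star.T)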
" }, { "heading": "C.5 STAN MULTINOMIAL PCA IMPLEMENTATION", "text": "
data {
  int<lower=0> N;              // number of samples
  int<lower=0> D;              // number of dimensions
  int<lower=0> K;              // number of latent dimensions
  matrix[D-1, D] Psi;          // orthonormal basis
  int y[N, D];                 // observed counts
}

parameters {
  matrix[N, D-1] eta;          // ILR-transformed abundances
  matrix[D-1, K] W;
  real<lower=0> sigma;
}

transformed parameters {
  matrix[D-1, D-1] Sigma;
  matrix[D-1, D-1] I;
  vector[D-1] z;
  I = diag_matrix(rep_vector(1.0, D-1));
  Sigma = W * W' + square(sigma) * I;
  z = rep_vector(0, D-1);
}

model {
  // generating counts
  for (n in 1:N) {
    eta[n] ~ multi_normal(z, Sigma);
    y[n] ~ multinomial(softmax(to_vector(eta[n] * Psi)));
  }
}

Figure S4: Distribution of 52 phenotypes in the Shalapour et al. study, highlighting the class imbalance inherent in this benchmark.

Due to the relationship between squared Euclidean distance and proportionality given in Equation 12, we expect the proportionality metric to exhibit a log-linear relationship with the Euclidean distances obtained from the VAE embeddings. Indeed, this is apparent with the analytical VAE embeddings in Shalapour et al., and to a lesser extent in Tripathi et al., as shown in Figure S5.

Figure S5: Comparison of Lovell's proportionality metric and the pairwise embedding distances for the stochastic VAE and the analytical VAE evaluated on the Shalapour et al. dataset and the Tripathi et al. dataset. The embedding distance is defined by the Euclidean distance between two rows within ΨW. Since the proportionality metric is not defined for zeros, it is only evaluated for samples where both taxa are observed. Only the top 300 most abundant microbes are visualized here." } ]
2,020
null
SP:dd0782278b556d2946ddd4bb7ea71c2bfbea948d
[ "This paper proposes an adaptive self-training framework, called MetaST, for tackling few-shot sequence labeling tasks. The framework consists of several components: a teacher model that finetunes with the few-shot training data and generates noisy labels for the unlabeled examples; a student model that learns from re-weighted noisy labels (at the token level), and an iterative process to update the teacher with the trained student. It also uses a meta-learning mechanism to adjust the token-level weights based on a subsampled set of clean data. This subset is sampled based on the student model’s uncertainty to improve learning efficiency." ]
Neural sequence labeling is an important technique employed for many Natural Language Processing (NLP) tasks, such as Named Entity Recognition (NER), slot tagging for dialog systems and semantic parsing. Large-scale pre-trained language models obtain very good performance on these tasks when fine-tuned on large amounts of task-specific labeled data. However, such large-scale labeled datasets are difficult to obtain for several tasks and domains due to the high cost of human annotation as well as privacy and data access constraints for sensitive user applications. This is exacerbated for sequence labeling tasks, which require such annotations at the token level. In this work, we develop techniques to address the label scarcity challenge for neural sequence labeling models. Specifically, we develop self-training and meta-learning techniques for training neural sequence taggers with few labels. While self-training serves as an effective mechanism to learn from large amounts of unlabeled data, meta-learning helps in adaptive sample re-weighting to mitigate error propagation from noisy pseudo-labels. Extensive experiments on six benchmark datasets, including two for massive multilingual NER and four slot tagging datasets for task-oriented dialog systems, demonstrate the effectiveness of our method. With only 10 labeled examples for each class for each task, our method obtains a 10% improvement over state-of-the-art systems, demonstrating its effectiveness for the low-resource setting.
[]
[ { "authors": [ "Philip Bachman", "Ouais Alsharif", "Doina Precup" ], "title": "Learning with pseudo-ensembles", "venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Trapit Bansal", "Rishikesh Jha", "Andrew McCallum" ], "title": "Learning to few-shot learn across diverse natural language classification", "venue": null, "year": 2020 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "Proceedings of the 26th Annual International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Haw-Shiuan Chang", "Erik Learned-Miller", "Andrew McCallum" ], "title": "Active bias: Training more accurate neural networks by emphasizing high variance samples", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Haw-Shiuan Chang", "Erik G. Learned-Miller", "Andrew McCallum" ], "title": "Active bias: Training more accurate neural networks by emphasizing high variance samples", "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Haw-Shiuan Chang", "Shankar Vembu", "Sunil Mohan", "Rheeya Uppaal", "Andrew McCallum" ], "title": "Using error decay prediction to overcome practical issues of deep active learning for named entity recognition", "venue": "Machine Learning,", "year": 2020 }, { "authors": [ "Luoxin Chen", "Weitong Ruan", "Xinyue Liu", "Jianhua Lu" ], "title": "Seqvat: Virtual adversarial training for semi-supervised sequence labeling", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Mingda Chen", "Qingming Tang", "Karen Livescu", "Kevin Gimpel" ], "title": "Variational sequential labelers for semi-supervised learning", "venue": "arXiv preprint arXiv:1906.09535,", "year": 2019 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Christopher D Manning", "Quoc V Le" ], "title": "Semi-supervised sequence modeling with cross-view training", "venue": "arXiv preprint arXiv:1809.08370,", "year": 2018 }, { "authors": [ "Alice Coucke", "Alaa Saade", "Adrien Ball", "Théodore Bluche", "Alexandre Caulier", "David Leroy", "Clément Doumouro", "Thibault Gisselbrecht", "Francesco Caltagirone", "Thibaut Lavril", "Maël Primet", "Joseph Dureau" ], "title": "Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces", "venue": "In Privacy in Machine Learning and Artificial Intelligence workshop,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "venue": "Minneapolis, MN,", "year": 2019 }, { "authors": [ "Yarin Gal", "Riashat Islam", "Zoubin Ghahramani" ], "title": "Deep bayesian active learning with image data", "venue": "In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Junxian He", "Jiatao Gu", "Jiajun Shen", "Marc’Aurelio Ranzato" ], "title": "Revisiting self-training for neural sequence generation, 2019", "venue": null, "year": 2019 }, { "authors": [ "H.J. Scudder III" ], "title": "Probability of error of some adaptive pattern-recognition machines", "venue": "IEEE Trans. Inf. 
Theory,", "year": 1965 }, { "authors": [ "Diederik P. Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semisupervised learning with deep generative models", "venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Pang Wei Koh", "Percy Liang" ], "title": "Understanding black-box predictions via influence functions", "venue": "arXiv preprint arXiv:1703.04730,", "year": 2017 }, { "authors": [ "Ksenia Konyushkova", "Raphael Sznitman", "Pascal Fua" ], "title": "Learning active learning from data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ananya Kumar", "Tengyu Ma", "Percy Liang" ], "title": "Understanding self-training for gradual domain adaptation", "venue": "arXiv preprint arXiv:2002.11361,", "year": 2020 }, { "authors": [ "M.P. Kumar", "Benjamin Packer", "Daphne Koller" ], "title": "Self-paced learning for latent variable models", "venue": "Advances in Neural Information Processing Systems", "year": 2010 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Xinzhe Li", "Qianru Sun", "Yaoyao Liu", "Qin Zhou", "Shibao Zheng", "Tat-Seng Chua", "Bernt Schiele" ], "title": "Learning to self-train for semi-supervised few-shot classification", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Chen Liang", "Yue Yu", "Haoming Jiang", "Siawpeng Er", "Ruijia Wang", "Tuo Zhao", "Chao Zhang" ], "title": "Bond: Bert-assisted open-domain named entity recognition with distant supervision", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "J. Liu", "Panupong Pasupat", "D. Cyphers", "James R. 
Glass" ], "title": "Asgard: A portable architecture for multilingual dialogue systems", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2013 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized BERT pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Fan Ma", "Deyu Meng", "Qi Xie", "Zina Li", "Xuanyi Dong" ], "title": "Self-paced co-training", "venue": "In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Yishu Miao", "Phil Blunsom" ], "title": "Language as a latent variable: Discrete generative models for sentence compression", "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Scott Miller", "Jethran Guinness", "Alex Zamanian" ], "title": "Name tagging with word clusters and discriminative training", "venue": "In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL", "year": 2004 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Xiaoman Pan", "Boliang Zhang", "Jonathan May", "Joel Nothman", "Kevin Knight", "Heng Ji" ], "title": "Crosslingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2017 }, { "authors": [ "Emmeleia Panagiota Mastoropoulou" ], "title": "Enhancing deep active learning using selective self-training for image classification", "venue": "Master’s thesis, KTH, School of Electrical Engineering and Computer Science (EECS),", "year": 2019 }, { "authors": [ "Matthew E Peters", "Waleed Ammar", "Chandra Bhagavatula", "Russell Power" ], "title": "Semi-supervised sequence tagging with bidirectional language models", "venue": "arXiv preprint arXiv:1705.00108,", "year": 2017 }, { "authors": [ "Slav Petrov", "Ryan McDonald" ], "title": "Overview of the 2012 shared task on parsing the web", "venue": null, "year": 2012 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": null, "year": 2019 }, { "authors": [ "Antti Rasmus", "Mathias Berglund", "Mikko Honkala", "Harri Valpola", "Tapani Raiko" ], "title": "Semisupervised learning with ladder networks", "venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Mengye Ren", "Wenyuan Zeng", "Bin Yang", "Raquel Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Sebastian Ruder", "Barbara Plank" ], "title": "Strong baselines for neural semi-supervised learning under domain shift", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { 
"authors": [ "Erik F. Tjong Kim Sang", "Fien De Meulder" ], "title": "Introduction to the conll-2003 shared task: Languageindependent named entity recognition", "venue": "In Proceedings of the Seventh Conference on Natural Language Learning", "year": 2003 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics, 2016", "venue": "doi: 10.18653/v1/p16-1009", "year": 2016 }, { "authors": [ "Abhinav Shrivastava", "Abhinav Gupta", "Ross Girshick" ], "title": "Training region-based object detectors with online hard example mining", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Jun Shu", "Qi Xie", "Lixuan Yi", "Qian Zhao", "Sanping Zhou", "Zongben Xu", "Deyu Meng" ], "title": "Meta-weightnet: Learning an explicit mapping for sample weighting", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Q. Sun", "Y. Liu", "T. Chua", "B. Schiele" ], "title": "Meta-transfer learning for few-shot learning", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Antti Tarvainen", "Harri Valpola" ], "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Erik F Tjong", "Kim Sang", "Jorn Veenstra" ], "title": "Representing text chunks", "venue": "In Ninth Conference of the European Chapter of the Association for Computational Linguistics,", "year": 1999 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Eduard Hovy", "Minh-Thang Luong", "Quoc V. Le" ], "title": "Unsupervised data augmentation for consistency training, 2019", "venue": null, "year": 2019 }, { "authors": [ "Qizhe Xie", "Minh-Thang Luong", "Eduard Hovy", "Quoc V. Le" ], "title": "Self-training with noisy student improves imagenet classification", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Pengcheng Yin", "Chunting Zhou", "Junxian He", "Graham Neubig" ], "title": "Structvae: Tree-structured latent variable models for semi-supervised semantic parsing", "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Chunting Zhou", "Graham Neubig" ], "title": "Multi-space variational encoder-decoders for semi-supervised labeled sequence transduction", "venue": "arXiv preprint arXiv:1704.01691,", "year": 2017 }, { "authors": [ "Barret Zoph", "Golnaz Ghiasi", "Tsung-Yi Lin", "Yin Cui", "Hanxiao Liu", "Ekin Dogus Cubuk", "Quoc Le" ], "title": "Rethinking pre-training and self-training", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Motivation. Deep neural networks typically require large amounts of training data to achieve stateof-the-art performance. Recent advances with pre-trained language models like BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019) and RoBERTa (Liu et al., 2019) have reduced this annotation bottleneck. In this paradigm, large neural network models are trained on massive amounts of unlabeled data in a self-supervised manner. However, the success of these large-scale models still relies on fine-tuning them on large amounts of labeled data for downstream tasks. For instance, our experiments show 27% relative improvement on an average when fine-tuning BERT with the full training set (2.5K-705K labels) vs. fine-tuning with only 10 labels per class. This poses several challenges for many real-world tasks. Not only is acquiring large amounts of labeled data for every task expensive and time consuming, but also not feasible in many cases due to data access and privacy constraints. This issue is exacerbated for sequence labeling tasks that require annotations at token- and slot-level as opposed to instance-level classification tasks. For example, an NER task can have slots like B-PER, I-PER, O-PER marking the beginning, intermediate and out-of-span markers for person names, and similar slots for the names of location and organization. Similarly, language understanding models for dialog systems rely on effective identification of what the user intends to do (intents) and the corresponding values as arguments (slots) for use by downstream applications. Therefore, fully supervised neural sequence taggers are expensive to train for such tasks, given the requirement of thousands of annotations for hundreds of slots for the many different intents.\nSemi-supervised learning (SSL) (Chapelle et al., 2010) is one of the promising paradigms to address labeled data scarcity by making effective use of large amounts of unlabeled data in addition to task-specific labeled data. Self-training (ST, (III, 1965)) as one of the earliest SSL approaches has recently shown state-of-the-art performance for tasks like image classification (Li et al., 2019; Xie et al., 2020) performing at par with supervised systems while using very few training labels. In contrast to such instance-level classification tasks, sequence labeling tasks have dependencies\nbetween the slots demanding different design choices for slot-level loss optimization for the limited labeled data setting. For instance, prior work (Ruder & Plank, 2018) using classic self-training techniques for sequence labeling did not find much success in the low-data regime with 10% labeled data for the target domain. Although there has been some success with careful task-specific data selection (Petrov & McDonald, 2012) and more recently for distant supervision (Liang et al., 2020) using external resources like knowledge bases (e.g., Wikipedia). In contrast to these prior work, we develop techniques for self-training with limited labels and without any task-specific assumption or external knowledge.\nFor self-training, a base model (teacher) is trained on some amount of labeled data and used to pseudo-annotate (task-specific) unlabeled data. The original labeled data is augmented with the pseudo-labeled data and used to train a student model. The student-teacher training is repeated until convergence. Traditionally in self-training frameworks, the teacher model pseudo-annotates unlabeled data without any sample selection. 
This may result in gradual drifts from self-training on noisy pseudo-labeled instances (Zhang et al., 2017). In order to deal with noisy labels and training set biases, Ren et al. (2018) propose a meta-learning technique to automatically re-weight noisy samples by their loss changes on a held-out clean labeled validation set. We adopt a similar principle in our work and leverage meta-learning to re-weight noisy pseudo-labeled examples from the teacher. While prior techniques for learning to re-weight examples have been developed for instance-level classification tasks, we extend them to operate at the token level for discrete sequence labeling tasks. To this end, we address some key challenges on how to construct an informative held-out validation set for token-level re-weighting. Prior works (Ren et al., 2018; Shu et al., 2019) for instance classification construct this validation set by random sampling. However, sequence labeling tasks involve many slots (e.g., WikiAnn has 123 slots over 41 languages) with variable difficulty and distribution in the data. In the case of random sampling, the model oversamples from the most populous categories and slots. This is particularly detrimental for low-resource languages in the multilingual setting. To this end, we develop an adaptive mechanism to create the validation set on the fly, considering the diversity and uncertainty of the model for different slot types. Furthermore, we leverage this validation set for token-level loss estimation and the re-weighting of pseudo-labeled sequences from the teacher in the meta-learning setup. While prior works (Li et al., 2019; Sun et al., 2019; Bansal et al., 2020) on meta-learning for image and text classification leverage multi-task learning to improve a target classification task based on several similar tasks, in this work we focus on a single sequence labeling task, making our setup more challenging altogether.

Our task and framework overview. We focus on sequence labeling tasks with only a few annotated samples (e.g., K = {5, 10, 20, 100}) per slot type for training and large amounts of task-specific unlabeled data. Figure 1 shows an overview of our framework with the following components: (i) Self-training: Our self-training framework leverages a pre-trained language model as a teacher and co-trains a student model with iterative knowledge exchange. (ii) Adaptive labeled data acquisition for validation: Our few-shot learning setup assumes a small number of labeled training samples per slot type. The labeled data from multiple slot types are not equally informative for the student model to learn from. While prior works in meta-learning randomly sample some labeled examples for the held-out validation set, we develop an adaptive mechanism to create this set on the fly. To this end, we leverage loss decay as a proxy for model uncertainty to select informative labeled samples for the student model to learn from, in conjunction with the re-weighting mechanism in the next step. (iii) Meta-learning for sample re-weighting: Since pseudo-labeled samples from the teacher can be noisy, we employ meta-learning to re-weight them to improve the student model performance on the held-out validation set obtained from the previous step. In contrast to prior work (Ren et al., 2018) on sample re-weighting operating at the instance level, we incorporate the re-weighting mechanism at the token level for sequence labeling tasks. Here the token-level weights are determined by the student model loss on the above validation set.
Finally, we learn all of the above steps jointly with end-to-end learning in the self-training framework. We refer to our adaptive self-training framework with the meta-learning based sample re-weighting mechanism as MetaST.

We perform extensive experiments on six benchmark datasets for several tasks, including multilingual Named Entity Recognition and slot tagging for user utterances from task-oriented dialog systems, to demonstrate the generalizability of our approach across diverse tasks and languages. We adopt BERT and multilingual BERT as encoders and show that their performance can be significantly improved by nearly 10% for low-resource settings with few training labels (e.g., 10 labeled examples per slot type) and large amounts of unlabeled data. In summary, our work makes the following contributions. (i) Develops a self-training framework for neural sequence tagging with few labeled training examples. (ii) Leverages an acquisition strategy to adaptively select a validation set from the labeled set for meta-learning of the student model. (iii) Develops a meta-learning framework for re-weighting pseudo-labeled samples at the token level to reduce drifts from noisy teacher predictions. (iv) Integrates the aforementioned components into an end-to-end learning framework and demonstrates its effectiveness for neural sequence labeling across six benchmark datasets with multiple slots, shots, domains and languages." }, { "heading": "2 BACKGROUND", "text": "Sequence labeling and slot tagging. This is the task of identifying the entity span of several slot types (e.g., names of person, organization, location, date, etc.) in a text sequence. Formally, given a sentence with N tokens X = {x_1, ..., x_N}, an entity or slot value is a span of tokens s = [x_i, ..., x_j] (0 ≤ i ≤ j ≤ N) associated with a type. This task assumes a pre-defined tagging policy like BIO (Tjong et al., 1999), where B marks the beginning of the slot, I marks an intermediate token in the span, and O marks out-of-span tokens. These span markers are used to extract multi-token values for each of the slot types, with phrase-level evaluation of the performance.

Self-training. Let f(·; θ_tea) and f(·; θ_stu) denote the teacher and student models, respectively, in the self-training framework. The role of the teacher model (e.g., a pre-trained language model) is to assign pseudo-labels to unlabeled data, which are used to train a student model. The teacher and student models can exchange knowledge, and the training schedules are repeated until convergence. The success of self-training with deep neural networks in recent works (He et al., 2019; Xie et al., 2020) has been attributed to a number of factors, including stochastic regularization with dropouts and data regularization with unlabeled data. Formally, given the m-th unlabeled sentence with N tokens X_m^u = {x_{m,1}^u, ..., x_{m,N}^u} and C pre-defined labels, consider the pseudo-labels Ŷ_m^{(t)} = [ŷ_{m,1}^{(t)}, ..., ŷ_{m,N}^{(t)}] generated by the teacher model at the t-th iteration, where

ŷ_{m,n}^{(t)} = argmax_{c∈C} f_{n,c}(x_{m,n}^u; θ_tea^{(t)}).   (1)

The pseudo-labeled dataset, denoted as (X^u, Ŷ^{(t)}) = {(X_m^u, Ŷ_m^{(t)})}_{m=1}^M, is used to train the student model and learn its parameters as:

θ̂_stu^{(t)} = argmin_θ (1/M) Σ_{m=1}^{M} l(Ŷ_m^{(t)}, f(X_m^u; θ_stu^{(t−1)})),   (2)

where l(·, ·) can be modeled as the cross-entropy loss.
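To make the updates in Eqs. (1)-(2) concrete, the following is a minimal PyTorch sketch of one classic self-training round. It assumes (our assumption, not part of the paper) that the teacher and student are token-classification modules returning [batch, seq_len, C] logits, and it omits padding masks and sub-word alignment for brevity:

import torch
import torch.nn.functional as F

def pseudo_label(teacher, unlabeled_loader, device="cpu"):
    # Eq. 1: hard pseudo-labels from the teacher's per-token argmax.
    teacher.eval()
    batches = []
    with torch.no_grad():
        for x in unlabeled_loader:                 # x: [batch, seq_len, ...]
            logits = teacher(x.to(device))         # [batch, seq_len, C]
            batches.append((x, logits.argmax(dim=-1)))
    return batches

def train_student(student, optimizer, pseudo_batches, device="cpu"):
    # Eq. 2: minimize token-level cross-entropy on the pseudo-labels.
    student.train()
    for x, y_hat in pseudo_batches:
        logits = student(x.to(device))
        loss = F.cross_entropy(logits.flatten(0, 1), y_hat.to(device).flatten())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()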
}, { "heading": "3 ADAPTIVE SELF TRAINING", "text": "Given a pre-trained language model (e.g., BERT (Devlin et al., 2019)) as the teacher, we first finetune it on the small labeled data to make it aware of the underlying task. The fine-tuned teacher model is now used to pseudo-label the large unlabeled data. We consider the student model as another instantiation of the pre-trained language model that is trained over the pseudo-labeled data. However, our few-shot setting with limited labeled data results in a noisy teacher. A naive transfer of teacher knowledge to the student results in the propagation of noisy labels limiting the performance of the student model. To address this challenge, we develop an adaptive self-training framework to\nre-weight pseudo-labeled predictions from the teacher with a meta-learning objective that optimizes the token-level loss from the student model on a held-out labeled validation set. This held-out set is adaptively constructed via labeled data acquisition which selects labeled samples with high uncertainty for efficient data exploration." }, { "heading": "3.1 ADAPTIVE LABELED DATA ACQUISITION", "text": "In standard meta-learning setup for instance-level classification tasks, the held-out validation set is usually constructed via random sampling (Ren et al., 2018; Shu et al., 2019). Sequence labeling tasks involve many slot types with variable difficulty and distribution in the data. For instance, NER tasks over WikiAnn operate over 123 slot types from 41 languages with additional complexity from variable model performance across different languages. A random sampling leads to oversampling instances with the most populous categories and slot types in the data. Therefore, we propose a novel labeled data acquisition strategy to construct the validation set for effective data exploration. We demonstrate its benefit over classic meta-learning approaches from prior works in experiments.\nIn general, data acquisition strategies for prior works in meta-learning and active learning broadly leverage random sampling (Ren et al., 2018; Shu et al., 2019), easy (Kumar et al., 2010) and hard example mining (Shrivastava et al., 2016) or uncertainty-based methods (Chang et al., 2017a). These strategies have been compared in prior works (Chang et al., 2017a; Gal et al., 2017) that show uncertainty-based methods to have better generalizability across diverse settings. There are several approaches to uncertainty estimation including error decay (Konyushkova et al., 2017; Chang et al., 2020), Monte Carlo dropouts (Gal et al., 2017) and predictive variance (Chang et al., 2017a). We follow a similar principle of error decay to find samples that the model is uncertain about and can correspondingly benefit from knowing their labels (similar to active learning settings). To this end, we leverage stochastic loss decay from the model as a proxy for the model uncertainty to generate validation set on the fly. This is used for estimating token-level weights and re-weighting pseudo labeled data in Section 3.2.\nConsider the loss of the student model with parameters θ(t)stu on the labeled data (X l m, Ym) in the t-th iteration as l(Ym, f(X lm; θ (t) stu)). To measure the loss decay value at any iteration, we use the difference between the current and previous loss values. Considering these values may fluctuate across iterations, we adopt the moving average of the loss values for (X lm, Ym) in the latest R iterations as a baseline lmb for loss decay estimation. 
The baseline measure l_b^m is calculated as follows:

l_b^m = (1/R) Σ_{r=1}^{R} l(Y_m, f(X_m^l; θ_stu^{(t−r)})).   (3)

Since the loss decay values are estimated on the fly, we want to balance exploration and exploitation. To this end, we add a smoothness factor δ to prevent low loss decay samples (i.e., samples with low uncertainty) from never being selected again. Considering all of the above factors, we obtain the sampling weight of labeled data (X_m^l, Y_m) as follows:

W_m ∝ max(l_b^m − l(Y_m, f(X_m^l; θ_stu^{(t)})), 0) + δ.   (4)

The smoothness factor δ needs to be adaptive since the training loss is dynamic. Therefore, we adopt the maximum of the loss decay values as the smoothness factor δ to encourage exploration.

The aforementioned acquisition function is re-estimated after a fixed number of steps to adapt to model changes. With labeled data acquisition, we rely on informative uncertain samples to improve learning efficiency. The sampled mini-batches of labeled data {B_s^l} are used as a validation set for the student model in the next step for re-weighting pseudo-labeled data from the teacher model. We demonstrate its impact via an ablation study in experiments. Note that the labeled data is only used to compute the acquisition function and is not used for explicit training of the student model in this step.
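As a minimal PyTorch sketch of Eqs. (3)-(4), the snippet below computes the sampling distribution from a per-example loss history; the tensor layout is our assumption, and a small floor on δ is added purely for numerical safety:

import torch

def acquisition_weights(loss_history, current_loss, R=10):
    # loss_history: [num_labeled, >=R] losses from the latest iterations;
    # current_loss: [num_labeled] losses under the current student model.
    baseline = loss_history[:, -R:].mean(dim=1)             # Eq. 3: moving average
    decay = torch.clamp(baseline - current_loss, min=0.0)   # loss decay, floored at 0
    delta = decay.max().clamp(min=1e-8)                     # adaptive smoothness factor
    weights = decay + delta                                 # Eq. 4 (up to normalization)
    return weights / weights.sum()

# Usage: draw a labeled validation mini-batch proportionally to the weights, e.g.,
# idx = torch.multinomial(weights, num_samples=batch_size, replacement=False)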
" }, { "heading": "3.2 RE-WEIGHTING PSEUDO-LABELED DATA", "text": "To mitigate error propagation from noisy pseudo-labeled sequences from the teacher, we leverage meta-learning to adaptively re-weight them based on the student model loss on the held-out validation set obtained via labeled data acquisition from the previous section. In contrast to prior work focusing on instance-level tasks like image classification, sequence labeling operates on discrete text sequences as input and assigns labels to each token in the sequence. Since teacher predictions vary for different slot labels and types, we adapt the meta-learning framework to re-weight samples at a token-level resolution.

Token Re-weighting. Consider the pseudo-labels {Ŷ_m^{(t)} = [ŷ_{m,1}^{(t)}, ..., ŷ_{m,N}^{(t)}]}_{m=1}^M from the teacher in the t-th iteration, with m and n indexing the instance and a token in the instance, respectively. In classic self-training, we update the student parameters leveraging the pseudo-labels as follows:

θ̂_stu^{(t)} = θ̂_stu^{(t−1)} − α∇( (1/M) Σ_{m=1}^{M} l(Ŷ_m^{(t)}, f(X_m^u; θ_stu^{(t−1)})) ).   (5)

Now, to downplay noisy token-level labels, we leverage meta-learning to re-weight the pseudo-labeled data. To this end, we follow a similar analysis to (Koh & Liang, 2017) and (Ren et al., 2018) and perturb the weight of each token in the mini-batch by ε. Weight perturbation is used to discover data points that are most important for improving the model performance on a held-out validation set (Koh & Liang, 2017), where the sample importance is given by the magnitude of the negative gradients. We extend prior techniques to obtain token-level perturbations as:

θ̂_stu^{(t)}(ε) = θ̂_stu^{(t−1)} − α∇( (1/M)(1/N) Σ_{m=1}^{M} Σ_{n=1}^{N} [ε_{m,n} · l(ŷ_{m,n}^{(t)}, f(x_{m,n}^u; θ̂_stu^{(t−1)}))] ).   (6)

The token weights are obtained by minimizing the student model loss on the held-out validation set. Here, we employ the labeled data acquisition strategy from Eq. 4 to sample informative mini-batches of labeled data B_s^l locally at step t. To obtain a cheap estimate of the meta-weights at step t, we take a single gradient descent step for the sampled labeled mini-batch B_s^l:

u_{m,n,s} = − ∂/∂ε_{m,n,s} ( (1/|B_s^l|)(1/N) Σ_{m=1}^{|B_s^l|} Σ_{n=1}^{N} l(y_{m,n}, f(x_{m,n}^l; θ̂_stu^{(t)}(ε))) ) |_{ε_{m,n,s}=0}   (7)

We set the token weights to be proportional to the negative gradients to reflect the importance of pseudo-labeled tokens in the sequence. Since sequence labeling tasks have dependencies between the slot types and tokens, it is difficult to obtain a good estimate of the weights based on a single mini-batch of examples. Therefore, we sample S mini-batches of labeled data {B_1^l, ..., B_S^l} with the adaptive acquisition strategy and calculate the mean of the gradients to obtain a robust gradient estimate. Note that S is a constant that is the same for each token and is therefore absorbed into the proportionality sign in Eq. 8. Since a negative weight indicates a pseudo-label of poor quality that would potentially degrade the model performance, we set such weights to 0 to filter them out. The impact of S is investigated in the experiments (refer to Appendix A.1). The overall meta-weight of a pseudo-labeled token (x_{m,n}^u, ŷ_{m,n}) is obtained as:

w_{m,n} ∝ max( Σ_{s=1}^{S} u_{m,n,s}, 0 )   (8)

To further ensure the stability of the loss function in each mini-batch, we normalize the weights w_{m,n}. Finally, we update the student model parameters while accounting for token-level re-weighting as:

θ̂_stu^{(t)} = θ̂_stu^{(t−1)} − α∇( (1/M)(1/N) Σ_{m=1}^{M} Σ_{n=1}^{N} [w_{m,n} · l(ŷ_{m,n}^{(t)}, f(x_{m,n}^u; θ̂_stu^{(t−1)}))] ).   (9)

We demonstrate the impact of our re-weighting mechanism with an ablation study in experiments.
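The following is a compact sketch of the token-level re-weighting in Eqs. (6)-(8), in the spirit of the learning-to-reweight procedure of Ren et al. (2018). It assumes PyTorch ≥ 2.0 for torch.func.functional_call; the function signature, batching, and the omission of padding masks are our simplifications rather than the paper's exact implementation:

import torch
import torch.nn.functional as F
from torch.func import functional_call

def token_meta_weights(student, x_u, y_pseudo, val_batches, lr):
    # x_u: unlabeled inputs, y_pseudo: [B, T] hard teacher labels,
    # val_batches: S labeled mini-batches drawn via Eq. 4.
    params = dict(student.named_parameters())
    eps = torch.zeros(y_pseudo.shape, requires_grad=True)   # one epsilon per token

    # Eq. 6: a single perturbed SGD step with per-token weights eps.
    logits = student(x_u)
    token_loss = F.cross_entropy(
        logits.flatten(0, 1), y_pseudo.flatten(), reduction="none"
    ).view_as(y_pseudo)
    grads = torch.autograd.grad(
        (eps * token_loss).mean(), list(params.values()), create_graph=True
    )
    fast = {k: v - lr * g for (k, v), g in zip(params.items(), grads)}

    # Eq. 7: negative gradient of the validation loss w.r.t. eps,
    # averaged over the S sampled labeled mini-batches.
    u = torch.zeros_like(eps)
    for x_l, y_l in val_batches:
        val_logits = functional_call(student, fast, (x_l,))
        val_loss = F.cross_entropy(val_logits.flatten(0, 1), y_l.flatten())
        u = u - torch.autograd.grad(val_loss, eps, retain_graph=True)[0]

    # Eq. 8: clip negative weights at zero and normalize for stability.
    w = torch.clamp(u / len(val_batches), min=0.0)
    return (w / w.sum().clamp(min=1e-8)).detach()

# The student is then updated with the w-weighted pseudo-label loss (Eq. 9).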
" }, { "heading": "3.3 TEACHER MODEL ITERATIVE UPDATES", "text": "At the end of every self-training iteration, we assign the student model as the new teacher model (i.e., θ_tea = θ_stu^{(T)}). Since the student model uses the labeled data only as a held-out validation set for meta-learning, we further utilize the labeled data (X^l, Y) to fine-tune the new teacher model f(·, θ_tea^{(t)}) with standard supervised loss minimization. We explore the effectiveness of this step with an ablation study in experiments. The overall training procedure is summarized in Algorithm 1.

Algorithm 1: MetaST Algorithm
Input: Labeled sequences (X^l, Y); unlabeled sequences X^u; pre-trained BERT model with a randomly initialized token classification layer f(·; θ^{(0)}); number of mini-batches S; number of self-training iterations T.
Initialize the teacher model θ_tea = θ^{(0)}
while not converged do
  Fine-tune the teacher model on the small labeled data (X^l, Y);
  Initialize the student model θ_stu^{(0)} = θ^{(0)};
  Generate hard pseudo-labels Ŷ^{(t)} for the unlabeled samples X^u with model f(·, θ_tea);
  for t ← 1 to T do
    Compute the labeled data acquisition function according to Eq. 4;
    Sample S mini-batches of labeled examples {B_1^l, ..., B_S^l} from (X^l, Y) based on the acquisition function;
    Randomly sample a batch of pseudo-labeled examples B^u from (X^u, Ŷ^{(t)});
    Compute token-level weights in B^u based on the loss on {B_1^l, ..., B_S^l} according to Eq. 8;
    Train the model f(·, θ_stu^{(t)}) on the weighted pseudo-labeled sequences B^u and update the parameters θ_stu^{(t)};
  end
  Update the teacher: θ_tea = θ_stu^{(T)}
end" }, { "heading": "4 EXPERIMENTS", "text": "Encoder. Pre-trained language models like BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019) and RoBERTa (Liu et al., 2019) have shown state-of-the-art performance for various natural language processing tasks. In this work, we adopt one of them as the base encoder by initializing the teacher with the pre-trained BERT-base model and a randomly initialized token classification layer.

Datasets. We perform large-scale experiments with six different datasets, including user utterances for task-oriented dialog systems and multilingual Named Entity Recognition tasks, as summarized in Table 1. (a) Email consists of natural language user utterances for email-oriented user actions like sending, receiving or searching emails, with attributes like date, time, topics, people, etc. (b) SNIPS is a public benchmark dataset (Coucke et al., 2018) of user queries from multiple domains including music, media, and weather. (c) The MIT Movie and Restaurant corpora (Liu et al., 2013) consist of similar user utterances for the movie and restaurant domains. (d) CoNLL03 (Sang & Meulder, 2003) and Wikiann (Pan et al., 2017) are public benchmark datasets for multilingual Named Entity Recognition. CoNLL03 is a collection of news wire articles from the Reuters Corpus in 4 languages with manual annotations, whereas Wikiann comprises extractions from Wikipedia articles in 41 languages with automatic annotation leveraging meta-data for different entity types like ORG, PER and LOC. For every dataset, we sample K ∈ {5, 10, 20, 100} labeled sequences for each slot type from the train data and add the remaining to the unlabeled set while ignoring their labels, following standard setups for semi-supervised learning. We repeat the sampling of K labeled instances three times for multiple runs and report the average performance with standard deviation across the runs.

Baselines. The first baseline we consider is the fully supervised BERT model trained on all available training data, which provides the ceiling performance for every task. Each of the other models is trained on K training labels per slot type. We adopt several state-of-the-art semi-supervised methods as baselines: (1) CVT (Clark et al., 2018) is a semi-supervised sequence labeling method based on cross-view training; (2) SeqVAT (Chen et al., 2020) incorporates adversarial training with a conditional random field layer for semi-supervised sequence labeling; (3) Mean Teacher (MT) (Tarvainen & Valpola, 2017) averages model weights to obtain an aggregated teacher; (4) VAT (Miyato et al., 2018) adopts virtual adversarial training to make the model robust to noise; (5) classic ST (Scudder, 1965) is a simple self-training method with hard pseudo-labels; (6) BOND (Liang et al., 2020) is the most recent work on self-training for sequence labeling with confidence-based sample selection and forms a strong baseline for our work. We implement our framework in PyTorch and use Tesla V100 GPUs for experiments. Hyper-parameter configurations and model settings are presented in the Appendix.

Neural sequence labeling performance with few training labels. Table 2 shows the performance comparison among different models with K=10 labeled examples per slot type. The fully supervised BERT trained on thousands of labeled examples provides the ceiling performance for the few-shot setting. We observe our method MetaST to significantly outperform all methods across all datasets, including the models that also use the same BERT encoder as ours, like MT, VAT, Classic ST and BOND, with corresponding average performance improvements of 14.22%, 14.90%, 8.46% and 8.82%. Non-BERT models like CVT and SeqVAT are consistently worse than the other baselines.

We also observe variable performance of the models across different tasks.
Specifically, the performance gap between the best few-shot model and the fully supervised model varies significantly. MetaST achieves performance close to the fully supervised model on some datasets (e.g., SNIPS and Email) but has bigger room for improvement on others (e.g., CoNLL03 (EN) and Wikiann (EN)). This can be attributed to the following factors. (i) Labeled training examples and slots. The total number of labeled training instances for our K-shot setting is given by K × #Slots. Therefore, for tasks with a higher number of slots and consequently more training labels, most of the models perform better, including MetaST. Task-oriented dialog systems with more slots and inherent dependency between the slot types benefit more than NER tasks. (ii) Task difficulty: User utterances from task-oriented dialog systems for some of the domains like weather, music and emails contain predictive query patterns and limited diversity. In contrast, Named Entity Recognition datasets are comparatively diverse and require more training labels to generalize well. Similar observations are also depicted in Table 3 for multilingual NER tasks with more slots and consequently more training labels from multiple languages, as well as richer interactions across the slots from different languages.

Controlling for the total amount of labeled data. In order to control for the variable amount of training labels across different datasets, we perform another experiment where we vary the number of labels for different slot types while keeping the total number of labeled instances for each dataset similar (ca. 200). Results are shown in Table 4. To better illustrate the effect of the number of training labels, we choose tasks with lower performance in Table 2 for this experiment. Comparing the results in Tables 2 and 4, we observe the performance of MetaST to improve with more training labels for all the tasks.

Effect of varying the number of labels K per slot. Table 5 shows the improvement in the performance of MetaST when increasing the number of labels for each slot type in the SNIPS dataset.
Removing the iterative teacher fine-tuning step (Section 3.1) also hurts the overall performance.\nMethod Datasets\nSNIPS CoNLL03\nBERT w/ Continued Pre-training + Few-shot Supervision 83.96 69.84\nClassic ST 83.26 70.99 Classic ST w/ Soft Pseudo-Labels 81.17 71.87\nMetaST (ours) w/ Hard Pseudo-Labels 88.23 76.65 MetaST w/ Soft Pseudo-Labels 86.16 75.84\nMetaST w/o Iterative Teacher Fine-tune 85.64 72.74 MetaST w/o Labeled Data Acq. 86.63 75.02\nPseudo-labeled Data Re-weighting MetaST w/o Re-weighting 85.48 73.02 MetaST (Easy) 85.56 74.53 MetaST (Difficult) 86.34 68.06\nTable 6: Ablation analysis of our framework MetaST with 10 labeled examples per slot on SNIPS and CoNLL03 (EN).\nFigure 2: Visualization of MetaST reweighting on CoNLL03 (EN).\nContinued pre-training v.s. self-training. To contrast continued pre-training with self-training, we further pre-train BERT on in-domain unlabeled data and then fine-tune it with few labeled examples denoted as “BERT (Continued Pre-training + Few-shot Supervision)”. The pre-training step improves the BERT performance over the baseline on SNIPS but degrades the performance on CoNLL03. This indicates that continued pre-training can improve the performance of few-shot supervised BERT on specialized tasks (e.g., SNIPS) with different data distribution than the original pre-training data (e.g., Wikipedia), but may not help for general domain ones like CoNLL03 with overlapping data from Wikipedia. In contrast to the above baseline, MetaST brings significant improvements on both datasets. This demonstrates the generality and flexibility of self-training over pre-training as also observed in contemporary work (Zoph et al., 2020) on image classification.\nAdaptive labeled data acquisition. We perform an ablation study by removing adaptive labeled data acquisition from MetaST (denoted as “MetaST w/o Labeled Data Acq.”). Removing this component leads to around 2% performance drop on an average demonstrating the impact of labeled data acquisition. Moreover, the performance drop on SNIPS (39 slots) is larger than that on CoNLL03 (4 slots). This demonstrates that adaptive acquisition is more helpful for tasks with more slot types – where diversity and data distribution necessitate a better exploration strategy in contrast to random sampling employed in prior meta-learning works. Re-weighting strategies. To explore the role of token-level re-weighting for pseudo-labeled sequences (discussed in Section 3.2), we replace our meta-learning component with different sample selection strategies based on the model confidence for different tokens. One sampling strategy chooses samples uniformly without any re-weighting (referred to as “MetaST w/o Re-weighting”). The sampling strategy with weights proportional to the model confidence favors easy samples (referred to as “MetaST-Easy”), whereas the converse favors difficult ones (referred to as “MetaSTDifficult”).We observe the meta-learning based re-weighting strategy to perform the best. Interestingly, MetaST-Easy outperforms MetaST-Difficult significantly on CoNLL03 (EN) but achieves slightly lower performance on SNIPS. This demonstrates that difficult samples are more helpful when the quality of pseudo-labeled data is relatively high. On the converse, the sample selection strategy focusing on difficult samples introduces noisy examples with lower pseudo-label quality. 
Therefore, sampling strategies may need to vary for different datasets, thereby, demonstrating the necessity of adaptive data re-weighting as in our framework MetaST. Moreover, MetaST significantly outperforms classic self-training strategies with hard and soft pseudo-labels demonstrating the effectiveness of our design.\nAnalysis of pseudo-labeled data re-weighting. To visually explore the adaptive re-weighting mechanism, we illustrate token-level re-weighting of MetaST on CoNLL03 (EN) dataset with K=10 shot at step 100 in Fig. 2. We include the re-weighting visualisation on SNIPS in Appendix A.1. We observe that the selection mechanism filters out most of the noisy pseudo-labels (colored in blue) even those with high teacher confidence as shown in Fig. 2." }, { "heading": "5 RELATED WORK", "text": "Semi-supervised learning has been widely used for consistency training (Bachman et al., 2014; Rasmus et al., 2015; Laine & Aila, 2017; Tarvainen & Valpola, 2017; Miyato et al., 2018), latent variable models (Kingma et al., 2014) for sentence compression (Miao & Blunsom, 2016) and code generation (Yin et al., 2018). More recently, methods like UDA (Xie et al., 2019) leverage consistency training for few-shot learning of instance-classification tasks leveraging auxiliary resources like paraphrasing and back-translation (BT) (Sennrich et al., 2016).\nSample selection. Curriculum learning (Bengio et al., 2009) techniques are based on the idea of learning easier aspects of the task first followed by the more complex ones. Prior work leveraging self-paced learning (Kumar et al., 2010) and more recently self-paced co-training (Ma et al., 2017) leverage teacher confidence to select easy samples during training. Sample selection for image classification tasks have been explored in recent works with meta-learning (Ren et al., 2018; Li et al., 2019) and active learning (Panagiota Mastoropoulou, 2019; Chang et al., 2017b). However, all of these techniques rely on only the model outputs applied to instance-level classification tasks.\nSemi-supervised sequence labeling. Miller et al. (2004); Peters et al. (2017) leverage large amounts of unlabeled data to improve token representation for sequence labeling tasks. Another line of research introduces latent variable modeling (Chen et al., 2019; Zhou & Neubig, 2017) to learn interpretable and structured latent representations. Recently, adversarial training based model SeqVAT (Chen et al., 2020) and cross-view training method CVT (Clark et al., 2018) have shown promising results for sequence labeling tasks." }, { "heading": "6 CONCLUSIONS", "text": "In this work, we develop an adaptive self-training framework MetaST that leverages self-training and meta-learning for few-shot training of neural sequence taggers. We address the issue of error propagation from noisy pseudo-labels from the teacher in the self-training framework by adaptive sample selection and re-weighting with meta-learning. Extensive experiments on six benchmark datasets and different tasks including multilingual NER and slot tagging for task-oriented dialog systems demonstrate the effectiveness of the proposed method particularly for low-resource settings." }, { "heading": "A APPENDIX", "text": "A.1 EXPLORATIONS ON UNLABELED DATA AND MINI-BATCH S\nVariation in model performance with unlabeled data. Table 12 shows the improvement in model performance as we inject more unlabeled data with diminishing returns after a certain point.\nVariation in model performance with mini-batch S. 
We set the value of S in Eq. 8 to {1, 3, 5} to explore its impact on the re-weighting mechanism. From Figure 3, we observe that the model is not highly sensitive to the hyper-parameter S, but it can achieve a better estimate of the weights of the pseudo-labeled data with larger mini-batch values.

Figure 3: Varying the number S of labeled mini-batches used for re-weighting." }, { "heading": "A.2 ANALYSIS OF RE-WEIGHTING ON SNIPS AND CONLL03", "text": "Analysis of pseudo-labeled data re-weighting. To visually explore the adaptive re-weighting mechanism, we illustrate the token re-weighting of MetaST on the CoNLL03 and SNIPS datasets with K=10 shots at step 100 in Fig. 4. In addition to the observations in the experimental section, we observe that many difficult and correct pseudo-labeled samples (with low teacher confidence) are selected, as shown in Fig. 4a." }, { "heading": "A.3 K-SHOTS", "text": "Effect of varying the number of few-shots K. We show the performance changes with respect to the varying number of few-shots K ∈ {5, 10, 20, 100} on Wikiann (En), MIT Movie, MIT Restaurant, CoNLL2003 (En), Multilingual CoNLL and Multilingual Wikiann in Tables 9-13. Since the number of labeled examples for some slots in the Email dataset is around 10, we only show 5 and 10 shots for the Email dataset in Table 8.

Table 9: Wikiann (En) Dataset.

Figure 5: MIT Movie Dataset.

Method (8 slot types) | 5 shots | 10 | 20 | 100
Full-supervision BERT | 78.95 (trained on all labeled data)
Few-shot Supervision BERT | 41.39 | 54.06 | 60.12 | 72.24
Few-shot Supervision + unlabeled data:
CVT | 33.74 | 42.57 | 51.33 | 70.84
SeqVAT | 41.94 | 51.55 | 56.15 | 71.39
Mean Teacher | 40.37 | 51.75 | 57.34 | 72.40
VAT | 41.29 | 53.34 | 59.68 | 72.65
Classic ST | 44.35 | 56.80 | 60.28 | 73.13
BOND | 43.01 | 55.78 | 59.96 | 73.60
MetaST | 53.02 | 63.83 | 67.86 | 75.25

Table 10: MIT Restaurant Dataset.

Method (4 slot types) | 5 shots | 10 | 20 | 100
Full-supervision BERT | 92.40 (trained on all labeled data)
Few-shot Supervision BERT | 63.87 | 71.15 | 73.57 | 84.36
Few-shot Supervision + unlabeled data:
CVT | 51.15 | 54.31 | 66.11 | 81.99
SeqVAT | 58.02 | 67.21 | 74.15 | 82.20
Mean Teacher | 59.04 | 68.67 | 72.62 | 84.17
VAT | 57.03 | 65.03 | 72.69 | 84.43
Classic ST | 64.04 | 70.99 | 74.65 | 84.93
BOND | 62.52 | 69.56 | 74.19 | 83.87
MetaST | 71.49 | 76.65 | 78.54 | 85.77

Table 11: CoNLL2003 (EN).

Method (4 slot types) | 5 shots | 10 | 20 | 100
Full-supervision BERT | 87.67 (trained on all labeled data)
Few-shot Supervision BERT | 64.80 | 70.77 | 73.89 | 80.61
Few-shot Supervision + unlabeled data:
Mean Teacher | 64.55 | 68.34 | 73.87 | 79.21
VAT | 64.97 | 67.63 | 74.26 | 80.70
Classic ST | 67.95 | 72.69 | 73.79 | 81.82
BOND | 69.42 | 72.79 | 76.02 | 80.62
MetaST | 73.34 | 76.65 | 77.01 | 82.11

Table 12: Multilingual CoNLL03.

Method (3 slot types × 41 languages) | 5 shots | 10 | 20 | 100
Full-supervision BERT | 87.17 (trained on all labeled data)
Few-shot Supervision BERT | 77.68 | 79.67 | 82.33 | 85.70
Few-shot Supervision + unlabeled data:
Mean Teacher | 77.09 | 80.23 | 82.19 | 85.34
VAT | 74.71 | 78.82 | 82.60 | 85.82
Classic ST | 76.73 | 80.24 | 82.39 | 86.08
BOND | 78.81 | 79.57 | 82.19 | 86.14
MetaST | 79.10 | 81.61 | 83.14 | 85.57

Table 13: Multilingual Wikiann." }, { "heading": "A.4 IMPLEMENTATIONS AND HYPER-PARAMETERS", "text": "We do not perform any per-dataset hyper-parameter tuning. The batch size and maximum sequence length vary due to data characteristics; these and the remaining hyper-parameter settings are shown in Table 14.

We retain the parameters of the original BERT implementation from https://github.com/huggingface/transformers.

We implement SeqVAT based on https://github.com/jiesutd/NCRFpp and implement CVT following https://github.com/tensorflow/models/tree/master/research/cvt_text." } ]
2020
null
SP:f76f1289d7b47dd1bd381108f5b86a410613af9e
[ "The paper applies mixture-of-experts (MoE) [1] to the Transformers to significantly increase the number of parameters in the model while keeping the total computational cost feasible. The main differences with [1] are: (1) Only choose 2 experts at each timestep; (2) Set a capacity upper bound for each expert to make sure that no expert becomes a lagger. The paper also presents a convenient library to make implementing MoE models easier. The proposed method is tested on a large multilingual machine translation dataset and shows performance gains over models trained on a single language pair and a model trained without MoE." ]
Neural network scaling has been critical for improving the model quality in many real-world machine learning applications with vast amounts of training data and compute. Although this trend of scaling is affirmed to be a sure-fire approach for better model quality, there are challenges on the path such as the computation cost, ease of programming, and efficient implementation on parallel devices. In this paper we present conditional computation as a remedy to the above-mentioned impediments, and demonstrate its efficacy and utility. We make extensive use of GShard, a module composed of a set of lightweight annotation APIs and an extension to the XLA compiler to enable large scale models with up to trillions of parameters. GShard and conditional computation enable us to scale up a multilingual neural machine translation Transformer model with Sparsely-Gated Mixture-of-Experts. We demonstrate that such a giant model with 600 billion parameters can efficiently be trained on 2048 TPU v3 cores in 4 days to achieve far superior quality for translation from 100 languages to English compared to the prior art.
[ { "affiliations": [], "name": "AUTOMATIC SHARDING" }, { "affiliations": [], "name": "Dmitry Lepikhin" }, { "affiliations": [], "name": "HyoukJoong Lee" }, { "affiliations": [], "name": "Yuanzhong Xu" }, { "affiliations": [], "name": "Dehao Chen" }, { "affiliations": [], "name": "Orhan Firat" }, { "affiliations": [], "name": "Yanping Huang" }, { "affiliations": [], "name": "Maxim Krikun" }, { "affiliations": [], "name": "Noam Shazeer" }, { "affiliations": [], "name": "Zhifeng Chen" } ]
[ { "authors": [ "Martín Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: a system for large-scale machine learning", "venue": "In OSDI,", "year": 2016 }, { "authors": [ "Madhu S. Advani", "Andrew M. Saxe" ], "title": "High-dimensional dynamics of generalization error in neural networks, 2017", "venue": null, "year": 2017 }, { "authors": [ "Roee Aharoni", "Melvin Johnson", "Orhan Firat" ], "title": "Massively multilingual neural machine translation", "venue": "CoRR, abs/1903.00089,", "year": 2019 }, { "authors": [ "Naveen Arivazhagan", "Ankur Bapna", "Orhan Firat", "Dmitry Lepikhin", "Melvin Johnson", "Maxim Krikun", "Mia Xu Chen", "Yuan Cao", "George Foster", "Colin Cherry", "Wolfgang Macherey", "Zhifeng Chen", "Yonghui Wu" ], "title": "Massively multilingual neural machine translation in the wild: Findings and challenges, 2019", "venue": null, "year": 2019 }, { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Elad Hazan" ], "title": "On the optimization of deep networks: Implicit acceleration by overparameterization", "venue": "arXiv preprint arXiv:1802.06509,", "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Ankur Bapna", "Orhan Firat" ], "title": "Exploring massively multilingual, massive neural machine translation", "venue": null, "year": 2019 }, { "authors": [ "Ankur Bapna", "Naveen Arivazhagan", "Orhan Firat" ], "title": "Controlling computation versus quality for neural sequence models, 2020", "venue": null, "year": 2020 }, { "authors": [ "Frédéric Bastien", "Pascal Lamblin", "Razvan Pascanu", "James Bergstra", "Ian Goodfellow", "Arnaud Bergeron", "Nicolas Bouchard", "David Warde-Farley", "Yoshua Bengio" ], "title": "Theano: new features and speed improvements", "venue": "arXiv preprint arXiv:1211.5590,", "year": 2012 }, { "authors": [ "Emmanuel Bengio", "Pierre-Luc Bacon", "Joelle Pineau", "Doina Precup" ], "title": "Conditional computation in neural networks for faster", "venue": null, "year": 2015 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": null, "year": 2013 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": "arXiv preprint arXiv:2005.14165,", "year": 2020 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": "arXiv preprint arXiv:2005.14165,", "year": 2020 }, { "authors": [ "Lynn Elliot Cannon" ], "title": "A Cellular Computer to Implement the Kalman Filter Algorithm", "venue": "PhD thesis,", "year": 1969 }, { "authors": [ "William Chan", "Navdeep Jaitly", "Quoc Le", "Oriol Vinyals" ], "title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition", "venue": "In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2016 }, { 
"authors": [ "Heng-Tze Cheng", "Mustafa Ispir", "Rohan Anil", "Zakaria Haque", "Lichan Hong", "Vihan Jain", "Xiaobing Liu", "Hemal Shah", "Levent Koc", "Jeremiah Harmsen" ], "title": "Wide and deep learning for recommender systems", "venue": "Proceedings of the 1st Workshop on Deep Learning for Recommender Systems DLRS 2016,", "year": 2016 }, { "authors": [ "Chung-Cheng Chiu", "Tara N Sainath", "Yonghui Wu", "Rohit Prabhavalkar", "Patrick Nguyen", "Zhifeng Chen", "Anjuli Kannan", "Ron J Weiss", "Kanishka Rao", "Ekaterina Gonina" ], "title": "State-of-the-art speech recognition with sequence-to-sequence models", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Dan Claudiu Cireşan", "Ueli Meier", "Luca Maria Gambardella", "Jürgen Schmidhuber" ], "title": "Deep, big, simple neural nets for handwritten digit recognition", "venue": "Neural computation,", "year": 2010 }, { "authors": [ "Alexis Conneau", "Kartikay Khandelwal", "Naman Goyal", "Vishrav Chaudhary", "Guillaume Wenzek", "Francisco Guzmán", "Edouard Grave", "Myle Ott", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Unsupervised cross-lingual representation learning at scale, 2019", "venue": null, "year": 2019 }, { "authors": [ "Andrew Davis", "Itamar Arel" ], "title": "Low-rank approximations for conditional feedforward computation in deep neural networks", "venue": null, "year": 2013 }, { "authors": [ "Jeffrey Dean", "Greg Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Mark Mao", "Marc’aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang" ], "title": "Large scale distributed deep networks", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Albert Einstein" ], "title": "Die grundlage der allgemeinen relativitätstheorie", "venue": "In Das Relativitätsprinzip,", "year": 1923 }, { "authors": [ "Orhan Firat", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Multi-way, multilingual neural machine translation with a shared attention mechanism", "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2016 }, { "authors": [ "Mario Geiger", "Arthur Jacot", "Stefano Spigler", "Franck Gabriel", "Levent Sagun", "Stéphane d’ Ascoli", "Giulio Biroli", "Clément Hongler", "Matthieu Wyart" ], "title": "Scaling description of generalization with number of parameters in deep learning", "venue": "Journal of Statistical Mechanics: Theory and Experiment,", "year": 2020 }, { "authors": [ "Aaron Harlap", "Deepak Narayanan", "Amar Phanishayee", "Vivek Seshadri", "Nikhil Devanur", "Greg Ganger", "Phil Gibbons" ], "title": "Pipedream: Fast and efficient pipeline parallel dnn training", "venue": "arXiv preprint arXiv:1806.03377,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Joel Hestness", "Sharan Narang", "Newsha Ardalani", "Gregory Diamos", "Heewoo Jun", "Hassan Kianinejad", "Md. Mostofa Ali Patwary", "Yang Yang", "Yanqi Zhou" ], "title": "Deep learning scaling is predictable", "venue": null, "year": 2017 }, { "authors": [ "Joel Hestness", "Newsha Ardalani", "Gregory Diamos" ], "title": "Beyond human-level accuracy", "venue": "Proceedings of the 24th Symposium on Principles and Practice of Parallel Programming,", "year": 2019 }, { "authors": [ "Geoffrey Hinton", "Li Deng", "Dong Yu", "George E Dahl", "Abdel-rahman Mohamed", "Navdeep Jaitly", "Andrew Senior", "Vincent Vanhoucke", "Patrick Nguyen", "Tara N Sainath" ], "title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups", "venue": "IEEE Signal processing magazine,", "year": 2012 }, { "authors": [ "Yanping Huang", "Youlong Cheng", "Ankur Bapna", "Orhan Firat", "Dehao Chen", "Mia Chen", "HyoukJoong Lee", "Jiquan Ngiam", "Quoc V Le", "Yonghui Wu", "Zhifeng Chen" ], "title": "Gpipe: Efficient training of giant neural networks using pipeline parallelism", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Paolo Ienne", "Thierry Cornu", "Gary Kuhn" ], "title": "Special-purpose digital hardware for neural networks: An architectural survey. Journal of VLSI signal processing systems for signal, image and video", "venue": null, "year": 1996 }, { "authors": [ "Zhihao Jia", "Matei Zaharia", "Alex Aiken" ], "title": "Beyond Data and Model Parallelism for Deep Neural Networks", "venue": "In Proceedings of the Conference on Systems and Machine Learning (SysML),", "year": 2019 }, { "authors": [ "Melvin Johnson", "Mike Schuster", "Quoc V. 
Le", "Maxim Krikun", "Yonghui Wu", "Zhifeng Chen", "Nikhil Thorat", "Fernanda Viégas", "Martin Wattenberg", "Greg Corrado" ], "title": "Google’s multilingual neural machine translation system: Enabling zero-shot translation", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Norman P Jouppi", "Cliff Young", "Nishant Patil", "David Patterson", "Gaurav Agrawal", "Raminder Bajwa", "Sarah Bates", "Suresh Bhatia", "Nan Boden", "Al Borchers" ], "title": "In-datacenter performance analysis of a tensor processing unit", "venue": "In Proceedings of the 44th Annual International Symposium on Computer Architecture,", "year": 2017 }, { "authors": [ "Jared Kaplan", "Sam McCandlish", "Tom Henighan", "Tom B Brown", "Benjamin Chess", "Rewon Child", "Scott Gray", "Alec Radford", "Jeffrey Wu", "Dario Amodei" ], "title": "Scaling laws for neural language models", "venue": null, "year": 2001 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "ImageNet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Taku Kudo", "John Richardson" ], "title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "venue": null, "year": 2018 }, { "authors": [ "Andrew K. Lampinen", "Surya Ganguli" ], "title": "An analytic theory of generalization dynamics and transfer learning in deep linear networks, 2018", "venue": null, "year": 2018 }, { "authors": [ "Loren Lugosch", "Derek Nowrouzezahrai", "Brett H. Meyer" ], "title": "Surprisal-triggered conditional computation with neural networks, 2020", "venue": null, "year": 2020 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nathan Srebro" ], "title": "Exploring generalization in deep learning, 2017", "venue": null, "year": 2017 }, { "authors": [ "John Nickolls", "Ian Buck", "Michael Garland", "Kevin Skadron" ], "title": "Scalable parallel programming with cuda", "venue": null, "year": 2008 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Shoumik Palkar", "Matei Zaharia" ], "title": "Optimizing data-intensive computations in existing libraries with split annotations", "venue": "In Proceedings of the 27th ACM Symposium on Operating Systems Principles,", "year": 2019 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th annual meeting on association for computational linguistics,", "year": 2002 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Rajat Raina", "Anand 
Madhavan", "Andrew Y Ng" ], "title": "Large-scale deep unsupervised learning using graphics processors", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Samyam Rajbhandari", "Jeff Rasley", "Olatunji Ruwase", "Yuxiong He" ], "title": "Zero: Memory optimization towards training a trillion parameter models", "venue": null, "year": 1910 }, { "authors": [ "Jared Roesch", "Steven Lyubomirsky", "Logan Weber", "Josh Pollock", "Marisa Kirisame", "Tianqi Chen", "Zachary Tatlock" ], "title": "Relay: a new ir for machine learning frameworks", "venue": "Proceedings of the 2nd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages MAPL 2018,", "year": 2018 }, { "authors": [ "Nadav Rotem", "Jordan Fix", "Saleem Abdulrasool", "Garret Catron", "Summer Deng", "Roman Dzhabarov", "Nick Gibson", "James Hegeman", "Meghan Lele", "Roman Levenstein", "Jack Montgomery", "Bert Maher", "Satish Nadathur", "Jakob Olesen", "Jongsoo Park", "Artem Rakhov", "Misha Smelyanskiy", "Man Wang" ], "title": "Glow: Graph lowering compiler techniques for neural networks, 2018", "venue": null, "year": 2018 }, { "authors": [ "Noam Shazeer" ], "title": "Fast transformer decoding: One write-head is all you need", "venue": "arXiv preprint arXiv:1911.02150,", "year": 2019 }, { "authors": [ "Noam Shazeer", "Mitchell Stern" ], "title": "Adafactor: Adaptive learning rates with sublinear memory", "venue": "cost. ArXiv,", "year": 2018 }, { "authors": [ "Noam Shazeer", "Azalia Mirhoseini", "Krzysztof Maziarz", "Andy Davis", "Quoc Le", "Geoffrey Hinton", "Jeff Dean" ], "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer", "venue": "arXiv preprint arXiv:1701.06538,", "year": 2017 }, { "authors": [ "Noam Shazeer", "Youlong Cheng", "Niki Parmar", "Dustin Tran", "Ashish Vaswani", "Penporn Koanantakool", "Peter Hawkins", "HyoukJoong Lee", "Mingsheng Hong", "Cliff Young" ], "title": "Mesh-tensorflow: Deep learning for supercomputers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jonathan Shen", "Ruoming Pang", "Ron J Weiss", "Mike Schuster", "Navdeep Jaitly", "Zongheng Yang", "Zhifeng Chen", "Yu Zhang", "Yuxuan Wang", "Rj Skerrv-Ryan" ], "title": "Natural tts synthesis by conditioning wavenet on mel spectrogram predictions", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Tianxiao Shen", "Myle Ott", "Michael Auli", "Marc’Aurelio Ranzato" ], "title": "Mixture models for diverse machine translation: Tricks of the trade", "venue": null, "year": 1902 }, { "authors": [ "Mohammad Shoeybi", "Mostofa Patwary", "Raul Puri", "Patrick LeGresley", "Jared Casper", "Bryan Catanzaro" ], "title": "Megatron-lm: Training multi-billion parameter language models using gpu model parallelism", "venue": null, "year": 1909 }, { "authors": [ "Yifan Sun", "Nicolas Bohm Agostini", "Shi Dong", "David Kaeli" ], "title": "Summarizing cpu and gpu design trends with product data", "venue": null, "year": 1911 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to sequence learning with neural networks. 
In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need, 2017", "venue": null, "year": 2017 }, { "authors": [ "2019. Chris Ying", "Sameer Kumar", "Dehao Chen", "Tao Wang", "Youlong Cheng" ], "title": "Image classification", "venue": null, "year": 2019 }, { "authors": [ "We implemented the partitioner in the XLA compiler xla" ], "title": "Multiple frontend frameworks including TensorFlow, JAX, PyTorch and Julia already have lowering logic to transform their graph representation to XLA HLO graph", "venue": "XLA also has a much smaller set of operators compared to popular frontend frameworks like TensorFlow, which reduces the burden of implementing a partitioner without harming generality, because the existing lowering from frontends performs the heavy-lifting", "year": 2019 }, { "authors": [ "√ D" ], "title": "When the number experts grows 16x from 128 to 2048, the execution time increases by about 3.75x, and their proportion of execution time in the MoE and Transformer increases", "venue": null, "year": 2048 }, { "authors": [ "√ D" ], "title": "Comparing 2048 partitions and 16 partitions, while D grows by 128 times, the execution time of AllToAll only increases by 9 times. This enables us to use resharding to efficiently implement cross-partition dispatching", "venue": null, "year": 2048 }, { "authors": [ "Y. along" ], "title": "To further reduce weight storage on each device, we additionally shard the model dimension of the weights along the X dimension. Weights will be partially unsharded on-demand with a subgrouped AllGather operator across devices along X. The parallelism pattern along X is conceptually equivalent to weight-update sharding Xu et al", "venue": null, "year": 2020 }, { "authors": [ "layer. A" ], "title": "DECODING WITH FLAT BEAM SEARCH During decoding, we use beam search with length normalization similar to Wu et al", "venue": null, "year": 2016 } ]
[ { "heading": null, "text": "Neural network scaling has been critical for improving the model quality in many real-world machine learning applications with vast amounts of training data and compute. Although this trend of scaling is affirmed to be a sure-fire approach for better model quality, there are challenges on the path such as the computation cost, ease of programming, and efficient implementation on parallel devices. In this paper we demonstrate conditional computation as a remedy to the above mentioned impediments, and demonstrate its efficacy and utility. We make extensive use of GShard, a module composed of a set of lightweight annotation APIs and an extension to the XLA compiler to enable large scale models with up to trillions of parameters. GShard and conditional computation enable us to scale up multilingual neural machine translation Transformer model with Sparsely-Gated Mixture-ofExperts. We demonstrate that such a giant model with 600 billion parameters can efficiently be trained on 2048 TPU v3 cores in 4 days to achieve far superior quality for translation from 100 languages to English compared to the prior art." }, { "heading": "1 INTRODUCTION", "text": "Scaling neural networks brings dramatic quality gains over a wide array of machine learning problems such as computer vision, language understanding and neural machine translation (Devlin et al., 2018; Mahajan et al., 2018; Arivazhagan et al., 2019; Huang et al., 2019; Brown et al., 2020b). This general tendency motivated recent studies to scrutinize the factors playing a critical role in the success of scaling, including the amounts of training data, the model size, and the computation being utilized as found by past studies (Advani & Saxe, 2017; Hestness et al., 2019; Geiger et al., 2020). While the final model quality was found to have a power-law relationship with these factors (Hestness et al., 2017; Kaplan et al., 2020), the significant quality gains brought by larger models also came with various practical challenges. Training efficiency, which we define as the amount of compute and time used to achieve a superior model quality against the best system existed, is oftentimes left out.\nIn this study, we strive for improving the model quality while being training efficiently. We built a 600 billion parameters sequence-to-sequence Transformer model with Sparsely-Gated Mixture-of-Experts layers, which enjoys sub-linear computation cost and O(1) compilation time. We trained this model with 2048 TPU v3 devices for 4 days on a multilingual machine translation task and achieved far superior translation quality compared to prior art when translating 100 languages to English with a single non-ensemble model. We conducted experiments with various model sizes and found that the translation quality increases as the model gets bigger, yet the total wall-time to train only increases sub-linearly with respect to the model size, as illustrated in Figure 1. To train such an extremely large model, we relied on the following key design choices.\nConditional computation First, model architecture should be designed to keep the computation and communication requirements sublinear in the model capacity. Conditional computation enables us to satisfy training and inference efficiency by having a sub-network activated on the per-input basis. Shazeer et al. (2017) has shown that scaling RNN model capacity by adding Sparsely Gated Mixture-of-Experts (MoE) layers allowed to achieve improved results with sub-linear cost. 
We therefore present our approach to extend the Transformer architecture with MoE layers in this study.\nGShard Annotation Second, the model description should be separated from the partitioning implementation and optimization. This separation of concerns lets model developers focus on the network architecture and flexibly change the partitioning strategy, while the underlying system applies semantic-preserving transformations and implements efficient parallel execution. To this end we propose a module, GShard, which only requires the user to annotate a few critical tensors in the model with partitioning policies. It consists of a set of simple APIs for annotations, and a compiler extension in XLA for automatic parallelization. Model developers write models as if there were a single device with huge memory and computation capacity, and the compiler automatically partitions the computation for the target based on the user annotations and its own heuristics." }, { "heading": "2 MODEL", "text": "The Transformer (Vaswani et al., 2017) architecture has been widely used for natural language processing. We scale Transformer with conditional computation by replacing every other feed-forward layer with a sparsely activated Position-wise Mixture-of-Experts (MoE) layer (Shazeer et al., 2017), with a variant of top-2 gating in both the encoder and the decoder (Figure 2). Each subword token in the training example activates a sub-network of the MoE Transformer during both training and inference. The size of the sub-network is roughly independent of the number of experts per MoE layer, allowing sublinear scaling of the computation cost." }, { "heading": "2.1 POSITION-WISE MIXTURE-OF-EXPERTS LAYER", "text": "The Mixture-of-Experts (MoE) layers used in our model differ from those of Shazeer et al. (2017) in the sparse gating function and the auxiliary loss being used. A MoE layer for Transformer consists of E feed-forward networks FFN_1 ... FFN_E, each of which outputs FFN_e(x_s) = wo_e · ReLU(wi_e · x_s), where x_s is the input token to the MoE layer, and wi_e and wo_e are the input and output projection matrices of the feed-forward layer (an expert), with shapes [M, H] and [H, M], respectively. The output of a MoE layer is the combination of the expert outputs, Σ_{e=1..E} G_{s,e} · FFN_e(x_s), where the vector G_{s,E} is computed by a gating function GATE(·). We choose to let each token be dispatched to at most two experts. The corresponding gating entries G_{s,e} become non-zero, representing how much an expert contributes to the final network output.\nThe gating function GATE(·), modeled by a softmax activation function, is critical to the MoE layer: it indicates the weight of each expert in processing incoming tokens. We designed a novel, efficient gating function with the following mechanisms (details illustrated in Algorithm 1).\nLoad balancing Naively picking the top-k experts from the softmax probability distribution leads to a load imbalance problem for training, as shown in Shazeer et al. (2017). Most tokens would be dispatched to a small number of experts, leaving the other experts insufficiently trained. To ensure the load is balanced, we enforce that the number of tokens processed by one expert stays below some uniform threshold called the expert capacity. Assuming N total tokens in a batch and at most two experts per token, the expert capacity C is set to O(N/E). GATE(·) keeps a running counter c_e for how many tokens are dispatched to an expert. 
When both experts selected by a token have already exceeded their capacity, the token is considered an overflowed token, and G_{s,E} degenerates into a zero vector. Such tokens are passed on to the next layer via residual connections. The introduction of a fixed expert capacity, instead of the load balancing functions in Shazeer et al. (2017), allows us to run the gating function in parallel, as described below.\nLocal dispatching for parallel gating Load balancing makes the token assignments of one expert dependent on the assignments of the other experts. The original gating function proposed by Shazeer et al. (2017) had to be implemented sequentially, especially under the static shape constraints on TPUs. In our study, where thousands of experts are distributed over thousands of devices, a sequential implementation of the gating function would keep most of the devices idle most of the time. Instead, we propose a new GATE(·) function that partitions all tokens in a training batch evenly into G local groups, i.e., each group contains S = N/G tokens for local dispatching. All local groups are processed independently in parallel. Each group is given a fractional capacity of each expert, C = 2N/(G · E), to ensure that at most this many tokens are dispatched to an expert. In this way, we can ensure that the expert capacity is still enforced and the overall load is balanced. In general, increasing the expert capacity C decreases the number of overflowed tokens and thus improves the model quality. Since G × C is a constant, however, a higher capacity leads to a smaller number of groups, which hurts the training throughput by limiting the amount of parallel gating execution. With fixed expert capacity and local dispatching, we are able to speed up the gating function by O(G) times.\nAuxiliary loss Following Shazeer et al. (2017), we define a new differentiable auxiliary loss term ℓ_aux to enforce the load balancing. It is added to the overall loss function of the model, L = ℓ_ori + k · ℓ_aux, with a constant multiplier k, where ℓ_aux is defined in line (15) of Algorithm 1, and the term c_e/S represents the fraction of input routed to each expert. We replace the mean square (c_e/S)^2 with the differentiable approximation m_e · (c_e/S), which can provide better numerical stability since it can be optimized with gradient descent.
Algorithm 1: Group-level top-2 gating with auxiliary loss
Data: x_S, a group of tokens of size S
Data: C, expert capacity allocated to this group
Result: G_{S,E}, group combine weights
Result: ℓ_aux, group auxiliary loss
(1) for e ← 1 to E do
(2)   c_e ← 0  . gating decisions per expert
(3)   g_{S,e} ← softmax(w_g · x_S)  . gates per token per expert, w_g are trainable weights
(4)   m_e ← (1/S) · Σ_{s=1..S} g_{s,e}  . mean gates per expert
(5) end
(6) for s ← 1 to S do
(7)   g1, e1, g2, e2 = top_2({g_{s,e} | e = 1..E})  . top-2 gates and expert indices
(8)   g1 ← g1/(g1 + g2)  . normalized g1
(9)   c ← c_{e1}  . position in e1 expert buffer
(10)  if c_{e1} < C then
(11)    G_{s,e1} ← g1  . e1 expert combine weight for x_s
(12)  end
(13)  c_{e1} ← c + 1  . incrementing e1 expert decisions count
(14) end
(15) ℓ_aux = (1/E) · Σ_{e=1..E} (c_e/S) · m_e
(16) for s ← 1 to S do
(17)  g1, e1, g2, e2 = top_2({g_{s,e} | e = 1..E})  . top-2 gates and expert indices
(18)  g2 ← g2/(g1 + g2)  . normalized g2
(19)  rnd ← uniform(0, 1)  . dispatch to second-best expert with probability ∝ 2 · g2
(20)  c ← c_{e2}  . position in e2 expert buffer
(21)  if c < C ∧ 2 · g2 > rnd then
(22)    G_{s,e2} ← g2  . e2 expert combine weight for x_s
(23)  end
(24)  c_{e2} ← c + 1
(25) end
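To make the gating logic above concrete, the following is a minimal single-group NumPy sketch of Algorithm 1. It is an illustrative re-implementation under simplifying assumptions, not the production TPU code (which is expressed with dense tensor operations as in Algorithm 2); the function name top2_gating is our own.

import numpy as np

def top2_gating(x, wg, C, rng=np.random.default_rng(0)):
    # x: [S, M] tokens of one local group; wg: [M, E] gating weights; C: expert capacity.
    S, E = x.shape[0], wg.shape[1]
    z = x @ wg
    gates = np.exp(z - z.max(-1, keepdims=True))
    gates /= gates.sum(-1, keepdims=True)        # softmax over experts, shape [S, E]
    m = gates.mean(0)                            # m_e: mean gate per expert
    c = np.zeros(E)                              # c_e: per-expert dispatch counters
    G = np.zeros((S, E))                         # combine weights G_{s,e}
    top2 = np.argsort(-gates, axis=-1)[:, :2]    # top-2 expert indices per token
    for s in range(S):                           # lines (6)-(14): dispatch to e1
        e1, e2 = top2[s]
        g1 = gates[s, e1] / (gates[s, e1] + gates[s, e2])
        if c[e1] < C:                            # respect capacity; overflow keeps G zero
            G[s, e1] = g1
        c[e1] += 1
    l_aux = np.mean((c / S) * m)                 # line (15): auxiliary load-balancing loss
    for s in range(S):                           # lines (16)-(25): random routing to e2
        e1, e2 = top2[s]
        g2 = gates[s, e2] / (gates[s, e1] + gates[s, e2])
        if c[e2] < C and 2 * g2 > rng.uniform():
            G[s, e2] = g2
        c[e2] += 1
    return G, l_aux

Note how the sketch mirrors the structure of the listing: the auxiliary loss is computed from the counters after the first (e1) dispatch loop, before the randomized second-best dispatch.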
Random routing Intuitively, the output y_s is a weighted average of what the selected experts return. If the weight for the 2nd expert is very small, we can simply ignore the 2nd expert to conserve the overall expert capacity. Hence, in addition to respecting the expert capacity constraint, GATE(·) dispatches to the 2nd-best expert with probability proportional to its weight g2. We observed far fewer overflowed tokens, and thus better accuracy, with random routing for models at the small scale. We then adopted this approach for our experiments at large scales." }, { "heading": "2.2 HIGHLY PARALLEL IMPLEMENTATION USING GSHARD", "text": "To implement the model in Section 2.1 efficiently on a cluster of devices, we first express the model in terms of linear algebra operations, which are highly tailored and optimized in our software stack TensorFlow (Abadi et al., 2016) and the hardware platform (TPU).\nOur model implementation (Algorithm 2) views the whole accelerator cluster as a single device and expresses its core algorithm in a few tensor operations independent of the setup of the cluster. We extensively use tf.einsum, the Einstein summation notation (Einstein, 1923), to concisely express the model. Top2Gating in Algorithm 2 computes the union of all group-local G_{S,E} described in the gating Algorithm 1. combine_weights is a 4-D tensor with shape [G, S, E, C], whose element value becomes non-zero when the input token s in group g is sent to expert e at capacity buffer position c. For a specific g and s, a slice combine_weights[g, s, :, :] contains at most two non-zero values. The binary dispatch_mask is produced from combine_weights by simply setting all non-zero values to 1.
Algorithm 2: Forward pass of the Position-wise MoE layer. The underscored letters (e.g., G and E) indicate the dimension along which a tensor will be partitioned.
1 gates = softmax(einsum(\"GSM,ME->GSE\", inputs, wg))
2 combine_weights, dispatch_mask = Top2Gating(gates)
3 dispatched_inputs = einsum(\"GSEC,GSM->EGCM\", dispatch_mask, inputs)
4 h = einsum(\"EGCM,EMH->EGCH\", dispatched_inputs, wi)
5 h = relu(h)
6 expert_outputs = einsum(\"EGCH,EHM->GECM\", h, wo)
7 outputs = einsum(\"GSEC,GECM->GSM\", combine_weights, expert_outputs)
\nTo scale the computation to a cluster with D devices, we choose the number of groups G and the number of experts E proportional to D. With C · E = O(2S) and the number of tokens per group S independent of D, the model dimension M and the feed-forward hidden dimension H, the total number of floating point operations (FLOPS) per device in Algorithm 2 is:
FLOPS_Softmax + FLOPS_Top2Gating + FLOPS_Dispatch|Combine + FLOPS_FFN
= O(GSME)/D + O(GSEC)/D + O(GSMEC)/D + O(EGCHM)/D
= O(DM) + O(2) + O(2M) + O(2HM)
\nThe per-device FLOPS for softmax is proportional to D, but in our experiments D ≤ 2H for up to 16K devices, so it is less than that of the FFN. Consequently, the total per-device FLOPS can be considered independent of D, satisfying the sublinear scaling design requirement. In addition to the computation cost, dispatching and combining token embeddings using AllToAll operators consumes O(√D) cross-device communication cost on our 2D TPU cluster. We will discuss the cost analysis and micro-benchmarks for such communication overheads in Appendix A.3.3.
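As a sanity check of the dispatch and combine contractions in Algorithm 2, the following NumPy sketch (with hypothetical tiny dimensions, np.einsum standing in for tf.einsum, and a hand-crafted one-expert-per-token assignment in place of Top2Gating) verifies that dispatching followed by combining reproduces a direct per-token application of the assigned expert:

import numpy as np

G_, S_, E_, C_, M_, H_ = 2, 4, 3, 4, 5, 7        # hypothetical tiny dimensions
rng = np.random.default_rng(0)
inputs = rng.normal(size=(G_, S_, M_))           # [G, S, M]
wi = rng.normal(size=(E_, M_, H_))               # per-expert input projections
wo = rng.normal(size=(E_, H_, M_))               # per-expert output projections

# Toy assignment: token s of every group goes to expert s % E_ at capacity slot
# s // E_, with combine weight 1.0 (a real model would use Top2Gating instead).
combine_weights = np.zeros((G_, S_, E_, C_))
for s in range(S_):
    combine_weights[:, s, s % E_, s // E_] = 1.0
dispatch_mask = (combine_weights > 0).astype(inputs.dtype)

dispatched = np.einsum("GSEC,GSM->EGCM", dispatch_mask, inputs)
h = np.maximum(np.einsum("EGCM,EMH->EGCH", dispatched, wi), 0.0)    # ReLU
expert_outputs = np.einsum("EGCH,EHM->GECM", h, wo)
outputs = np.einsum("GSEC,GECM->GSM", combine_weights, expert_outputs)

for g in range(G_):                               # reference: apply experts directly
    for s in range(S_):
        e = s % E_
        ref = np.maximum(inputs[g, s] @ wi[e], 0.0) @ wo[e]
        assert np.allclose(outputs[g, s], ref)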
Due to the daunting size and computation demand of the tensors in Algorithm 1 when we scale the number of tokens N to millions and the number of experts E to thousands, we have to parallelize the algorithm over many devices. To express parallelism, tensors in the linear algebra computation are annotated with sharding information using GShard APIs to selectively specify how they should be partitioned across a cluster of devices. For example, the underscored letters in Algorithm 2 specify along which dimension the tensors are partitioned. This sharding information is propagated to the compiler so that the compiler can automatically apply transformations for parallel execution. Please refer to Appendix A.2 for a more detailed description of the GShard module.\nWe express the annotated version of Algorithm 2 below. The input tensor is split along the first dimension and the gating weight tensor is replicated. After computing the dispatched expert inputs, we apply split to change the sharding from the group (G) dimension to the expert (E) dimension.
1 # Partition inputs along the first (group G) dim across D devices.
2 + inputs = split(inputs, 0, D)
3 # Replicate the gating weights across all devices
4 + wg = replicate(wg)
5 gates = softmax(einsum(\"GSM,ME->GSE\", inputs, wg))
6 combine_weights, dispatch_mask = Top2Gating(gates)
7 dispatched_inputs = einsum(\"GSEC,GSM->EGCM\", dispatch_mask, inputs)
8 # Partition dispatched inputs along expert (E) dim.
9 + dispatched_inputs = split(dispatched_inputs, 0, D)
10 h = einsum(\"EGCM,EMH->EGCH\", dispatched_inputs, wi)
\nwhere split(tensor, d, D) annotates tensor to be partitioned along the d dimension over D devices, and replicate(tensor) annotates tensor to be replicated across partitions. Invocations of GShard APIs such as split or replicate only add sharding information to the tensor and do not change its logical shape. Moreover, users are not required to annotate every tensor in the program. Annotations are typically only required on a few important operators, like the Einsums in our model, and the compiler uses iterative data-flow analysis to infer the sharding for the rest of the tensors.
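Because split and replicate only attach metadata, their contract can be mimicked with a small wrapper. The following toy Python sketch (our own illustration; ShardedTensor and local_shard are invented names, and the real GShard/XLA machinery operates on HLO, not NumPy) shows that annotation leaves the logical shape untouched while determining each device's local shard:

from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class ShardedTensor:
    data: np.ndarray                 # full logical value; annotation does not reshape it
    split_dim: Optional[int] = None  # None means replicated
    num_devices: int = 1

def split(t: np.ndarray, d: int, D: int) -> ShardedTensor:
    assert t.shape[d] % D == 0, "evenly partitioned along dim d"
    return ShardedTensor(t, split_dim=d, num_devices=D)

def replicate(t: np.ndarray) -> ShardedTensor:
    return ShardedTensor(t, split_dim=None)

def local_shard(t: ShardedTensor, device_id: int) -> np.ndarray:
    # What one device would hold after the compiler partitions the graph.
    if t.split_dim is None:
        return t.data
    return np.split(t.data, t.num_devices, axis=t.split_dim)[device_id]

inputs = split(np.zeros((8, 16, 4)), 0, 4)   # G dim split over 4 devices
wg = replicate(np.zeros((4, 2)))
assert inputs.data.shape == (8, 16, 4)       # logical shape unchanged
assert local_shard(inputs, 0).shape == (2, 16, 4)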
" }, { "heading": "3 MASSIVELY MULTILINGUAL, MASSIVE MACHINE TRANSLATION (M4)", "text": "We chose multilingual neural machine translation (MT) (Firat et al., 2016; Johnson et al., 2017; Aharoni et al., 2019) to validate our design for efficient training with GShard. Multilingual MT, which is an inherently multi-task learning problem, aims at building a single neural network for the goal of translating multiple language pairs simultaneously. This extends the line of work of Huang et al. (2019); Arivazhagan et al. (2019); Shazeer et al. (2017) towards a universal machine translation model (Bapna & Firat, 2020), a single model that can translate between more than a hundred languages.\nIn this section, we show how conditional computation (Bengio et al., 2013; Davis & Arel, 2013) with a sparsely gated mixture of experts fits into the desiderata detailed above, and demonstrate its efficacy by scaling neural machine translation models while keeping the training time of such massive networks practical. E.g., a 600B GShard model for M4 can process 1T tokens (source-side tokens after sub-word segmentation) in 250k training steps under 4 days. We experiment with increasing the model capacity by adding more layers and more experts into the model, and study the factors playing a role in convergence, model quality and training efficiency. Further, we demonstrate how conditional computation can speed up the training, and how sparsely gating each token through the network can efficiently be learned without any prior knowledge of task or language relatedness, exemplifying the capability of learning the gating decision directly from the data.\nWe focus on improving the translation quality (measured in terms of the BLEU score (Papineni et al., 2002)) from all 100 languages to English. This resulted in approximately 13 billion training examples to be used for model training. Our baselines are separate bilingual Neural Machine Translation models for each language pair (e.g. a single model for German-to-English), tuned depending on the available training data per language.1 Rather than displaying individual BLEU scores for each language pair, we follow the convention of placing the baselines along the x-axis at zero, and report the ∆BLEU trendline of each massively multilingual model trained with GShard (see Figure 3). The x-axis in Figure 3 is sorted from left to right in decreasing order of the amount of available training data, with high-resourced languages on the left-most side and low-resourced languages on the right-most side. We also include a dense 96-layer Transformer Encoder-Decoder network, T(96L), trained with GPipe pipeline parallelism on the same dataset as another baseline, which took over 6 weeks to converge on 2048 TPU v3 cores.2\nWe varied the depth of the Transformer network (L) and the number of experts (E) to scale the model. For depth, we tested three different options: 12 (the original Transformer depth, which consists of 6 encoder and 6 decoder layers), 36 and 60 layers. For the number of experts that replace every other feed-forward layer, we also tested three options, namely 128, 512 and 2048 experts. Note that the number of devices used for training is fixed to be equal to the number of experts per layer for simplicity. Please also see the detailed description of the model configurations in Table 1. During training, we use float32 for both model weights and activations in order to ensure training stability. We also ran additional scalability experiments for MoE(2048E, 60L) with bfloat16 activations and more than one trillion model weights. We are still working on the model convergence and hence did not include the results from this trillion-weight model for the sake of reproducibility." }, { "heading": "3.1 RESULTS", "text": "For each experiment (rows of Table 1), we trained the corresponding MoE Transformer model until it had seen 1 trillion (10^12) tokens. The model checkpoint at this point is used in the model evaluation. We did not observe any over-fitting patterns by this point in any experiment. Instead, we observed that the training loss continued to improve if we kept training longer. We evaluated the BLEU scores that the models achieved for all language pairs on a held-out test set in Figure 3.\nHere we discuss the implications of each experiment for languages that have large amounts of training data (high-resource), as well as languages with limited data (low-resource). 
In order to improve the quality for both high- and low-resource languages simultaneously within a single model, scaled models must mitigate the capacity bottleneck issue by allocating enough capacity to high-resource tasks, while amplifying the positive transfer towards low-resource tasks by facilitating sufficient parameter sharing. We loosely relate the expected learning dynamics of such systems to the long-standing memorization and generalization dilemma, which has recently been studied along the lines of width vs. depth scaling efforts (Cheng et al., 2016). Not only do we expect our models to generalize better to the held-out test sets, we also expect them to exhibit high transfer capability across languages as another manifestation of generalization performance (Lampinen & Ganguli, 2018).\n1 We tuned the batch size and different values of regularization methods (e.g. dropout) in a Transformer-Big or Transformer-Base layout, for high- or low-resourced languages respectively.\n2 T(96L) was measured to be processing 1+ trillion tokens at 300k steps, around 4M tokens/step.\nDeeper Models Bring Consistent Quality Gains Across the Board. We first investigate the relationship between the model depth and the model quality for both high- and low-resource languages. With an increasing number of per-layer experts for each experiment (128, 512 and 2048), we tripled the depth of the network for each expert size, from 12 to 36. Fig. 3 shows that when the number of experts per layer is fixed, increasing the depth (L) alone brings consistent gains for both low- and high-resourced languages (an upwards ∆ shift along the y-axis), almost with a constant additive factor every time we scale the depth from 12L to 36L (2-to-3 BLEU points on average in Table 1).\nRelaxing the Capacity Bottleneck Grants Pronounced Quality Gains. We also consider three models with identical depths (12L) and an increasing number of experts per layer: 128, 512 and 2048. As we increase the number of experts per layer from 128 to 512, we notice a large jump in model quality, +3.3 average BLEU score across 100 languages. However, scaling the number of experts per layer by another four fold, from 512 to 2048, yields only +1.3 average BLEU score. Despite the significant quality improvement, this drop in gains hints at the emergence of diminishing returns.\nGiven the over 100 languages considered, the multilingual model has a clear advantage in improving the low-resource tasks. On the contrary, for high-resource languages the increased number of tasks limits per-task capacity within the model, resulting in lower translation quality compared to models trained on a single language pair. We observed in our experiments that this capacity bottleneck from task interference for high-resourced languages can be relaxed by increasing the number of experts per layer. Interestingly, increasing the depth does not help as much if the capacity bottleneck is not relaxed. For 12-layer models, an increase in the number of experts yields larger gains for high-resourced languages, as opposed to the diminishing returns revealed earlier for low-resourced languages. While adding more experts relaxes the capacity bottleneck, at the same time it reduces the amount of transfer due to a reduction of the shared sub-networks. Notably, the ∆BLEU gains for MoE(512E, 36L) exceed those of the higher-capacity but shallower MoE(2048E, 12L), while a comparison of proportionally smaller models shows that MoE(128E, 36L) is suboptimal compared to MoE(512E, 12L). 
One can conclude that scaling depth brings most quality gains only after the capacity bottleneck is resolved.\nDeep-Dense Models are Better at Positive Transfer towards Low-Resource Tasks. Lastly, we look into the impact of depth on low-resourced tasks as a loose corollary to our previous experiment. We include in our analysis a dense model with 96 layers, T(96L), trained with GPipe on the same data. We compare T(96L) with the shallow MoE(128E, 12L) model. While the gap between the two models was measured to be almost constant for the majority of the high-to-mid resourced languages, the gap grows in favor of the dense-deep T(96L) model as we get into the low-resourced regime. Following our previous statement, as the proportion of the shared sub-networks across tasks increases, which is 100% for the dense T(96L), the bandwidth for transfer gets maximized and results in comparably better quality than its shallow counterpart. The same transfer quality to the low-resourced languages can also be achieved with MoE(128E, 36L), which has 37 billion parameters.\nWe conjecture that increasing the depth might increase the extent of transfer to low-resource tasks and hence generalize better along that axis. But we also want to highlight that the models in comparison have disproportionate training resource requirements. We again want to promote the importance of training efficiency, which is the very topic we study next." }, { "heading": "3.2 TRAINING EFFICIENCY", "text": "To measure training efficiency, we first keep track of the number of tokens being processed to reach a certain training loss, and second we keep track of the wall-clock time for a model to process a certain number of tokens. We focus on measuring the training time to fixed training loss targets3 while varying other factors. We leave the systems performance analysis to Appendix A.3.\nDeeper models converge faster with fewer examples. It has been shown that deeper models are better at sample efficiency, reaching better training/test error given the same amount of training examples (Huang et al., 2019; Shoeybi et al., 2019), commonly attributed to the acceleration effect of over-parametrization (Arora et al., 2018). We empirically test the hypothesis again using GShard with MoE Transformers and share the trade-offs for models that are not only deep, but also sparsely activated.\nFor this purpose, we compare the number of tokens processed by each model to reach a preset training loss. A general trend we observe from Table 1 is that MoE Transformer models with 3 times the depth need 2 to 3 times fewer tokens to reach the preset training loss thresholds. For example, MoE(128E, 12L) takes 3 times the number of tokens to reach 0.7 training cross-entropy compared to MoE(128E, 36L). We observe a similar trend for models with 512 and 2048 experts.\nAnother intriguing observation from Table 1 is again related to the presence of the capacity bottleneck. Comparing the models with the same depth, we notice a significant drop in the number of tokens required to reach a training loss of 0.7 as we transition from 128 to 512 experts. Practically, that is where we observed the capacity bottleneck was residing. After this phase shift, models with ample capacity tend to exhibit similar sample efficiency characteristics.\nThe model with 600B parameters trained in under 4 days achieved the best quality. Next we delve deeper into the interaction between model size and wall-clock time spent for training. 
We monitor the number of TPU cores being used, training steps per second, the total number of tokens per batch, TPU core years,4 and the actual wall-clock time spent in days for training (see the respective columns of Table 1). One of the largest models we trained, MoE(2048E, 36L) with 600 billion parameters, utilized 2048 TPU cores for 4 days. This model achieves the best translation quality in terms of average BLEU, but also takes a total of 22.4 TPU core years to train (2048 cores × 4 days ≈ 22.4 core years). While we have not seen any signs that the quality improvements plateau as we scale up our models, we strive to find cost-effective solutions for scaling.\n3 The training loss reported in this section corresponds to the cross-entropy loss and excludes the auxiliary loss term introduced in Section 2.1.\n4 TPU core years is simply measured as the product of the number of cores and the wall-clock time in years.\nResults in Table 1 again validate that scaling with conditional computation is far more practical compared to dense scaling. Given the same number of TPU cores as used by MoE(2048E, 36L), the dense scaling variant, T(96L), appears to take more than ten times as long to train (235 TPU core years), while trailing behind in terms of model quality compared to models trained with GShard." }, { "heading": "4 RELATED WORK", "text": "Model parallelism partitions the computation of a neural network to build very large models on a cluster of accelerators. For example, pipelining (Huang et al., 2019; Harlap et al., 2018) splits a large model’s layers into multiple stages, while operator-level partitioning (Shazeer et al., 2018; Jia et al., 2019) splits individual operators into smaller parallel operators. GShard uses a type of operator-level partitioning to scale our model. Without the need to rewrite the model implementation in another framework, GShard only requires users to annotate how tensors are split in existing model code, without worrying about correct reduction and data exchange over partitions, because that is handled by the compiler. GShard solves many practical problems in implementing the SPMD transformation in a production compiler (XLA). For example, to our knowledge, it is the first work showing how to partition unevenly-shaped, non-trivial ops that have spatial dimensions with complex static configurations (e.g., convolutions with static dilation and padding).\nConditional Computation Conditional computation (Bengio et al., 2015; Elbayad et al., 2020) postulates that examples should be routed within the network by activating an input-dependent sub-network. Prior work (Bapna et al., 2020; Yang et al., 2019; Shazeer et al., 2017) has shown its promising applications in machine translation, language models and computer vision. The routing strategy can be any of the following: the estimated difficulty of the example (Lugosch et al., 2020), the available computation budget (Elbayad et al., 2020; Bapna et al., 2020), or, more generally, a learned criterion with a sparsity-induced mixture of experts (Shazeer et al., 2017). This paper extends the sparsely gated mixture of experts to Transformers (Vaswani et al., 2017) and introduces a novel gating function with an efficient implementation on parallel devices.\nModel scaling Within a single model family, simply making the network wider or deeper often improves the model quality empirically. 
E.g., deeper ResNets performed better (He et al., 2016b), bigger Transformer models achieved better translation quality (Vaswani et al., 2017), and models with larger vocabularies, embeddings or feature crosses work better, too (Arivazhagan et al., 2019; Conneau et al., 2019). Across different model families, it has also been observed that bigger models with larger model capacities not only fit the training data better but also generalize better at test time (Zhang et al., 2017; Neyshabur et al., 2017; Huang et al., 2019). This observation motivated many research efforts to build much bigger neural networks than those typically used in deep learning research or production models. Shazeer et al. (2017) showed that a recurrent language model with 69 billion parameters using mixture-of-experts layers achieved much lower test perplexity on the one billion words (LM1B) benchmark. Brown et al. (2020a) showed that a dense 175-billion-parameter model is capable of exhibiting highly accurate few-shot performance on downstream NLP tasks." }, { "heading": "5 CONCLUSION", "text": "Our results in this paper suggest that progressive scaling of neural networks yields consistent quality gains, validating that the quality improvements have not yet plateaued as we scale up our models. We applied GShard, a deep learning module that partitions computation at scale automatically, to scale up the MoE Transformer with lightweight sharding annotations in the model code. We demonstrated that a 600B parameter multilingual neural machine translation model can efficiently be trained in 4 days, achieving superior performance and quality compared to the prior art when translating 100 languages to English with a single model. MoE Transformer models trained with GShard also excel at training efficiency, with a training cost of 22 TPU v3 core years compared to the 29 TPU core years used for training all 100 bilingual Transformer baseline models. The empirical results presented in this paper confirm that scaling models by utilizing conditional computation not only improves the quality of real-world machine learning applications but also remains practical and sample efficient during training. Our proposed method presents a favorable scalability/cost trade-off and alleviates the need for model-specific frameworks or tools for scaling giant neural networks." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 RELATED WORK", "text": "Neural networks Deep learning models have been very successful in advancing sub-fields of artificial intelligence. For years, these fields have continuously reported new state-of-the-art results using a variety of model architectures for computer vision tasks (Krizhevsky et al., 2012; Szegedy et al., 2015; He et al., 2016a), for natural language understanding tasks (Sutskever et al., 2014; Bahdanau et al., 2014; Wu et al., 2016), and for speech recognition and synthesis tasks (Hinton et al., 2012; Chan et al., 2016; Chiu et al., 2018; Oord et al., 2016; Shen et al., 2018). More recently, attention-based Transformer models further advanced the state of the art of these fields (Vaswani et al., 2017; Devlin et al., 2018; Shen et al., 2019).\nHardware Neural networks demand non-negligible amounts of computation power. To address such demand, special hardware (chips and networked machines) built for neural network training and inference dates back 25 years (Ienne et al., 1996). 
Since the late 2000s, researchers have leveraged GPUs to accelerate neural nets (Raina et al., 2009; Krizhevsky et al., 2012; Cireşan et al., 2010). More recently, the industry has also invested heavily in building dedicated hardware systems, chasing more cost-effective neural network hardware (Jouppi et al., 2017). Because the core computations of neural networks (various forms of summations of multiplications: convolution, matrix multiplication, einsum) are highly parallelizable numerical calculations, these chips are equipped with a huge number of floating-point processing units (FPUs). Hence, the compute power of this specially designed hardware grew dramatically. It is reported that GPU price per flop dropped by a factor of ten in just the last 4 years (gpu) and flops per watt increased by two orders of magnitude over the past 12 years (Sun et al., 2019). This widely available, low-cost computation power is a major enabler for the success of neural networks.\nSoftware Software systems supporting neural networks evolved together with the advancement of the underlying hardware (Dean et al., 2012; Bastien et al., 2012; Abadi et al., 2016; Paszke et al., 2017; Palkar & Zaharia, 2019). While the accelerators are highly parallel compute machines, they are significantly more difficult to program directly. The frameworks made building neural networks easier and abstracted away many hardware-specific details from practitioners. They in turn rely on lower-level libraries to drive the special hardware (accelerators) efficiently, e.g., CUDA (Nickolls et al., 2008) for Nvidia’s GPUs, or XLA for Google’s TPUs (xla, 2019). These lower-level libraries are critical for achieving high efficiency with this special hardware.\nAutomated parallelism Because programming in a distributed heterogeneous environment is challenging, particularly for high-level practitioners, deep-learning frameworks attempt to alleviate their users’ burden of specifying how the distributed computation is done. For example, TensorFlow (Abadi et al., 2016) has support for data parallelism, and basic model parallelism with graph partitioning by per-node device assignment. Mesh TensorFlow (Shazeer et al., 2018) helps the user build large models with SPMD-style per-operator partitioning, by rewriting the computation in a Python library on top of TensorFlow; in comparison, our approach partitions the graph in the compiler based on lightweight annotations without requiring the user to rewrite the model. FlexFlow (Jia et al., 2019) uses automated search to discover the optimal partition of operators in a graph for better performance; while it focuses on determining the partitioning policy, our SPMD partitioner focuses on the mechanisms to transform an annotated graph. Weight-update sharding (Xu et al., 2020) is another automatic parallelization transformation based on XLA, which mostly focuses on performance optimizations for TPU clusters and conceptually can be viewed as a special case of GShard. ZeRO (Rajbhandari et al., 2019) presents a set of optimizations to reduce memory redundancy across parallel training devices by partitioning weights, activations, and optimizer state separately, and it is able to scale models to 170 billion parameters; in comparison, GShard is more general in the sense that it does not distinguish these tensors, and all of those specific partitioning techniques can be supported by simply annotating the corresponding tensors, allowing us to scale to over 1 trillion parameters and explore more design choices." 
}, { "heading": "A.2 THE XLA SPMD PARTITIONER FOR GSHARD", "text": "This section describes the compiler infrastructure that automatically partitions a computation graph based on sharding annotations. Sharding annotations inform the compiler about how each tensor should be distributed across devices. The SPMD (Single Program Multiple Data) partitioner (or “partitioner” for simplicity) is a compiler component that transforms a computation graph into a single program to be executed on all devices in parallel. This makes the compilation time near constant regardless of the number of partitions, which allows us to scale to thousands of partitions. 5\nWe implemented the partitioner in the XLA compiler xla (2019). Multiple frontend frameworks including TensorFlow, JAX, PyTorch and Julia already have lowering logic to transform their graph representation to XLA HLO graph. XLA also has a much smaller set of operators compared to popular frontend frameworks like TensorFlow, which reduces the burden of implementing a partitioner without harming generality, because the existing lowering from frontends performs the heavy-lifting to make it expressive. Although we developed the infrastructure in XLA, the techniques we describe here can be applied to intermediate representations in other machine learning frameworks (e.g., ONNX onn (2019), TVM Relay Roesch et al. (2018), Glow IR Rotem et al. (2018)).\nXLA models a computation as a dataflow graph where nodes are operators and edges are tensors flowing between operators. The core of the partitioner is per-operation handling that transforms a full-sized operator into a partition-sized operator according to the sharding specified on the input and output. When a computation is partitioned, various patterns of cross-device data transfers are introduced. In order to maximize the performance at large scale, it is essential to define a core set of communication primitives and optimize those for the target platform." }, { "heading": "A.2.1 SHARDING PROPAGATION", "text": "GShard only requires the user to annotate a few key tensors in the model, and the compiler will propagate them to all tensors on the graph in an optimization pass. This allows the user to reuse legacy model code by adding a small set of annotations.\nThe propagation pass is designed to be intuitive, and it mostly passes through shardings along shared dimensions between inputs and outputs. Typically, it requires annotations on model weights, and if sharding involves multiple dimensions, activations could also be annotated around core computation operators like Einsum which could have multiple possible outcomes of sharding propagation." }, { "heading": "A.2.2 PER-OPERATOR SPMD PARTITIONING", "text": "The core of the partitioner is the per-operator transformation from a full-sized operator into a partition-sized operator according to the specified sharding. While some operators (e.g., elementwise)\n5An alternative is MPMD (Multiple Program Multiple Data), which does not scale as shown in Figure 4.\nare trivial to support, we discuss several common cases where cross-partition communications are required.\nTo keep the discussion more relevant to the MoE model, this section focuses on Einsum partitioning to illustrate a few communication patterns. And to keep it simple for now, we assume that all tensors are evenly partitioned, which means the size of the dimension to partitition is a multiple of the partition count.\nEinsum Case Study Einsum is the most critical operator in implementing the MoE model. 
Einsum Case Study Einsum is the most critical operator in implementing the MoE model. Einsums are represented as Dot operations in XLA HLO, where each operand (LHS or RHS) consists of three types of dimensions:
• Batch dimensions are the embarrassingly parallel dimensions. The same set of batch dimensions must exist in all of LHS, RHS and the output, and each element in the output only depends on the corresponding batch in LHS and RHS.
• Contracting dimensions only exist in the operands. LHS and RHS must have the same set of contracting dimensions, and they are summed up and collapsed in the output.
• Non-contracting dimensions are also parallel dimensions that exist in one of the operands and the output. Each of LHS and RHS has its own set of non-contracting dimensions, which are inherited by the output.
Sharding propagation prioritizes choosing the same sharding on the batch dimensions of LHS, RHS and the output, because that avoids any cross-partition communication. However, that is not always possible, and we need cross-partition communication in the following three cases.
• Resharding. In the MoE model we built, the expert dispatching logic (Line 3 in Algorithm 2) requires switching the partitioned dimension after an Einsum. Since resharding is efficient (Section A.3.2) with AllToAll, we first execute the Einsum locally, then reshard it to the desired dimension, as shown in Figure 5a.
• Accumulating partial results. If the inputs are partitioned along contracting dimensions, the local result is partial and we need an AllReduce to combine the partial results and produce the final result, as shown in Figure 5b.
• Slicing in a loop. For certain scenarios, we also implemented an algorithm similar to Cannon’s algorithm (Cannon, 1969), in order to limit the size of tensors on each partition. For example, if both operands are partitioned on a non-contracting dimension, we cannot compute the local Einsum directly, since the operands have different non-contracting dimensions.
[Figure 5: partitioning strategies for Matmul/Einsum AB,BC->AC — (a) resharding, (b) accumulating partial results, (c) slicing in a loop.]
Replicating one of the operands would not cause redundant computation, but it requires the replicated operand to fit in device memory. Therefore, if the size of the operand is too large, we instead keep both operands partitioned and use a loop to iterate over each slice of the result, using CollectivePermute to communicate the input slices (Figure 5c).
Compiler optimizations The SPMD partitioner creates various data formatting operators in order to perform slicing, padding, concatenation, masking and halo exchange. To reduce this overhead, we leverage XLA’s fusion capabilities on TPU, as well as code motion optimizations for slicing and padding, to largely hide the cost of data formatting. As a result, the run-time overhead is typically negligible, even for convolutional networks where masking and padding are heavily used.
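To make the “accumulating partial results” case concrete, here is a minimal numpy simulation (our own illustration, not GShard code): the contracting dimension of a matmul is split across D hypothetical partitions, each partition computes a partial product, and the final sum stands in for the AllReduce a real partitioner would emit.

```python
import numpy as np

D = 4                                   # number of (simulated) partitions
A, B, C = 8, 16, 6                      # full einsum AB,BC->AC
lhs, rhs = np.random.randn(A, B), np.random.randn(B, C)

lhs_shards = np.split(lhs, D, axis=1)   # each [A, B/D], sharded on the contracting dim
rhs_shards = np.split(rhs, D, axis=0)   # each [B/D, C]
partials = [l @ r for l, r in zip(lhs_shards, rhs_shards)]  # per-partition partial result
result = np.sum(partials, axis=0)       # stands in for the cross-partition AllReduce

assert np.allclose(result, lhs @ rhs)
```

Each partition's local compute is 1/D of the full operator, at the cost of one AllReduce over an [A, C] tensor.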
" }, { "heading": "A.2.3 GENERAL SHARDING API", "text": "In addition to the two common APIs (replicate() and split()) for sharding listed in Section 2.2, users or the compiler may use a more advanced sharding strategy to minimize data transfers.
shard(tensor, device_assignment) annotates tensor to be partitioned with the provided device assignment, and returns the annotated tensor. We use the device assignment, a multi-dimensional integer array, to represent how the split is done. device_assignment has the same rank as the data tensor; its element count is the total number of partitions, and each element is the ID of the device that occupies the corresponding data slice. For example, a 3D tensor with shape [256, 1024, 8192] and a device assignment of shape [2, 1, 4] will have partition shape [128, 1024, 2048], and the order of elements in the device assignment determines which slice each partition occupies.
Since data movement across devices critically affects parallel execution performance, it is important to consider the target device topology as well as the communication between partitions of the tensor when assigning device IDs in the device assignment, for maximum performance. Figure 6 shows two different device assignments based on the device topology and the row-wise communication pattern on the tensor." }, { "heading": "A.3 PERFORMANCE AND MEMORY CONSUMPTION", "text": "This section discusses how well GShard achieves computation and memory efficiency on the TPU platform. Our measurements and analysis show that the device memory consumption is roughly constant when we increase the number of devices and experts, and that the step time grows sublinearly, i.e., a 1.7x execution time increase when we scale the model by 16x from 128 devices to 2048 devices. We also provide microbenchmarks and analyses for a variety of partitioned operators, which can guide use cases beyond this paper." }, { "heading": "A.3.1 MEMORY EFFICIENCY AND SCALABILITY", "text": "In the GShard model, there are mainly three types of memory usage, all of which have constant per-device sizes after SPMD partitioning, when the number of experts increases.
• Replicated weights (e.g. transformer feed-forward layers).
• Distributed weights (MoE feed-forward layers).
• Activations (the output of each layer, used in both the forward and backward pass).
(Gate projection weights are O(E) in size and could be partitioned, but in practice they are small enough to be replicated and have only a negligible effect on peak memory usage.)
The O(1) memory scaling is demonstrated in Figure 7, which shows the per-device memory usage distribution for different models. With a fixed number of layers, both weight memory and activation memory stay constant when the number of experts increases.
On the other hand, weight memory and activation memory both scale linearly with the number of layers. When the memory requirement exceeds the available memory on each device, compiler-based rematerialization will automatically recompute part of the activations in the backward pass in order to reduce peak activation memory. This is why the activation size for MoE(2048E, 60L) is smaller than for MoE(2048E, 36L). The overhead of rematerialization is also optimized, e.g. only 28% and 34% of the total cycles are spent on recomputation for the 36L and 60L models respectively, and 0% for 12L and 24L since they fit in device memory without rematerialization.
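To illustrate the O(1) claim with concrete arithmetic, the following sketch (all sizes are illustrative, not the exact model configuration) computes per-device weight parameters when experts are partitioned one per device:

```python
def per_device_weight_params(num_experts, num_devices,
                             dense_params=2e9, d_model=1024, d_ff=8192):
    # Replicated Transformer weights live in full on every device; the MoE
    # feed-forward weights (two d_model x d_ff projections per expert) are
    # partitioned, one expert per device.
    moe_params = num_experts * 2 * d_model * d_ff
    return dense_params + moe_params / num_devices

# Scaling experts and devices together keeps per-device weights constant.
for experts in (128, 512, 2048):
    print(experts, per_device_weight_params(experts, num_devices=experts))
```

Each configuration prints the same per-device total, matching the flat weight-memory curves in Figure 7.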
" }, { "heading": "A.3.2 RUNTIME EFFICIENCY AND SCALABILITY", "text": "Figure 8 shows the breakdown of execution time for an MoE layer and its adjacent Transformer layer. It also compares the achieved performance to a roofline, which is estimated by assuming that compute-, memory-, or communication-bound operations can achieve 100% of the peak FLOPS, memory bandwidth, or interconnect bandwidth. This is a very optimistic estimate, as many operators are bound by a mixed set of resources. At a smaller scale (128 experts), our model can achieve > 70% of the roofline performance. The device time increases by 1.7x when we scale the model to 16x larger (2048 experts), and can still achieve 48% of the roofline performance.
Transformer layers and MoE feed-forward layer These are the dense parts of the model, which are designed to achieve peak TPU utilization. On each device, these computations also have a constant cost when we scale to more experts. Feed-forward layers and Transformer projections are mainly large matrix multiplications that utilize the TPU’s matrix unit well. These operations achieved > 85% of peak FLOPS in our experiments. The attention operations are composed mainly of batch matmuls, which are bound by memory bandwidth when sequence lengths are small. As a result, in our experiments the attention operations achieved just over 30% of peak FLOPS.
Gate computation In Figure 8, “Gate Einsum” represents the first two and the last Einsums in Algorithm 2. The first Einsum is the projection that calculates the per-expert input to the softmax. It has an O(D) cost, but it is a very small part of the layer. The other two Einsums dispatch tokens and combine expert results. They effectively implement Gather with one-hot matrices, which is more expensive, but with a constant O(GC) = O(1) cost that is independent of the number of experts. The execution time of these Einsums increases by around 2x when we scale from 128 to 2048 experts (16x).
The remaining per-device gating computation involves many general-purpose computations like ArgMax and Cumsum, which are either memory-bound or even sequential in nature, and thus not designed to utilize TPUs well. The majority of the time is spent on sequential Cumsum operations that invert one-hot matrices representing the selected experts for each token into one-hot matrices representing the selected tokens for each expert. The linear complexity of Cumsum is demonstrated in Figure 8. This part of the gating computation also has an O(D) cost, but fortunately, similar to the Einsum before the softmax, it has a very small constant factor. It has negligible execution time with 128 experts, and takes less than 10% of the total time spent in the MoE and Transformer layers with 2048 experts.
The most significant part of gating is communication, shown as “MoE dispatch and combine” in Figure 8. These are AllToAll operators, and as we will discuss in Section A.3.3, their cost is O(√D). When the number of experts grows 16x from 128 to 2048, the execution time increases by about 3.75x, and their proportion of execution time in the MoE and Transformer layers increases from 16% to 36%.
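The one-hot inversion described above can be sketched in a few lines of numpy (function and variable names are ours, not from the GShard implementation): a cumulative sum down the token axis converts “which expert” one-hots into per-expert slot positions, and tokens past the expert capacity are dropped.

```python
import numpy as np

def dispatch_positions(expert_index, num_experts, capacity):
    # expert_index: [tokens], the expert the gate chose for each token.
    onehot = np.eye(num_experts, dtype=np.int64)[expert_index]    # [tokens, experts]
    # Inclusive cumsum counts earlier tokens per expert; subtracting one
    # and masking yields each token's 0-based slot within its expert.
    position = (np.cumsum(onehot, axis=0) - 1) * onehot
    pos_in_expert = position.sum(axis=1)                          # [tokens]
    keep = pos_in_expert < capacity                               # capacity dropping
    return pos_in_expert, keep

pos, keep = dispatch_positions(np.array([0, 1, 0, 0, 1]), num_experts=2, capacity=2)
print(pos, keep)  # [0 0 1 2 1] [ True  True  True False  True]
```

The sequential nature of the cumsum is exactly what makes this part of gating poorly suited to TPUs, as noted above.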
" }, { "heading": "A.3.3 COMMUNICATION MICROBENCHMARKS AND PER-OPERATOR SCALABILITY", "text": "In this section, we measure and analyze the performance scalability of the SPMD partitioner for basic operators, which can be used to guide use cases beyond the MoE model presented in this paper.
Performance scaling of communication primitives Two critical collective communication operators in the MoE model are AllReduce and AllToAll. AllReduce is used in accumulating partial results, and AllToAll is used in resharding (Section A.2.2). Figure 9 shows their performance scalability from 16 to 2048 partitions. AllReduce on TPU has an execution time independent of the number of devices (Ying et al., 2018). The variance in Figure 9 is due to the specifics of each topology, e.g., whether it is a square or a rectangle, and whether it is a torus or a mesh.
AllToAll, on the other hand, gets more expensive as the number of partitions grows, but in a sublinear manner. On our 2D TPU cluster, the AllToAll cost is roughly O(√D), where D is the number of partitions. This is because with a fixed amount of data each partition sends (8MB or 32MB in Figure 9), the total amount of data that all partitions send is d = O(D). Meanwhile, each data piece needs to travel h = O(√D) hops on average, and there are overall l = O(D) device-to-device links in the network. Therefore, if it is bandwidth-bound, the execution time of an AllToAll is

$$t = \frac{dh}{l} = O\!\left(\frac{D\sqrt{D}}{D}\right) = O(\sqrt{D}).$$

Even if it is latency-bound, the execution time will still be O(h) = O(√D). Comparing 2048 partitions and 16 partitions, while D grows by 128 times, the execution time of an AllToAll only increases by 9 times. This enables us to use resharding to efficiently implement cross-partition dispatching (Figure 5a).
AllGather and CollectivePermute are easier to analyze. AllGather’s output is D times larger than its input, so if we fix the input size, its communication cost is O(D). CollectivePermute has a one-to-one communication pattern, and with a reasonable device arrangement where the source-destination pairs are close, its cost is O(1) for a fixed input size.
Partitioned operator scalability We summarize the performance scalability for common operators using GShard in Table 2. It contains the Einsum/Matmul examples from Section A.2.2, as well as other common operators like Convolution and Reduce. The table includes the local compute on each partition, as well as the required communication based on our analysis above.
Most operators in Table 2 have sublinear scalability in terms of both compute and communication, which is consistent with our performance measurements of the MoE model. The O(1) scaling of spatially partitioned convolutions also demonstrates the efficiency of GShard for image partitioning.
However, the last two Matmul operators in Table 2 have O(D) scaling of per-partition compute and communication, because they have unmatched sharding in the operands. This is not due to inefficiency in the partitioning algorithm, but because the total compute in the full operator is very large (O(D²)).
Different partitioning strategies can be used for these cases, producing different communication primitives: replicating one operand will result in AllGather (requiring the replicated operand to fit in device memory), while slicing in a loop (Figure 5c) will result in CollectivePermute.
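The AllToAll derivation above can be turned into a small cost model (our own back-of-the-envelope sketch; the constants are arbitrary and only the scaling matters):

```python
import math

def alltoall_seconds(D, bytes_per_partition, link_bandwidth):
    d = D * bytes_per_partition   # total data sent by all partitions: O(D)
    h = math.sqrt(D)              # average hops on a 2D torus: O(sqrt(D))
    l = D                         # device-to-device links: O(D)
    return d * h / (l * link_bandwidth)

ratio = alltoall_seconds(2048, 8 * 2**20, 1e11) / alltoall_seconds(16, 8 * 2**20, 1e11)
print(round(ratio, 1))  # ~11.3x predicted for 128x more partitions
```

The predicted √D growth is close to the measured 9x increase between 16 and 2048 partitions reported above.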
" }, { "heading": "A.4 DENSE MODEL SCALABILITY AND BENCHMARKS", "text": "GShard is not limited to sparse models. In this subsection we applied GShard to build large dense Transformers with up to trillions of parameters. We open-sourced our example implementation and provided step-by-step instructions for training it on the public cloud provider (https://github.com/tensorflow/lingvo/tree/master/lingvo/tasks/lm). We included the model details and performance benchmarks in Table 3. To the best of our knowledge, we provide the only open-source implementation that can train Transformer models with trillions of parameters efficiently on a public cloud. GShard allows tensor partitioning along more than one dimension. For example, we split the activation tensors along both the batch and the model dimensions. This allows input batches with long sequence lengths (1024 in the benchmark) and a global batch size smaller than the number of devices. The performance scales linearly from 64B to 1T. The communication bottleneck starts to dominate when scaling further to 4T, as the compute/communication ratio is lower due to the small batch size. A larger batch size is possible by scaling out the model to more TPU cores, or by enabling gradient accumulation. For the purpose of an apples-to-apples comparison to other models, we did not include the above optimizations in the 4T model.
Table 3 (model size, layers, model dim, hidden dim, sequence length, batch size, core utilization, examples/s):
64B | 32 | 8192 | 65536 | 1024 | 512 | 48.4% | 150.7
1T | 128 | 16384 | 131072 | 1024 | 128 | 47.2% | 9.23
4T | 128 | 32768 | 262144 | 1024 | 32 | 28.8% | 1.36
Details of dense Transformer sharding We use a 2D mesh of TPU devices of shape [X, Y]. X is used to shard the batch dimension of activation tensors, and Y is used to shard the attention heads and the feed-forward hidden dimension. The activations’ head and hidden dimensions are sharded the same way along Y. To further reduce weight storage on each device, we additionally shard the model dimension of the weights along the X dimension. Weights are partially unsharded on demand with a subgrouped AllGather operator across devices along X. The parallelism pattern along X is conceptually equivalent to weight-update sharding (Xu et al., 2020).
When we further increase the model size, activation storage becomes the bottleneck, because activations between Transformer layers are only partially sharded in the device mesh, on the batch dimension. So we further shard the model dimension of these activation tensors along Y, making them fully sharded across all devices. Such an activation tensor is produced by a ReduceScatter operator, which is semantically an AllReduce followed by a DynamicSlice but can be implemented more efficiently. The fully sharded activations are also partially unsharded with AllGather in the next layer.
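For readers who want to experiment with this style of annotation, modern JAX exposes a GSPMD-based sharding API that descends from this line of work. The sketch below is our own example, not the paper's code; the XLA flag simulates eight CPU devices so it runs on a single host, and all shapes are toy-sized.

```python
import os
os.environ["XLA_FLAGS"] = "--xla_force_host_platform_device_count=8"  # 8 fake devices
import numpy as np
import jax
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

mesh = Mesh(np.array(jax.devices()).reshape(4, 2), axis_names=("x", "y"))  # [X=4, Y=2]

# Activations [batch, seq, model]: batch sharded along x, model dim along y.
acts = jax.device_put(np.zeros((8, 16, 64), np.float32),
                      NamedSharding(mesh, P("x", None, "y")))
# Feed-forward weight [model, hidden]: model dim along x, hidden dim along y.
w = jax.device_put(np.zeros((64, 256), np.float32), NamedSharding(mesh, P("x", "y")))

print(acts.sharding)
print(w.sharding)
```

The compiler then inserts collectives wherever an operator needs a different layout, analogous to the on-demand unsharding described above.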
}, { "heading": "A.6 MACHINE TRANSLATION EXPERIMENTS DETAILS", "text": "In our Machine Translation experiments MoE Transformer models shared\n• Transformer model dimension M = 1024 • Feed Forward and MoE hidden dimension H = 8192 • Number of heads in multi-head attention = 16 • Attention key and value dimension = 128\n• Input, residual and attention dropout rate = 0.1 • The number of groups G = 2D, twice the number of devices. • The expert capacity C = 2 ∝ B×LD×E .\n.\nWe used the Adafactor (Shazeer & Stern, 2018) optimizer with a) factored second-moment estimation; b) first moment decay β1 = 0.0; c) second moment decay β2 = 0.99 with 1 − t−0.8 schedule; d) clipping threshold of 1.0; and e) 1.0 learning rate with square root decay after 10k training steps.\nWe used SentencePiece Kudo & Richardson (2018) subword tokenizer with a single multilingual vocabulary for source-side spanning 102 languages of size 64000, and English-only target-side vocabulary of size 32000.\nIn Figure 10, we compare the achieved loss of each model at different preset training budgets. We observed that lower loss can be obtained by growing the model capacity until some budget-dependent maximum size, which increased with higher training budget. For example, with a relatively low training budget of 5 TPU core years training budget, we observed models with larger capacity lead to even lower training loss up to 150B parameters. But with a high training budget of 30 TPU core years, 600B achieved the lower cost." } ]
2021
GSHARD: SCALING GIANT MODELS
SP:36c53ab1d8f25c8c61d8b1538ed304b710c14849
[ "This paper proposes an improved offline RL (batch RL) algorithm combining the state-of-the-art behavior-regularization actor-critic method (Nair et al., 2020) with a model-based RL technique. N-trained probabilistic dynamics models generate fictitious trajectories with uncertainty-penalized rewards after pretraining the policy with the behavior-regularization solely on the offline data. Both these generated data and the original offline data are used for the further behavior-regularized actor-critic training. Numerical results showed that the proposed method outperformed recent offline model-free and model-based RL algorithms." ]
In offline reinforcement learning (RL), we attempt to learn a control policy from a fixed dataset of environment interactions. This setting has the potential benefit of allowing us to learn effective policies without needing to collect additional interactive data, which can be expensive or dangerous in real-world systems. However, traditional off-policy RL methods tend to perform poorly in this setting due to the distributional shift between the fixed dataset and the learned policy. In particular, they tend to extrapolate optimistically and overestimate the action-values outside of the dataset distribution. Recently, two major avenues have been explored to address this issue: first, behavior-regularized methods, which penalize actions that deviate from the demonstrated action distribution; and second, uncertainty-aware model-based (MB) methods, which discourage state-action pairs where the dynamics are uncertain. In this work, we propose an algorithmic framework that consists of two stages. In the first stage, we train a policy using behavior-regularized model-free RL on the offline dataset. In the second stage, we fine-tune the policy using our novel Model-Based Behavior-Regularized Policy Optimization (MB2PO) algorithm. We demonstrate that for certain tasks and dataset distributions our conservative model-based fine-tuning can greatly increase performance and allow the agent to generalize and outperform the demonstrated behavior. We evaluate our method on a variety of the Gym-MuJoCo tasks in the D4RL benchmark and demonstrate that our method is competitive with and in some cases superior to the state of the art for most of the evaluated tasks.
[]
[ { "authors": [ "Rishabh Agarwal", "Dale Schuurmans", "Mohammad Norouzi" ], "title": "An optimistic perspective on offline reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Marc G. Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "CoRR, abs/1707.06887,", "year": 2017 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "CoRR, abs/1805.12114,", "year": 2018 }, { "authors": [ "Will Dabney", "Mark Rowland", "Marc G. Bellemare", "Rémi Munos" ], "title": "Distributional reinforcement learning with quantile regression", "venue": "CoRR, abs/1710.10044,", "year": 2017 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Justin Fu", "Aviral Kumar", "Ofir Nachum", "George Tucker", "Sergey Levine" ], "title": "D4rl: Datasets for deep data-driven reinforcement learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "CoRR, abs/1802.09477,", "year": 2018 }, { "authors": [ "Mohammad Ghavamzadeh", "Marek Petrik", "Yinlam Chow" ], "title": "Safe policy improvement by minimizing robust baseline regret", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Vicenç Gómez", "Hilbert J Kappen", "Jan Peters", "Gerhard Neumann" ], "title": "Policy search for path integral control", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2014 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "CoRR, abs/1801.01290,", "year": 2018 }, { "authors": [ "Garud N. 
Iyengar" ], "title": "Robust dynamic programming", "venue": "Mathematics of Operations Research,", "year": 2005 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Modelbased policy optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Aviral Kumar", "Justin Fu", "Matthew Soh", "George Tucker", "Sergey Levine" ], "title": "Stabilizing off-policy q-learning via bootstrapping error reduction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aviral Kumar", "Aurick Zhou", "George Tucker", "Sergey Levine" ], "title": "Conservative q-learning for offline reinforcement learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Romain Laroche", "Paul Trichelair", "Remi Tachet Des Combes" ], "title": "Safe policy improvement with baseline bootstrapping", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sergey Levine" ], "title": "Deep learning for robots: Learning from large-scale interaction", "venue": "Google Research Blog,", "year": 2016 }, { "authors": [ "Sergey Levine", "Aviral Kumar", "George Tucker", "Justin Fu" ], "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems, 2020", "venue": null, "year": 2020 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin A. 
Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "CoRR, abs/1312.5602,", "year": 2013 }, { "authors": [ "Ashvin Nair", "Murtaza Dalal", "Abhishek Gupta", "Sergey Levine" ], "title": "Accelerating online reinforcement learning with offline datasets, 2020", "venue": null, "year": 2020 }, { "authors": [ "Arnab Nilim", "Laurent El Ghaoui" ], "title": "Robust control of markov decision processes with uncertain transition matrices", "venue": "Operations Research,", "year": 2005 }, { "authors": [ "Xue Bin Peng", "Aviral Kumar", "Grace Zhang", "Sergey Levine" ], "title": "Advantage-weighted regression: Simple and scalable off-policy reinforcement learning", "venue": null, "year": 1910 }, { "authors": [ "Jan Peters", "Stefan Schaal" ], "title": "Reinforcement learning by reward-weighted regression for operational space control", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100,000+ questions for machine comprehension of text", "venue": "arXiv preprint arXiv:1606.05250,", "year": 2016 }, { "authors": [ "Julian Schrittwieser", "Ioannis Antonoglou", "Thomas Hubert", "Karen Simonyan", "Laurent Sifre", "Simon Schmitt", "Arthur Guez", "Edward Lockhart", "Demis Hassabis", "Thore Graepel", "Timothy Lillicrap", "David Silver" ], "title": "Mastering atari, go, chess and shogi by planning with a learned model, 2020", "venue": null, "year": 2020 }, { "authors": [ "John Schulman", "Sergey Levine", "Philipp Moritz", "Michael I. Jordan", "Pieter Abbeel" ], "title": "Trust region policy optimization", "venue": "CoRR, abs/1502.05477,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017", "venue": null, "year": 2017 }, { "authors": [ "Richard S Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "ACM Sigart Bulletin,", "year": 1991 }, { "authors": [ "Philip S Thomas", "Georgios Theocharous", "Mohammad Ghavamzadeh" ], "title": "High-confidence offpolicy evaluation", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "E. Todorov", "T. Erez", "Y. Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Hado van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "CoRR, abs/1509.06461,", "year": 2015 }, { "authors": [ "Ziyu Wang", "Alexander Novikov", "Konrad Zolna", "Jost Tobias Springenberg", "Scott Reed", "Bobak Shahriari", "Noah Siegel", "Josh Merel", "Caglar Gulcehre", "Nicolas Heess", "Nando de Freitas" ], "title": "Critic regularized regression, 2020", "venue": null, "year": 2020 }, { "authors": [ "Tengyu Ma" ], "title": "Mopo: Model-based offline policy optimization, 2020", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep reinforcement learning has recently been able to achieve impressive results in a variety of video games (Badia et al., 2020) and board games (Schrittwieser et al., 2020). However, it has had limited success in complicated real-world tasks. In contrast, deep supervised learning algorithms have been achieving extraordinary success in scaling to difficult real-world datasets and tasks, especially in computer vision (Deng et al., 2009) and NLP (Rajpurkar et al., 2016). The success of supervised learning algorithms can be attributed to the combination of deep neural networks and methods that can effectively scale with large corpora of varied data. The previous successes of deep RL (Levine, 2016; Schrittwieser et al., 2020) seem to indicate that reinforcement learning can potentially scale with large active data exploration to solve specific tasks. However, the ability to collect such large datasets online seems infeasible in many real-world applications such as automated driving or robotassisted surgery, due to the difficulty and inherent risks in collecting online exploratory data with an imperfect agent.\nExisting off-policy RL algorithms can potentially leverage large, previously collected datasets, but they often struggle to learn effective policies without collecting their own online exploratory data (Agarwal et al., 2020). These failures are often attributed to the Q-function poorly extrapolating to out-of-distribution actions, which leads to overly optimistic agents that largely over-estimate the values of unseen actions. Because we train Q-functions using bootstrapping, these errors will often compound and lead to divergent Q-functions and unstable policy learning (Kumar et al., 2019).\nRecently, there have been a variety of offline RL approaches that have attempted to address these issues. Broadly, we group these approaches into two main categories based on how they address the extrapolation issue.\nThe first set of approaches (Wu et al., 2019; Kumar et al., 2019) rely on behavior-regularization to limit the learned policy’s divergence from the perceived behavioral policy that collected the data. These approaches discourage the agent from considering out-of-distribution actions in order to avoid erroneous extrapolation. While these methods can often be effective when given some amount of expert demonstrations, they often seem too conservative and rarely outperform the best demonstrated behavior.\nThe second set of approaches (Yu et al., 2020; Kidambi et al., 2020) leverage uncertainty-aware MB RL to learn a policy that is discouraged from taking state-action transitions where the learned model has low confidence. Thus, these methods allow a certain degree of extrapolation where the models are confident. Because these methods tend to be less restrictive, they can generalize better than behavior-regularization methods and sometimes outperform the behavioral dataset. However, this flexibility also seems to make it harder for these methods to recover the expert policy when it is present in the dataset, and reduce their effectiveness when trained with a narrow distribution.\nIn this work, we develop an algorithmic framework that combines ideas from behavior-regularization and uncertainty-aware model-based learning. Specifically, we first train a policy using behaviorregularized model-free RL. Then, we fine-tune our results with our novel algorithm Model-Based Behavior-Regularized Policy Optimization (MB2PO). 
We find that our approach is able to combine the strengths of these approaches and achieve competitive or superior results on most of the Gym-MuJoCo (Todorov et al., 2012) tasks in the D4RL (Fu et al., 2020) benchmark." }, { "heading": "2 RELATED WORK", "text": "While there exist many off-policy RL methods that can learn to solve a large variety of complex control tasks and can scale with large amounts of online data collection, these methods often perform quite poorly when run completely offline without any online data collection. Recently, several methods have made progress in improving the capabilities of offline RL. For a general overview of the field of offline RL, we refer the reader to Levine et al. (2020). Here we will discuss some recent works that are particularly relevant to our approach." }, { "heading": "2.1 IMPROVING OFF-POLICY Q-LEARNING", "text": "Many of the recent advances in both discrete- and continuous-action off-policy deep RL can be attributed to improvements in stabilizing off-policy Q-learning and reducing overestimation due to erroneous extrapolation. Some notable methods include target networks (Mnih et al., 2013), double Q-learning (DDQN) (van Hasselt et al., 2015), distributional RL (Bellemare et al., 2017; Dabney et al., 2017), and variance reduction through invertible transforms (Pohlen et al., 2018). In learning for continuous control, Fujimoto et al. (2018) introduced a conservative method that uses the minimum estimate of an ensemble of Q-networks as the target, which is often referred to as clipped double Q-learning. Agarwal et al. (2020) demonstrated that Quantile Regression DDQN (Dabney et al., 2017) and other ensemble methods can be effective in certain discrete-action offline RL problems. However, Agarwal et al. (2020) also showed that when used naively, these methods do not perform well on complex continuous control tasks. In our work, we incorporate the mentioned advances in off-policy Q-learning into our approach to stabilize performance and prevent potential divergence.
Additionally, the offline RL algorithm Conservative Q-learning (CQL) (Kumar et al., 2020) has attempted to address Q-learning's overestimation issue on offline data directly by including a constraint term that discourages the agent from valuing an out-of-distribution action more than the demonstrated actions. In our method, instead of using a constraint on the Q-values, we use a combination of behavior-regularized model-free RL and uncertainty-aware model-based RL to discourage erroneous extrapolation." }, { "heading": "2.2 BEHAVIOR-REGULARIZED MODEL-FREE RL", "text": "A variety of recent offline RL approaches have incorporated constraints or penalties on the learned policy's divergence from the empirical behavioral policy. In particular, recent works have used both the KL divergence (Wu et al., 2019) and the maximum mean discrepancy (MMD) (Kumar et al., 2019).
MMD is sometimes used over the KL divergence because MMD approximately constrains the learned policy to be within the support of the behavioral policy, which is less restrictive than the KL divergence. However, most behavior-regularization or policy-constraint methods require the behavioral policy to be represented explicitly in order to estimate these divergences or to enforce their policy constraint (Laroche et al., 2019). In contrast, AWAC (Nair et al., 2020), also known as CRR (Wang et al., 2020), is able to incorporate a KL divergence constraint without explicitly representing the behavioral policy. 
They do this by reformulating the policy-constrained RL optimization into a form that resembles behavioral cloning re-weighted by the exponential of the advantage. Wang et al. (2020) demonstrate that this method can effectively learn complex control tasks purely from offline data, and Nair et al. (2020) demonstrate that performance can even be improved with further online data collection. In this work, we demonstrate that these properties make AWAC work exceptionally well when used for initialization as well as when used for fine-tuning with Model-Based Policy Optimization (MBPO) (Janner et al., 2019)." }, { "heading": "2.3 UNCERTAINTY-AWARE MODEL-BASED RL", "text": "MB RL algorithms have several natural advantages for offline RL compared to model-free RL algorithms. First, MB RL algorithms rely on supervised learning, which provides more robust gradient signals than bootstrapped learning and policy gradients. Second, learning a dynamics model often provides strong task-independent supervision, which allows MB RL algorithms to learn from sub-optimal trajectories. These benefits make generalization easier, and can allow MB RL algorithms to surpass the performance of the demonstrated data. In fact, in many environments, MB RL methods have already been effective in learning from offline or randomly collected datasets. Additionally, there is a rich history of prior works that have explored robust solutions to MDPs with uncertain transition dynamics (Nilim & El Ghaoui, 2005; Iyengar, 2005). However, it can be difficult to scale these types of methods to high-dimensional continuous control tasks, especially when using deep neural networks. Recently, incorporating uncertainty estimation techniques from supervised learning into MB RL has demonstrated further improvement in both online (Chua et al., 2018) and offline deep RL. In particular, two recent works, Model-Based Offline Policy Optimization (MOPO) (Yu et al., 2020) and Model-Based Offline Reinforcement Learning (MOReL) (Kidambi et al., 2020), have demonstrated impressive results by combining uncertainty-aware MB RL with the Dyna-style (Sutton, 1991) algorithm MBPO (Janner et al., 2019). Both methods use these models to create conservative MDPs that have a lower potential expected sum of rewards compared to the true MDP. By performing policy optimization in the conservative MDP through MBPO, they are able to learn a conservative policy that can outperform the demonstrated trajectories. However, these methods can often fail to recover the expert policy even when it is demonstrated in the dataset. We believe that this is largely due to a lack of effective methods for estimating epistemic uncertainty in neural network regression." }, { "heading": "3 PRELIMINARIES", "text": "In RL, we assume our agent operates within a standard Markov decision process (MDP) M = (S, A, T, r, ρ0, γ), where S denotes the state space, A denotes the action space, T(s'|s, a) represents the probabilistic transition dynamics, r is the reward function, ρ0 is the initial state distribution, and γ ∈ (0, 1) is the discount factor. The objective in RL is to learn a policy π(a|s) that maximizes the expected discounted sum of rewards $R_\pi = \mathbb{E}_{\pi, T, \rho_0}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right]$.
In offline RL, we assume that during training we only have access to a fixed dataset Dβ containing a set of tuples (s, a, s', r) of environment transitions and associated rewards. We assume that the data was collected by a policy πβ, which we call the behavioral policy. 
Typically, when training with data not collected by the current policy π, we use either off-policy model-free algorithms or model-based algorithms. The most common off-policy model-free algorithms are actor-critic algorithms that use policy iteration. Policy iteration involves alternating between policy evaluation and policy improvement in order to learn an effective policy. In policy evaluation, these methods train a parametric Q-function by iteratively minimizing the temporal difference objective

$$Q^{\pi}_{k+1} = \arg\min_{Q} \mathbb{E}_{s,a,s' \sim \mathcal{D}}\left[\left(r(s,a) + \gamma\,\mathbb{E}_{a' \sim \pi(\cdot|s')}\left[Q^{\pi}_{k}(s', a')\right] - Q^{\pi}(s,a)\right)^{2}\right] \quad (1)$$

In policy improvement, we update our parametric policy π to maximize our current Q-function:

$$\pi_{k+1} = \arg\max_{\pi} \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi(\cdot|s)}\left[Q^{\pi_k}(s,a)\right] \quad (2)$$

In MB RL, we attempt to learn a model T̂ of the transition dynamics and a model r̂ of the reward function. With this learned model of the dynamics and reward function we can create a model MDP M̂ = (S, A, T̂, r̂, ρ0, γ) to estimate the true underlying MDP M. These methods tend to use either trajectory optimization or policy optimization in the model MDP to produce their policy." }, { "heading": "4 MODEL-BASED BEHAVIOR-REGULARIZED POLICY OPTIMIZATION FOR OFFLINE FINE-TUNING", "text": "For many offline datasets, it can be much harder to learn an effective model of the MDP than to learn a reasonable policy. This is especially the case when there is low variability or insufficient coverage of the state and action space in the collected dataset, or in environments with complex observations, like images, or long horizons. To overcome these issues, recent works (Yu et al., 2020; Kidambi et al., 2020) have leveraged uncertainty estimation methods in order to construct conservative MDPs that use soft penalties or hard thresholds on model uncertainty to discourage deviating from the confident regions. However, these methods rely on the efficacy of ensemble-based neural network uncertainty estimation, which is currently not particularly effective at estimating epistemic uncertainty in regression settings. Therefore, we propose Model-Based Behavior-Regularized Policy Optimization (MB2PO). In MB2PO, we likewise use uncertainty-aware models to perform offline MBPO, but use the behavior-regularized model-free algorithm AWAC (also known as CRR-exp) instead of SAC (Haarnoja et al., 2018) for policy optimization." }, { "heading": "4.1 CONSERVATIVE MBPO", "text": "In this work, we use MOPO (Yu et al., 2020) as the basis for our conservative MBPO, due to its simplicity and its prior effective results on the D4RL benchmarks. In MOPO, a conservative MDP is constructed by augmenting the reward function as follows:

$$\tilde{r}(s,a) = \hat{r}(s,a) - \lambda u(s,a) \quad (3)$$

where r̂ is the learned estimate of the reward and u is the estimated uncertainty of the model transition. Note that this general formulation of a conservative MDP has also been explored in prior work such as Ghavamzadeh et al. (2016). Still, we specifically follow MOPO in using the maximum standard deviation across an ensemble of probabilistic dynamics models as our measure of uncertainty. Therefore, we can decompose our Q-function in this conservative MDP as

$$Q^{\pi}(s,a) = \hat{Q}^{\pi}_{r}(s,a) - \lambda Q^{\pi}_{u}(s,a) \quad (4)$$

where Q̂^π_r represents our estimate of the expected discounted sum of rewards in the real MDP and Q^π_u represents the expected discounted sum of uncertainty penalties. 
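As a concrete sketch of this reward penalty (function and variable names are ours; the max-of-norms choice follows the uncertainty measure described above):

```python
import numpy as np

def penalized_reward(r_hat, ensemble_sigmas, lam):
    # ensemble_sigmas: one predicted std-dev vector per ensemble member for
    # a single transition; u(s, a) = max_i ||Sigma_i(s, a)|| as in Eq. (3).
    u = max(np.linalg.norm(sigma) for sigma in ensemble_sigmas)
    return r_hat - lam * u
```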
Now at convergence, if our policy π deviates from the behavioral policy πβ that collected the data, then we expect for all states in the conservative MDP that

$$\mathbb{E}[Q^{\pi}] \geq \mathbb{E}[Q^{\pi_\beta}] \quad (5)$$

Thus, by plugging in our decomposition we get

$$\mathbb{E}[\hat{Q}^{\pi}_{r}(s,a)] \geq \mathbb{E}[\hat{Q}^{\pi_\beta}_{r}(s,a)] + \lambda\,\mathbb{E}[Q^{\pi}_{u}(s,a)] \quad (6)$$

While in theory, with well-calibrated uncertainty estimates and a proper tuning of λ, this should lead only to safe policy improvements over the behavioral policy, in practice MOPO is often unable to recover expert-level performance even when it is provided in the offline dataset. This is unsurprising, given that it is hard to generate well-calibrated epistemic uncertainty estimates in regression settings, and there will inevitably be model errors that lead to overestimated Q-values.
To address these issues, we use policy-constrained model-free RL in MB2PO. In policy-constrained model-free RL, we attempt to optimize the following policy objective:

$$\pi = \arg\max_{\pi} \mathbb{E}_{a \sim \pi(\cdot|s)}[Q^{\pi}(s,a)] \quad \text{s.t.} \quad D_{\mathrm{KL}}(\pi(\cdot|s)\,\|\,\pi_\beta(\cdot|s)) \leq \epsilon \quad (7)$$

If we estimate both π and πβ to be roughly univariate Gaussians with similar variances, then the KL constraint becomes an ℓ2 constraint on the policy mean. Because we expect our models to be locally accurate around the data, this constraint can help ensure that we stay in the effective region of the estimated MDP even if we have poorly calibrated uncertainty estimates. Additionally, Janner et al. (2019) demonstrated that the difference between the true expected returns J(π) and the expected returns Ĵ(π) of an MDP induced by an approximate model can be bounded by

$$J(\pi) \geq \hat{J}(\pi) - \left[\frac{2\gamma r_{\max}(\epsilon_m + 2\epsilon_\pi)}{(1-\gamma)^2} + \frac{4 r_{\max}\epsilon_\pi}{1-\gamma}\right] \quad (8)$$

where r_max is the maximum reward, γ is the discount factor, ε_m is a bound on the total variation distance (TVD) between the learned model and the true model, and ε_π is a bound on the TVD between π and πβ on the demonstrated states. By Pinsker's inequality, bounding the KL divergence also bounds the TVD. Therefore, by leveraging policy constraints in the policy optimization of MBPO, we can reduce the gap in expected returns and improve the algorithm's robustness to model errors." }, { "heading": "4.2 BEHAVIOR-REGULARIZED MODEL-FREE RL WITH AWAC", "text": "For behavior-regularized policy optimization, we use AWAC (Nair et al., 2020), also known as CRR-exp (Wang et al., 2020), due to its impressive results in offline RL and its ability to be fine-tuned with additional online data.
By enforcing the KKT conditions (Peng et al., 2019; Peters & Schaal, 2007; Gómez et al., 2014), we can derive an analytic solution to Equation 7, whose Lagrangian is

$$\mathcal{L}(\pi, \alpha) = \mathbb{E}_{a \sim \pi(\cdot|s)}[Q^{\pi}(s,a)] + \alpha\left(\epsilon - D_{\mathrm{KL}}(\pi(\cdot|s)\,\|\,\pi_\beta(\cdot|s))\right)$$

We can substitute A^π(s, a) for Q^π(s, a) because it does not affect the optimum, and we get the closed-form solution

$$\pi^{*}(a|s) = \frac{1}{Z(s)}\,\pi_\beta(a|s)\exp\left(\frac{A^{\pi}(s,a)}{\alpha}\right)$$

where Z(s) is the normalizing partition function. In order to project this solution into our policy space, we update our parameters by minimizing D_KL(π*‖π_θ). This leads to the following iterative update:

$$\theta_{k+1} = \arg\min_{\theta} \mathbb{E}_{s,a \sim \mathcal{D}}\left[-\log \pi_\theta(a|s)\,\frac{1}{Z(s)}\exp\left(\frac{A^{\pi_k}(s,a)}{\alpha}\right)\right] \quad (9)$$

We follow Wang et al. (2020) and Peng et al. (2019) and avoid estimating Z(s), instead clamping the exponential term to be at most 20. Additionally, one could adaptively learn α using dual gradient descent, but this would require us to explicitly model the behavioral policy πβ. Instead, we use a fixed α for all of our results. 
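Equation 9 reduces to a few lines in practice. The sketch below is our own illustration (V(s) is assumed to be estimated by averaging Q over actions sampled from the current policy); it shows the weighted behavioral-cloning loss with the weight clamped at 20, as described above:

```python
import numpy as np

def awac_policy_loss(log_prob, q, v, alpha=1.0, max_weight=20.0):
    # Advantage-weighted regression: exponentiate A = Q - V, clamp the
    # weight in place of estimating Z(s), and reweight the log-likelihood.
    weights = np.minimum(np.exp((q - v) / alpha), max_weight)
    return -(weights * log_prob).mean()
```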
Additionally, the Q-function is updated off-policy using the Bellman backup described in Equation 1, together with the improvements from Section 2.1.
One of the major benefits of using AWAC with a fixed α is that we can leverage behavior regularization in a principled manner without needing to explicitly represent the behavioral policy. This is particularly important in three major cases: 1. when there is not enough data to learn the behavioral policy; 2. when the data was collected by a variety of different policies or sources; 3. when the data was collected by a policy outside of our policy class, such as a human expert or a controller that leverages hidden state information.
Additionally, we can view AWAC as a reweighted behavioral cloning algorithm. Unlike SAC (Haarnoja et al., 2018) and DDPG (Lillicrap et al., 2015), it does not rely on the reparametrization trick or on gradients of the learned Q-function to perform policy updates. This allows us to use a wider range of policy classes, which in this work we take advantage of by using a tanh-squashed GMM with 5 components. We suspect that there are also some additional benefits to not depending on the gradients of the learned Q-function, which might be particularly poor in offline settings, but leave further investigation to future work.
An important thing to note with AWAC is that we can influence the implicit behavioral penalty by controlling the source of the data we train with. For example, if we perform a series of policy updates only using data collected by the previous policy iterate, then we are implicitly performing a trust-region policy update like TRPO (Schulman et al., 2015) and PPO (Schulman et al., 2017) of the form

$$\pi_{k+1} = \arg\max_{\pi} \mathbb{E}_{a \sim \pi(\cdot|s)}[Q^{\pi_k}(s,a)] \quad \text{s.t.} \quad D_{\mathrm{KL}}(\pi(\cdot|s)\,\|\,\pi_k(\cdot|s)) \leq \epsilon \quad (10)$$

In fact, if we train on data collected by the last n policy iterates, then we are approximately constraining our policy to a weighted sum of the previous n policies, $\pi^{(n)}_k = \frac{1}{n}\sum_{i=0}^{n-1}\pi_{k-i}$, damping our learning process in policy space.
In our work, we train with an ω ∈ [0, 1] portion of the data drawn from the offline data collected by πβ and a (1 − ω) portion of the data collected online from the last n policy iterates in the conservative MDP defined by our learned models. Therefore, we are approximately optimizing the following objective:

$$\mathbb{E}_{a \sim \pi(\cdot|s)}\left[\hat{Q}^{\pi}(s,a)\right] - \alpha\left(\omega\, D_{\mathrm{KL}}(\pi(\cdot|s)\,\|\,\pi_\beta(\cdot|s)) + (1-\omega)\, D_{\mathrm{KL}}(\pi(\cdot|s)\,\|\,\pi^{(n)}_k)\right) \quad (11)$$

Therefore, by using AWAC as the policy optimization algorithm in MB2PO, we can easily perform behavior-regularized policy optimization with soft, damped trust-region updates in the conservative MDP, reducing the effects of model errors and poor uncertainty estimation.
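A sketch of how a training batch realizes this mixture in practice (names are ours; the buffers hold imagined transitions from the last n policy iterates):

```python
import numpy as np

def mixed_batch(offline_data, rollout_buffers, batch_size, omega, rng):
    # omega of each batch comes from D_beta, the rest uniformly from the
    # recent model-rollout buffers, matching the two KL terms in Eq. (11).
    n_real = int(round(omega * batch_size))
    real = offline_data[rng.integers(len(offline_data), size=n_real)]
    pool = np.concatenate(rollout_buffers)
    fake = pool[rng.integers(len(pool), size=batch_size - n_real)]
    return np.concatenate([real, fake])
```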
" }, { "heading": "4.3 MODEL-BASED BEHAVIOR-REGULARIZED POLICY OPTIMIZATION", "text": "Algorithm 1 (MB2PO):
Train πθ, Qφ with AWAC on samples from Dβ
Train an ensemble of N probabilistic dynamics models {T̂^i_θ(s_{t+1}, r | s_t, a_t) = N(µ^i_θ(s_t, a_t), Σ^i_θ(s_t, a_t))}_{i=1..N} on the data in Dβ
for epoch k = 1, 2, ... do
  Initialize empty replay buffer Dk
  for 1, 2, ..., batchsize do
    Sample state s1 from Dβ
    for j = 1, 2, ..., h do
      aj ∼ π(sj)
      Uniformly sample T̂ from {T̂^i}_{i=1..N}
      s_{j+1}, rj ∼ T̂(sj, aj)
      r̃j = rj − λ max_{i=1..N} ‖Σ^i(sj, aj)‖_F
      Add sample (sj, aj, r̃j, s_{j+1}) to Dk
    end
  end
  Draw an ω portion of the samples from Dβ and the rest uniformly from {D_{k−i}}_{i=0..99} to train πθ and Qφ with AWAC
end
We first initialize our policy by training with AWAC solely on the offline data.
Next, for fine-tuning with MB2PO, we train an ensemble of probabilistic dynamics models represented by neural networks that output a diagonal Gaussian distribution over the next state and reward: {T̂^i_θ(s_{t+1}, r | s_t, a_t) = N(µ^i_θ(s_t, a_t), Σ^i_θ(s_t, a_t))}_{i=1..N}. We construct a conservative MDP that at every time step uses a randomly drawn dynamics model from {T̂^i_θ}_{i=1..N} to determine the next state transition. Additionally, we incorporate a penalty on the largest predicted standard deviation among the dynamics models as a practical means of penalizing both epistemic and aleatoric uncertainty.
Then, we alternate between collecting data with our current policy in the conservative MDP and updating our policy and Q-network using Equation 9 and Equation 1, respectively. When collecting data in the conservative MDP, we collect h-length truncated trajectories starting from states in the original offline dataset. By collecting data this way, we are able to collect a variety of imagined data without relying on long model rollouts, which would inevitably lead to compounding errors.
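The rollout loop of Algorithm 1 can be sketched as follows (our own illustration; each model is assumed to map a batch of states and actions to the mean and standard deviation of a diagonal Gaussian over the concatenated next state and reward):

```python
import numpy as np

def collect_model_rollouts(start_states, models, policy, h, lam, rng):
    buffer, s = [], start_states                            # s: [batch, state_dim]
    for _ in range(h):
        a = policy(s)
        mu, sigma = models[rng.integers(len(models))](s, a)  # sampled ensemble member
        draw = mu + sigma * rng.standard_normal(mu.shape)
        s_next, r = draw[:, :-1], draw[:, -1]
        # Penalize with the largest std-dev norm across all ensemble members.
        u = np.max([np.linalg.norm(m(s, a)[1], axis=1) for m in models], axis=0)
        buffer.append((s, a, r - lam * u, s_next))
        s = s_next
    return buffer
```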
An important thing to note is that all datasets besides the ”-mixed” datasets were collected with only 1 or 2 policies, and thus probably only cover a narrow part of the state-action distribution. On the other hand, the ”-mixed” dataset contains the data collected by all of the policy iterates during an incomplete RL training run, and thus represents a much wider part of the state-action distribution.\nResults in Table 1 demonstrate that AWAC on its own can get reasonable results on all the datasets and can approach state-of-the-art results on ”-expert” and ”-medium-expert” datasets. Unlike the other behavior-regularized model-free methods, AWAC and CQL are able to get near or fully recover expert-level performance when trained on the ”medium-expert-” datasets. This indicates that AWAC and CQL are more robust as there is less of a drop in performance compared to other methods when incorporating additional sub-optimal trajectories.\nNext, we fine-tune the trained AWAC policies with MB2PO. For each task and dataset, we pretrain an ensemble of 5 probabilistic dynamics models for 100000 gradient steps on the behavioral dataset. We then perform MB2PO for 500 iterations. Each iteration consists of collecting 1000000 steps from h-length truncated trajectories in the conservative MDP, which should run in a few seconds on modern GPU hardware, followed by 1000 gradient steps.\nResults in Table 1 demonstrate that our method is effective in improving the performance over AWAC in 11 of the 15 tasks. In particular, we find that MB2PO significantly improves the performance on all of the ”-mixed” datasets and even achieves state-of-the-art on ”walker2d-mixed” by a large margin. These strong results in the ”-mixed” datasets demonstrate that our model-based fine-tuning method can be especially beneficial when there is sufficient variation in the behavioral dataset. Additionally, the noticeable improvement in some of the ”-medium” and ”-medium-expert” datasets demonstrate that our fine-tuning can be effective even when the data was collected by one or two policies. In the 4 cases where MB2PO fine-tuning degrades performance, it is always negligible and never over 3 points.\nOur method also outperforms MOPO, the most direct comparison, in 8 out of the 12 comparable tasks. These results demonstrate the benefits of combining behavior-regularized model-free RL with uncertainty-aware MB RL as we are able to get the best of both worlds. We are able to recover high-level performance when available in the dataset like AWAC, and we can still generalize and learn to outperform the best observed trajectory like MOPO.\nFinally, our method is quite competitive with CQL as we beat it for 7 of the 15 tasks, and generally our results are quite comparable. However, our method significantly outperforms CQL in all of the ”-mixed” datasets. These results indicate that our method might be superior in situations where the dataset was collected by a variety of different policies." }, { "heading": "6 CONCLUSION", "text": "We proposed an algorithmic framework that leverages the benefits of both behavior-regularized model-free methods and uncertainty-aware model-based methods. We do this by first training an initial policy with the offline model-free AWAC algorithm. Then, we fine-tune with our novel MB2PO algorithm. We perform this by learning uncertainty-aware models that are used to create a conservative MDP. Then, we continue to use AWAC to further update our policy and Q-function in this conservative MDP. 
By using AWAC, we are able to perform policy optimization while implicitly constraining the learned policy's KL divergence to the behavioral policy. We demonstrate that this two-stage process allows us to get the best of both worlds between behavior-regularized model-free methods and uncertainty-aware model-based methods. Specifically, the initial AWAC training allows us to often recover the best-performing behavior in the dataset, and, when possible, MB2PO fine-tuning can allow us to generalize and outperform the demonstrated behavior.

We see four important directions of future work in order to extend the effectiveness and applicability of MB2PO: 1. developing a rigorous means of determining for which datasets MB2PO fine-tuning can be effective; 2. improving MB RL and neural network uncertainty estimation to increase the number of datasets where MB2PO can be effective; 3. better leveraging behavior regularization in the policy optimization or the conservative MDP to improve MB2PO's ability to recover expert behavior when available; 4. improving off-policy evaluation (Thomas et al., 2015) for neural network policies in order to facilitate offline hyperparameter tuning." }, { "heading": "7 APPENDIX", "text": "All methods were trained with the Adam optimizer (Kingma & Ba, 2014)." }, { "heading": "7.1 AWAC", "text": "Our Q-networks and policies were represented with [256, 256, 256, 256] fully connected networks with ReLU hidden activations. The policy was a 5-head tanh-squashed GMM. The Q-network output 100 quantiles and was trained with the following Q-learning improvements: Quantile Regression DQN (Dabney et al., 2017), Clipped DQN (Fujimoto et al., 2018), Double DQN (van Hasselt et al., 2015), and Invertible Transforms (Pohlen et al., 2018). The advantage was estimated with 10 samples. We ran AWAC for 500000 gradient steps. The rest of the parameters were taken from the original AWAC paper (Nair et al., 2020): α = 1.0, batch size 1000, policy weight decay 1e-4, policy and Q learning rate 3e-4, soft target network update 5e-3." }, { "heading": "7.2 MB RL", "text": "Our probabilistic dynamics models were represented with [200, 200, 200, 200] fully connected networks with swish hidden activations. The models output a mean and variance for every state variable and the reward. We trained the models for 100000 gradient steps. The rest of the parameters are: batch size 256, weight decay 1e-4, learning rate 1e-3." }, { "heading": "7.3 MB2PO", "text": "MB2PO used the same AWAC parameters as before, but changed α according to the table below. We ran MB2PO for 500 iterations, where each iteration consisted of collecting 1000000 samples from truncated h-length trajectories with the current GMM policy, then training for 1000 steps with ω of the data from the real dataset and 1 − ω of the data from imagined model rollouts from the last 100 iterations. We picked the better of α = 1, ω = {0.01, 0.05} for the mixed datasets, and the better of α = {1, 2}, ω = 0.8 for the other datasets. We used h = 5 for several of the halfcheetah datasets and h = 1 for the rest because any larger h tended to cause the Q-values to explode.

Task and Dataset            alpha  omega  h  lambda
halfcheetah-random          1.0    0.8    5  1
hopper-random               1.0    0.8    1  1
walker2d-random             1.0    0.8    1  1
halfcheetah-medium          1.0    0.8    5  1
hopper-medium               1.0    0.8    1  1
walker2d-medium             1.0    0.8    1  1
halfcheetah-expert          2.0    0.8    1  1
hopper-expert               2.0    0.8    1  1
walker2d-expert             2.0    0.8    1  1
halfcheetah-medium-expert   2.0    0.8    1  1
hopper-medium-expert        2.0    0.8    1  1
walker2d-medium-expert      2.0    0.8    1  1
halfcheetah-mixed           1.0    0.01   5  1
hopper-mixed                1.0    0.01   1  1
walker2d-mixed              1.0    0.01   1  1" } ]
2020
null
SP:c5a5db22e2ac2eaa16a74238256753e567b07d9a
[ "In this manuscript, the authors present a conditional GAN for generating protein sequences given specified GO terms. They argue that this approach to conditional protein generation is more appropriate than sequence-based generation, because it gets directly at functional specification. At a high level, this is an interesting idea though it has already started to be explored by other works. The authors are correct that these works focus primarily on optimize a single function of interest. However, there doesn’t seem to be any specific reason that guided design approaches could not generalize to multiple criteria. Regardless, controlled generation of proteins with pre specified functions is certainly interesting." ]
The availability of vast protein sequence information and rich functional annotations thereof has a large potential for protein design applications in biomedicine and synthetic biology. To date, there exists no method for the general-purpose design of proteins without any prior knowledge about the protein of interest, such as costly and rare structure information or seed sequence fragments. However, the Gene Ontology (GO) database provides information about the hierarchical organisation of protein functions, and thus could inform generative models about the underlying complex sequence-function relationships, replacing the need for structural data. We therefore propose to use conditional generative adversarial networks (cGANs) on the task of fast de novo hierarchical multi-label protein design. We generate protein sequences exhibiting properties of a large set of molecular functions extracted from the GO database, using a single model and without any prior information. We shed light on efficient conditioning mechanisms and adapted network architectures thanks to a thorough hyperparameter selection process and analysis. We further provide statistically- and biologically-driven evaluation measures for generative models in the context of protein design to assess the quality of the generated sequences and facilitate progress in the field. We show that our proposed model, ProteoGAN, outperforms several baselines when designing proteins given a functional label and generates well-formed sequences.
[]
[ { "authors": [ "Namrata Anand", "Possu Huang" ], "title": "Generative modeling for protein structures", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Christof Angermueller", "David Dohan", "David Belanger", "Ramya Deshpande", "Kevin Murphy", "Lucy Colwell" ], "title": "Model-based reinforcement learning for biological sequence design", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Frances H Arnold" ], "title": "Design by directed evolution", "venue": "Accounts of chemical research,", "year": 1998 }, { "authors": [ "Karsten M Borgwardt", "Arthur Gretton", "Malte J Rasch", "Hans-Peter Kriegel", "Bernhard Schölkopf", "Alex J Smola" ], "title": "Integrating structured biological data by kernel maximum mean discrepancy", "venue": "Bioinformatics, 22(14):e49–e57,", "year": 2006 }, { "authors": [ "Samuel R Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew M Dai", "Rafal Jozefowicz", "Samy Bengio" ], "title": "Generating sentences from a continuous space", "venue": "In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning,", "year": 2016 }, { "authors": [ "David Brookes", "Hahnbeom Park", "Jennifer Listgarten" ], "title": "Conditioning by adaptive sampling for robust design", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Zak Costello", "Hector Garcia Martin" ], "title": "How to hallucinate functional proteins", "venue": "arXiv preprint arXiv:1903.00458,", "year": 2019 }, { "authors": [ "Payel Das", "Kahini Wadhawan", "Oscar Chang", "Tom Sercu", "Cicero Dos Santos", "Matthew Riemer", "Vijil Chenthamarakshan", "Inkit Padhi", "Aleksandra Mojsilovic" ], "title": "Pepcvae: Semi-supervised targeted design of antimicrobial peptide sequences", "venue": "arXiv preprint arXiv:1810.07743,", "year": 2018 }, { "authors": [ "Kristian Davidsen", "Branden J Olson", "William S DeWitt III", "Jean Feng", "Elias Harkins", "Philip Bradley", "Frederick A Matsen IV" ], "title": "Deep generative models for t cell receptor protein", "venue": "sequences. Elife,", "year": 2019 }, { "authors": [ "Terrance DeVries", "Adriana Romero", "Luis Pineda", "Graham W Taylor", "Michal Drozdzal" ], "title": "On the evaluation of conditional gans", "venue": null, "year": 1907 }, { "authors": [ "Ken A Dill", "Justin L MacCallum" ], "title": "The protein-folding problem, 50 years", "venue": "on. 
Science,", "year": 2012 }, { "authors": [ "Gintare Karolina Dziugaite", "Daniel M Roy", "Zoubin Ghahramani" ], "title": "Training generative neural networks via maximum mean discrepancy optimization", "venue": "arXiv preprint arXiv:1505.03906,", "year": 2015 }, { "authors": [ "Sean R Eddy" ], "title": "What is a hidden markov model", "venue": "Nature biotechnology,", "year": 2004 }, { "authors": [ "Stefan Falkner", "Aaron Klein", "Frank Hutter" ], "title": "Bohb: Robust and efficient hyperparameter optimization at scale", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Andreea Gane", "Babak Alipanahi", "Christof Angermueller", "David Belanger", "David Dohan", "Lucy Colwell", "Olivier Chapelle", "Ramya Deshpande", "Suhani Vora" ], "title": "A comparison of generative models for sequence", "venue": null, "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Joe G Greener", "Lewis Moffat", "David T Jones" ], "title": "Design of metalloproteins and novel protein folds using variational autoencoders", "venue": "Scientific reports,", "year": 2018 }, { "authors": [ "Arthur Gretton", "Karsten M Borgwardt", "Malte J Rasch", "Bernhard Schölkopf", "Alexander Smola" ], "title": "A kernel two-sample test", "venue": "The Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Paulina Grnarova", "Kfir Y Levy", "Aurelien Lucchi", "Nathanaël Perraudin", "Ian Goodfellow", "Thomas Hofmann", "Andreas Krause" ], "title": "A domain agnostic measure for monitoring and evaluating gans", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Anvita Gupta", "James Zou" ], "title": "Feedback gan for dna optimizes protein functions", "venue": "Nature Machine Intelligence,", "year": 2019 }, { "authors": [ "Alex Hawkins-Hooker", "Florence Depardieu", "Sebastien Baur", "Guillaume Couairon", "Arthur Chen", "David Bikard" ], "title": "Generating functional protein variants with variational autoencoders", "venue": "BioRxiv,", "year": 2020 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Zhiting Hu", "Zichao Yang", "Xiaodan Liang", "Ruslan Salakhutdinov", "Eric P Xing" ], "title": "Toward controlled generation of text", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Po-Ssu Huang", "Scott E Boyken", "David Baker" ], "title": "The coming of age of de novo protein", "venue": "design. 
Nature,", "year": 2016 }, { "authors": [ "Frank Hutter", "Holger Hoos", "Kevin Leyton-Brown" ], "title": "An efficient approach for assessing hyperparameter importance", "venue": "In International conference on machine learning,", "year": 2014 }, { "authors": [ "John Ingraham", "Vikas Garg", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Generative models for graphbased protein design", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Kevin Jamieson", "Ameet Talwalkar" ], "title": "Non-stochastic best arm identification and hyperparameter optimization", "venue": "In Artificial Intelligence and Statistics,", "year": 2016 }, { "authors": [ "Mostafa Karimi", "Shaowen Zhu", "Yue Cao", "Yang Shen" ], "title": "De novo protein design for novel folds using guided conditional wasserstein generative adversarial networks (gcwgan)", "venue": "bioRxiv, pp", "year": 2019 }, { "authors": [ "Nathan Killoran", "Leo J Lee", "Andrew Delong", "David Duvenaud", "Brendan J Frey" ], "title": "Generating and designing dna with deep generative models", "venue": "arXiv preprint arXiv:1712.06148,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Tuomas Kynkäänniemi", "Tero Karras", "Samuli Laine", "Jaakko Lehtinen", "Timo Aila" ], "title": "Improved precision and recall metric for assessing generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Christina Leslie", "Eleazar Eskin", "William Stafford Noble" ], "title": "The spectrum kernel: A string kernel for svm protein classification", "venue": "In Biocomputing", "year": 2002 }, { "authors": [ "Chun-Liang Li", "Wei-Cheng Chang", "Yu Cheng", "Yiming Yang", "Barnabás Póczos" ], "title": "Mmd gan: Towards deeper understanding of moment matching network", "venue": "In Proceedings of the 31st International Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Yujia Li", "Kevin Swersky", "Rich Zemel" ], "title": "Generative moment matching networks", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Ali Madani", "Bryan McCann", "Nikhil Naik", "Nitish Shirish Keskar", "Namrata Anand", "Raphael R Eguchi", "Po-Ssu Huang", "Richard Socher" ], "title": "Progen: Language modeling for protein generation", "venue": null, "year": 2004 }, { "authors": [ "Peter Markiewicz", "Lynn G Kleina", "Christina Cruz", "Susannah Ehret", "Jeffrey H Miller" ], "title": "Genetic studies of the Lac repressor. XIV. 
Analysis of 4000 altered Escherichia coli Lac repressors reveals essential and non-essential residues, as well as” spacers", "venue": "Journal of molecular biology,", "year": 1994 }, { "authors": [ "Frank J Jr Massey" ], "title": "The kolmogorov-smirnov test for goodness of fit", "venue": "Journal of the American statistical Association,", "year": 1951 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Takeru Miyato", "Masanori Koyama" ], "title": "cgans with projection discriminator", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Alex T Muller", "Jan A Hiss", "Gisbert Schneider" ], "title": "Recurrent neural network model for constructive peptide design", "venue": "Journal of chemical information and modeling,", "year": 2018 }, { "authors": [ "Pauline C Ng", "Steven Henikoff" ], "title": "Predicting deleterious amino acid substitutions", "venue": "Genome research,", "year": 2001 }, { "authors": [ "Augustus Odena", "Christopher Olah", "Jonathon Shlens" ], "title": "Conditional image synthesis with auxiliary classifier GANs", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Dan Ofer", "Michal Linial" ], "title": "Profet: Feature engineering captures high-level protein", "venue": "functions. Bioinformatics,", "year": 2015 }, { "authors": [ "Barrett O’neill" ], "title": "Elementary differential geometry", "venue": "Academic press,", "year": 2014 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th annual meeting of the Association for Computational Linguistics,", "year": 2002 }, { "authors": [ "A. Krogh R. Durbin", "S. Eddy", "G. Mitchinson" ], "title": "Biological sequence analysis. probabilistic models of proteins and nucleic acids", "venue": null, "year": 1998 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "CoRR, abs/1511.06434,", "year": 2015 }, { "authors": [ "Predrag Radivojac", "Wyatt T Clark", "Tal Ronnen Oron", "Alexandra M Schnoes", "Tobias Wittkop", "Artem Sokolov", "Kiley Graim", "Christopher Funk", "Karin Verspoor", "Asa Ben-Hur" ], "title": "A large-scale evaluation of computational protein function prediction", "venue": "Nature methods,", "year": 2013 }, { "authors": [ "Donatas Repecka", "Vykintas Jauniskis", "Laurynas Karpus", "Elzbieta Rembeza", "Jan Zrimec", "Simona Poviloniene", "Irmantas Rokaitis", "Audrius Laurynenas", "Wissam Abuajwa", "Otto Savolainen" ], "title": "Expanding functional protein sequence space using generative adversarial networks. 
bioRxiv", "venue": null, "year": 2019 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "International Conference in Machine Learning,", "year": 2014 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Konstantin Shmelkov", "Cordelia Schmid", "Karteek Alahari" ], "title": "How good is my gan", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Dougal J Sutherland", "Hsiao-Yu Tung", "Heiko Strathmann", "Soumyajit De", "Aaditya Ramdas", "Alex Smola", "Arthur Gretton" ], "title": "Generative models and model criticism via optimized maximum mean discrepancy", "venue": "arXiv preprint arXiv:1611.04488,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ronghui You", "Shuwei Yao", "Yi Xiong", "Xiaodi Huang", "Fengzhu Sun", "Hiroshi Mamitsuka", "Shanfeng Zhu" ], "title": "Netgo: improving large-scale protein function prediction with massive network information", "venue": "Nucleic acids research,", "year": 2019 }, { "authors": [ "Naihui Zhou", "Yuxiang Jiang", "Timothy R Bergquist", "Alexandra J Lee", "Balint Z Kacsoh", "Alex W Crocker", "Kimberley A Lewis", "George Georghiou", "Huy N Nguyen", "Md Nafiz Hamid" ], "title": "The cafa challenge reports improved protein function prediction and new functional annotations for hundreds of genes through experimental screens", "venue": "Genome biology,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Designing proteins with a target biological function is an important task in biotechnology with highimpact implications in pharmaceutical research, such as in drug design or synthetic biology (Huang et al., 2016). However, the task is challenging since the sequence-structure-function relationship of proteins is extremely complex and not yet understood (Dill & MacCallum, 2012). Functional protein design is currently done by traditional methods such as directed evolution (Arnold, 1998), which rely on a few random mutations of known proteins and selective pressure to explore a space of related proteins. However, this process can be time-consuming and cost-intensive, and most often only explores a small part of the sequence space. In parallel, data characterizing proteins and their functions is readily available and constitutes a promising opportunity for machine learning applications in protein sequence design. Moreover, the hierarchical organisation of protein functions in a complex ontology of labels could help machine learning models capture sequence-information relationships adequately. Recently, generative models have attempted to design proteins for different tasks, such as developing new therapies (Muller et al., 2018; Davidsen et al., 2019) or enzymes (Repecka et al., 2019). Nonetheless, most of the de novo protein sequence design methods, which generate sequences from scratch, focus on a specific function or on families of short proteins. Instead, we would like to focus on modeling several different biological functions at the same time to eventually be able to freely combine them. To this end, one first requires a model that is able to deal with and to understand the inherent label structure. We concern ourselves with the development of such a generative model.\nIn this work, we introduce the general-purpose generative model ProteoGAN, a conditional generative adversarial network (cGAN) that is able to generate protein sequences given a large set of functions in the Gene Ontology (GO) Molecular Function directed acyclic graph (DAG) (Gene On-\ntology Consortium, 2019). To the extent of our knowledge, we are the first to propose a hierarchical multi-label de novo protein design framework, which does not require prior knowledge about the protein, such as seed sequence fragments or structure.\nOur contributions can be summarized as follows: (i) we propose a data-driven approach to de novo functional protein generation that leverages a large set of annotated sequences, (ii) we present a new extensive evaluation scheme to assess validity, conditional consistency, diversity, and biological relevance of the generated sequences, and (iii) we conduct an in-depth model optimization to derive actionable insights on architectural choices and efficient conditioning mechanisms while outperforming existing state-of-the-art protein generators.\nWe focus on generative adversarial networks, due to their promising performance on specific sequence design tasks (Repecka et al., 2019). We choose a conditional setting not to rely on oracles nor on multiple rounds of training-generation-measurement, since to this date a well performing general-purpose predictor of protein function remains elusive (Zhou et al., 2019). As opposed to most existing methods (see Section 2), we aim to generate a comprehensive variety of proteins exhibiting a wide range of functions, rather than focusing on optimising a single function within a unique protein family. 
As this is a different task from the ones found in the literature, we need to define an adequate evaluation pipeline.

Therefore, we establish a multiclass protein generation evaluation scheme centered around validity and conditional consistency. The model should generate protein sequences whose distribution resembles that of natural proteins, and hence with similar chemo-physical properties, and it should do so conditionally, namely generating proteins of a given functional class without off-target functions.

We are hence confronted with the problem of assessing (i) the performance of the generative model in a general sense, which is defined by how well the generated distribution fits the training data distribution, and (ii) the conditional performance of the model, which we define as a special case of the general performance, where we compare sequence feature distributions between labels. We therefore require distribution-based evaluations. A natural choice to evaluate the performance of a generative model is a two-sample test, which allows one to answer whether a generated and a real set of samples (i.e. the dataset) could originate from the same distribution. The difficulty here is to define a measure that can handle the structured data, in our case protein sequences. To this end, we design Maximum Mean Discrepancy (MMD)-based evaluation criteria (Gretton et al., 2012), which ensure good model performance and a functioning conditioning mechanism by measuring differences in empirical distribution between sets of generated and real protein sequences. To ensure diversity, we monitor the duality gap (Grnarova et al., 2019), a domain-agnostic indicator for GAN training. Lastly, we use a series of biologically-driven criteria in the evaluation phase that confirm the biological validity of the generated proteins by relying on the standard protein feature software ProFET (Ofer & Linial, 2015).

With this arsenal of measures, and given the low computational complexity of our MMD-based criteria, we compare different architectural choices and hyperparameters in an extensive and efficient Bayesian Optimization and HyperBand (BOHB) (Falkner et al., 2018) search. In particular, we develop improved variants of two existing conditional mechanisms for GANs (Odena et al., 2017; Miyato & Koyama, 2018) and show for the first time that the previously unexplored combination of both is beneficial to conditional generation. Moreover, the selected model outperforms (i) the de novo conditional model CVAE (Greener et al., 2018), repurposed and trained for functional protein generation, as well as other introduced baselines (an HMM and an n-gram model), and (ii) models specifically built to challenge the necessity of a conditional mechanism.

The remainder of the document is organized as follows. First, the background and related work section gives a concise overview of the biological mechanisms underlying the function of proteins, summarises the state-of-the-art generative models applied to protein design, details some conditional mechanisms in GANs, and identifies existing evaluation criteria for GANs and cGANs. Subsequently, the method section describes ProteoGAN and its components and explains our protein generation evaluation framework. Finally, the results obtained by conditioning the generation of new sequences on 50 GO classes are presented and discussed before concluding with some final remarks." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Biological mechanisms underlying protein functions.
Proteins are biological structures that serve a wide variety of purposes in organisms. They are composed of chains of amino acids and can therefore be represented as simple sequences. However, the relationship between the physico-chemical properties of amino-acids, the three-dimensional structure and the resulting biological activity of the macromolecule is highly complex (see supplementary Section A.1). Nevertheless, since the advent of modern sequencing techniques, millions of proteins have been registered in databases, along with curated descriptions of their function. For example, the GO is a species-agnostic ontology that aims at classifying genes (and the resulting proteins) according to their functions, locations, and governing biological processes using a hierarchical structure of functional labels. As such, it represents an ideal interface between scientists, who wish to design proteins with descriptive and modular labels, and a generative model that captures the complex relationships of sequence, structure and function.

Guided and conditional generative models. Machine learning models and more recently deep generative models (Eddy, 2004; Goodfellow et al., 2014; Kingma & Welling, 2014; Rezende et al., 2014; Vaswani et al., 2017; Li et al., 2017a) have been used to design in silico biological sequences, such as RNA, DNA or protein sequences (R. Durbin & Mitchinson, 1998; Davidsen et al., 2019; Brookes et al., 2019; Hawkins-Hooker et al., 2020; Costello & Martin, 2019; Anand & Huang, 2018). Among them, several approaches have been developed in order to control sequence generation. They can be sorted into three categories: guided, conditional, or combinations thereof. Guided approaches use a predictor oracle in order to guide the design towards target properties, through iterative training, generation and prediction steps (Brookes et al., 2019; Gane et al., 2019; Angermueller et al., 2019; Gupta & Zou, 2019; Killoran et al., 2017; Repecka et al., 2019). While these guided methods have the theoretical advantage of producing proteins with specific characteristics, for example brightness (Brookes et al., 2019), they require an independent oracle. This oracle can itself be hard to train and remains imperfect, even for highly specialized prediction tasks. Moreover, the lack of well-functioning predictors for large numbers of labels impairs the application of guided-generation techniques to multiclass settings such as functional protein generation (Zhou et al., 2019). On the contrary, conditional approaches integrate the desired properties into the generation mechanism, eliminating the need for an oracle. Karimi et al. (2019) provided a guided conditional Wasserstein GAN to generate proteins with novel folds. Interestingly, Madani et al. (2020) developed ProGen, a conditional transformer that enables a controlled generation of a large range of functional proteins. However, the method's need for sequence context can be experimentally constraining and is not compatible with de novo design. Ingraham et al. (2019) present a graph-based conditional generative model that requires structural information, which is unfortunately only sparsely available. Das et al. (2018) and Greener et al. (2018) train conditional VAEs in order to generate specific proteins, such as metalloproteins.

Conditioning mechanisms in GANs. Several conditional mechanisms have been proposed to conditionally generate samples with GANs. Among the most successful ones in conditional image generation tasks, Odena et al.
(2017) introduced the auxiliary classifier GAN (AC-GAN), which uses a third integrated network, in addition to the generator and the discriminator, to predict the labels of both real and generated inputs to the discriminator. Miyato & Koyama (2018) proposed an alternative conditioning mechanism, where the label information is introduced to the network as the inner product of the embedded label vector and an intermediate layer of the network, a mechanism they refer to as projection. Projections can be seen as an alternative to simple concatenations of label information to the network input (Mirza & Osindero, 2014), in a way that respects the underlying probabilistic model.

Generative model evaluation. To date, there is no definitive consensus on the best measures for evaluating the quality, diversity and conditional consistency of the output of a (conditional) generative model (Papineni et al., 2002; Salimans et al., 2016; Heusel et al., 2017; Shmelkov et al., 2018; Kynkäänniemi et al., 2019; DeVries et al., 2019). Most measures that stand out in computer vision, such as the Inception Score (IS) (Salimans et al., 2016), the Fréchet Inception Distance (FID) (Heusel et al., 2017), and GAN-train and GAN-test (Shmelkov et al., 2018), depend on an external, domain-specific predictor. On the contrary, the domain-agnostic duality gap can be computed during training and at test time, and has been shown to correlate well with FID (Grnarova et al., 2019). In functional protein prediction, results obtained by state-of-the-art classification models are encouraging but still neither good nor fast enough to entirely rely on them when evaluating and training GANs (Fmax = 0.631; Radivojac et al., 2013; Zhou et al., 2019; You et al., 2019)." }, { "heading": "3 METHODS", "text": "" }, { "heading": "3.1 MODEL ARCHITECTURE", "text": "We chose the framework of Generative Adversarial Networks for our model, specifically the Wasserstein-GAN with Gradient Penalty (Arjovsky et al., 2017; Gulrajani et al., 2017). Our convolutional architecture resembles the funnel-like structure of DCGAN (Radford et al., 2015). We propose variants of existing models in order to adapt the framework to sequence generation and to guide future model development in the field. An extensive parameter search is performed on a validation set to select the best variants and hyperparameters (see Section 4), while a schematic view of the model can be found in Figure 1.

Conditioning mechanism. We allow for the insertion of three types of conditioning mechanisms: projections, auxiliary classifiers, or a combination of both. While projection and auxiliary classifiers are not mutually exclusive, we did not encounter any work that used both in one model. The conditioning mechanisms are further explained in the supplementary Section A.3.1. We also allow for more than one projection from different layers of the discriminator, with the rationale that in this way we could utilize protein sequence information at the different abstraction levels of the convolutional layers. Finally, the embedded label is always concatenated to the latent noise vector input of the generator.
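To illustrate how the two mechanisms can be combined in a single discriminator, the following is a minimal PyTorch-style sketch of a discriminator head with both a projection term and an auxiliary classifier. Class and attribute names are hypothetical, the convolutional feature extractor is omitted, and this is not the exact ProteoGAN implementation.

import torch.nn as nn

class CombinedConditionalHead(nn.Module):
    """Discriminator head combining the projection of Miyato & Koyama
    (2018) with the auxiliary classifier of Odena et al. (2017).
    `feat_dim` is the size of an intermediate discriminator feature
    vector, `n_labels` the number of functional labels."""

    def __init__(self, feat_dim, n_labels):
        super().__init__()
        self.critic = nn.Linear(feat_dim, 1)                          # unconditional critic score
        self.label_embed = nn.Linear(n_labels, feat_dim, bias=False)  # label embedding for projection
        self.classifier = nn.Linear(feat_dim, n_labels)               # auxiliary multi-label classifier

    def forward(self, features, labels):
        # Projection: inner product of the embedded (multi-hot) label
        # vector with the features, added to the critic output.
        projection = (self.label_embed(labels) * features).sum(dim=1, keepdim=True)
        score = self.critic(features) + projection
        # Auxiliary classifier: per-label logits, trained with a
        # classification loss on both real and generated samples.
        return score, self.classifier(features)

The adversarial (critic) loss is then computed on the returned score, while a weighted multi-label classification loss on the logits realizes the auxiliary-classifier part; several such projection terms can be attached to different discriminator layers, as described above.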
Hierarchical label encoding. Given the hierarchical structure of the functional labels, we allow for three types of label embeddings: a) one-hot encoding, as a common encoding for labels, b) Poincaré encoding (O'neill, 2014), as the hyperbolic space is well-suited for hierarchical data, and c) node2vec encoding (Grover & Leskovec, 2016), which preserves neighbourhood relationships by encoding the nodes of the GO DAG based on random walks. One protein can present several GO labels, and these embeddings aim to capture the relations between the labels in the DAG and to incorporate this information into the generative process. We compare against a baseline model without this hierarchical information (named Non-Hierarchical), where sequences are fed to the model for each label independently.

We further allow concatenating chemophysical properties of the respective amino-acids to the one-hot encoding of the sequences. Other architectural and optimizer hyperparameters are subject to optimization, whose descriptions and value ranges are detailed in the supplementary Section A.2.3." }, { "heading": "3.2 MODEL EVALUATION", "text": "" }, { "heading": "3.2.1 MMD TO MEASURE SEQUENCE VALIDITY AND CONDITIONAL CONSISTENCY", "text": "Computation of MMD. We propose to use the kernel two-sample test Maximum Mean Discrepancy (MMD) (Gretton et al., 2012; Sutherland et al., 2016) to build evaluation criteria for conditional sequence design. The test has been shown to be suited for distinguishing distributions of structured data such as protein sequences (Borgwardt et al., 2006). MMD has also been explored in the context of GANs, where it was shown to be able to function as a discriminator (Li et al., 2015; Dziugaite et al., 2015). Here we use it to assess model quality based on samples. Let X = {x_i}_{i=1}^n and Y = {y_j}_{j=1}^m be samples from the distributions of generated and real protein sequences, respectively P_g and P_r. When using MMD, we compute the empirical squared MMD statistic between these two distributions using equation (2) of Gretton et al. (2012):

$$\mathrm{MMD}^2[\mathcal{F}, X, Y] = \left\| \frac{1}{n} \sum_{i=1}^{n} \frac{\phi(x_i)}{\|\phi(x_i)\|_2} - \frac{1}{m} \sum_{j=1}^{m} \frac{\phi(y_j)}{\|\phi(y_j)\|_2} \right\|_2^2 \qquad (1)$$

where $\phi(\cdot) \in \mathcal{F}$ is the variant of the mapping function of the spectrum kernel proposed by Leslie et al. (2001), which accounts for the presence or absence of k-mers in the sequences of interest. The size of the k-mers was set to 3, as suggested for protein sequences by Leslie et al. (2001). This expression is fast to compute as it scales linearly with the number of sequences and can consequently be used as an early stopping criterion during the training process and for model selection. We also report the result of a more powerful Gaussian kernel on top of the 3-mer embeddings. We used the statistic itself rather than the resulting p-values because the latter are too sensitive as soon as more than 3% of random noise is added to the sequences (Table A6 in the Supplementary Material).

Generation of in-silico validated sequences with MMD. In order to assess to what extent our model captures the unconditional distribution of our sequences, we use MMD as described above. We therefore first ensure that the proteins generated by the model resemble existing ones. We could show that a 3-mer embedding is sufficient for our context, as the functional classes we are concerned with (see Section 4 for a description of functional classes) can be linearly separated in feature space. The functional annotations can be classified with 94% accuracy based on hyperplanes in our embedding space in a one-vs-all scheme.
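As a concrete illustration of equation (1), the normalized 3-mer (spectrum) embedding and the resulting statistic can be sketched as follows; this is a simplified stand-in for the authors' evaluation code, not the original implementation.

import itertools
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
KMER_INDEX = {"".join(t): i for i, t in
              enumerate(itertools.product(AMINO_ACIDS, repeat=3))}

def spectrum_embedding(seq, k=3):
    """Normalized 3-mer count vector phi(x)/||phi(x)||_2 of the
    spectrum kernel map used in equation (1). k is fixed to 3 by
    the precomputed index above."""
    v = np.zeros(len(KMER_INDEX))
    for i in range(len(seq) - k + 1):
        idx = KMER_INDEX.get(seq[i:i + k])
        if idx is not None:
            v[idx] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def mmd(X, Y):
    """Distance between the mean embeddings of two sequence sets; its
    square is the empirical squared MMD statistic of equation (1)."""
    mean_x = np.mean([spectrum_embedding(x) for x in X], axis=0)
    mean_y = np.mean([spectrum_embedding(y) for y in Y], axis=0)
    return np.linalg.norm(mean_x - mean_y)

Since equation (1) defines the squared statistic, the value returned here is its square root; with 20 standard amino acids the embedding has 20^3 = 8000 dimensions, and the computation scales linearly in the number of sequences.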
Generation of functional sequences with MRR on conditional MMD. We compute the conditional performance by measuring, for each set of generated proteins for a given target label, how many sets of real proteins with an off-target label are closer than the set of real proteins of the targeted label. The distances between sets are estimated using MMD as defined above. Let R be a set of real sequences, which is composed of the sets {R_i}_{i=1}^d of sequences with annotated labels {L_i}_{i=1}^d, where d is the number of labels. Let G = {G_i}_{i=1}^d be an equally structured set of generated sequences. We want to maximise the following mean reciprocal rank (MRR):

$$\mathrm{MRR}(R, G) = \frac{1}{d} \sum_{i=1}^{d} \frac{1}{\mathrm{rank}(\mathrm{MMD}(R_i, G_i))} \qquad (2)$$

where rank(MMD(R_i, G_i)) is the rank of MMD(R_i, G_i) among the sorted elements of the list [MMD(R_i, G_1), MMD(R_i, G_2), ..., MMD(R_i, G_d)]. MRR(R, G) is maximal and of value 1 when the generated sets of proteins for a given label are the closest to the set of real proteins with the same label. Variants of MRR that give more insight into conditional performance for closely-related functions in the label hierarchy are described in the supplementary Sections A.3.2 and A.3.3." }, { "heading": "3.2.2 MEASURES TO ASSESS QUALITY AND DIVERSITY OF GENERATED SEQUENCES", "text": "We monitor the duality gap (Grnarova et al., 2019) of our GAN model. A small duality gap indicates good convergence, and common failure modes, such as mode collapse, can be detected. We follow the authors' suggestion to approximate the latter with past snapshots of the training. Additionally, to provide a protein-centric evaluation we also report Kolmogorov-Smirnov (KS) two-sample tests (Massey, 1951) between generated and real samples for the ∼ 500 (not k-mer related) sequence-based features from the feature extractor library ProFET (Ofer & Linial, 2015)." }, { "heading": "4 EXPERIMENTAL SETUP", "text": "Data. Sequence data was acquired from the UniProt Knowledgebase (UniProtKB; Consortium, 2019). Out of the 180 million entries, we selected sequences with experimental evidence and at least one annotated GO label. Then we restricted to the standard amino acids and a sequence length of 2,048, which covers ca. 98.5% of the remaining data points. The resulting dataset contains 149,390 sequences. The ontology is structured as a DAG and labels have a hierarchical relationship,
We compare ProteoGAN to several baselines to put its performance into perspective. We first use CVAE (Greener et al., 2018). We performed a bayesian optimization hyperparameter search over 1,000 models. The model was adjusted to incorporate the 50 labels of our problem setting. We could not compare fairly to PepCVAE (Das et al., 2018) as the model does not scale to sequence lengths of 2048 amino-acids. We refer the reader to the respective papers and to Section A.2.2, Tables A2-A3 and Figure A2 for further information and results on both baselines.\nAdditionally, we constructed several baselines with the goal to assess the usefulness of the conditional generation mechanism. The first baseline, referred to as Unconditional, consists of as many unconditional GANs as there are labels. To do so, we remove the conditioning mechanism from our model and train multiple instances on the fifty subsets of data that are annotated with a particular label. We generate sequences for a target label by sampling them from the GAN trained on the sequences annotated with the same label. Our second alternative to conditioning replaces a conditional model with the combination of an unconditional model trained on the full data and an established predictor of protein function, NetGO (You et al., 2019), used to predict the labels of generated sequences which replaces the need for conditioning. We refer to this baseline as Predictor-Guided. Further, we assess whether the model utilizes the hierarchical label structure by training a NonHierarchical baseline which only sees the labels independently for each sequence. This is done by replicating and annotating a sequence for each associated label, while keeping the number of gradient updates the same across all models.\nLastly, we compare two traditional generative models for sequences, a (profile) HMM and an ngram model (n=3). Since they are without conditioning mechanism, we train one model for each label combination in our testset (called ’on label set’ in Table 1, in this case a protein’s GO labels can contain several GO terms and their parent terms), as well as one model for each of the 50 labels we are dealing with (called ’on single label’ in Table 1, in this case we discard the hierarchy of labels). Further description of most of the baselines are available in the Supplementary Material Section A.2.2.\nHyperparameter optimization. We conducted two Bayesian Optimization and HyperBand (BOHB) searches (Falkner et al., 2018) on six Nvidia GeForce GTX 1080, first a broad search among 23 hyperparamaters and a second, smaller and more selective, among 9 selected hyperparameters. The optimization objective was set to maximize the ratio of the evaluation measures MRR/MMD, which are detailed in Section 3.2.1, to balance the validity and the conditional consistency of the generated sequences. Both searches were complemented by a functional analysis of variance (fANOVA) (Hutter et al., 2014). The results of the second optimization, for which we evaluated 1, 000 models for a maximum duration of 27 epochs in our experiments, are shown Figure 2. The 27 best selected models of the second hyperparameter search were then trained for a prolonged duration of 100 epochs, where the conditioning mechanism and an associated weighing factor became most important. Further details about hyperparameter optimization are available in the supplementary Section A.2.3 and Table A4." 
}, { "heading": "5 RESULTS AND DISCUSSION", "text": "" }, { "heading": "5.1 MODELS SELECTED BY THE BOHB OPTIMIZATION", "text": "Insights on cGANs architecture. The results of the fANOVA and of a prediction of hyperparameter marginals on the second automatic hyperparameter optimization led to the following observations. We could show that adding chemophysical features did generally decrease model performance, and\nFigure 2: Hyperparameter importance. The bars show individual importance of each hyperparameter in terms of the variance they explain. We conducted the analysis for all trials of the optimization (left) and for the selected models that were trained for prolonged time (right). The total variance explained by the main effects was 36% and 88%, respectively.\n(a) Use biochemical features (b) Label embedding (c) Conditioning mechanism\nFigure 3: Marginal predictions of biochemical features and the label embedding based on optimization data and of the conditional mechanism based on data of the 27 best selected models. Predictions were obtained training on MMD and MRR. Lower is better for blue and higher is better for red.\nthat the most suitable label embedding is a simple one-hot encoding (Figure 3a-b). More interestingly, the conditioning mechanism showed different impacts when analysing with respect to either MMD and MRR (Figure 3c). Performance increased when changing from projection, to auxiliary classifier, to both when evaluating with respect to MRR, but the opposite occurred for MMD. This indicates that there is a trade-off between conditional generation and general sequence quality. A combination of both conditioning mechanisms is the best option when aiming at conditional performance, although others might prefer an emphasis towards better sequence quality over conditional generation. We further conclude that small and simple model architectures show best results from an analysis of our first optimization, and that many of the various proposed extensions to the GAN framework did not show significant impact on performance in our context (see supplementary Figures A3, A4, A5, A6).\nSelection of the final model. We selected the best model checkpoint over a training phase of 300 epochs based on the validation set. The final hyperparameter configuration of our model (ProteoGAN), as well as loss plots and real-time evaluations during training can be found in supplementary Section A.3.4, Table A5, and Figure A7. Most notably, the final model had both conditioning mechanisms, multiple projections, and one-hot encoding of label information. Additionally, the duality gap evaluations during training (Figure A7) showed no signs of mode collapse." }, { "heading": "5.2 PERFORMANCE EVALUATION OF PROTEOGAN", "text": "We report measures on the test set of the best model in Table A8. We additionally report the model performance per individual GO label in Figure 4. ProteoGAN can reliably generate sequences conditioned on various GO labels, where many of the labels can be very well generated without major off-target effects (23 (resp. 32) of 50 labels were on average ranked first or second (resp. or third) compared to all other labels). The overall conditional performance (MRR=0.545) is reasonably\nclose to a reference set of natural protein sequences (MRR=0.889). With respect to general sequence quality, ProteoGAN reaches MMD values of 0.040, which corresponds to roughly 20% of random mutations in a set of natural sequences (compare supplementary Table A6). 
While this number is high compared to traditional protein engineering approaches, we note that systematic studies showed that proteins can tolerate a high mutation rate (up to 59%; Markiewicz et al., 1994; Ng & Henikoff, 2001) as long as critical residues are not affected, and notably Repecka et al. (2019) could experimentally validate GAN-generated sequences with up to 34% difference to their closest homolog. Additionally, we show that the sequences generated by the model are not closer to the training set than the test set is, as measured by the squared Euclidean distance in kernel feature space (Supplementary Figure A8). This implies that our model is not overfitting on the training set and is able to generate novel sequences.

Table 1 shows the performance under MMD, Gaussian MMD and MRR for ProteoGAN and various baselines. In general, MMD and Gaussian MMD give similar rankings for the different models. We first note that ProteoGAN outperforms all multi-label conditional baselines by a significant margin in both overall model performance and conditional generation. We could also show that the (conditional) ProteoGAN is comparable to the Unconditional model, which implies that the conditioning mechanism can substitute the training of many models per individual label. This shows that the conditioning mechanism is working well. ProteoGAN scores are also better than the Non-Hierarchical model, which shows that it could incorporate the hierarchical multi-label structure into the generation and that this is beneficial. It remains to be shown that this is sufficient to enable out-of-distribution functional label generation.

The weak conditional performance of the Predictor-Guided model (MRR = 0.109) suggests that the state-of-the-art predictor used (NetGO) is not able to predict the right label for the generated sequences, and therefore fails at guiding conditional generation, possibly because the generated sequences do not have close homologs in the protein databases.

As an outlook, we provide some results of ProteoGAN trained on more specific labels (lower panel in Table 1). We kept the same architecture as the one optimised for 50 labels and retrained the model in three different situations. We observe that the performance is still reasonable when the number of labels is doubled (100 labels). With 200 labels the performance starts to drop. It may be that the model capacity is too low in this case, which could be alleviated by tuning the hyperparameters we have identified to be critical. When training the model on the more specific labels of the depth-first sampling of labels, the performance stays good; however, we note that with increasing specificity the classes get very small.

ProteoGAN further outperformed CVAE and the HMM and n-gram baselines. This becomes especially evident in conditional generation (MRR). Figure 5 confirms these results. It can generally be seen that ProteoGAN shows good agreement with the real protein sequences of the test set in several important biological sequence features. We show exemplary feature distributions for some of the ca. 500 features analyzed by ProFET. For a summary statistic of all features, we report the distribution of KS statistics between generated and real data across all features. Also here, the ProteoGAN variants outperformed the other models."
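As an illustration of the feature-level comparison just described, the per-feature two-sample KS statistics can be computed as sketched below, assuming the ProFET descriptors have already been extracted into matrices with one row per sequence; the function name is hypothetical.

import numpy as np
from scipy.stats import ks_2samp

def ks_feature_statistics(real_feats, gen_feats):
    """Two-sample KS statistic for every column (one sequence feature,
    e.g. a ProFET descriptor) between real and generated sequences;
    lower values indicate closer feature distributions."""
    return np.array([
        ks_2samp(real_feats[:, j], gen_feats[:, j]).statistic
        for j in range(real_feats.shape[1])
    ])

The distribution of these statistics (e.g. its median or a histogram) then serves as the summary comparison across all features reported above.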
}, { "heading": "6 CONCLUSION", "text": "In this work, we develop a conditional generative adversarial model, ProteoGAN, which generates sequences validated in-silico according to statistically- and biologically-driven measures. We identify useful architectures and extensions of cGANs for proteins. The measures we propose are fast to compute and therefore can be used during optimization and training, as well as for model assessment. We show that the conditioning on hierarchical Gene Ontology labels was successful, as we could show that a number of labels can be well targeted. Generally, it remains to be shown that the class of multi-label generative models can not only generate the correct feature distributions, but also experimentally valid proteins with the specified functions. This requires further development of evaluation measures, which we hope to have set a basis for. Future improvements to the model might also come from a larger number of labels, more specific targeting for small classes and a proof that such conditional models are able to combine the modular GO labels into new and unseen functions, which would be tremendously useful for biotechnological applications.\nCode Availability. We make source code for ProteoGAN and the evaluations available at https: //github.com/proteogan/proteogan." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 BACKGROUND: BIOLOGICAL MECHANISMS UNDERLYING PROTEIN FUNCTIONS", "text": "Proteins are complex biological structures but can be simply represented as chains of amino-acids, a 20-character alphabet. While this is useful for modeling approaches, it hides the functional complexity of proteins. In fact, amino acids, with their diverse physico-chemical properties, fold and assemble into complex three dimensional constructs at a local and global level, giving rise to the overall protein structure. In turn, the structure of the protein is responsible for its function, where shape and electrochemical properties dictate the behavior and biological activity of the macromolecule. The link between sequence and function is therefore highly complex and still not understood by the research community. The design of proteins has consequently been so far mainly based on relatively uninformed trial-and-error processes with slight random alterations of the protein sequence and a subsequent assay of functionality (Arnold, 1998) or entail computationally heavy simulations of molecular dynamics (Samish, 2017). However, advances in machine learning have enabled the development of novel in silico design methods." }, { "heading": "A.2 EXPERIMENTAL SETUP", "text": "In this section, we describe in detail the dataset and preprocessing steps, the baselines and the hyperparameter searches performed.\nA.2.1 DATA\nSequence data is acquired from the UniProt Knowledgebase (UniProtKB, Consortium (2019)). The database contains more than 180 million protein sequences with rich annotations such as structure and function. Nonetheless, most of these entries are only automatically annotated. To ensure high quality data for our model, we filter for sequences that are manually curated and have experimental evidence of some form. There are also some specialized proteins that have very long sequences, we only keep the sequences whose length is not exceeding 2048 amino acids, which covers ca. 98.5% of the data points. The resulting dataset contains 149, 390 sequences. 
The cut-off at 2048 amino-acids is multiple times longer than in other approaches in this field, which work with lengths between 30 and 500 (Das et al., 2018; Davidsen et al., 2019; Greener et al., 2018; Repecka et al., 2019), and allows for a more complete model of the known sequence space.

Functional labels are collected from the same database. The Gene Ontology (GO) resource is composed of three branches: molecular function, cellular component and biological process. We focus on the molecular function ontology, which contains thousands of terms ranging from general descriptions like binding (GO:0005488) to very specific terms such as microtubule-severing ATPase activity (GO:0008568). Each protein is annotated with a set of GO labels describing the molecular function of the protein in a modular way. The ontology is structured as a directed acyclic graph with a single root. Further, labels have a hierarchical relationship, i.e. a protein with a given functional label automatically inherits the labels of its parents in the DAG (is-a relationship). The molecular function ontology resource currently contains more than ten thousand labels, many of which have a highly specific meaning and only a few representatives. We therefore restrict the number of labels to the 50 largest classes, the smallest class containing 8659 proteins. We argue that fifty labels are sufficient for a proof-of-principle and would even enable the design of experimental assays for validation. Figure A1 illustrates the selected subset of labels and their relationships.

Train, validation and test splits were created to preferably represent all labels uniformly in the test and evaluation sets. We use an 80-10-10 split for evaluation in the main body, and a 94-2-2 split for hyperparameter optimization and the results detailed in the supplement. For the validation and test set, we randomly sample sequences until there are at least 1,300 (300 in the optimization) sequences per class. The selections of hyperparameters by the BOHB hyperparameter optimizations for ProteoGAN and by the hyperparameter searches for the baselines are done on the validation set, while the results presented in the main text were acquired on the test set. For sequence sample generation, the model was conditioned on the label combinations of the evaluation/test set and the resulting sequences were then compared with the respective set by MMD and MRR.

Figure A1: GO DAG of the 50 labels selected for this project." }, { "heading": "A.2.2 BASELINES", "text": "We implement four baselines to put the performance of our model into perspective. In this section, we would like to give additional details concerning the two baselines that we gathered from the literature.

PepCVAE (Das et al., 2018) uses a VAE framework with a single-layer gated recurrent unit (GRU) RNN for both the encoder and the decoder, in a semi-supervised setting. Conditioning is performed by concatenating the encoded target labels to the latent code. In the paper, the sequences of interest are antimicrobial peptides, with a maximum length of thirty amino-acids. The label conditioned on was binary, i.e. antimicrobial activity or not. The RNN component of the model is highly resource-consuming; therefore, we had to trim protein sequences to the first 32 amino-acids to run the model on our data. For a fair assessment, we also run our model ProteoGAN on sequences of 32 amino-acids.
" }, { "heading": "A.2.2 BASELINES", "text": "We implement four baselines to put the performance of our model into perspective. In this section, we would like to give additional details concerning the two baselines that we gathered from the literature.
PepCVAE (Das et al., 2018) uses a VAE framework with a single-layer gated recurrent unit (GRU) RNN for both the encoder and the decoder, in a semi-supervised setting. Conditioning is performed by concatenating the encoded target labels to the latent code. In the original paper, the sequences of interest are antimicrobial peptides, with a maximum length of thirty amino acids. The label conditioned on was binary, i.e., antimicrobial activity or not. The RNN component of the model is highly resource-consuming; we therefore had to trim protein sequences to the first 32 amino acids to run the model on our data. For a fair assessment, we also run our model ProteoGAN on sequences of 32 amino acids. We modify PepCVAE by introducing a multiplying factor in front of the KL term of the ELBO loss, as suggested by Higgins et al. (2016), to increase the stability of the model. We optimise the model with a Bayesian Optimization hyperparameter search, for which we tried 1,000 combinations of hyperparameters. We do not use BOHB because the early stopping would interfere with the different training phases of the model. The hyperparameters and their value ranges, as well as the final model configuration, can be found in Table A1. We refer the reader to Das et al. (2018), Hu et al. (2017) and Bowman et al. (2016) for more information on the model.
CVAE (Greener et al., 2018) uses a conditional VAE (CVAE) in order to generate either metalloproteins with desired metallo-binding sites or fold properties. In the case of fold properties, the authors introduce iterative sampling and guidance steps in the latent space. The decoder and encoder are both MLPs, and the number of layers is chosen by hyperparameter search. Here also, we introduced a KL-balancing term to stabilize training. As for PepCVAE, the model uses a loss scheduling scheme, and therefore we could not use the BOHB optimization. However, we performed a Bayesian Optimization hyperparameter search, for which we tried 1,000 combinations of hyperparameters. Notably, we allowed for an optimization of the network architecture by optimizing over the layer numbers for both encoder and decoder, and by optimizing the number of units in the first layer of the encoder and the last layer of the decoder. The unit number was then halved towards the latent space with each layer. The hyperparameters and their value ranges, as well as the final model configuration, can be found in Table A2. We refer the reader to Greener et al. (2018) for more information on the model.
Table A1: PepCVAE hyperparameters subject to BO optimization.
Name | Values | Final Value
Learning rate | [1e-5, 1e-2] | 4e-3
Pretrain iterations | [1, 5000] | 3181
Latent dimension | [10, 1000]† | 101
Word dropout keep rate | [0, 1] | 0.43
Classifier loss balancing λC | [1e-3, 100]† | 9.9e-3
Latent loss balancing λZ | [1e-3, 100]† | 1.9e-1
KL balancing β | [1e-3, 100]† | 1.5e-2
† Values were sampled on a logarithmic scale.
Table A2: CVAE of Greener et al. hyperparameters subject to BO optimization.
Name | Values | Final Value
Learning rate | [1e-5, 1e-2] | 7.8e-4
Pretrain start | [1, 5000] | 2598
Pretrain end | [1, 5000] | 1251
Latent dimension | [10, 1000]† | 761
KL balancing β | [1e-3, 100]† | 1.1e-3
Encoder layer number | [1, 5] | 3
Decoder layer number | [1, 5] | 1
Log2(Encoder first layer units) | [4, 10] | 7
Log2(Decoder last layer units) | [4, 10] | 9
† Values were sampled on a logarithmic scale.
The HMM baselines were implemented based on HMMER. For HMM (on label set), all sequences in the training dataset containing a specific label combination were aggregated, for each label set of the test set. For HMM (on single label), all sequences in the training dataset containing a specific label were aggregated, for each of the 50 labels. The resulting sequence sets were aligned with MAFFT (with parameters --retree 1 --maxiterate 0 --ep 0.123). Because of the time-intensive multiple sequence alignment, the sequence sets were randomly subsampled to a maximum size of 5000 sequences. From the alignment, a profile HMM was built with HMMER, which was then sampled to generate a sequence.
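For concreteness, a minimal sketch of this baseline pipeline, assuming the mafft, hmmbuild and hmmemit command-line tools are on the PATH; the file names and the helper function are illustrative, not the exact implementation.

import random
import subprocess

def hmm_baseline(sequences, n_samples=1, max_set_size=5000, seed=0):
    # Subsample the label-specific sequence set, as the alignment is costly.
    random.seed(seed)
    if len(sequences) > max_set_size:
        sequences = random.sample(sequences, max_set_size)
    with open("subset.fasta", "w") as f:
        for i, seq in enumerate(sequences):
            f.write(f">seq{i}\n{seq}\n")
    # Fast, low-accuracy MAFFT settings as stated above.
    with open("aligned.fasta", "w") as f:
        subprocess.run(["mafft", "--retree", "1", "--maxiterate", "0",
                        "--ep", "0.123", "subset.fasta"], stdout=f, check=True)
    # Build a profile HMM from the alignment, then emit (sample) sequences from it.
    subprocess.run(["hmmbuild", "--amino", "profile.hmm", "aligned.fasta"], check=True)
    subprocess.run(["hmmemit", "-N", str(n_samples), "-o", "sampled.fasta",
                    "profile.hmm"], check=True)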
In the n-gram baseline, sequences were also selected according to label sets and single labels. Here the full data was used, and n was set to 3. The sequence lengths were sampled from the training data length distribution.
The Predictor Guided baseline was a variant of ProteoGAN without a conditioning mechanism, trained on the whole data. A sample was generated as in the other models; however, the labels were annotated by sampling from the per-label probabilities output by the NetGO protein function predictor.
Table A3: Evaluation of ProteoGAN and PepCVAE on truncated sequences with length 32. Shown are mean and standard deviation over five different data splits. The arrows indicate that lower (↓) or higher (↑) is better.
Model | MMD↓ | Gaussian MMD↓ | MRR↑
PepCVAE (L=32) | 0.122 ± 0.019 | 0.077 ± 0.012 | 0.139 ± 0.022
ProteoGAN (L=32) | 0.033 ± 0.002 | 0.022 ± 0.001 | 0.321 ± 0.029
Figure A2: Sequence feature analysis of the models trained on truncated data (sequence length = 32). KS statistics over ∼500 ProFET features, lower is better." }, { "heading": "A.2.3 HYPERPARAMETER SEARCH", "text": "Description of BOHB: For ProteoGAN, we conducted hyperparameter searches with the Bayesian Optimization and HyperBand (BOHB) algorithm. The Hyperband algorithm (Li et al., 2017b) uses successive halving (Jamieson & Talwalkar, 2016) to evaluate a number of models on a given budget of resources. The better-performing fraction of the models is then evaluated on an increased budget, and so on. Hyperband is an independent optimization algorithm that has been combined with Bayesian optimization to form Bayesian Optimization and Hyperband (BOHB) (Falkner et al., 2018), the optimization strategy used in this project.
Hyperparameter optimization with BOHB: We conducted two BOHB optimizations. For both we evaluated 1,000 models. All networks were trained with the Adam optimizer (Kingma & Ba, 2015) with β1 = 0 and β2 = 0.9 (following Gulrajani et al. (2017)). The optimization consisted first of a broad search among 23 hyperparameters and second of a smaller, more specific search among 9 selected hyperparameters. For the first BOHB optimization, an optimization iteration was defined as two epochs, which we found through pilot experiments to be the minimum time to observe a viable trend in the metrics. The parameters R and η (in the notation of Li et al. (2017b)) were set to 9 and 3, respectively, which allowed for a maximum training time of 18 epochs (22.5K gradient updates). The optimization objective was to minimize, on the validation set, the ratio MMD/MRR of the metrics introduced in Section 3.2.1 of the main document. During the optimization, BOHB selected the models based on evaluations at the end of a training period. For the second optimization, we reduced the number of hyperparameters to only 9. We selected values for the other hyperparameters based on the analysis of the hyperparameter importance of the first optimization (see paragraph below). The hyperparameters that showed either no importance or that were detrimental to training were removed. For this second optimization, the smaller network size allowed for 3 epochs per iteration, resulting in a maximum training time of 27 epochs (1.2K gradient updates). The list of hyperparameters of the two BOHB optimizations and their ranges is presented in Table A4. The parameters of the best models selected by the two BOHB optimizations are presented in Table A5.
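As an illustration of the budget schedule described above (Hyperband's inner loop only, not full BOHB), the following sketch keeps the best fraction of configurations each round while increasing the budget; evaluate is a placeholder that trains a configuration for the given budget and returns the MMD/MRR objective to be minimized.

def successive_halving(configs, evaluate, min_budget=1, eta=3, rounds=3):
    # Score every surviving configuration, keep the top 1/eta, grow the budget.
    budget = min_budget
    for _ in range(rounds):
        scores = [evaluate(cfg, budget) for cfg in configs]  # lower is better
        keep = max(1, len(configs) // eta)
        ranked = sorted(range(len(configs)), key=scores.__getitem__)
        configs = [configs[i] for i in ranked[:keep]]
        budget *= eta
    return configs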
Quantification of hyperparameter importance: After the optimization, we analyze hyperparameter importance with the approach presented in Hutter et al. (2014). A surrogate model (a random forest) is trained on the parameter configurations and the respective evaluation scores. This enables a functional analysis of variance (fANOVA), which allows for a quantification of hyperparameter importance in terms of the variance they explain. It also provides marginal predictions for each hyperparameter, which give insights into their optimal value settings. For the random forest, we used 1,000 trees with a maximum depth of 64 and repeated the estimation 100 times. We do so for all evaluated models of the first and second BOHB optimizations. The hyperparameter importances obtained from the first optimization (resp. second optimization) are presented in Figure A5 (resp. Figure 2). The first fANOVA showed that parameters related to the discriminator (learning rate, number of layers) are most important for model performance,1 and helped to select potentially important hyperparameters for the second analysis. Notably, the best model of the first optimization was already a well-performing model, but we chose to run a second optimization to better understand the role of key hyperparameters, to gain insight into potential good practice when designing conditional generative adversarial networks, and to further improve the performance of our model.
Table A4: Hyperparameters subject to BOHB optimization.
Name | Symbol | Values
Use chemophysical properties | - | Yes, No, only
Label embedding | - | one-hot, node2vec, Poincaré
Conditioning mechanism | - | projection, AC, both
AC weighting factor | γ | [1, 1000]†
Label smoothing factor | θ | [0, 0.5]
Latent noise dimension | dZ | [1, 1000]†
Input noise standard deviation | σ | [0, 1]
Generator learning rate | ηG | [1e-5, 1e-2]
Generator learning rate 2 | ηG2 | [1e-5, 1e-2]
Discriminator learning rate | ηD | [1e-5, 1e-2]
Discriminator learning rate 2 | ηD2 | [1e-5, 1e-2]
Training ratio | ncritic | [1, 10]
Learning rate schedule | - | constant, cosine, exponential
Schedule interval (in epochs) | i | [1, 18]
Generator layer number | nG | [1, 10]
Discriminator layer number | nD | [1, 10]
Strides | s | 1, 2, 4, 8
Filter size | f | [3, 12]
Generator skip | hG | [0, 10]
Discriminator skip | hD | [0, 10]
Number of projections | nP | [1, 5]
Output source layers | oS | [1, 3]
Output label layers | oL | [1, 3]
† Values were sampled on a logarithmic scale. AC = auxiliary classifier.
Table A5: Hyperparameters found in the first and second BOHB optimization. Values with an asterisk indicate the preset configurations in the second optimization.
Name | First | Second
Use chemophysical properties | Yes | No
Label embedding | one-hot | one-hot
Conditioning mechanism | both | both
AC weighting factor | 178 | 135
Label smoothing factor | 0.28 | -*
Latent noise dimension | 91 | 100*
Input noise standard deviation | 0.29 | -*
Generator learning rate | 2.0e-3 | 4.1e-4
Generator learning rate 2 | - | -*
Discriminator learning rate | 8.5e-4 | 4.0e-4
Discriminator learning rate 2 | - | -*
Training ratio | 1 | 1*
Learning rate schedule | constant | constant*
Schedule interval (in epochs) | - | -*
Generator layer number | 2 | 2*
Discriminator layer number | 3 | 2*
Strides | 4 | 8
Filter size | 8 | 12
Generator skip | - | -*
Discriminator skip | - | -*
Number of projections | 1 | 2
Output source layers | - | 1*
Output label layers | 2 | 1*
AC = auxiliary classifier.
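The fANOVA itself follows Hutter et al. (2014) and decomposes the surrogate's variance exactly; as a rough, hedged stand-in, the sketch below fits the same kind of random-forest surrogate and ranks hyperparameters by permutation importance, which is only a proxy for, and not identical to, the fANOVA importances.

from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

def surrogate_importance(X, y, n_trees=1000, max_depth=64, repeats=100, seed=0):
    # X: (n_models, n_hyperparameters) numerically encoded configurations.
    # y: (n_models,) objective of each trial, e.g. the MMD/MRR ratio.
    forest = RandomForestRegressor(n_estimators=n_trees, max_depth=max_depth,
                                   random_state=seed).fit(X, y)
    result = permutation_importance(forest, X, y, n_repeats=repeats,
                                    random_state=seed)
    return result.importances_mean  # higher = more influential hyperparameter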
The second fANOVA clarified the importance of the remaining hyperparameters, such as the use of chemophysical features and the label embeddings, among others (Figure 2).
We also show marginal predictions for hyperparameters of the first optimization in Figure A6, and for the second optimization in Figure A3 and Figure A4.
Selection of the final model: The 27 best models selected in the second hyperparameter search were then trained for a prolonged duration of 100 epochs, during which the conditioning mechanism and the associated weighting factor became most important, according to the last fANOVA study (Figure A4). We evaluated twice per epoch and selected the weights of the final model at the checkpoint that showed the best (smallest) ratio MMD/MRR on the validation set. The final model, ProteoGAN, is a convolutional conditional generative adversarial network with two conditioning mechanisms: an auxiliary classifier and projections. The dimensions of the convolutional layers follow the pyramidal architecture of DCGAN (Radford et al., 2015), i.e., with increasing output length and decreasing filter depth for the generator, and vice versa for the discriminator. The other hyperparameters are presented in Table A5.
1 Some other important factors were learning rate schedule-related parameters such as Generator learning rate 2 or the schedule. We realized that these were detrimental to model performance, as the short duration of training in the optimization did not allow estimating the long-term effects seen in the selected models that were trained for 100 epochs.
Figure A3: Marginal predictions of hyperparameters based on optimization data in the second optimization (panels: AC weighting factor, conditioning mechanism, discriminator learning rate, generator learning rate, kernel size, label embedding, projections, strides, use of chemophysical features). Predictions were obtained training on MMD and MRR. Note that for MMD, lower is better.
Figure A4: Marginal predictions of hyperparameters based on the data of the 27 best selected models in the second optimization (panels as in Figure A3). Predictions were obtained training on MMD and MRR. Note that for MMD, lower is better.
Figure A5: Hyperparameter importance for the first BOHB optimization. Shown are all hyperparameters subject to optimization for all models (left), and a manual selection of models that was trained for 100 epochs (right)." }, { "heading": "A.3 METHODS", "text": "In this section, we first describe in detail the state-of-the-art conditioning mechanisms and the constructed variants that we used in this project. Second, we introduce three variants of MRR suited for hierarchically structured labels. Finally, we assess the proposed evaluation measures for cGANs by estimating empirical worst and best bounds for our experimental setting." }, { "heading": "A.3.1 CONDITIONING MECHANISM", "text": "In this section, we first detail how we adapted the Wasserstein loss to the conditional setting, then we describe the objective functions of state-of-the-art conditional GANs and the variants used in this project.
Loss function of conditional GANs: Our models are trained with the Wasserstein objective with gradient penalty from Gulrajani et al. (2017).
As a reminder, the WGAN-GP losses can be written as follows:

$L_D = \mathbb{E}_{q(x)}[D(x)] - \mathbb{E}_{p(x)}[D(x)] + \lambda\, \mathbb{E}_{m(\hat{x})}\big[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\big]$
$L_G = -\mathbb{E}_{q(x)}[D(x)]$ (3)

where x ∼ p(x) is the data distribution and x ∼ q(x) is the generator model distribution, x̂ is an interpolated sample between a real sequence and a generated one, m is the distribution of interpolated samples, D is the discriminator (or critic), L_D the loss of the discriminator and L_G the loss of the generator. The term $\mathbb{E}_{m(\hat{x})}[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2]$ ensures that the discriminator is Lipschitz continuous.
Figure A6: Marginal predictions of hyperparameters based on data in the first optimization (panels: AC weighting factor, discriminator layer number, discriminator learning rate, generator layer number, input noise, training ratio, strides, use of chemophysical features, latent dimensionality, and the AC weighting factor of the selected models). We show some selected predictions that allowed for interpretation; all others were inconclusive. If not otherwise noted, data comes from all trials in the optimization. Predictions were obtained training on MMD and MRR. Note that for MMD, lower is better.
To be able to use the Wasserstein objective with gradient penalty in the conditional setting of the projection cGAN (Miyato & Koyama, 2018) (see below), we had to adapt the objective to include the label information. Let (x, y) ∼ p be a sample from the dataset, where x is the sequence and y the label. Let D be the discriminator and G the generator. Let q be the generator model distribution, such that y → q(y) is defined by the user and x → q(x|y) is learned. In practice, q(y) follows the label distribution of the data, p(y). Let x̂ be an interpolated sequence between a real sequence and a generated one. We call x̂ → m(x̂|y) the distribution of interpolated sequences given a label y. Let λ be the weighting factor introduced in Gulrajani et al. (2017). Taking conditional information into account, the discriminator and generator losses can be expressed as follows:

$L_D = \mathbb{E}_{q(y)}\big[\mathbb{E}_{q(x|y)}[D(x, y)]\big] - \mathbb{E}_{p(y)}\big[\mathbb{E}_{p(x|y)}[D(x, y)]\big] + \lambda\, \mathbb{E}_{p(y)}\big[\mathbb{E}_{m(\hat{x}|y)}[(\lVert \nabla_{\hat{x}} D(\hat{x}, y) \rVert_2 - 1)^2]\big]$
$L_G = -\mathbb{E}_{q(y)}\big[\mathbb{E}_{q(x|y)}[D(x, y)]\big]$ (4)

This formulation ensures that the Lipschitz constraint imposed on the discriminator in the unconditional WGAN-GP objective holds for each class.
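A minimal PyTorch-style sketch of the conditional critic loss in Equation (4); D(x, y) stands for any label-aware critic, and the tensor shapes and names are illustrative assumptions rather than the exact ProteoGAN implementation.

import torch

def conditional_wgan_gp_d_loss(D, x_real, x_fake, y, lam=10.0):
    # Wasserstein term of Eq. (4) plus the per-class gradient penalty.
    loss = D(x_fake, y).mean() - D(x_real, y).mean()
    # Interpolate real and generated sequences that share the same label y.
    eps = torch.rand(x_real.size(0), *([1] * (x_real.dim() - 1)), device=x_real.device)
    x_hat = (eps * x_real + (1 - eps) * x_fake).requires_grad_(True)
    grads = torch.autograd.grad(D(x_hat, y).sum(), x_hat, create_graph=True)[0]
    penalty = ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
    return loss + lam * penalty

def conditional_wgan_g_loss(D, x_fake, y):
    return -D(x_fake, y).mean()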
Case of the projection cGAN model (Miyato & Koyama, 2018): In the conditional GAN with a projection discriminator, the discriminator is decomposed into a sum of two terms: the first is the inner product between a label embedding and an intermediate transformation of the input, and the second depends solely on the input x. This expression of the projection discriminator can be derived by assuming that the label is categorical and that the log-likelihoods of both the data and the target distribution can be written as log-linear models. Let y → v(y) be a linear projection of the label into a label embedding. Let φθ be a vector-valued function applied to the input x and ψγ a scalar function applied to φθ(x). Let A be an activation function of choice. The projection discriminator of Miyato & Koyama (2018) can then be written as:

$D(x, y) = A(f(x, y)) = A\big(v(y)^{T} \phi_{\theta}(x) + \psi_{\gamma}(\phi_{\theta}(x))\big)$ (5)

The label information is therefore introduced via an inner product. In practice, the discriminator is equipped with a projection layer that takes the inner product between the embedded label and an intermediate output of the discriminator. This formulation leads to a more stable algorithm compared to a simple concatenation of the label with the input, potentially thanks to the introduction of a form of regularization on the discriminator.
In this project, we also tested the possibility of including several projections in the discriminator. In addition to the previous notation, let us assume that we have k projections. Let $\{g_i\}_{i=1}^{k}$ be k neural networks, each of which can be decomposed into $n_i$ layers, $g_i = l^{i}_{n_i} \circ l^{i}_{n_i - 1} \circ \cdots \circ l^{i}_{2} \circ l^{i}_{1}$. Let $\{p_i\}_{i=1}^{k}$ be the layer numbers at which the inner product with the output of the projections $\{v_i\}_{i=1}^{k}$ occurs in each network. The expression of the discriminator is then given by:

$D(x, y) = A(f(x, y)) = A\Big(\sum_{i=1}^{k} \big(v_i(y)^{T}\, l^{i}_{p_i} \circ \cdots \circ l^{i}_{1}(x) + g_i(x)\big)\Big)$ (6)

In practice we share the lower-layer parameters and allow for up to four projections. Our BOHB hyperparameter searches did not show evidence of the superiority of projection mechanisms for conditioning purposes when they are the only type of conditioning mechanism in the network. However, the projection models were able to generate sequences similar to naturally occurring ones (low MMD).
Case of the cGAN model with auxiliary classifier (Odena et al., 2017): As opposed to projection cGANs, cGANs with an auxiliary classifier add a term to the generator and discriminator losses to incorporate the log-likelihood of the correct labels (compare Equation 4). In addition to the notation introduced for Equation 4, let $C_D$ be the auxiliary classifier, ce the cross entropy and γ a weighting factor. The loss functions of cGANs with auxiliary classifiers can be written as:

$L_D = \mathbb{E}_{q(x)}[D(x)] - \mathbb{E}_{p(x)}[D(x)] + \lambda\, \mathbb{E}_{m(\hat{x})}\big[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\big] + \gamma\, \mathbb{E}_{p(y)}\big[\mathbb{E}_{p(x|y)}[ce(C_D(x), y)]\big]$
$L_G = -\mathbb{E}_{q(x)}[D(x)] + \gamma\, \mathbb{E}_{p(y)}\big[\mathbb{E}_{q(x|y)}[ce(C_D(x), y)]\big]$ (7)

$C_D$ typically shares weights with D and is trained when minimising $L_D$, but is fixed when minimising $L_G$. In our work, we compare both types of conditional GANs (a GAN equipped with an auxiliary classifier or with multiple projections at several layers, see Equation 6) to a third proposed model that combines both mechanisms. It is important to note that in this case the label information introduced in the projection may not be shared with the auxiliary classifier (compare Figure 1). The fANOVA analysis performed on the second BOHB optimization results shows that the combination of both mechanisms helps to obtain a better-performing conditioning mechanism, as measured by MRR." }, { "heading": "A.3.2 EVALUATION MEASURES FOR CONDITIONAL GENERATIVE MODELS IN THE CASE OF HIERARCHICAL LABELS", "text": "In addition to the evaluation measures MMD and MRR described in Section 3.2.1 of the main document, we built three variants of MRR in order to better characterise the effectiveness of the conditional generation and its ability to handle hierarchical multi-label settings. We look at sub-measures of MRR where either all parent nodes (MRR_P), all child nodes (MRR_C), or both (MRR_B) are ignored in the ranking of a conditional MMD term. Removing parent or child terms attempts to determine to which degree the conditioning mechanism leads to severe off-target generation of sequences of unrelated labels. Indeed, these three alternative measures do not penalize the model if the generated distribution of sequences for a given label is closer to the parents' or children's sequences than to the target's sequences, which is less severe than off-target generation of sequences of an unrelated label. Therefore, good MRR_X values would indicate that our model is able to conditionally generate sequences up to closely related (parent or child) functions. Additionally, comparing the MRR_X values among themselves and with MRR gives insight into the conditional generation performance for closely related labels. For example, an MRR_P (resp. MRR_C) value much larger than MRR could indicate that sequences generated with a target function are often closer to the natural sequences exhibiting the parent (resp. child) functional label, i.e., the model is too general, or too specific, respectively.
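For concreteness, a hedged sketch of how MRR and the MRR_X variants can be computed from conditional MMD terms; mmd stands for any MMD implementation on sequence features, and the optional exclude mapping (e.g. GO parents and/or children per label) is an assumption of this sketch, with the exact definition given in Section 3.2.1 of the main document.

def mean_reciprocal_rank(gen_sets, real_sets, mmd, exclude=None):
    # gen_sets / real_sets: dict mapping label -> set of (generated / natural) sequences.
    reciprocal_ranks = []
    for target, generated in gen_sets.items():
        ignored = set() if exclude is None else set(exclude.get(target, ()))
        dists = {label: mmd(generated, real) for label, real in real_sets.items()
                 if label == target or label not in ignored}
        ranking = sorted(dists, key=dists.get)        # closest label first
        reciprocal_ranks.append(1.0 / (ranking.index(target) + 1))
    return sum(reciprocal_ranks) / len(reciprocal_ranks)

# MRR_P ignores parent labels in the ranking; MRR_C and MRR_B are analogous:
# mrr_p = mean_reciprocal_rank(gen_sets, real_sets, mmd, exclude=go_parents)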
" }, { "heading": "A.3.3 ASSESSMENT OF THE EVALUATION MEASURES FOR CGANS", "text": "We assess the quality of the proposed evaluation measures by constructing "best case" and "worst case" scenarios, to understand what a perfect success or a failure mode of our model would look like. We consider as "best case" the case where the generative model generates sequences that are observed. The "worst case" scenario differs depending on the evaluation measure.
Scenarios for MMD: MMD has theoretical bounds of $[0, \sqrt{2}]$, attained if the sequences in both sets are self-similar and totally dissimilar from each other. As we aim to compare real sequences to generated sequences that resemble real sequences, we fix one set to be a collection of n natural protein sequences and the second set to be n other natural protein sequences modified with different percentages of random noise. In practice, the set of natural sequences is the test set used to report the results in the main document, and the random noise is injected in the form of single-point mutations to the sequences of the second set. The results are reported in Table A6 and indicate that MMD is a proxy for the quality of the generated sequences. We observe that MMD increases with the amount of noise injected into the sequences of the second set. The generation of close-to-constant sequences is a plausible failure mode of the GAN and would lead to a very high MMD value (last row). The lengths of the sequences of the mutated set were conserved; however, we also report MMD with respect to fully random sequences of maximum length. The MMD value between two sets of real sequences is around 0.0237; adding 1% of noise to the sequences in one of the sets leads to an MMD value of 0.0240, 10% of noise to 0.0324, and 20% of noise to 0.0484. In comparison, in biology, proteins have been shown to remain viable with up to 30-60% of mutations in the amino acids of their sequences (Repecka et al., 2019; Markiewicz et al., 1994; Ng & Henikoff, 2001). We also report empirical p-values following Borgwardt et al. (2006), under the null hypothesis that the two sets are from the same distribution. These were obtained by ranking the original MMD statistic among 1000 statistics computed after randomly reassigning the aggregated sequences to the two sets.
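A minimal sketch of the two-sample statistic and the permutation p-value described above; the feature matrices are assumed to be precomputed (e.g. ProFET-style features), and the simple feature-mean MMD below stands in for the kernel MMD used in the main document.

import numpy as np

def mmd(X, Y):
    # Linear-kernel MMD between two feature matrices (one row per sequence).
    return float(np.linalg.norm(X.mean(axis=0) - Y.mean(axis=0)))

def mmd_p_value(X, Y, n_perm=1000, seed=0):
    # Empirical p-value under H0 that both sets come from the same distribution.
    rng = np.random.default_rng(seed)
    observed = mmd(X, Y)
    pooled = np.vstack([X, Y])
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        hits += mmd(pooled[perm[:len(X)]], pooled[perm[len(X):]]) >= observed
    return hits / n_perm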
Scenarios for MRR: Since MRR is a conditional measure, we constructed the "worst case" sample as a set of natural protein sequences with randomized label assignments. This aims to simulate a generative model that produces well-formed sequences but ignores the conditioning objective. One could also construct a scenario that simulates an antagonistic model that actively assigns wrong labels instead of random ones; this will likely not occur in practice, though. Table A7 shows the MRR values for a real data sample and for the same sample with randomized labels. The reference for MRR was again the test set; the evaluated sample was either an equally structured set of real sequences ("Positive Control", Table A7) or the same set in which the label annotations were randomly shuffled among the sequences ("Negative Control", Table A7). The MRR evaluates a set of sequences with respect to the 50 selected labels. We also look at the sub-measures of MRR where either all parent terms (MRR_P), all child terms (MRR_C), or both (MRR_B) are ignored in the ranking of a term. This gives additional insight into how well the model works with respect to the up- and downstream labels in the GO DAG.
Table A6: MMD values and empirical p-values for different percentages of mutations.
Sample | MMD | p-value
Dataset Sample | 0.0237 | 0.1499
Dataset Sample + 1% noise | 0.0240 | 0.0370
Dataset Sample + 2% noise | 0.0243 | 0.0050
Dataset Sample + 3% noise | 0.0248 | 0
Dataset Sample + 5% noise | 0.0262 | 0
Dataset Sample + 10% noise | 0.0324 | 0
Dataset Sample + 20% noise | 0.0484 | 0
Dataset Sample + 30% noise | 0.0660 | 0
Dataset Sample + 50% noise | 0.1009 | 0
Dataset Sample + 100% noise | 0.1788 | 0
100% noise (maximum length) | 0.3044 | 0
Constant (all leucine) | 1.0258 | 0
Table A7: Best and worst case MRR, as well as the model evaluations of the main text with the extended set of MRR measures. Positive Control simulates a perfect model, Negative Control a model that ignores conditional information.
Model | MRR | MRR_P | MRR_C | MRR_B
Positive Control | 0.7887 | 0.8260 | 0.8520 | 0.8925
Negative Control | 0.0909 | 0.1177 | 0.0923 | 0.1196
ProteoGAN (ours) | 0.5956 ± 0.0237 | 0.7018 ± 0.0180 | 0.6494 ± 0.0237 | 0.7588 ± 0.0166
Unconditional | 0.5219 ± 0.0195 | 0.6089 ± 0.0242 | 0.5729 ± 0.0205 | 0.6643 ± 0.0225
Predictor-guided | 0.1071 | 0.1367 | 0.1261 | 0.1562
Greener et al. | 0.3132 ± 0.0161 | 0.3658 ± 0.0157 | 0.3775 ± 0.0154 | 0.4306 ± 0.0150
PepCVAE (L=32) | 0.1910 ± 0.0137 | 0.2038 ± 0.0156 | 0.2103 ± 0.0195 | 0.2267 ± 0.0194
ProteoGAN (L=32) | 0.3159 ± 0.0205 | 0.3464 ± 0.0207 | 0.3532 ± 0.0240 | 0.3892 ± 0.0221
Results for the MRR variants: Table A7 shows the results of the MRR variants for some models. The results confirm that our model is better at conditional generation than the baselines, including the baseline that consists of training 50 unconditional models. The comparison between MRR variants suggests that generated proteins often resemble proteins of the target class. When this is not the case, they are often similar to their parent class, which makes sense as that class is more general. The small difference between the MRR_B value of our model and that of the positive control indicates that the model rarely generates sequences that resemble proteins of an unrelated class. Additionally, compared to the controls, the MRR_C values are relatively low compared to MRR, which suggests that the model does not tend to generate sequences of more specific child labels, which would be detrimental in a biological application." }, { "heading": "A.3.4 LOSSES AND REAL-TIME EVALUATION OF THE FINAL MODEL", "text": "The loss function of the final model presented in the main document, combining projection and auxiliary classifier, is shown in Figure A7. We monitored the duality gap (red), for which we split the training data into an adversary-finding set and a test set of 1% of the training set each. The duality gap is well behaved, with a fast convergence to 0, indicating that there is no mode collapse and suggesting that the samples are of reasonable quality.
Also, the evaluations of MMD and MRR can be monitored during training (evaluated twice per epoch), which provides valuable information for model selection and early stopping.
A.4 RESULTS WITH A SMALLER TESTSET SIZE
Table A8: Evaluation of ProteoGAN and various baselines with our proposed measures (MMD and MRR) and NetGO (Fmax) on a smaller test set (n=300, ca. 2% of the data). An arrow indicates that lower (↓) or higher (↑) is better. Given are mean values over n = 10 (MMD/MRR) and n = 3 (Fmax) different random seeds for the latent variable of the model. Note that models marked with (L=32) have been trained and evaluated on a set of truncated sequences and are hence not directly comparable to the other values. Also, since it lacks multi-label conditioning, the Unconditional model was conditioned on different label sets than the other models and controls.
Model | MMD↓ | MRR↑ | Fmax↑
Positive Control | 0.0237 | 0.7887 | 0.7705
Negative Control | 1.0258 | 0.0909 | 0.3485
ProteoGAN (ours) | 0.0463 ± 0.0003 | 0.5956 ± 0.0237 | 0.4178 ± 0.0004
Unconditional | 0.0380 ± 0.0010 | 0.5219 ± 0.0195 | 0.3050 ± 0.0024
Predictor-guided | 0.0428 | 0.1071 | 0.4776
Greener et al. | 0.1611 ± 0.0012 | 0.3132 ± 0.0161 | 0.4658 ± 0.0020
PepCVAE (L=32) | 0.1504 ± 0.0054 | 0.1910 ± 0.0138 | 0.4140 ± 0.0003
ProteoGAN (L=32) | 0.0372 ± 0.0005 | 0.3160 ± 0.0205 | 0.4147 ± 0.0005
Figure A7: Losses and evaluations at training time. W = Wasserstein, AC = Auxiliary Classifier.
Figure A8: Distributions of pairwise distances in kernel feature space between the test set and the training set (red) and between a generated set of ProteoGAN and the training set (green); the x-axis shows the minimum squared Euclidean distance between each sequence in the test/generated set and all sequences in the training set, the y-axis shows counts. It can be seen that the generated sequences are not closer to the training set than the test set (which would indicate overfitting). Further, the generated sequences are about as far, but not further, away from the training set than the test set." } ]
2020
null
SP:9ebf89e9e24ce1a745f97b9d33bb5ec9979e60e5
[ "This paper proposes an adaptive neighbor clustering method that estimates a normal distribution in the representation space. The proposed neighbor clustering utilizes the acceptable range of each dimension of each instance, derived from the estimated variance, which leads to a different neighbor set size for each instance and improves neighbor clustering performance. In addition, the proposed neighbor clustering method can replace the KNN-based neighbor clustering in the previous SCAN (Semantic Clustering by Adopting Nearest neighbors) framework for semantic image clustering." ]
Unsupervised representation learning is essential in the field of machine learning, and accurate neighbor clusters of representations show great potential to support unsupervised image classification. This paper proposes a VAE (Variational Autoencoder) based network and a clustering method to achieve adaptive neighbor clustering in support of self-supervised classification. The proposed network encodes the image into a representation with boundary information, and the proposed clustering method takes advantage of the boundary information to deliver adaptive neighbor clustering results. Experimental evaluations show that the proposed method outperforms state-of-the-art representation learning methods in terms of neighbor clustering accuracy. In particular, AC-VAE achieves 95% and 82% accuracy on the CIFAR10 dataset when the average neighbor cluster sizes are 10 and 100, respectively. Furthermore, the neighbor clustering results are found to converge within the clustering range (α ≤ 2), and the converged neighbor clusters are used to support self-supervised classification. The proposed method delivers classification results that are competitive with the state of the art and removes the hyperparameter k of KNN (k-nearest neighbors), which is often required in self-supervised classification.
[]
[ { "authors": [ "Yuki Markus Asano", "Christian Rupprecht", "Andrea Vedaldi" ], "title": "Self-labelling via simultaneous clustering and representation learning", "venue": "arXiv preprint arXiv:1911.05371,", "year": 2019 }, { "authors": [ "Christopher P Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in beta-vae", "venue": "arXiv preprint arXiv:1804.03599,", "year": 2018 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Julien Mairal", "Armand Joulin" ], "title": "Unsupervised pre-training of image features on non-curated data", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Jianlong Chang", "Lingfeng Wang", "Gaofeng Meng", "Shiming Xiang", "Chunhong Pan" ], "title": "Deep adaptive image clustering", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Kevin Swersky", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "Big selfsupervised cdels are strong semi-supervised learners", "venue": "arXiv preprint arXiv:2006.10029,", "year": 2020 }, { "authors": [ "Yuanyuan Chen", "Lei Zhang", "Zhang Yi" ], "title": "Subspace clustering using a low-rank constrained autoencoder", "venue": "Inf. Sci.,", "year": 2018 }, { "authors": [ "Adam Coates", "Andrew Ng", "Honglak Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Hongli Deng", "Lei Zhang", "Lituan Wang" ], "title": "Global context-dependent recurrent neural network language model with sparse feature learning", "venue": "Neural Comput. 
Appl.,", "year": 2019 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "arXiv preprint arXiv:1803.07728,", "year": 2018 }, { "authors": [ "Philip Haeusser", "Johannes Plapp", "Vladimir Golkov", "Elie Aljalbout", "Daniel Cremers" ], "title": "Associative deep clustering: Training a classification network with no labels", "venue": "In German Conference on Pattern Recognition,", "year": 2018 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": null, "year": 2016 }, { "authors": [ "Weihua Hu", "Takeru Miyato", "Seiya Tokui", "Eiichi Matsumoto", "Masashi Sugiyama" ], "title": "Learning discrete representations via information maximizing self-augmented training", "venue": "arXiv preprint arXiv:1702.08720,", "year": 2017 }, { "authors": [ "Xu Ji", "João F Henriques", "Andrea Vedaldi" ], "title": "Invariant information clustering for unsupervised image classification and segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Xi Peng", "Huajin Tang", "Lei Zhang", "Zhang Yi", "Shijie Xiao" ], "title": "A unified framework for representation-based subspace clustering of out-of-sample and large-scale data", "venue": "IEEE Trans. Neural Networks Learn. 
Syst.,", "year": 2016 }, { "authors": [ "Ali Razavi", "Aaron van den Oord", "Oriol Vinyals" ], "title": "Generating diverse high-fidelity images with vq-vae-2", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Wouter Van Gansbeke", "Simon Vandenhende", "Stamatios Georgoulis", "Marc Proesmans", "Luc Van Gool" ], "title": "Scan: Learning to classify images without labels", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Junyuan Xie", "Ross Girshick", "Ali Farhadi" ], "title": "Unsupervised deep embedding for clustering analysis", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Jianwei Yang", "Devi Parikh", "Dhruv Batra" ], "title": "Joint unsupervised learning of deep representations and image clusters", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Colorful image colorization", "venue": "In European conference on computer vision,", "year": 2016 } ]
[ { "heading": null, "text": "1 INTRODUCTION
Unsupervised representation learning is a long-standing interest in the field of machine learning (Peng et al., 2016a; Chen et al., 2016; 2018; Deng et al., 2019; Peng et al., 2016b). It offers a promising way to scale up the amount of usable data for current machine learning methods without the need for human annotation, by leveraging the vast amount of unlabeled data (Chen et al., 2020b;a). Recent works (Chen et al., 2020b;a; He et al., 2020) advocate structuring unsupervised representation learning as a pre-training stage and then applying semi-supervised or self-supervised techniques to the learned representations in a fine-tuning stage. The representation learning thus acts as a feature extractor that extracts semantic features from the image, and well-extracted features should lead to excellent classification performance (He et al., 2020). Moreover, representation learning assigns close vectors to images with similar semantic meanings, thus making it possible to cluster images with the same meaning together (Xie et al., 2016; Van Gansbeke et al., 2020). When no label is available, unsupervised or self-supervised classification methods rely on neighbor clustering to provide the supervisory signal that guides the self-supervised fine-tuning process (Van Gansbeke et al., 2020; Xie et al., 2016). In this scenario, accurately clustering neighbors among representations is crucial for the subsequent classification fine-tuning.
In many of the prior unsupervised methods (Van Gansbeke et al., 2020; Xie et al., 2016), the neighbor clustering process is performed by KNN (k-nearest neighbor) based methods. However, KNN-based methods introduce k as a hyperparameter, which needs to be tuned for each dataset. In an unsupervised setup, selecting a suitable k without any annotation or prior knowledge is not straightforward. It is therefore desirable to have a neighbor clustering process that automatically adapts to different datasets, eliminating the need to pre-select the hyperparameter k.
To achieve adaptive neighbor clustering, the proposed method encodes the image representation into a multivariate normal distribution, as the multivariate normal distribution provides distance information, such as the z-score, which can naturally adapt to different datasets without any additional mechanism. Prior works (Kingma & Welling, 2013; Higgins et al., 2016; Burgess et al., 2018) showed VAE's ability to encode images into multivariate normal distributions; nonetheless, these works struggled to extract high-level semantic features, as most of them were trained on image recovery tasks, which encourage the network to focus on low-level imagery features. Consequently, the extracted low-level features cannot be utilized by unsupervised classification methods, which need semantic features to function.
To provide VAE with the ability to extract high-level semantic features, as well as to utilize its strength to produce adaptive clusters, this paper proposes a framework, AC-VAE, consisting of a VAE based network and a z-score based clustering method, as shown in Figure 1. The VAE based network encodes the image into a multivariate normal distribution N(µ, Σ). The distribution's mean µ is taken as the representation; meanwhile, its z-score provides boundary information that naturally adapts to different datasets.
The proposed clustering method takes advantage of the boundary information to achieve adaptive neighbor clustering. The proposed framework's efficacy is evaluated on the CIFAR10, CIFAR100-20, and STL10 datasets, and it surpasses the current state-of-the-art methods in neighbor clustering on these datasets. In particular, AC-VAE achieves 95% and 82% accuracy on the CIFAR10 dataset when the average neighbor cluster sizes are 10 and 100, surpassing the current state-of-the-art method by a margin of 10%. Our main innovations and contributions can be summarized as follows:
• This work proposes a VAE based network that encodes the image into a representation together with its boundary information. The representation and boundary information are retrieved from the multivariate normal distribution encoded from the image. The efficacy of the adaptive boundary is demonstrated by the neighbor clustering results.
• A loss function based on consistency regularization is proposed to train the VAE based network to extract high-level semantic features from the image. Experiments demonstrate that the proposed method assigns close vectors to images with similar semantic meanings.
• This work proposes a clustering method that takes advantage of the adaptive boundary of each representation. The proposed method delivers highly accurate neighbor clusters. Moreover, the neighbor clusters are found to converge within the clustering range (α ≤ 2), and the self-supervised learning framework utilizing the converged clusters delivers competitive results without the need to pre-select the parameter k.
2 RELATED WORKS
Many frameworks cluster the dataset directly into semantic classes and train the network in an end-to-end manner (Asano et al., 2019; Caron et al., 2019; Haeusser et al., 2018; Yang et al., 2016; Xie et al., 2016). Although end-to-end training is easy to apply, the network's initialization largely influences these frameworks' performance. Therefore, complex mechanisms (such as cluster reassignment) are needed to assist the clustering process. As an alternative approach, methods (Caron et al., 2018; Hu et al., 2017; Yang et al., 2016) based on maximizing the mutual information between image augmentations have been proposed to address this issue.
In contrast to end-to-end training, a multi-stage method (Van Gansbeke et al., 2020) has been introduced, which first aims to obtain accurate neighbor clusters from representation learning and then applies these neighbor clusters in the subsequent fine-tuning; this method made breakthroughs in unsupervised classification. It depends mainly on accurate neighbor cluster results from the representation learning. A large number of representation learning methods (Doersch et al., 2015; Gidaris et al., 2018; Noroozi & Favaro, 2016; Pathak et al., 2016; Zhang et al., 2016) have been introduced; these methods usually assign pretext tasks to the network, and the network learns the image representation by solving these tasks. However, most of these methods aim to use the learned representations in subsequent supervised or semi-supervised tasks. Therefore, the neighbor clustering performance of the learned representations is not optimized, and few of these methods have strong neighbor clustering performance.
SimCLR (Chen et al., 2020a) and MoCo (He et al., 2020), which utilize consistency regularization, outperform other methods in neighbor clustering performance and assisted the SCAN framework (Van Gansbeke et al., 2020) in reaching state-of-the-art results in unsupervised classification tasks. However, the SCAN framework needs to pre-select the hyperparameter k to perform the KNN clustering from the start.
This paper aims to provide an adaptive clustering method that needs no such hyperparameter, by creating a boundary for each representation. The representation and its boundary information are retrieved from a VAE based structure. VAE based networks are typically used for image generation tasks (Kingma & Welling, 2013; Razavi et al., 2019) or disentanglement tasks (Higgins et al., 2016; Burgess et al., 2018). Although VAE shows the potential to encode images into multivariate normal distributions (Razavi et al., 2019; Burgess et al., 2018), the efficacy of utilizing VAE to extract high-level representations has not been heavily studied. Moreover, VAEs are usually trained on different forms of image recovery tasks, which keep VAE away from extracting high-level semantic features.
This paper adopts consistency regularization to train the proposed VAE based network to extract the high-level representation and its boundary information. Moreover, a clustering method is proposed that utilizes this boundary information to deliver adaptive cluster results. In the end, these adaptive clusters are utilized in the unsupervised classification task.
3 METHOD
The following sections first present the generative network that produces the representation and its boundary information, and then introduce the clustering method that benefits from the boundary information.
3.1 GENERATIVE MODEL
In the unsupervised setup, the ground truth of the desired image representation is not available. However, there are general assumptions about the desired representation's behavior pattern, i.e., how desired representations should interact with each other; for example, images showing the same kind of object should have similar semantic representations. This paper introduces a latent vector that controls the representation's behavior pattern and utilizes a VAE based network to generate the representation from this latent vector. The proposed network aims to generate representations that follow the behavior assumptions of the expected representation. In that case, the generated representation and the expected one would share the same latent vector, as the latent vector determines the representation's behavior pattern. However, the generated representation may differ from the expected one, even when they have the same behavior pattern. It is hard to directly generate the desired representation, which would require ground truth to train from the start. Therefore, this paper adopts the latent vector as a close approximation of the desired representation.
The proposed VAE based network is shown in Figure 2. This paper models the latent vector as a multivariate normal distribution N(x; µ, Σ) encoded from the image x by an encoder e(x). Then, a sample z is drawn from this distribution by a stochastic process. This random process creates variations that allow the training process to probe the acceptable range of this latent vector. Besides, the distribution's mean µ is also taken as a standard latent vector to provide a short-cut for better encoder training. As the encoder is a deep neural network for extracting high-level semantic features, the stochastically sampled latent vector z alone cannot provide a stable guide to train the deep encoder. The sum of the sampled latent vector z and the standard latent vector µ is fed into the decoder d(x) to generate the representation r.
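A minimal PyTorch-style sketch of this forward pass; the 512-dimensional encoder output, the module names and the layer sizes are illustrative assumptions that follow the setup described in Section 4.1.

import torch
import torch.nn as nn

class ACVAE(nn.Module):
    def __init__(self, encoder, feat_dim=512, latent_dim=512, rep_dim=64):
        super().__init__()
        self.encoder = encoder                      # e.g. a ResNet-34 backbone
        self.to_mu = nn.Linear(feat_dim, latent_dim)
        self.to_logvar = nn.Linear(feat_dim, latent_dim)
        self.decoder = nn.Sequential(               # two-layer fully connected decoder
            nn.Linear(latent_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, rep_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)        # reparameterized sample
        r = self.decoder(z + mu)                    # sum of sampled and standard latent
        return r, mu, std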
The network is trained by behavior regulations, which impose constraints on the generated representation. This work adopts consistency regularization, a commonly used behavior assumption for semantic representations, which requires an image and its augmentations to have the same semantic representation. The consistency regularization can be performed by minimizing the behavior loss stated in Equation (1):

$BL_x = d(r_i, r'_i) = d(v(x_i), v(T(x_i)))$ (1)

in which T(x_i) is an augmentation of image x_i, r_i is the representation of x_i generated by the proposed network v(·), and d(·, ·) measures the distance between two representations.
As the proposed network is based on VAE, the loss function of the vanilla VAE (Kingma & Welling, 2013) is adapted to train the proposed network by replacing its image recovery loss with the behavior loss BL_x. The loss function used to train the proposed network is shown in Equation (2):

$\mathbb{E}_{z \sim Q(z|x)}[\log P(r|z)] - KL[Q(z|x)\,\|\,P(z)]$ (2)

in which P(r|z) is the distribution of the representation r generated by the latent vector z, P(z) is the distribution of the latent vector, Q(z|x) is the distribution of z given x, and KL[Q(z|x) || P(z)] is the KL divergence between Q(z|x) and P(z). As mentioned earlier, the latent distribution acts as a close approximation of the desired representation. The mean µ is regarded as the image representation, and the z-score of the distribution characterizes its boundary information.
3.2 NEIGHBOR CLUSTERING METHOD
This work clusters the images based on z-score boundary information. The insight is that, when an image is encoded into a distribution, the image's close variations should be located within a small z-score range of this distribution. Figure 3 (a) illustrates the representation and its boundary information, the z-score range, in a five-dimensional distribution. For neighbor clustering, a neighbor whose mean µ falls into the required z-score ranges will be clustered; an illustration of the cluster criterion is shown in Figure 3 (b). This cluster criterion is strict, as it requires the clustered neighbor not only to be close to the root of the cluster but also to have a similar flow as the root.
For fast calculation, this work proposes a z-score based distance vector, in which each element of the vector corresponds to the distance in one dimension. The z-score is used because a direct comparison of z-scores between different normal distributions is not straightforward. The proposed z-score based distance vector d(x_i, x_j) between x_i and x_j is shown in Equation (3):

$d(x_i, x_j) = \frac{\operatorname{abs}(\mu_i - \mu_j)}{2\alpha\,\sigma_i} - 0.5$ (3)

in which µ_i is the mean of x_i's latent distribution, and σ_i collects the diagonal elements of the covariance matrix Σ. The parameter α controls the z-score range to be used. When α = 1, the distance is normalized by the z-score range [-1, 1]. This work expands the z-score range to increase the cluster size until it reaches [-2, 2], covering more than 95% of the normal distribution. A sample that falls outside the z-score range of [-2, 2] is unlikely to come from this distribution's population; therefore, the z-score range is limited to [-2, 2]. To be clustered, every element of the z-score based distance vector must be no larger than 0. Note that d(x_i, x_j) may not equal d(x_j, x_i), as the z-score ranges of different representations may differ from each other, as demonstrated in Figure 3 (c).
By modifying α in Equation (3), the z-score based distance changes accordingly. The cluster threshold α indicates how strict the clustering criterion is: a small α tightens the cluster restriction, and a large α relaxes it. As the experiments in Section 4 will demonstrate, cluster convergence after α surpasses a certain value is observed on all evaluated datasets, so α needs no fine-tuning for each dataset.
Notably, early convergence is observed in the experiments, in which the cluster size stops increasing before it reaches the desired cluster number. This situation is introduced by the strict clustering method, which requires the neighbor representation to satisfy the criterion in all dimensions. In some cases, the clustering criteria are hard to reach; hence the clustering process stops. To address this issue, this work introduces the loose match strategy with a parameter θ (0 < θ < 1). Instead of requiring a full match in every dimension as in the standard clustering process, a certain fraction of mismatches, 1 − θ, is accepted, as demonstrated in Figure 3 (d). The loose match strategy is a backup method and is unnecessary for the unsupervised classification, as will be demonstrated in the experiment section.
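A NumPy sketch of the clustering rule in Equation (3), including the loose-match parameter θ described above; mu and sigma are assumed to be arrays of the per-image means and standard deviations produced by the network.

import numpy as np

def zscore_neighbors(mu, sigma, i, alpha=1.0, theta=1.0):
    # mu: (n, d) distribution means; sigma: (n, d) standard deviations.
    d = np.abs(mu - mu[i]) / (2.0 * alpha * sigma[i]) - 0.5   # Eq. (3)
    # Fraction of dimensions satisfying the criterion; theta < 1 is the loose match.
    match = (d <= 0.0).mean(axis=1)
    return np.flatnonzero(match >= theta)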
4 EXPERIMENTS
This paper first evaluates the neighbor clustering performance of the proposed method. Then it uses the neighbor clustering results in a self-supervised classification framework to demonstrate the potential of the proposed method to support self-supervised classification. Finally, additional features of the proposed method are introduced.
4.1 EXPERIMENTAL SETUP
Experiments are performed on CIFAR10 (Krizhevsky et al., 2009), CIFAR100-20 (Krizhevsky et al., 2009) and STL10 (Coates et al., 2011). The proposed network is trained on the training set of each dataset. For all experiments, the same set of configurations is applied. A ResNet-34 is adopted as the encoder network, and a two-layer fully connected network is utilized as the decoder. The latent distribution dimension is set to 512, and the decoded representation vector has a size of 64. Both the encoder and the decoder networks are initialized randomly. Besides, the NT-Xent loss (Chen et al., 2020a) and its augmentation strategy are used for the behavior loss implementation.
4.2 EVALUATION OF NEIGHBOR CLUSTER
The neighbor clustering performance is evaluated under cluster sizes of 10, 50, and 100, as it is desirable that a clustering method maintains high accuracy as the cluster size increases. The accuracy of the neighbor clustering is obtained by averaging all neighbor clusters' accuracies. Since, in the proposed method, neighbor clusters have different cluster sizes, the average cluster size is used for convenient comparison. For comparison, two instance discrimination based methods (Caron et al., 2018; 2019), Rot (Gidaris et al., 2018), SimCLR (Chen et al., 2020a), and MoCo (He et al., 2020) are chosen, as they all use consistency regularization to perform unsupervised representation learning. To get the cluster results, KNN is applied to the learned representations. For the proposed VAE based network, both KNN and z-score based methods are employed for clustering.
The comparison methods are not suited to the z-score based clustering method, as it requires distributional information. The neighbor clustering accuracy on the training set is studied, as the neighbor clustering results on the training set will be utilized to support the self-supervised classification training.
The neighbor clustering accuracy comparison on the training set is shown in Table 1. The proposed VAE based network with KNN is compared with the others to demonstrate its ability to project images with the same meaning to close representations. With KNN clustering, the proposed method's accuracy is lower than that of the state of the art, SimCLR. This is because the sampling process introduced by the proposed network creates uncertainty in the representation, which contributes to the decline in accuracy. After KNN is replaced by the z-score based clustering method, the proposed method (AC-VAE) outperforms all other methods in all cases. Notably, increases of around 10% are found when the cluster size is 10 on the CIFAR10 and STL datasets. This performance comes from the z-score based clustering method, which uses the boundary information to exclude nearby samples that do not have the same flow shape. Figure 4 demonstrates the efficacy of the proposed clustering method: the representations from different classes overlap each other in some areas, and it is hard for a method that only considers distance to deliver a highly accurate cluster there.
To utilize the neighbor clustering method in unsupervised classification, the effect of the parameter α on the cluster size and cluster accuracy is also studied. The results are shown in Figure 5. Clusters are found to converge naturally on the training sets of all three datasets within the range 0 < α ≤ 2. As shown in Figure 5 (a), the cluster size remains the same after the threshold α reaches a certain point. These results benefit from encoding the image as a distribution: for a distribution, most of its population falls into its z-score range of [-2, 2] (α = 2), covering 95% of the distribution population. During training, the VAE based network should push samples with similar meanings into this high-probability region. This analysis matches the experimental results, in which convergence happens before the z-score range expands to [-2, 2], as shown in Figure 5.
Table 2: The α and θ applied in the experiments.
Dataset | CIFAR10 | CIFAR100-20 | STL
Cluster Size | 10 / 50 / 100 | 10 / 50 / 100 | 10 / 50 / 100
α | 0.52 / 0.94 / 1.28 | 1.45 / 1.62 / 1.72 | 0.57 / 1.24 / 1.42
θ | N/A / N/A / N/A | N/A / 0.95 / 0.91 | N/A / 0.89 / 0.92
To get the results reported in Table 1, the α and θ listed in Table 2 are applied. These hyperparameters are precisely selected only for easy comparison with the KNN based clustering results; in practice, there is no need to fine-tune α or θ to reach a specific cluster size unless a particular neighbor cluster size is highly desired.
Curriculum learning with prototypes. As training progresses, the representation's clustering ability increases without hugely sacrificing the accuracy of the clustering results. This suggests the framework produces natural curriculum learning, in which easy samples start to cluster first, and complex samples cluster as the network training progresses. Besides, we visualize the prototypes from each class. The results are shown together in Figure 6.\nFigure 6: Prototype images on the different datasets: (a) CIFAR10, (b) CIFAR100-20, (c) STL.\n4.3 SELF-SUPERVISED CLASSIFICATION\nIn order to demonstrate the efficacy of utilizing the proposed method for unsupervised classification, the self-supervised classification framework SCAN (Van Gansbeke et al., 2020) is adapted by replacing its KNN based neighbor clustering method with the proposed method. The converged clusters shown in section 4.2 are utilized, in which all the clusters converge naturally without using the loose match strategy or fine-tuning α. Therefore, the self-adaptive cluster results are used to perform the unsupervised classification. As Table 3 demonstrates, the framework utilizing the adaptive cluster results outperforms most of the compared methods. When compared to the state of the art (SCAN, Van Gansbeke et al. (2020)), the proposed method still delivers competitive results without selecting the hyperparameter k." } , { "heading": "Methods CIFAR10 CIFAR100-20 STL", "text": "4.4 OTHER ADVANTAGES\nThe proposed method also has additional advantages desired in the unsupervised setup, such as prototype selection. As different representations have different boundary information, some samples cluster far more neighbors than others under the same z-score range, so each class's prototype can be easily identified as the sample that clusters the most neighbors. The selected prototypes of each dataset are shown in Figure 6.\n5 CONCLUSION\nThis paper proposes AC-VAE, comprising a VAE based network and a clustering method. The VAE based network encodes each image into a multivariate normal distribution, from which the semantic representation and its boundary can be retrieved. The clustering method takes advantage of the boundary information to achieve adaptive neighbor clustering. Experiments demonstrate that the proposed VAE-based network has the ability to project images with the same semantic meaning into close representations. Experiments also show the efficacy of the proposed method in forming adaptive clusters. This work attempts to push the boundary of fully unsupervised classification by omitting a critical hyper-parameter k in the state-of-the-art method. Experiments show that the naturally converged clusters support the unsupervised classification framework in delivering competitive results.\nREFERENCES\nYuki Markus Asano, Christian Rupprecht, and Andrea Vedaldi. Self-labelling via simultaneous clustering and representation learning. arXiv preprint arXiv:1911.05371, 2019.\nChristopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in beta-vae.
arXiv preprint arXiv:1804.03599, 2018.\nMathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 132–149, 2018.\nMathilde Caron, Piotr Bojanowski, Julien Mairal, and Armand Joulin. Unsupervised pre-training of image features on non-curated data. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2959–2968, 2019.\nJianlong Chang, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, and Chunhong Pan. Deep adaptive image clustering. In Proceedings of the IEEE international conference on computer vision, pp. 5879–5887, 2017.\nTing Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020a.\nTing Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey Hinton. Big self-supervised models are strong semi-supervised learners. arXiv preprint arXiv:2006.10029, 2020b.\nYuanyuan Chen, Lei Zhang, and Zhang Yi. A novel low rank representation algorithm for subspace clustering. Int. J. Pattern Recognit. Artif. Intell., 30(4):1650007:1–1650007:16, 2016. doi: 10.1142/S0218001416500075. URL https://doi.org/10.1142/S0218001416500075.\nYuanyuan Chen, Lei Zhang, and Zhang Yi. Subspace clustering using a low-rank constrained autoencoder. Inf. Sci., 424:27–38, 2018. doi: 10.1016/j.ins.2017.09.047. URL https://doi.org/10.1016/j.ins.2017.09.047.\nAdam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 215–223, 2011.\nHongli Deng, Lei Zhang, and Lituan Wang. Global context-dependent recurrent neural network language model with sparse feature learning. Neural Comput. Appl., 31(S-2):999–1011, 2019. doi: 10.1007/s00521-017-3065-x. URL https://doi.org/10.1007/s00521-017-3065-x.\nCarl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE international conference on computer vision, pp. 1422–1430, 2015.\nSpyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018.\nPhilip Haeusser, Johannes Plapp, Vladimir Golkov, Elie Aljalbout, and Daniel Cremers. Associative deep clustering: Training a classification network with no labels. In German Conference on Pattern Recognition, pp. 18–32. Springer, 2018.\nKaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9729–9738, 2020.\nIrina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. 2016.\nWeihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, and Masashi Sugiyama. Learning discrete representations via information maximizing self-augmented training. arXiv preprint arXiv:1702.08720, 2017.\nXu Ji, João F Henriques, and Andrea Vedaldi. Invariant information clustering for unsupervised image classification and segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 
9865–9874, 2019.\nDiederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.\nAlex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.\nLaurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605, 2008.\nMehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, pp. 69–84. Springer, 2016.\nDeepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2536–2544, 2016.\nXi Peng, Huajin Tang, Lei Zhang, Zhang Yi, and Shijie Xiao. A unified framework for representation-based subspace clustering of out-of-sample and large-scale data. IEEE Trans. Neural Networks Learn. Syst., 27(12):2499–2512, 2016a. doi: 10.1109/TNNLS.2015.2490080. URL https://doi.org/10.1109/TNNLS.2015.2490080.\nXi Peng, Miaolong Yuan, Zhiding Yu, Wei-Yun Yau, and Lei Zhang. Semi-supervised subspace learning with l2graph. Neurocomputing, 208:143–152, 2016b. doi: 10.1016/j.neucom.2015.11.112. URL https://doi.org/10.1016/j.neucom.2015.11.112.\nAli Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with vq-vae-2. In Advances in Neural Information Processing Systems, pp. 14866–14876, 2019.\nWouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Scan: Learning to classify images without labels. In European Conference on Computer Vision (ECCV), 2020.\nJunyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In International conference on machine learning, pp. 478–487, 2016.\nJianwei Yang, Devi Parikh, and Dhruv Batra. Joint unsupervised learning of deep representations and image clusters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5147–5156, 2016.\nRichard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European conference on computer vision, pp. 649–666. Springer, 2016." } ]
2020
null
SP:751b7dfd3e7de8e6f533137fc9ae7a65583c09e0
[ "The goal of this paper is to define multi-view disentanglement in an unsupervised manner. The authors list four principal rules for representation disentanglement, including completeness of combining shared and specific representations, the exclusivity of two specific representations and between specific and shared representation, and commonality of two shared representation. The authors follow the above rules to design a VAE-based model and demonstrate favorable results on image clustering and classification tasks." ]
Learning effective representations for data with multiple views is crucial in machine learning and pattern recognition. Considerable recent efforts have focused on learning unified or latent representations to integrate information from different views for specific tasks. These approaches generally assume simple or implicit relationships between different views and, as a result, are not able to accurately and explicitly depict the correlations among these views. To address this, we first propose a definition of and conditions for unsupervised multi-view disentanglement, providing general guidance for disentangling representations between different views. Furthermore, a novel objective function is derived to explicitly disentangle the multi-view data into a shared part across different views and a (private) exclusive part within each view. The explicitly guaranteed disentanglement holds great potential for downstream tasks. Experiments on a variety of multi-modal datasets demonstrate that our objective can effectively disentangle information from different views while satisfying the disentangling conditions.
[]
[ { "authors": [ "Shotaro Akaho" ], "title": "A kernel method for canonical correlation analysis", "venue": "arXiv preprint cs/0609071,", "year": 2006 }, { "authors": [ "Alexander Alemi", "Ben Poole", "Ian Fischer", "Joshua Dillon", "Rif A Saurous", "Kevin Murphy" ], "title": "Fixing a broken elbo", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Galen Andrew", "Raman Arora", "Jeff Bilmes", "Karen Livescu" ], "title": "Deep canonical correlation analysis", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeshwar", "Sherjil Ozair", "Yoshua Bengio", "Devon Hjelm", "Aaron Courville" ], "title": "Mutual information neural estimation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Diane Bouchacourt", "Ryota Tomioka", "Sebastian Nowozin" ], "title": "Multi-level variational autoencoder: Learning disentangled representations from grouped observations", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Xiaochun Cao", "Changqing Zhang", "Huazhu Fu", "Si Liu", "Hua Zhang" ], "title": "Diversity-induced multi-view subspace clustering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Tian Qi Chen", "Xuechen Li", "Roger B Grosse", "David K Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yunjey Choi", "Minje Choi", "Munyoung Kim", "Jung-Woo Ha", "Sunghun Kim", "Jaegul Choo" ], "title": "Stargan: Unified generative adversarial networks for multi-domain image-to-image translation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Monroe D Donsker", "SR Srinivasa Varadhan" ], "title": "Asymptotic evaluation of certain markov process expectations for large time", "venue": "iv. 
Communications on Pure and Applied Mathematics,", "year": 1983 }, { "authors": [ "Babak Esmaeili", "Hao Wu", "Sarthak Jain", "Alican Bozkurt", "N Siddharth", "Brooks Paige", "Dana H Brooks", "Jennifer Dy", "Jan-Willem Meent" ], "title": "Structured disentangled representations", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Abel Gonzalez-Garcia", "Joost van de Weijer", "Yoshua Bengio" ], "title": "Image-to-image translation for crossdomain disentanglement", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Geoffrey E Hinton" ], "title": "Training products of experts by minimizing contrastive divergence", "venue": "Neural computation,", "year": 2002 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Harold Hotelling" ], "title": "Relations between two sets of variates", "venue": "In Breakthroughs in statistics,", "year": 1992 }, { "authors": [ "Junlin Hu", "Jiwen Lu", "Yap-Peng Tan" ], "title": "Sharable and individual multi-view metric learning", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Diederik P. 
Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Yoshua Bengio and Yann LeCun (eds.), International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Durk P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Abhishek Kumar", "Prasanna Sattigeri", "Avinash Balakrishnan" ], "title": "Variational inference of disentangled latent concepts from unlabeled observations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Alexander H Liu", "Yen-Cheng Liu", "Yu-Ying Yeh", "Yu-Chiang Frank Wang" ], "title": "A unified feature disentangler for multi-domain image translation and manipulation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Raetsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Siddharth Narayanaswamy", "T Brooks Paige", "Jan-Willem Van de Meent", "Alban Desmaison", "Noah Goodman", "Pushmeet Kohli", "Frank Wood", "Philip Torr" ], "title": "Learning disentangled representations with semi-supervised deep generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Guim Perarnau", "Joost Van De Weijer", "Bogdan Raducanu", "Jose M Álvarez" ], "title": "Invertible conditional gans for image editing", "venue": "arXiv preprint arXiv:1611.06355,", "year": 2016 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc V Le" ], "title": "Searching for activation functions", "venue": "arXiv preprint arXiv:1710.05941,", "year": 2017 }, { "authors": [ "Jérémie Sublime", "Basarab Matei", "Pierre-Alexandre Murena" ], "title": "Analysis of the influence of diversity in collaborative and multi-view clustering", "venue": "In International Joint Conference on Neural Networks,", "year": 2017 }, { "authors": [ "Masahiro Suzuki", "Kotaro Nakayama", "Yutaka Matsuo" ], "title": "Joint multimodal learning with deep generative models", "venue": "arXiv preprint arXiv:1611.01891,", "year": 2016 }, { "authors": [ "Qiaoyu Tan", "Guo-Xian Yu", "Jun Wang", "Carlotta Domeniconi", "Xiangliang Zhang" ], "title": "Individualityand commonality-based multiview multilabel learning", "venue": "IEEE Transactions on Cybernetics, pp. 
1–12,", "year": 2019 }, { "authors": [ "Y Tsai", "P Liang", "A Zadeh", "L Morency", "R Salakhutdinov" ], "title": "Learning factorized multimodal representations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ramakrishna Vedantam", "Ian Fischer", "Jonathan Huang", "Kevin Murphy" ], "title": "Generative models of visually grounded imagination", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Weiran Wang", "Raman Arora", "Karen Livescu", "Jeff Bilmes" ], "title": "On deep multi-view representation learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Mike Wu", "Noah Goodman" ], "title": "Multimodal generative models for scalable weakly-supervised learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xuan Wu", "Qing-Guo Chen", "Yao Hu", "Dengbao Wang", "Xiaodong Chang", "Xiaobo Wang", "Min-Ling Zhang" ], "title": "Multi-view multi-label learning with view-specific information extraction", "venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Changqing Zhang", "Yeqing Liu", "Huazhu Fu" ], "title": "Ae2-nets: Autoencoder in autoencoder networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Multi-view representation learning (MRL) involves learning representations by effectively leveraging information from different perspectives. The representations produced by MRL are effective when correlations across different views are accurately modeled and thus properly exploited for downstream tasks. One representative algorithm, Canonical Components Analysis (CCA) (Hotelling, 1992), aims to maximize linear correlations between two views under the assumption that factors from different views are highly correlated. Under a similar assumption, the extended versions of CCA, including kernelized CCA (Akaho, 2006) and Deep CCA (Andrew et al., 2013), explore more general correlations. There are also several methods (Cao et al., 2015; Sublime et al., 2017) that maximize the independence between different views to enhance the complementarity. Going beyond the simple assumptions above, the latent representation encodes different views with a degradation process implicitly exploiting both consistency and complementarity (Zhang et al., 2019).\nThese existing MRL algorithms are effective, however, the assumed correlations between different views are usually simple thus cannot accurately model or explicitly disentangle complex real-world correlations, which hinders the further improvement and interpretability. Although there are a few heuristic algorithms (Tsai et al., 2019; Hu et al., 2017) that explicitly decompose the multiview representation into shared and view-specific parts, they are especially designed for supervised learning tasks without any disentangled representation guarantee and fall short in formally defining the relationships between different parts. To address this issue, we propose to unsupervisedly disentangle the original data from different views into shared representation across different views and exclusive (private) part within each view, which explicitly depicts the correlations and thus not only enhances the performance of existing tasks but could also inspire potential applications. Specifically, we firstly provide a definition for the multi-view disentangled representation by introducing the sufficient and necessary conditions for guaranteeing the disentanglement of different views. According to these conditions, an information-theory-based algorithm is proposed to accurately disentangle different views. To summarize, the main contributions of our work are as follows:\n• To the best of our knowledge, this is the first work to formally study multi-view disentangled representation with strict conditions, which might serve as the foundations of the future research on this problem. • Based on our definition, we propose a multi-view disentangling model, in which information-\ntheory-based multi-view disentangling can accurately decompose the information into shared\nrepresentation across different views and exclusive representation within each view. The explicit decomposition enhances the performance of multi-view analysis tasks and could also inspire new potential applications. • Different from the single-view unsupervised disentangled representation learning (Locatello\net al., 2019), we provide a new paradigm for unsupervised disentangled representation learning from a fresh perspective - disentangling factors between different views instead of each single view. 
• Extensive experiments on a range of applications verify that the proposed information-theory-based multi-view disentangling algorithm can accurately disentangle data from multiple views into the expected shared and exclusive representations.\n2 MULTI-VIEW DISENTANGLED REPRESENTATION\nExisting multi-view representation learning methods (Wu & Goodman, 2018; Zhang et al., 2019) can obtain a common representation for multi-view data; however, the correlations between different views are not explicitly expressed. Supervised algorithms (Hu et al., 2017; Tan et al., 2019) can decompose multiple views into a common part and private parts, but there is no disentangling guarantee. Therefore, we propose a multi-view disentanglement algorithm that can explicitly separate the shared and exclusive information in unsupervised settings. Formally, we first propose a definition of a multi-view disentangled representation by introducing four criteria, which are considered necessary and sufficient conditions for disentangling multiple views. The definition is as follows:\nDefinition 2.1 (Multi-View Disentangled Representation) Given a sample with two views, i.e., X = {x_i}_{i=1}^{2}, the representation S_dis = {s_i, e_i}_{i=1}^{2} is a multi-view disentangled representation if the following conditions are satisfied:\n• Completeness: (1) The shared representation s_i and the exclusive representation e_i should jointly contain all the information of the original representation x_i. • Exclusivity: (2) There is no shared information between the common representation s_i and the exclusive representation e_i, which ensures exclusivity within each view (intra-view). (3) There is no shared information between e_i and e_j, which ensures exclusivity between the private information of different views (inter-view). • Commonality: (4) The common representations s_i and s_j should contain the same information. Equipped with the exclusivity constraints, the common representations are guaranteed not only to be the same but also to contain maximized common information.\nThe necessity of each criterion is illustrated in Fig. 1 (satisfaction of all four conditions produces exact disentanglement, and violation of any condition may result in an unexpected disentangled representation). Note that existing (single-view) unsupervised disentanglement focuses on learning a representation to identify explanatory factors of variation, which has been proved to be fundamentally impossible (Locatello et al., 2019). The goal of the proposed multi-view disentanglement is to disentangle multiple views into shared and exclusive parts, which can be well guaranteed as illustrated in Definition 2.1 and Fig. 1.\nMutual information has been widely used in representation learning (Hjelm et al., 2019; Belghazi et al., 2018). In probability theory and information theory, the mutual information of two random variables quantifies the “amount of information” obtained about one random variable when observing the other, which makes it well-suited for measuring the amount of shared information between two different views. To approach the disentangling goal, according to conditions 1-4, the general form of the objective function is naturally induced as:\nmax \sum_{i=1}^{2} [ I(x_i; e_i, s_i) - I(e_i; s_i) ] - \sum_{i \neq j} I(e_i; e_j) + \sum_{i \neq j} I(s_i; s_j), (1)\nwhere the four terms correspond to conditions 1, 2, 3 and 4, respectively, and I(·; ·) denotes the mutual information. We provide an implementation in Fig. 2 and, in the following subsections, we will describe this implementation in detail.
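As a reading aid, the sketch below shows how the four terms of Eq. (1) could be assembled into one training loss. It is pseudocode-level: `model` and its encoding/ELBO methods are hypothetical placeholders, the exclusivity penalty follows the cosine loss of Eq. (8), and `js_mi_loss` refers to the commonality estimator of Section 2.3 (sketched there); none of this is the authors' exact implementation.

```python
import torch.nn.functional as F

def cosine_loss(a, b):
    # Linear-correlation penalty in the spirit of Eq. (8); taking the
    # absolute value of the cosine similarity is our choice.
    return F.cosine_similarity(a, b, dim=1).abs().mean()

def disentangling_loss(model, x1, x2):
    # Encode each view into a shared (s) and an exclusive (e) part.
    s1, e1 = model.encode_view1(x1)
    s2, e2 = model.encode_view2(x2)
    # Condition 1 (completeness): realized via the ELBO of Eq. (7).
    l_comp = model.neg_elbo_view1(x1, s1, e1) + model.neg_elbo_view2(x2, s2, e2)
    # Conditions 2-3 (exclusivity): intra-view and inter-view terms.
    l_excl = cosine_loss(e1, s1) + cosine_loss(e2, s2) + cosine_loss(e1, e2)
    # Condition 4 (commonality): a lower bound on I(s1; s2), see Sec. 2.3.
    l_comm = js_mi_loss(s1, s2)
    return l_comp + l_excl + l_comm
```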
}, { "heading": "2.1 CONDITION ¬: INFORMATION PRESERVATION FOR THE SHARED AND EXCLUSIVE REPRESENTATIONS", "text": "• How to maximize I(x; e, s)? For simplicity, x, s, e and xi, si, ei are denoted with the same meanings and used alternately, where the former and latter are used for intra-view and inter-view cases, respectively. To preserve the information from the original data in the shared and exclusive representations, the mutual information I(x; e, s) should be maximized. There are different ways to implement the maximization of I(x; e, s) based on the following assumptions.\nAssumption 2.1 The shared representation s and exclusive representation e are simultaneously independent and conditionally independent:\np(s, e) = p(s)p(e), p(s, e|x) = p(s|x)p(e|x). (2)\nFirstly, we expand I(x; e, s) to obtain the following equation (more details are shown in supplement C): I(x; e, s) = ∫ ∫ ∫\np(x)p(e, s|x) log p(e, s|x) p(e, s) dedsdx.\nThen, under Assumption 2.1, the following equation is derived (more details are shown in supplement C): I(x; e, s) = I(x; e) + I(x; s). (3) According to the above equation, it seems that we can maximize I(x; e) + I(x; s) to maximize I(x; e, s), which involves making s and e contain as much information from x as possible (ideally, it will produce e and s to meet I(x; e) = I(x; s) = H(x), where H(x) is the entropy of x). This actually leads to a strong correlation between s and e, which is in conflict with the independence Assumption 2.1 about s and e. In other words, it is difficult to balance the completeness (condition 1 ) and intra-view exclusivity (condition 2 ) (see experimental results in supplement B.4).\nFortunately, there is an alternative strategy which avoids the difficulty in balancing the completeness and intra-view exclusivity. Specifically, we introduce a latent representation r generated by two independent distributions with respect to s and e under a mild assumption:\nAssumption 2.2 (Relationship between s, e and r):\np(s, e, x) = p(r, x). (4)\nIn our formulation, we define r = f(s, e), where r is derived from s and ewith the underlying function f(·) and satisfies p(r, x) = p(s, e, x). Eq. 4 is a mild assumption, for example the invertibility of mapping r = f(s, e) ensuring a sufficient condition which can be easily verified. Note that r = [s, e]\nis one special case and will be discussed later. Based on Assumption 2.2, we can get (more details are shown in supplement C):\np(r) = p(s, e), p(r|x) = p(s, e|x). (5)\nThen, we can induce the following result (more details are shown in supplement C):\nI(x; e, s) = I(x; r). (6)\nThis result indicates that the maximization of I(x; e, s) can be achieved by maximizing the mutual information of agency r and x. In this way, the independence of e and s is well preserved and the previous conflict is dispelled. Next, we will explain how to encode the information of x into independent representations s and e by introducing the agency r.\n• How to obtain independent representations e and s by maximizing I(x; r) ? First, we consider encoding the observed data x into a latent representation r by maximizing the mutual information between x and r. Considering robustness and effectiveness (Alemi et al., 2018), we can maximize the mutual information between r and x through Variational Autoencoders (VAEs) (Kingma & Welling, 2014). 
Accordingly, we have the following objective function:\nmin_{q_r, d} E_{x∼p(x)} [ −E_{r∼q_r(r|x)}[ log d(x|r) ] + E_{r∼q_r(r|x)} log( q_r(r|x) / p(r) ) ], (7)\nwhere d(x|r) (the “decoder”) is a variational approximation to p(x|r), and q_r(r|x) (the “encoder”) is a variational approximation to p(r|x), which converts the observed data x into the latent representation r.\nSecond, we consider how to obtain independent representations e and s by modeling q_r(r|x). For this goal, the relationships between s, e and r should be jointly modeled. As shown in Eq. 5, we obtain p(r|x) = p(s, e|x). Under Assumption 2.1, Eq. 5 can be rewritten as p(r|x) = p(s|x)p(e|x), which implies that q_r(r|x) can be considered as the product of p(s|x) and p(e|x). Furthermore, we introduce PoE (product-of-experts) (Hinton, 2002; Wu & Goodman, 2018) to model the product of q_s(s|x) and q_e(e|x), where the variational networks q_s(s|x) and q_e(e|x) are designed to approximate p(s|x) and p(e|x). It is worth noting that the key difference from MVAE (Multimodal Variational Autoencoder) (Wu & Goodman, 2018) is that our model obtains the latent representation r from two independent components within each single view, while MVAE achieves a unified representation of all views by assuming independence of the representations of different views. Under the assumption that the true posteriors of the individual factors, p(s|x) and p(e|x), are contained in the families of their variational counterparts q_s(s|x) and q_e(e|x), we have q_r(r|x) = q_s(s|x)q_e(e|x). With Gaussian distributions, we can obtain the closed-form solution for the product of two distributions:\nμ_r = (μ_s σ_e^2 + μ_e σ_s^2) / (σ_s^2 + σ_e^2), σ_r^2 = (σ_s^2 σ_e^2) / (σ_s^2 + σ_e^2),\ni.e., the precision-weighted combination of the two experts. Therefore, the independence between e and s is well preserved by modeling q_r(r|x). Accordingly, with the results q_r(r|x) = q_s(s|x)q_e(e|x) and p(r) = p(s)p(e), the objective in Eq. 7 is rewritten as:\nmin_{q_s, q_e, d} E_{x∼p(x)} [ −E_{r∼q_s(s|x)q_e(e|x)}[ log d(x|r) ] + E_{s∼q_s(s|x)} log( q_s(s|x) / p(s) ) + E_{e∼q_e(e|x)} log( q_e(e|x) / p(e) ) ],\nwhere p(s) and p(e) are set to Gaussian distributions, which in turn forces q_s(s|x) and q_e(e|x) to be closer to Gaussian distributions, allowing us to find the product of the two distributions in closed form. Minimizing the above objective is equivalent to maximizing the ELBO (evidence lower bound) (Kingma & Welling, 2014), with the first term being the reconstruction loss and the second and third terms being KL divergences.\nThe proposed variant of the VAE inherits two advantages from the VAE and PoE, respectively. The first is that we obtain approximate distributions of s and e given x that preserve their independence. The second is that the proposed model still works even when e or s is missing at test time. This means that we can use only s or e as input to the decoder to reconstruct x (shown in the experimental section), which is quite different from the concatenation of e and s or other forms that require e and s simultaneously to obtain r. In addition, the way of concatenating s and e does not well exploit the independent prior of s and e.
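For reference, the product of the two diagonal Gaussian experts used above has the standard closed form below; this is a minimal sketch (tensor shapes and the log-variance parameterization are our assumptions):

```python
import torch

def gaussian_poe(mu_s, logvar_s, mu_e, logvar_e):
    # Product of N(mu_s, var_s) and N(mu_e, var_e) is again Gaussian,
    # with precision equal to the sum of precisions and a
    # precision-weighted mean:
    #   var_r = var_s * var_e / (var_s + var_e)
    #   mu_r  = (mu_s * var_e + mu_e * var_s) / (var_s + var_e)
    var_s, var_e = logvar_s.exp(), logvar_e.exp()
    var_r = var_s * var_e / (var_s + var_e)
    mu_r = (mu_s * var_e + mu_e * var_s) / (var_s + var_e)
    return mu_r, var_r.log()

def reparameterize(mu, logvar):
    # Draw r ~ q_r(r|x) with the reparameterization trick, so the
    # ELBO of Eq. (7) remains differentiable.
    std = (0.5 * logvar).exp()
    return mu + std * torch.randn_like(std)
```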
" }, { "heading": "2.2 CONDITIONS 2-3: EXCLUSIVITY", "text": "To fulfill conditions 2 and 3, we minimize the mutual information between two variables by enhancing their independence. There are different strategies to promote the independence between variables, each endowed with different properties. The most straightforward way is to promote independence by minimizing the linear correlation. Accordingly, we have the following loss function:\nmin_{q_e^i, q_e^j} ( e_i e_j^T ) / ( ‖e_i‖ ‖e_j‖ ), (8)\nfor condition 3, and a similar objective can be induced for condition 2. Although simple and effective, the linearity property may not be powerful enough to handle complex real-world correlations. Therefore, we also propose an alternative strategy for general correlation cases in the supplementary material A.1." }, { "heading": "2.3 CONDITION 4: ALIGNMENT OF THE SHARED REPRESENTATION FROM DIFFERENT VIEWS", "text": "For condition 4, we ensure the commonality between s_i and s_j by maximizing their mutual information:\nI(s_i; s_j) = ∫∫ p(s_i, s_j) log( p(s_i, s_j) / (p(s_i) p(s_j)) ) ds_i ds_j.\nIt is difficult to calculate the mutual information directly, since the true distribution is usually unknown. Based on the scalable and flexible MINE (Belghazi et al., 2018), we introduce two different strategies for maximizing the mutual information between the shared representations s_i ∼ q_s^i(s_i|x_i) and s_j ∼ q_s^j(s_j|x_j) from different perspectives. MINE estimates the mutual information of two variables by training a classifier to distinguish whether samples come from the joint distribution J or the product of marginals M. MINE optimizes a tractable lower bound on the mutual information based on the Donsker-Varadhan representation (Donsker & Varadhan, 1983) of the KL-divergence:\nI(s_i; s_j) ≥ E_J[ T_θ(s_i, s_j) ] − log( E_M[ e^{T_θ(s_i, s_j)} ] ),\nwhere T_θ is a discriminator function modeled by a neural network with parameters θ, and J and M are the joint distribution and the product of marginals, respectively. We can maximize the mutual information of s_i and s_j by maximizing this lower bound.\nAlthough the KL-based MI estimator is effective for some tasks, it tends to overemphasize the similarity between samples and thus cannot thoroughly explore the underlying similarity between different distributions. To address this issue, we can replace the KL divergence with the JS divergence (Hjelm et al., 2019), which focuses on the similarity of distributions instead of individual samples. Accordingly, we maximize the mutual information of s_i and s_j in the following form:\nmax_{q_s^i, q_s^j, T_θ} E_J[ −sp(−T_θ(s_i, s_j)) ] − E_M[ sp(T_θ(s_i, s'_j)) ],\nwhere s_i and s_j correspond to one sample observed in the i-th and j-th views, respectively, s'_j corresponds to another sample from the j-th view, and sp(z) = log(1 + e^z) is the softplus function. Specifically, the inner product is employed as the classifier, i.e., T_θ(a, b) = a^T b. We discuss these two methods in the supplementary material A.2.
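A minimal sketch of the JS-based estimator with the inner-product critic T_θ(a, b) = a^T b is given below; pairing negatives by shuffling the batch is an illustrative choice, not necessarily the authors' exact sampling scheme:

```python
import torch
import torch.nn.functional as F

def js_mi_loss(s_i, s_j):
    # s_i, s_j: (batch, dim) shared representations of the same
    # samples from views i and j. Positive pairs come from the joint
    # J; negatives pair s_i with a shuffled s_j (product of marginals M).
    perm = torch.randperm(s_j.size(0))
    pos = (s_i * s_j).sum(dim=1)        # T(s_i, s_j) = s_i^T s_j
    neg = (s_i * s_j[perm]).sum(dim=1)  # T(s_i, s'_j)
    # The paper maximizes E_J[-sp(-T)] - E_M[sp(T)]; we return its
    # negation so it can be minimized with gradient descent.
    return F.softplus(-pos).mean() + F.softplus(neg).mean()
```

Because the softplus keeps the objective bounded, this variant avoids overemphasizing per-sample similarity, in line with the discussion in supplementary material A.2.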
" }, { "heading": "3 RELATED WORK", "text": "The disentanglement of representations aims to depict an object through independent factors in order to provide a more reliable and interpretable representation (Bengio et al., 2013). Most unsupervised disentangled representation learning methods (Chen et al., 2018; Esmaeili et al., 2019; Kumar et al., 2018) are based on Variational Autoencoders (VAEs) (Kingma & Welling, 2014). Existing VAE-based methods basically increase the independence between different factors of the representation. On the basis of VAEs, β-VAE (Higgins et al., 2017) implicitly obtains promising disentanglement performance by increasing β in the ELBO to constrain the capacity of the latent space. Different from β-VAE, several methods (Chen et al., 2018; Esmaeili et al., 2019) increase the independence between different factors by minimizing a total correlation (TC) loss. DIP (Disentangled Inferred Prior) (Kumar et al., 2018) encourages disentanglement by introducing a disentangled prior to constrain the disentangled representation. However, there are theoretical problems in unsupervised disentanglement learning (Locatello et al., 2019). Thus, there are also several semi-supervised methods (Kingma et al., 2014; Narayanaswamy et al., 2017; Bouchacourt et al., 2018) for disentangled representation learning which have access to partial ground-truth factors of the data. There are also various real-world applications using disentangled representations (Gonzalez-Garcia et al., 2018; Liu et al., 2018).\nMulti-view representation learning aims to jointly utilize information from multiple views for better performance. To jointly learn a unified representation across multiple views, CCA-based algorithms (Hotelling, 1992; Akaho, 2006; Andrew et al., 2013; Wang et al., 2015) maximize the correlation between different views to extract shared information. KCCA (Akaho, 2006) and DCCA (Andrew et al., 2013) extend traditional CCA using kernels and deep neural networks, respectively. DCCAE (Wang et al., 2015) jointly considers the reconstruction of each single view and the correlation across different views. To jointly encode the shared and view-specific information, latent-representation-based models (Zhang et al., 2019) have been proposed. There are also models (Wu & Goodman, 2018; Suzuki et al., 2016; Vedantam et al., 2018) that employ a VAE to learn a unified multi-modal representation." }, { "heading": "4 EXPERIMENTS", "text": "Experimental Settings. We conduct quite comprehensive experiments to evaluate the disentangled representation. Specifically, we investigate the disentanglement quantitatively by conducting clustering and classification (section 4.1), and provide visualization results to intuitively evaluate the disentanglement (section 4.2 and section B.5). Furthermore, we conduct ablation experiments (section B.3) and present application experiments based on the disentangled representation (section B.6). Due to space limitations, some experiments are detailed in the supplementary material.\nDatasets: Similar to the work of Gonzalez-Garcia et al. (2018), we construct the dataset MNIST-CBCD, comprising MNIST-CB (MNIST with Colored Background) and MNIST-CD (MNIST with Colored Digit) as two views, by randomly modifying the colors of the background and the digit in images from MNIST (a construction sketch is given at the end of this paragraph). Intuitively, the shape of a digit corresponds to the shared information, while the colors of the background and digit correspond to the private information within each view. The same strategy is applied to FashionMNIST. We also conduct experiments on the face image dataset CelebA (Liu et al., 2015), a large-scale face-attributes dataset with more than 200K celebrity images, each with 40 attribute annotations. The image and attribute domains are considered as two different views. We select the 18 most significant attributes as the attribute vector (Perarnau et al., 2016). For these two views, the shared information is attribute-related (e.g., viewpoint, hair color, with or without glasses), and the exclusive representation captures non-attribute information.
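A sketch of how such a two-view MNIST-CBCD sample could be built from a grayscale digit is shown below; the exact color-sampling scheme is our assumption:

```python
import numpy as np

def make_two_views(digit, rng=np.random):
    # digit: (H, W) grayscale image with values in [0, 1].
    # View 1 (MNIST-CB): random background color, original digit.
    # View 2 (MNIST-CD): random digit color, black background.
    # The digit shape is the shared factor; each color is exclusive.
    mask = (digit > 0.5)[..., None]            # (H, W, 1) foreground mask
    bg_color = rng.rand(3)                     # exclusive factor of view 1
    fg_color = rng.rand(3)                     # exclusive factor of view 2
    gray = digit[..., None]                    # (H, W, 1)
    view_cb = np.where(mask, gray, bg_color)          # colored background
    view_cd = np.where(mask, gray * fg_color, 0.0)    # colored digit
    return view_cb.astype(np.float32), view_cd.astype(np.float32)
```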
To verify the disentanglement, we compare our algorithm with: (1) Raw-data (Raw), which reshapes images directly into vectors as representations; (2) Variational autoencoders (VAE (Kingma & Welling, 2014)), which extract features from the data of each view; (3) CCA-based methods (CCA (Hotelling, 1992), KCCA (Akaho, 2006), DCCA (Andrew et al., 2013) and DCCAE (Wang et al., 2015)), which obtain a common representation by maximizing the correlation between different views; and (4) the multimodal variational autoencoder (MVAE (Wu & Goodman, 2018)), which learns a common representation of two views. For Raw-data and VAE, we report the clustering results using the representations obtained from view-1, view-2, and the concatenation of view-1 and view-2. In our method, we use s_1, s_2 and the concatenated representation for clustering/classification." }, { "heading": "4.1 QUANTITATIVE ANALYSIS", "text": "To evaluate the disentanglement of our algorithm, we conduct clustering and classification based on the learned representations. For simplicity, we employ k-means as the clustering algorithm; since k-means is based on the Euclidean distance, it provides an objective measure of the quality of the representations. KNN (K-Nearest Neighbour) and linear SVM are employed to conduct classification experiments based on the shared representations. All experiments are run 20 times and the means are reported in terms of accuracy (refer to the supplement for standard deviations).\nTable 4: Cross-modal retrieval.\n        MVAE   Ours\ntop 1   64.12  95.15\ntop 5   63.24  94.57\ntop 10  62.85  94.16\ntop 100 61.53  91.96\nFrom the quantitative results in Tables 1 and 2, the following observations are drawn: (1) directly using the raw features for clustering/classification is not promising, as the digit and color information are mixed; moreover, since the background region is much larger than the area of the digit, the accuracy of using view-1 is relatively low on MNIST; (2) compared with the raw features, the shared information extracted by our model is competitive due to its clear semantic content; (3) by extracting the shared (digit) information explicitly, our model obtains much better results.\nFurthermore, we evaluate the exclusive representations on clustering. For MNIST, the colors of the background (MNIST-CB: MNIST with Colored Background) or the digits (MNIST-CD: MNIST with Colored Digit) are considered as class labels. According to Table 3, our algorithm obtains more promising clustering performance with the exclusive representation compared with the raw data, while existing algorithms cannot obtain an exclusive representation explicitly. The performance improvement on view-2 (of MNIST-CBCD) is not as substantial. A possible reason is that the exclusive information (the color of the digit) is not very prominent due to the small area ratio of digits, which increases the difficulty of disentanglement.\nWe verify our disentangled representation with cross-modal retrieval on CelebA (Liu et al., 2015). Specifically, after training the disentangling networks, we can obtain the shared representations from the image and attribute views, respectively. Therefore, the attribute vector can be used to retrieve related face images (attribute-specific cross-modal retrieval). The quantitative results are reported in Table 4, and examples are shown in Fig. 9(a) (in the supplement). Given specific attributes represented as a vector l_n, we can obtain the attribute vector l̂_{nk} of the k-th most similar retrieved image, which is associated with D binary attributes (the value of each one is 0 or 1). Accordingly, for the top K retrieved images, we have\naccuracy = ( Σ_{n=1}^{N} Σ_{k=1}^{K} Σ_{d=1}^{D} δ(l_{nd}, l̂_{nkd}) ) / (N × K × D),\nwhere δ(a, b) = 1 when a = b, and δ(a, b) = 0 otherwise. According to the results in Table 4, the performance of our model is much higher than that of MVAE due to the promising disentanglement.
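The metric can be computed as in the following sketch (the array layout is an assumption):

```python
import numpy as np

def retrieval_accuracy(true_attrs, retrieved_attrs):
    # true_attrs: (N, D) binary query attribute vectors l_n.
    # retrieved_attrs: (N, K, D) binary attributes of the top-K
    # retrieved images. Returns the fraction of attribute entries
    # that agree, averaged over N, K and D, matching the formula above.
    agree = (true_attrs[:, None, :] == retrieved_attrs)
    return agree.mean()
```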
" }, { "heading": "4.2 QUALITATIVE ANALYSIS", "text": "We further intuitively demonstrate that our model fulfills the four conditions in Definition 2.1 through visualization. On MNIST-CBCD, we train the model on the training set and randomly select 64 images from the test set for visual analysis. The disentangled shared and exclusive representations are used as inputs to the decoders of the different views to reconstruct the original data under different combinations. For example, we can feed the shared representation extracted from view-1 into the decoder of view-2 to obtain reconstructed images of view-2. The visualization of the reconstruction results is shown in Fig. 3.\nFrom Fig. 3, we have the following observations, which are consistent with the definition of multi-view disentanglement: (1) By combining the shared and exclusive information, the original image can be fully reconstructed ((d) and (i)), satisfying condition 1 (completeness). (2) The shared and exclusive representations contain different information. With the shared representation, we can reconstruct images ((b) and (g)) with clear digit shapes rather than the color information of the original images; in contrast, with the exclusive representations, we can reconstruct the color information ((c) and (h)) of the original images rather than the digit shapes. This verifies that condition 2 (intra-view exclusivity) is satisfied. (3) The exclusive representations from different views contain different information: the exclusive representation (c) from view-1 contains the background color, while the exclusive representation (h) from view-2 contains the digit color. This verifies that our model satisfies condition 3 (inter-view exclusivity). (4) The shared representations ((b), (g), (e) and (j)) from different views contain (almost) the same information, i.e., condition 4 (commonality). We verify this by reconstructing digit shapes in view-2 using the shared representations from view-1 and vice versa (see the sketch below). Similar experiments are done on CelebA (section B.5).
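The recombination pattern used in these experiments can be summarized by the sketch below; the encoder/decoder interface names are hypothetical placeholders:

```python
def cross_reconstruct(model, x1, x2):
    # Swap shared codes across views while keeping each view's
    # exclusive code, e.g. decode view 2 from the shared code of
    # view 1 plus the exclusive code of view 2 (panels (e)/(j)).
    s1, e1 = model.encode_view1(x1)
    s2, e2 = model.encode_view2(x2)
    recon_v2_from_s1 = model.decode_view2(s=s1, e=e2)
    recon_v1_from_s2 = model.decode_view1(s=s2, e=e1)
    return recon_v2_from_s1, recon_v1_from_s2
```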
}, { "heading": "Supplemental Materials: Multi-View Disentangled Representation", "text": "" }, { "heading": "A SUPPLEMENTAL MATERIAL FOR METHODS", "text": "" }, { "heading": "A.1 SUPPLEMENTAL MATERIAL FOR CONDITIONS -®", "text": "Inspired by Choi et al. (2018), we introduce a classifier to distinguish these (independent) representations generated by the encoders. The loss function of the classification can be defined as:\nmin q,C\nER [ − ∫ p(z) logC(z|R)dz ] , (9)\nwhere C is a classifier that distinguishes the representation from different sources (independent representations), R is from a representation set with different sources, e.g., private representation ei and ej from different views, and q corresponds to the encoder of different views. z is a label which indicates the source of R.\nGenerally, it is difficult to strictly guarantee the independence; however, the two different strategies can promote the independence between different representations (generated by the encoders) to a certain extent. We implement both strategies and similar results are observed in practice." }, { "heading": "A.2 DISCUSSION OF CONDITION ¯", "text": "Both KL- and JS-based estimators can maximize the mutual information. However, due to the different properties of the KL and JS divergence, the two estimators are suitable for different scenarios. Since the JS divergence is bounded, in theory, it prevents the estimator from overemphasizing the similarity of two representations for the same sample (even if they are exactly the same, it will not obviously reduce the loss). This prevents the encoder from paying too much attention to generating the exact si coordination with sj instead of the overall objective function. In contrast, since the estimator based on the KL divergence is unbounded, si and sj are forced to be as similar as possible. Although this is not appropriate for most tasks, it helps us to observe whether si and sj intuitively have high mutual information. For example, we can replace si and sj with each other to see if they can accomplish the same task (which is demonstrated in the experimental part)." }, { "heading": "B SUPPLEMENTAL EXPERIMENTS", "text": "" }, { "heading": "B.1 NETWORK ARCHITECTURES", "text": "For MNIST and FashionMNIST, two convolutional layers and two fully connected layers are used for the encoder, while we employ two fully connected layers and two deconvolution layers for the decoder. For the face image dataset CelebA, we use four convolutional layers and two fully connected layers to build the encoder for handling the image view, while the decoder is built using two fully connected layers and four deconvolutional layers. For the attribute vector view, three fully connected layers are used to construct both the encoder and decoder. The batch normalization Ioffe & Szegedy (2015) and Swish activation functions Ramachandran et al. (2017) are used between the convolutional layers." }, { "heading": "B.2 DETAILED EXPERIMENTAL RESULTS OF QUANTITATIVE EXPERIMENTS", "text": "Due to space limitations, we only report the means of clustering and classification experiments in the text, and here we add their standard deviations in Table 5 and 6." }, { "heading": "B.3 ABLATION EXPERIMENTS", "text": "To verify the necessity of each criterion in definition 2.1, we conduct experiments on the MNISTCBCD dataset. Specifically, we conduct the similar experiments by removing 2 , 3 and 4 in the objective function, and the corresponding results are shown in Fig. 
" }, { "heading": "B.2 DETAILED EXPERIMENTAL RESULTS OF QUANTITATIVE EXPERIMENTS", "text": "Due to space limitations, we only report the means of the clustering and classification experiments in the main text; here we add their standard deviations in Tables 5 and 6." }, { "heading": "B.3 ABLATION EXPERIMENTS", "text": "To verify the necessity of each criterion in Definition 2.1, we conduct experiments on the MNIST-CBCD dataset. Specifically, we conduct similar experiments after removing conditions 2, 3 and 4 from the objective function; the corresponding results are shown in Figs. 4, 5, and 6, respectively.\nAs shown in Fig. 4, due to the removal of the intra-view exclusivity (condition 2), there is shared information between s_i and e_i, which is clearly validated by the reconstructed images (g) and (h). Similarly, as shown in Fig. 5, after removing the inter-view exclusivity (condition 3), the disentanglement performance becomes much worse. As shown in Fig. 6, after removing condition 4, we can hardly disentangle the information from different views, due to the significant difference between the shared representations s_i and s_j.\nFigure 4: Visualization of reconstruction with shared and exclusive representations after removing condition 2 (intra-view exclusivity between s_i and e_i). (Zoom in for best view.)\nB.4 VERIFICATION: MAXIMIZING I(x; e) + I(x; s) CONFLICTS WITH MINIMIZING I(e; s)\nIn Section 2.1, we analyzed why we do not maximize I(x; s) and I(x; e) in order to minimize I(e; s). We provide qualitative verification on the MNIST-CBCD dataset by only changing the way of maximizing I(x; e, s). According to the experimental results in Fig. 7, both the shared representation (corresponding to Fig. 7(b) and (e)) and the exclusive representation (corresponding to Fig. 7(c) and (f)) contain almost the same information as the original views. This leads to poor disentanglement performance and also empirically validates the conflict discussed in Section 2.1." }, { "heading": "B.5 SUPPLEMENTAL VISUALIZATION RESULTS ON CELEBA", "text": "Furthermore, we conduct experiments on the face image-attribute dataset CelebA (Liu et al., 2015). The results are shown in Fig. 8.
It is worth noting that there is a small difference from MNIST-CBCD: the information of view-2 (attributes) is actually contained in view-1 (images). Therefore, it is rather difficult to reconstruct face images using the exclusive representation from view-2 - the reconstructed images are almost all the same (condition 3 : inter-view exclusivity). The visualization experiments on CelebA further verify that our disentangled representation can promisingly satisfy the four conditions in the definition 2.1." }, { "heading": "B.6 SUPPLEMENTAL RESULTS FOR ATTRIBUTE-SPECIFIC CROSS-MODAL RETRIEVAL AND EDITING", "text": "In this section, we validate the potential use of our multi-view disentangled representation in two real applications: attribute-specific retrieval and attribute-specific editing.\nFirst, we verify our disentangled representation on the attribute-specific face retrieval task on the CelebA Liu et al. (2015) dataset. The details are as described in the text, and here we show some examples in Fig. 9(a).\nSecond, we demonstrate the potential use of our model in attribute-specific face editing by manipulating the shared representations. The shared representation from the image view allows us to\nmanipulate the specific properties of an image. The magnitude of the change can be controlled by interpolating between two shared representations. Specifically, to modify the visual properties of a person in an image, we can perform the following steps: (1) disentangling the shared (soimage) and exclusive (eoimage) representations for a given image; (2) modifying the values in the attribute vector corresponding to the properties to be changed, and extracting the shared representation (smattribute) from the modified vector; (3) replacing the original shared representation (soimage) with s\nnew, where snew is the linear interpolation between smattribute and s o image; (4) using s\nnew and eoimage as the input to reconstruct the intended image. Representative experimental results are shown in Fig. 9(b).\nWe provide more results of attribute-specific editing for more attributes. The experimental results are shown in Fig. 10." }, { "heading": "C PROOFS", "text": "C.1 PROOF OF I(x; e, s) = ∫ ∫ ∫\np(x, s, e) log p(s,e|x)p(s,e) dsdedx\nIn order to obtain Eq. 3 in Section 2.1, first according to the chain rule for mutual information, we can get\nI(x; e, s) = I(x; e) + I(x; s|e), (10)\nwhere\nI(x; e) = ∫ ∫\np(x, e) log p(x|e) p(x) dedx\n= ∫ ∫ p(x, e) log p(x|e)dedx− ∫ ∫ p(x, e) log p(x)dedx,\n(11)\nI(x; s|e) = ∫ ∫ ∫\np(x, s, e) log p(x|s, e) p(x|e) dsdedx\n= ∫ ∫ ∫ p(x, s, e) log p(x|s, e)dsdedx− ∫ ∫ p(x, e) log p(x|e)dedx.\n(12)\nThen I(x; e, s) can be formulated as I(x; e, s) = ∫ ∫ ∫ p(x, s, e) log p(x|s, e)dsdedx− ∫ ∫ p(x, e) log p(x)dedx\n= ∫ ∫ ∫ p(x, s, e) log p(x|s, e)dsdedx− ∫ ∫ ∫ p(x, s, e) log p(x)dsdedx\n= ∫ ∫ ∫ p(x, s, e) log\np(x|s, e) p(x) dsdedx\n= ∫ ∫ ∫ p(x, s, e) log\np(s, e|x) p(s, e) dsdedx.\n(13)\nC.2 PROOF OF I(x; e, s) = I(x; e) + I(x; s)\nUnder Assumption 2.1, we can get p(s, e) = p(s)p(e) and p(s, e|x) = p(s|x)p(e|x). Substituting this into Eq. 13 yields Eq. 
, { "heading": "C PROOFS", "text": "C.1 PROOF OF $I(x; e, s) = \iiint p(x, s, e) \log \frac{p(s, e|x)}{p(s, e)} \,ds\,de\,dx$\nIn order to obtain Eq. 3 in Section 2.1, first, according to the chain rule for mutual information, we get\n$$I(x; e, s) = I(x; e) + I(x; s|e), \tag{10}$$\nwhere\n$$I(x; e) = \iint p(x, e) \log \frac{p(x|e)}{p(x)} \,de\,dx = \iint p(x, e) \log p(x|e) \,de\,dx - \iint p(x, e) \log p(x) \,de\,dx, \tag{11}$$\n$$I(x; s|e) = \iiint p(x, s, e) \log \frac{p(x|s, e)}{p(x|e)} \,ds\,de\,dx = \iiint p(x, s, e) \log p(x|s, e) \,ds\,de\,dx - \iint p(x, e) \log p(x|e) \,de\,dx. \tag{12}$$\nThen $I(x; e, s)$ can be formulated as\n$$I(x; e, s) = \iiint p(x, s, e) \log p(x|s, e) \,ds\,de\,dx - \iint p(x, e) \log p(x) \,de\,dx = \iiint p(x, s, e) \log \frac{p(x|s, e)}{p(x)} \,ds\,de\,dx = \iiint p(x, s, e) \log \frac{p(s, e|x)}{p(s, e)} \,ds\,de\,dx. \tag{13}$$\nC.2 PROOF OF $I(x; e, s) = I(x; e) + I(x; s)$\nUnder Assumption 2.1, we have $p(s, e) = p(s)p(e)$ and $p(s, e|x) = p(s|x)p(e|x)$. Substituting this into Eq. 13 yields Eq. 3 in Section 2.1:\n$$I(x; e, s) = \iiint p(x, e, s) \log \frac{p(e, s|x)}{p(e, s)} \,de\,ds\,dx = \iiint p(x) p(e|x) p(s|x) \log \frac{p(e|x) p(s|x)}{p(e) p(s)} \,de\,ds\,dx = \iint p(x) p(e|x) \log \frac{p(e|x)}{p(e)} \,de\,dx + \iint p(x) p(s|x) \log \frac{p(s|x)}{p(s)} \,ds\,dx = I(x; e) + I(x; s). \tag{14}$$\nC.3 PROOF OF $I(x; e, s) = I(x; r)$\nFirst, based on Assumption 2.2, we get\n$$p(r) = \int p(r, x) \,dx = \int p(s, e, x) \,dx = p(s, e), \tag{15}$$\nand\n$$p(r|x) = \frac{p(r, x)}{p(x)} = \frac{p(s, e, x)}{p(x)} = p(s, e|x). \tag{16}$$\nAccordingly, Eq. 6 in Section 2.1 can be derived as follows:\n$$I(x; e, s) = \iiint p(x, e, s) \log \frac{p(e, s|x)}{p(e, s)} \,de\,ds\,dx = \iiint p(x) p(e, s|x) \log \frac{p(e, s|x)}{p(e, s)} \,de\,ds\,dx = \iint p(x) p(r|x) \log \frac{p(r|x)}{p(r)} \,dr\,dx = I(x; r). \tag{17}$$" } ]
2020
null
SP:cef0728e41977750c56af5228b0e0dff4ec13358
[ "In this paper, the authors approach the problem of conditional image generation via generative adversarial networks. To this end, they propose an approach that utilizes only semantic segmentation annotations and adversarial loss. No perceptual loss is required. Their discriminator leverages semantic labels to improve the image generations. They evaluate their approach on a variety of datasets including ADE20K, COCO, and CityScapes. They demonstrate substantial quantitative and qualitative performance over baselines and perform an ablation analysis." ]
Despite their recent successes, GAN models for semantic image synthesis still suffer from poor image quality when trained with only adversarial supervision. Historically, additionally employing the VGG-based perceptual loss has helped to overcome this issue, significantly improving the synthesis quality, but at the same time limiting the progress of GAN models for semantic image synthesis. In this work, we propose a novel, simplified GAN model, which needs only adversarial supervision to achieve high quality results. We re-design the discriminator as a semantic segmentation network, directly using the given semantic label maps as the ground truth for training. By providing stronger supervision to the discriminator as well as to the generator through spatially- and semantically-aware discriminator feedback, we are able to synthesize images of higher fidelity with better alignment to their input label maps, making the use of the perceptual loss superfluous. Moreover, we enable high-quality multi-modal image synthesis through global and local sampling of a 3D noise tensor injected into the generator, which allows complete or partial image change. We show that images synthesized by our model are more diverse and follow the color and texture distributions of real images more closely. We achieve an average improvement of 6 FID and 5 mIoU points over the state of the art across different datasets using only adversarial supervision.\nFigure 1 (panels: semantic label map; SPADE (Park et al., 2019) with VGG and w/o VGG; our model (OASIS) w/o VGG, sampled with different noise): Existing semantic image synthesis models heavily rely on the VGG-based perceptual loss to improve the quality of generated images. In contrast, our model can synthesize diverse and high-quality images while only using an adversarial loss, without any external supervision.
[ { "affiliations": [], "name": "Edgar Schönfeld" }, { "affiliations": [], "name": "Vadim Sushko" }, { "affiliations": [], "name": "Dan Zhang" } ]
[ { "authors": [ "Yazeed Alharbi", "Peter Wonka" ], "title": "Disentangled image generation through structured noise injection", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Vijay Badrinarayanan", "Alex Kendall", "Roberto Cipolla" ], "title": "Segnet: A deep convolutional encoderdecoder architecture for image segmentation", "venue": "Transactions on Pattern Analysis and Machine Intelligence,", "year": 2016 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Joan Bruna", "Pablo Sprechmann", "Yann LeCun" ], "title": "Super-resolution with deep convolutional sufficient statistics", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Holger Caesar", "Jasper Uijlings", "Vittorio Ferrari" ], "title": "Coco-stuff: Thing and stuff classes in context", "venue": "In Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L Yuille" ], "title": "Semantic image segmentation with deep convolutional nets and fully connected crfs", "venue": "International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Liang-Chieh Chen", "Yukun Zhu", "George Papandreou", "Florian Schroff", "Hartwig Adam" ], "title": "Encoderdecoder with atrous separable convolution for semantic image segmentation", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Qifeng Chen", "Vladlen Koltun" ], "title": "Photographic image synthesis with cascaded refinement networks", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Marius Cordts", "Mohamed Omran", "Sebastian Ramos", "Timo Rehfeld", "Markus Enzweiler", "Rodrigo Benenson", "Uwe Franke", "Stefan Roth", "Bernt Schiele" ], "title": "The cityscapes dataset for semantic urban scene understanding", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2009 }, { "authors": [ "L.A. Gatys", "A.S. Ecker", "M. 
Bethge" ], "title": "Image style transfer using convolutional neural networks", "venue": "In Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Leon Gatys", "Alexander S Ecker", "Matthias Bethge" ], "title": "Texture synthesis using convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPs)", "year": 2015 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of Wasserstein GANs", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "GANs trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Xun Huang", "Ming-Yu Liu", "Serge Belongie", "Jan Kautz" ], "title": "Multimodal unsupervised image-toimage translation", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Justin Johnson", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and super-resolution", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2016 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Tuomas Kynkäänniemi", "Tero Karras", "Samuli Laine", "Jaakko Lehtinen", "Timo Aila" ], "title": "Improved precision and recall metric for assessing generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ke Li", "Jitendra Malik" ], "title": "Implicit maximum likelihood estimation", "venue": "arXiv preprint arXiv:1809.09087,", "year": 2018 }, { "authors": [ "Ke Li", "Tianhao Zhang", "Jitendra Malik" ], "title": "Diverse image synthesis from semantic layouts via conditional imle", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Xihui Liu", "Guojun Yin", "Jing Shao", "Xiaogang Wang" ], "title": "Learning to predict layout-to-image conditional convolutions for semantic image synthesis", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": null, "year": 2014 }, { "authors": [ "Takeru Miyato", "Masanori Koyama" ], "title": "cGANs with projection discriminator", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Evangelos Ntavelis", "Andrés Romero", "Iason Kastanis", "Luc Van Gool", "Radu Timofte" ], "title": "Sesame: Semantic editing of scenes by adding, manipulating or erasing objects", "venue": null, "year": 2004 }, { "authors": [ "Timo Ojala", "Matti Pietikäinen", "David Harwood" ], "title": "A comparative study of texture measures with classification based on featured distributions", "venue": "Pattern recognition,", "year": 1996 }, { "authors": [ "Taesung Park", "Ming-Yu Liu", "Ting-Chun Wang", "Jun-Yan Zhu" ], "title": "Semantic image synthesis with spatially-adaptive normalization", "venue": "In Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Xiaojuan Qi", "Qifeng Chen", "Jiaya Jia", "Vladlen Koltun" ], "title": "Semi-parametric image synthesis", "venue": "In Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Suman Ravuri", "Oriol Vinyals" ], "title": "Classification accuracy score for conditional generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Scott E. 
Reed", "Zeynep Akata", "Xinchen Yan", "Lajanugen Logeswaran", "Bernt Schiele", "Honglak Lee" ], "title": "Generative adversarial text to image synthesis", "venue": "In International Conference on Machine learning (ICML),", "year": 2016 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In MICCAI,", "year": 2015 }, { "authors": [ "Yossi Rubner", "Carlo Tomasi", "Leonidas J Guibas" ], "title": "The earth mover’s distance as a metric for image retrieval", "venue": "International Journal of Computer Vision (IJCV),", "year": 2000 }, { "authors": [ "Mehdi SM Sajjadi", "Olivier Bachem", "Mario Lucic", "Olivier Bousquet", "Sylvain Gelly" ], "title": "Assessing generative models via precision and recall", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Edgar Schönfeld", "Bernt Schiele", "Anna Khoreva" ], "title": "A u-net based discriminator for generative adversarial networks", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Konstantin Shmelkov", "Cordelia Schmid", "Karteek Alahari" ], "title": "How good is my gan", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Nasim Souly", "Concetto Spampinato", "Mubarak Shah" ], "title": "Semi supervised semantic segmentation using generative adversarial network", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Zhentao Tan", "Dongdong Chen", "Qi Chu", "Menglei Chai", "Jing Liao", "Mingming He", "Lu Yuan", "Nenghai Yu" ], "title": "Rethinking spatially-adaptive normalization", "venue": null, "year": 2004 }, { "authors": [ "Hao Tang", "Song Bai", "Nicu Sebe" ], "title": "Dual attention gans for semantic image synthesis", "venue": "In Proceedings of the 28th ACM International Conference on Multimedia,", "year": 2020 }, { "authors": [ "Hao Tang", "Xiaojuan Qi", "Dan Xu", "Philip HS Torr", "Nicu Sebe" ], "title": "Edge guided gans with semantic preserving for semantic image synthesis", "venue": null, "year": 2020 }, { "authors": [ "Hao Tang", "Dan Xu", "Yan Yan", "Philip HS Torr", "Nicu Sebe" ], "title": "Local class-specific and global imagelevel generative adversarial networks for semantic-guided scene generation", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Ting-Chun Wang", "Ming-Yu Liu", "Jun-Yan Zhu", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "Highresolution image synthesis and semantic manipulation with conditional GANs", "venue": "In Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Zhou Wang", "Eero P Simoncelli", "Alan C Bovik" ], "title": "Multiscale structural similarity for image quality assessment", "venue": "In Asilomar Conference on Signals, Systems & Computers,", "year": 2003 }, { "authors": [ "Tete Xiao", "Yingcheng Liu", "Bolei Zhou", "Yuning Jiang", "Jian Sun" ], "title": "Unified perceptual parsing for scene understanding", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yasin Yaz", "Chuan-Sheng Foo", "Stefan Winkler", "Kim-Hui 
Yap", "Georgios Piliouras", "Vijay Chandrasekhar" ], "title": "The unusual effectiveness of averaging in gan training", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Fisher Yu", "Vladlen Koltun", "Thomas Funkhouser" ], "title": "Dilated residual networks", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Dan Zhang", "Anna Khoreva" ], "title": "PA-GAN: Improving GAN training by progressive augmentation", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "H. Zhang", "T. Xu", "H. Li", "S. Zhang", "X. Wang", "X. Huang", "D.N. Metaxas" ], "title": "StackGAN++: Realistic image synthesis with stacked generative adversarial networks", "venue": "Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Han Zhang", "Ian J. Goodfellow", "Dimitris N. Metaxas", "Augustus Odena" ], "title": "Self-attention generative adversarial networks", "venue": "In International Conference on Machine learning (ICML),", "year": 2019 }, { "authors": [ "Hang Zhang", "Chongruo Wu", "Zhongyue Zhang", "Yi Zhu", "Zhi Zhang", "Haibin Lin", "Yue Sun", "Tong He", "Jonas Mueller", "R Manmatha" ], "title": "Resnest: Split-attention networks", "venue": null, "year": 2004 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Bolei Zhou", "Hang Zhao", "Xavier Puig", "Sanja Fidler", "Adela Barriuso", "Antonio Torralba" ], "title": "Scene parsing through ade20k dataset", "venue": "In Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Zhen Zhu", "Zhiliang Xu", "Ansheng You", "Xiang Bai" ], "title": "Semantically multi-modal image synthesis", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 } ]
[ { "heading": null, "text": "∗Equal contribution. Correspondence to {edgar.schoenfeld, vadim.sushko}@bosch.com." }, { "heading": "1 INTRODUCTION", "text": "Conditional generative adversarial networks (GANs) (Mirza & Osindero, 2014) synthesize images conditioned on class labels (Zhang et al., 2019; Brock et al., 2019), text (Reed et al., 2016; Zhang et al., 2018a), other images (Isola et al., 2017; Huang et al., 2018), or semantic label maps (Wang et al., 2018; Park et al., 2019). In this work, we focus on the latter, addressing semantic image synthesis. Semantic image synthesis enables rendering of realistic images from user-specified layouts, without the use of an intricate graphic engine. Therefore, its applications range widely from content creation and image editing to generating training data that needs to adhere to specific semantic requirements (Wang et al., 2018; Chen & Koltun, 2017). Despite the recent progress on stabilizing GANs (Gulrajani et al., 2017; Miyato et al., 2018; Zhang & Khoreva, 2019) and developing their architectures (Zhang et al., 2019; Karras et al., 2019), state-of-the-art GAN-based semantic image synthesis models (Park et al., 2019; Liu et al., 2019) still greatly suffer from training instabilities and poor image quality when trained only with adversarial supervision (see Fig. 1). An established practice to overcome this issue is to employ a perceptual loss (Wang et al., 2018) to train the generator, in addition to the discriminator loss. The perceptual loss aims to match intermediate features of synthetic and real images, that are estimated via an external perception network. A popular choice for such a network is VGG (Simonyan & Zisserman, 2015), pre-trained on ImageNet (Deng et al., 2009). Although the perceptual loss substantially improves the accuracy of previous methods, it comes with the computational overhead introduced by utilizing an extra network for training. Moreover, it usually dominates over the adversarial loss during training, which can have a negative impact on the diversity and quality of generated images, as we show in our experiments. Therefore, in this work we propose a novel, simplified model that achieves state-of-the-art results without requiring a perceptual loss.\nA fundamental question for GAN-based semantic image synthesis models is how to design the discriminator to efficiently utilize information from the given semantic label maps. Conventional methods (Park et al., 2019; Wang et al., 2018; Liu et al., 2019; Isola et al., 2017) adopt a multi-scale classification network, taking the label map as input along with the image, and making a global image-level real/fake decision. Such a discriminator has limited representation power, as it is not incentivized to learn high-fidelity pixel-level details of the images and their precise alignment with the input semantic label maps. To mitigate this issue, we propose an alternative architecture for the discriminator, re-designing it as an encoder-decoder semantic segmentation network (Ronneberger et al., 2015), and directly exploiting the given semantic label maps as ground truth via a (N+1)-class cross-entropy loss (see Fig. 3). This new discriminator provides semantically-aware pixel-level feedback to the generator, partitioning the image into segments belonging to one of the N real semantic classes or the fake class. 
Enabled by the discriminator's per-pixel response, we further introduce a LabelMix regularization, which encourages the discriminator to focus more on the semantic and structural differences between real and synthetic images. The proposed changes lead to a much stronger discriminator that maintains a powerful semantic representation of objects, giving more meaningful feedback to the generator, and thus making the perceptual loss supervision superfluous (see Fig. 1).\nNext, we propose to enable multi-modal synthesis of the generator via 3D noise sampling. Previously, directly using 1D noise as input was not successful for semantic image synthesis, as the generator tended to mostly ignore it or synthesized images of poor quality (Isola et al., 2017; Wang et al., 2018). Thus, prior work (Wang et al., 2018; Park et al., 2019) resorted to using an image encoder to produce multi-modal outputs. In this work, we propose a lighter solution. Empowered by our stronger discriminator, the generator can effectively synthesize different images by simply re-sampling a 3D noise tensor, which is used not only as the input but also combined with intermediate features via conditional normalization at every layer. Such noise is spatially sensitive, so we can re-sample it both globally (channel-wise) and locally (pixel-wise), allowing us to change not only the appearance of the whole scene, but also that of specific semantic classes or any chosen areas (see Fig. 2). We call our model OASIS, as it needs only adversarial supervision for semantic image synthesis. In summary, our main contributions are: (1) We propose a novel segmentation-based discriminator architecture that gives more powerful feedback to the generator and eliminates the necessity of the perceptual loss supervision. (2) We present a simple 3D noise sampling scheme, notably increasing the diversity of multi-modal synthesis and enabling complete or partial change of the generated image. (3) With the OASIS model, we achieve high quality results on the ADE20K, Cityscapes and COCO-stuff datasets, on average improving the state of the art by 6 FID and 5 mIoU points, while relying only on adversarial supervision. We show that images synthesized by OASIS exhibit much higher diversity and more closely follow the color and texture distributions of real images. Our code and pretrained models are available at https://github.com/boschresearch/OASIS." }, { "heading": "2 RELATED WORK", "text": "Semantic image synthesis. Pix2pix (Isola et al., 2017) first proposed to use conditional GANs (Mirza & Osindero, 2014) for semantic image synthesis, adopting an encoder-decoder generator which takes semantic label maps as input, and employing a PatchGAN discriminator. Since then, various generator and discriminator modifications have been introduced (Wang et al., 2018; Park et al., 2019; Liu et al., 2019; Tang et al., 2020c;b; Ntavelis et al., 2020). Besides GANs, Chen & Koltun (2017) proposed to use a cascaded refinement network (CRN) for high-resolution semantic image synthesis, and SIMS (Qi et al., 2018) extended it with a non-parametric component, serving as a memory bank of source material to assist the synthesis. Further, Li et al. (2019) employed implicit maximum likelihood estimation (Li & Malik, 2018) to increase the variety of the CRN model. However, these approaches still underperform in comparison to state-of-the-art GAN models. Therefore, next we focus on the recent GAN architectures for semantic image synthesis.\n\nDiscriminator architectures. 
Pix2pix (Isola et al., 2017), Pix2pixHD (Wang et al., 2018) and SPADE (Park et al., 2019) all employed a multi-scale PatchGAN discriminator, that takes an image and its semantic label map as input. CC-FPSE (Liu et al., 2019) proposed a feature-pyramid discriminator, embedding both images and label maps into a joint feature map, and then consecutively upsampling it in order to classify it as real/fake at multiple scales. LGGAN (Tang et al., 2020c) introduced a classification-based feature learning module to learn more discriminative and class-specific features. In this work, we propose to use a pixel-wise semantic segmentation network as a discriminator instead of multi-scale image classifiers as in the above approaches, and to directly exploit the semantic label maps for its supervision. Segmentation-based discriminators have been shown to improve semantic segmentation (Souly et al., 2017) and unconditional image synthesis (Schönfeld et al., 2020), but to the best of our knowledge have not been explored for semantic image synthesis and our work is the first to apply adversarial semantic segmentation loss for this task.\nGenerator architectures. Conventionally, the semantic label map is provided to the image generation pipeline via an encoder (Isola et al., 2017; Wang et al., 2018; Tang et al., 2020c;b; Ntavelis et al., 2020). However, it is shown to be suboptimal at preserving the semantic information until the later stages of image generation. Therefore, SPADE introduced a spatially-adaptive normalization layer\nthat directly modulates the label map onto the generator’s hidden layer outputs at various scales. Alternatively, CC-FPSE proposed to use spatially-varying convolution kernels conditioned on the label map. Struggling with generating diverse images from noise, both Pix2pixHD and SPADE resorted to having an image encoder in the generator design to enable multi-modal synthesis. The generator then combines the extracted image style with the label map to reconstruct the original image. By alternating the style vector, one can generate multiple outputs conditioned on the same label map. However, using an image encoder is a resource demanding solution. In this work, we enable multi-modal synthesis directly through sampling of a 3D noise tensor injected at every layer of the network. Differently from structured noise injection of Alharbi & Wonka (2020) and class-specific latent codes of Zhu et al. (2020), we inject the 3D noise along with label maps and adjust it to image resolution, also enabling re-sampling of selected semantic segments (see Fig. 2).\nPerceptual losses. Gatys et al. (2015); Gatys et al. (2016); Johnson et al. (2016) and Bruna et al. (2016) were pioneers at exploiting perceptual losses to produce high-quality images for superresolution and style transfer using convolutional networks. For semantic image synthesis, the VGGbased perceptual loss was first introduced by CRN, and later adopted by Pix2pixHD. Since then, it has become a default for training the generator (Park et al., 2019; Liu et al., 2019; Tan et al., 2020; Tang et al., 2020a). As the perceptual loss is based on a VGG network pre-trained on ImageNet (Deng et al., 2009), methods relying on it are constrained by the ImageNet domain and the representational power of VGG. With the recent progress on GAN training, e.g. by architecture designs and regularization techniques, the actual necessity of the perceptual loss requires a reassessment. 
We experimentally show that such a loss imposes unnecessary constraints on the generator, significantly limiting sample diversity, while our model, trained without the VGG loss, achieves improved image diversity without compromising image quality." }, { "heading": "3 OASIS MODEL", "text": "In this section, we present our OASIS model, which, in contrast to other semantic image synthesis methods, needs only adversarial supervision for generator training. Using SPADE as a starting point (Sec. 3.1), we first propose to re-design the discriminator as a semantic segmentation network, directly using the given semantic label maps as ground truth (Sec. 3.2). Empowered by the spatially- and semantically-aware feedback of the new discriminator, we next re-design the SPADE generator, enabling its effective multi-modal synthesis via 3D noise sampling (Sec. 3.3)." }, { "heading": "3.1 THE SPADE BASELINE", "text": "We choose SPADE as our baseline as it is a state-of-the-art model and a relatively simple representative of conventional semantic image synthesis models. As depicted in Fig. 3, the discriminator of SPADE largely follows the PatchGAN multi-scale discriminator (Isola et al., 2017), adopting two image classification networks operating at different resolutions. Both of them take the channel-wise concatenation of the semantic label map and the real/synthesized image as input, and produce true/fake classification scores. On the generator side, SPADE adopts spatially-adaptive normalization layers to effectively integrate the semantic label map into the synthesis process from low to high scales. Additionally, an image encoder is used to extract the style vector from the reference image and then combine it with a 1D noise vector for multi-modal synthesis. The training loss of SPADE consists of three terms, namely, an adversarial loss, a feature matching loss and the VGG-based perceptual loss: $\mathcal{L} = \max_G \min_D \mathcal{L}_{adv} + \lambda_{fm} \mathcal{L}_{fm} + \lambda_{vgg} \mathcal{L}_{vgg}$. Overall, SPADE is a resource-demanding model at both training and test time, i.e., with two PatchGAN discriminators, an image encoder in addition to the generator, and the VGG loss. In the following, we revisit its architecture and introduce a simpler and more efficient model that offers better performance with less complexity." }, { "heading": "3.2 OASIS DISCRIMINATOR", "text": "For the generator to learn to synthesize images that are well aligned with the input semantic label maps, we need a powerful discriminator that coherently captures discriminative semantic features at different image scales. While classification-based discriminators, such as PatchGAN, take label maps as input concatenated to images, they can afford to ignore them and make the decision solely on image patch realism. Thus, we propose to cast the discriminator task as a multi-class semantic segmentation problem to directly utilize label maps for supervision, and accordingly alter its architecture to an encoder-decoder segmentation network (see Fig. 3). Encoder-decoder networks have proven to be effective for semantic segmentation (Badrinarayanan et al., 2016; Chen et al., 2018). Thus, we build our discriminator architecture upon U-Net (Ronneberger et al., 2015), which consists of an encoder and a decoder connected by skip connections. This discriminator architecture is multi-scale through its design, integrating information over up- and down-sampling pathways and through the encoder-decoder skip connections. For details on the architecture see App. C.1.\n
The segmentation task of the discriminator is formulated to predict the per-pixel class label of the real images, using the given semantic label maps as ground truth. In addition to the N semantic classes from the label maps, all pixels of the fake images are categorized as one extra class. Overall, we have N+1 classes in the semantic segmentation problem, and thus propose to use a (N+1)-class cross-entropy loss for training. Considering that the N semantic classes are usually imbalanced and that the per-pixel size of objects varies for different semantic classes, we weight each class by its inverse per-pixel frequency, giving rare semantic classes more weight. In doing so, the contributions of each semantic class are equally balanced, and, thus, the generator is also encouraged to adequately synthesize less-represented classes. Mathematically, the new discriminator loss is expressed as:\n$$\mathcal{L}_D = -\mathbb{E}_{(x,t)}\Big[\sum_{c=1}^{N} \alpha_c \sum_{i,j}^{H \times W} t_{i,j,c} \log D(x)_{i,j,c}\Big] - \mathbb{E}_{(z,t)}\Big[\sum_{i,j}^{H \times W} \log D(G(z,t))_{i,j,c=N+1}\Big], \tag{1}$$\nwhere x denotes the real image; (z, t) is the noise-label map pair used by the generator G to synthesize a fake image; and the discriminator D maps the real or fake image into a per-pixel (N+1)-class prediction probability. The ground truth label map t has three dimensions, where the first two correspond to the spatial position (i, j) ∈ H × W, and the third one is a one-hot vector encoding the class c ∈ {1, .., N+1}. The class balancing weight αc is the inverse of the per-pixel class frequency\n$$\alpha_c = \frac{H \times W}{\sum_{i,j}^{H \times W} \mathbb{E}_t\big[\mathbb{1}[t_{i,j,c} = 1]\big]}. \tag{2}$$\nLabelMix regularization. In order to encourage our discriminator to focus on differences in content and structure between the fake and the real classes, we propose a LabelMix regularization. Based on the semantic layout, we generate a binary mask M to mix a pair (x, x̂) of real and fake images conditioned on the same label map: $\text{LabelMix}(x, \hat{x}, M) = M \odot x + (1 - M) \odot \hat{x}$, as visualized in Fig. 4. Given the mixed image, we further train the discriminator to be equivariant under the LabelMix operation. This is achieved by adding a consistency loss term Lcons to Eq. 1:\n$$\mathcal{L}_{cons} = \big\| D_{logits}(\text{LabelMix}(x, \hat{x}, M)) - \text{LabelMix}(D_{logits}(x), D_{logits}(\hat{x}), M) \big\|^2, \tag{3}$$\nwhere Dlogits are the logits attained before the last softmax activation layer, and ‖ · ‖ is the L2 norm. This consistency loss compares the output of the discriminator on the LabelMix image with the LabelMix of its outputs, penalizing the discriminator for inconsistent predictions. LabelMix is different to CutMix (Yun et al., 2019), which randomly samples the binary mask M. A random mask will introduce inconsistency between the pixel-level classes and the scene layout provided by the label map. For an object with the semantic class c, it will contain pixels from both real and fake images, resulting in two labels, i.e. c and N+1. To avoid such inconsistency, the mask of LabelMix is generated according to the label map, providing natural borders between semantic regions, see Fig. 4 (Mask M). Under LabelMix regularization, the generator is encouraged to respect the natural semantic boundaries, improving pixel-level realism while also considering the class segment shapes.\nOther variants. Besides the proposed (N+1)-class cross-entropy loss, there are other ways to train the segmentation-based discriminator with the label map. One can concatenate the label map to the input image, analogous to SPADE. Another option is to use projection, by taking the inner product between the last linear layer output and the embedded label map, analogous to class-label conditional GANs (Miyato & Koyama, 2018). For both alternatives, the training loss is a pixel-level real/fake binary cross-entropy (Schönfeld et al., 2020). From the label map encoding perspective, these two variants use label maps as input (concatenated to the image or at the last linear layer), propagating them forward through the network. The (N+1)-setting uses the label map for the loss computation, so it is propagated backward via gradient updates. Backward propagation ensures that the discriminator learns semantic-aware features, in contrast to forward propagation, where the label map alignment is not as strongly enforced. Performance comparison of the label map encodings is shown in Table 5." }
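To make Eqs. 1 and 2 concrete, below is a minimal PyTorch-style sketch of the class-balanced (N+1)-class discriminator loss. It is an illustration written for this text rather than the authors' released code; names are ours, and the class frequencies are estimated on the current batch for brevity:

```python
import torch
import torch.nn.functional as F

def oasis_d_loss(d_logits_real, d_logits_fake, label_map):
    """Class-balanced (N+1)-class discriminator loss (Eqs. 1-2).

    d_logits_real / d_logits_fake: (B, N+1, H, W) per-pixel logits.
    label_map: (B, N, H, W) one-hot semantic label map t.
    """
    B, N, H, W = label_map.shape
    # Eq. 2: inverse per-pixel class frequency (batch estimate).
    class_freq = label_map.sum(dim=(0, 2, 3)) / (B * H * W)   # (N,)
    alpha = 1.0 / class_freq.clamp(min=1e-8)                  # (N,)
    # Real images: weighted cross-entropy against semantic classes 1..N.
    log_p_real = F.log_softmax(d_logits_real, dim=1)[:, :N]   # (B, N, H, W)
    loss_real = -(alpha.view(1, N, 1, 1) * label_map * log_p_real).sum(1).mean()
    # Fake images: every pixel belongs to the extra class N+1 (index N).
    log_p_fake = F.log_softmax(d_logits_fake, dim=1)[:, N]    # (B, H, W)
    loss_fake = -log_p_fake.mean()
    return loss_real + loss_fake
```

In practice, the per-class frequencies in Eq. 2 would be accumulated over the whole dataset rather than a single batch.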
, { "heading": "3.3 OASIS GENERATOR", "text": "To stay in line with the OASIS discriminator design, the training loss for the generator is changed to\n$$\mathcal{L}_G = -\mathbb{E}_{(z,t)}\Big[\sum_{c=1}^{N} \alpha_c \sum_{i,j}^{H \times W} t_{i,j,c} \log D(G(z, t))_{i,j,c}\Big], \tag{4}$$\nwhich is a direct outcome of applying the non-saturation trick (Goodfellow et al., 2014) to Eq. 1. We next re-design the generator to enable multi-modal synthesis through noise sampling. SPADE is deterministic in its default setup, but can be trained with an extra image encoder to generate multi-modal outputs. We introduce a simpler version that enables the synthesis of diverse outputs directly from input noise. For this, we construct a noise tensor of size 64×H×W, matching the spatial dimensions of the label map H×W. Channel-wise concatenation of the noise and the label map forms a 3D tensor used as input to the generator and also as a conditioning at every spatially-adaptive normalization layer. In doing so, intermediate feature maps are conditioned on both the semantic labels and the noise (see Fig. 3). With such a design, the generator produces diverse, noise-dependent images. As the 3D noise is channel- and pixel-wise sensitive, at test time one can sample the noise globally, per-channel, and locally, per-segment or per-pixel, for controlled synthesis of the whole scene or of specific semantic objects. For example, when generating a scene of a bedroom, one can re-sample the noise locally and change the appearance of the bed alone (see Fig. 2). Note that for simplicity during training we sample the 3D noise tensor globally, i.e. per-channel, replicating each channel value spatially along the height and width of the tensor. We analyse alternative ways of sampling 3D noise during training in App. A.7. Using image styles via an encoder, as in SPADE, is also possible in our setting, by replacing the noise with encoder features. Lastly, to further reduce the complexity, we remove the first residual block in the generator, reducing the number of parameters from 96M to 72M (see App. C.2) without a noticeable performance loss (see Table 3)." }
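The noise mechanism of Sec. 3.3 can be summarized in a few lines. The following is a sketch under our own naming, not the official implementation, of how the noise tensor is built, broadcast, and concatenated with the label map; the optional `region_mask` illustrates the local re-sampling used at test time:

```python
import torch

def make_3d_noise(label_map, z_dim=64, region_mask=None, base_z=None):
    """Build the 3D noise tensor of Sec. 3.3 (a sketch).

    label_map: (B, N, H, W) one-hot semantic layout.
    Training default: one z per image, replicated spatially.
    """
    B, _, H, W = label_map.shape
    if base_z is None:
        base_z = torch.randn(B, z_dim)                       # global sample
    noise = base_z.view(B, z_dim, 1, 1).expand(B, z_dim, H, W).clone()
    if region_mask is not None:                              # (B, 1, H, W) in {0, 1}
        local_z = torch.randn(B, z_dim, 1, 1).expand_as(noise)
        noise = torch.where(region_mask.bool(), local_z, noise)
    # Channel-wise concatenation with the label map; this tensor is the
    # generator input and the conditioning at every SPADE layer.
    return torch.cat([noise, label_map], dim=1)              # (B, 64+N, H, W)
```

Re-sampling `base_z` changes the whole scene, while passing a `region_mask` that covers, e.g., the pixels of a single semantic class changes only that object, as in Fig. 2.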
, { "heading": "4 EXPERIMENTS", "text": "We conduct experiments on three challenging datasets: ADE20K (Zhou et al., 2017), COCO-stuff (Caesar et al., 2018) and Cityscapes (Cordts et al., 2016). Following Qi et al. (2018), we also evaluate OASIS on ADE20K-outdoors, a subset of ADE20K containing outdoor scenes. We follow the experimental setting of Park et al. (2019). We did not use the GAN feature matching loss for OASIS, as we did not observe any improvement with it (see App. A.5), and used the VGG loss only for ablations, with $\lambda_{vgg} = 10$. We did not experience any training instabilities and, thus, did not employ any extra stabilization techniques. All our models use an exponential moving average (EMA) of the generator weights with 0.9999 decay. For further training details refer to App. C.3.\nFollowing prior work (Isola et al., 2017; Wang et al., 2018; Park et al., 2019; Liu et al., 2019), we evaluate models quantitatively on the validation set using the Fréchet Inception Distance (FID) (Heusel et al., 2017) and mean Intersection-over-Union (mIoU). FID is known to be sensitive to both quality and diversity and has been shown to be well aligned with human judgement (Heusel et al., 2017). We show an additional evaluation of quality and diversity with "improved precision and recall" in App. A.9. Mean IoU is used to assess the alignment of the generated image with the ground truth label map, computed via a pre-trained semantic segmentation network. We use UperNet101 (Xiao et al., 2018) for ADE20K, multi-scale DRN-D-105 (Yu et al., 2017) for Cityscapes, and DeepLabV2 (Chen et al., 2015) for COCO-Stuff. We additionally propose to compare color and texture statistics between generated and real images on ADE20K to better understand how the perceptual loss influences performance. For this, we compute color histograms in LAB space and measure the earth mover's distance between the real and generated sets (Rubner et al., 2000). We measure the texture similarity to the real data as the $\chi^2$-distance between Local Binary Patterns histograms (Ojala et al., 1996). As different classes have different color and texture distributions, we aggregate histogram distances separately per class and then take the mean. Lower values for the texture and color distances indicate a closer similarity to real data." }
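The color and texture statistics described above can be computed with standard libraries. The sketch below uses scikit-image and SciPy; the binning choices and the omission of the per-class aggregation are simplifications of this illustration, not the paper's exact protocol:

```python
import numpy as np
from scipy.stats import wasserstein_distance
from skimage.color import rgb2lab
from skimage.feature import local_binary_pattern

def color_distance(real_imgs, fake_imgs, bins=100):
    """Earth mover's distance between LAB color histograms (L channel shown)."""
    def lab_hist(imgs):
        lab = np.concatenate([rgb2lab(im).reshape(-1, 3) for im in imgs])
        h, _ = np.histogram(lab[:, 0], bins=bins, range=(0, 100), density=True)
        return h
    centers = np.arange(bins)
    return wasserstein_distance(centers, centers,
                                lab_hist(real_imgs), lab_hist(fake_imgs))

def texture_distance(real_gray, fake_gray, P=8, R=1.0):
    """Chi-squared distance between Local Binary Pattern histograms."""
    def lbp_hist(imgs):
        codes = np.concatenate(
            [local_binary_pattern(im, P, R, method="uniform").ravel() for im in imgs])
        h, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return h
    h1, h2 = lbp_hist(real_gray), lbp_hist(fake_gray)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-8))
```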
, { "heading": "4.1 MAIN RESULTS", "text": "We use SPADE as our baseline, using the authors' implementation (github.com/NVlabs/SPADE). For a fair comparison, we train this model without the feature matching loss and using EMA (Yaz et al., 2018) at test phase, which we further refer to as SPADE+. We found that the feature matching loss has a negligible impact (see App. A.5), while EMA significantly increases the performance for all metrics (see Table 1).\nTable 2: Multi-modal synthesis evaluation on ADE20K. Bold and red denote the best and the worst performance, respectively.\nMethod | Multi-mod. | VGG | MS-SSIM↓ | LPIPS↑ | FID↓ | mIoU↑\nSPADE+ | Encoder | ✓ | 0.85 | 0.16 | 33.4 | 40.2\nSPADE+ | 3D noise | ✗ | 0.35 | 0.50 | 58.4 | 18.7\nSPADE+ | 3D noise | ✓ | 0.53 | 0.36 | 34.4 | 36.2\nOASIS | 3D noise | ✗ | 0.65 | 0.35 | 28.3 | 48.8\nOASIS | 3D noise | ✓ | 0.88 | 0.15 | 31.6 | 50.8\nFigure 6: Histogram distances to real data (color and texture histogram distances for OASIS and SPADE+, each with and without the VGG loss).\nOASIS outperforms the current state of the art on all datasets with an average improvement of 6 FID and 5 mIoU points (Table 1). Importantly, OASIS achieved the improvement via adversarial supervision alone. On the contrary, SPADE+ does not produce images of high visual quality without the perceptual loss, and struggles to learn the color and texture distribution of real images (Fig. 6). A strong discriminator is the key factor for good performance: without a rich training signal from the discriminator, the SPADE+ generator has to learn through minimizing the VGG loss. With the stronger OASIS discriminator, the perceptual loss does not overtake the generator supervision (see App. A.2), allowing the model to produce images with a color and texture distribution closer to the real data.\nFig. 5 shows a qualitative comparison of our results to previous models. Our approach noticeably improves image quality, synthesizing finer textures and more natural colors. With the powerful feedback from the discriminator, OASIS is able to learn the appearance of small or rarely occurring semantic classes (which is reflected in the per-class IoU scores presented in App. A.3), producing plausible results even for complex scenes with rare classes and reducing unnatural artifacts.\nMulti-modal image synthesis. In contrast to previous work, OASIS can produce diverse images by directly re-sampling the input 3D noise. As the 3D noise modulates features directly at every layer of the generator at different scales, matching their resolution, it affects both global and local characteristics of the image. Thus, the noise can be sampled globally, varying the whole image, or locally, changing the selected object while preserving the rest of the scene (see Fig. 2).\nTo measure the variation in the multi-modal generation, we evaluate MS-SSIM (Wang et al., 2003) and LPIPS (Zhang et al., 2018b) between images generated from the same label map. We generate 20 images and compute the mean pairwise scores, and then average over all label maps. The lower the MS-SSIM and the higher the LPIPS scores, the more diverse the generated images are. To assess the effect of the perceptual loss and the noise sampling on diversity, we train SPADE+ with 3D noise or the image encoder, and with or without the perceptual loss. Table 2 shows that OASIS, without the perceptual loss, improves over SPADE+ with the image encoder, both in terms of image diversity (MS-SSIM, LPIPS) and quality (mean FID, mIoU across 20 realizations). Using 3D noise further increases diversity for SPADE+. However, a strong quality-diversity tradeoff exists for SPADE+: 3D noise improves diversity at the cost of quality, and the perceptual loss improves quality at the cost of diversity. For OASIS, the VGG loss also reduces diversity but does not noticeably affect quality. Note that in our experiments LabelMix does not notably affect diversity (see App. A.1)." }
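The mean pairwise diversity score used in Table 2 can be sketched as follows, here for LPIPS via the publicly available lpips package (Zhang et al., 2018b); the MS-SSIM score can be averaged analogously with an off-the-shelf implementation:

```python
import itertools
import torch
import lpips  # pip install lpips

@torch.no_grad()
def mean_pairwise_lpips(images, net="alex"):
    """Mean pairwise LPIPS over images generated from one label map.

    images: list of (1, 3, H, W) tensors in [-1, 1]. Averaging this value
    over all label maps gives the diversity score reported in Table 2.
    """
    metric = lpips.LPIPS(net=net)
    dists = [metric(a, b).item() for a, b in itertools.combinations(images, 2)]
    return sum(dists) / len(dists)
```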
, { "heading": "4.2 ABLATIONS", "text": "We conduct ablations on ADE20K to evaluate our proposed changes. The main ablation shows the impact of our new discriminator, lighter generator, LabelMix and 3D noise. Further ablations are concerned with architecture changes and the label map encodings in the discriminator, where for a fair comparison we use no 3D noise and no LabelMix.\nMain ablation. Table 3 shows that SPADE+ scores low on the image quality metrics without the perceptual loss. Replacing the SPADE+ discriminator with the OASIS discriminator, while keeping the generator fixed, improves FID and mIoU by more than 30 points. Changing the SPADE+ generator to the lighter OASIS generator leads to a negligible degradation of 0.3 in FID and 0.5 in mIoU. With LabelMix, FID improves further by ∼1 point (more ablations on LabelMix in App. A.4). Adding 3D noise improves FID but degrades mIoU, as diversity complicates the task of the pre-trained semantic segmentation network used to compute the score. For OASIS the perceptual loss deteriorates FID by more than 2 points, but improves mIoU. Overall, without the perceptual loss the new discriminator is the key to the performance boost over SPADE+.\nAblation on the discriminator architecture. We train the OASIS generator with three alternative discriminators: the original multi-scale PatchGAN consisting of two networks, a single-scale PatchGAN, and a ResNet-based discriminator corresponding to the encoder of the U-Net shaped OASIS discriminator. Table 4 shows that the alternative discriminators only perform well with perceptual supervision, while the OASIS discriminator achieves superior performance independent of it. The single-scale discriminators even collapse without the perceptual loss (highlighted in red in Table 4).\nAblation on the label map encoding. We study four different label map encodings: input concatenation, as in SPADE; projection conditioned on the label map (Miyato & Koyama, 2018); employing label maps as ground truth for the N+1 segmentation loss; or for the class-balanced N+1 loss (see Sec. 3.2). As shown in Table 5, input concatenation is not sufficient without additional perceptual loss supervision, leading to training collapse. Without the perceptual loss, the N+1 loss outperforms the input concatenation and the projection in both the FID and mIoU metrics. The class balancing noticeably improves mIoU due to better supervision for rarely occurring semantic classes. More ablations can be found in App. A." }, { "heading": "5 CONCLUSION", "text": "In this work we propose OASIS, a semantic image synthesis model that only relies on adversarial supervision to achieve high fidelity image synthesis. In contrast to previous work, our model eliminates the need for a perceptual loss, which often imposes extra constraints on image quality and diversity. This is achieved via detailed spatially- and semantically-aware supervision from our novel segmentation-based discriminator, which uses semantic label maps as ground truth for training. With this powerful discriminator, OASIS can easily generate diverse multi-modal outputs by re-sampling the 3D noise, both globally and locally, allowing us to change the appearance of the whole scene and of individual objects. OASIS significantly improves over the state of the art in terms of image quality and diversity, while being simpler and more lightweight than previous methods." }, { "heading": "ACKNOWLEDGEMENT", "text": "Jürgen Gall has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2070 - 390732324." }, { "heading": "APPENDIX", "text": "This supplementary material to the main paper is structured as follows:\nA: Additional quantitative results.\nA.1 Main ablation study on two datasets.\nA.2 The influence of the perceptual loss on training dynamics.\nA.3 Per-class IoU scores across different datasets.\nA.4 Comparing LabelMix and CutMix for consistency regularization.\nA.5 Ablation on the feature matching loss.\nA.6 Ablation on using multiple OASIS discriminators.\nA.7 Ablation on noise sampling strategies during training.\nA.8 Additional experiments on COCO-stuff.\nA.9 Additional image quality metrics.\nB: Additional qualitative results.\nB.1 Visual comparison of OASIS to other works.\nB.2 Multi-modal synthesis results for different label maps.\nB.3 Interpolations between multi-modal images in latent space.\nB.4 Application to unlabelled data.\nB.5 Additional visual LabelMix examples.\nC: A detailed description of the OASIS architecture and its training details.\nC.1 Discriminator architecture.\nC.2 Generator architecture.\nC.3 Learning objective and training details."
}, { "heading": "A QUANTITATIVE RESULTS", "text": "" }, { "heading": "A.1 SUMMARIZED MAIN ABLATION OVER TWO DATASETS", "text": "Table A: Summarized ablation on two datasets. Bold denotes the best performance. Red denotes the worst performance among experiments with 3D noise. Green denotes the major performance gains that are caused by the proposed OASIS discriminator and LabelMix.\nMethod Cityscapes ADE20KFID↓ mIoU↑ MS-SSIM↓ FID↓ mIoU↑ MS-SSIM↓ SPADE+ 61.4 47.6 1.0 60.7 21.0 1.0\n+ OASIS D, G 54.1 67.6 1.0 29.3 51.6 1.0 + 3D noise 51.5 66.3 0.62 28.9 47.3 0.63\n+ LabelMix 47.7 69.3 0.64 28.3 48.8 0.65 + VGG 46.1 72.0 0.84 31.6 50.8 0.88\nIn Table A we present a summarized version of our ablations for the ADE20K and Cityscapes dataset. The following observations can be made:\n(1) Looking at the 2nd row of Table A, we see that the main performance gain comes from the discriminator design (major) (OASIS D,G). The OASIS generator is a lighter version of the SPADE generator, which does not result in a performance improvement (Table 3), but has significantly less parameters. A second source of improvement is LabelMix.\n(2) The mIoU can drop when 3D noise is added, as diversity complicates the task of the pre-trained semantic segmentation network that is used to compute the mIoU score. Note that the purpose of noise is not to improve the image quality (FID) but to improve diversity (MS-SSIM).\n(3) The perceptual loss can hurt performance and diversity by biasing the generator towards ImageNet, as in this case the target distribution is more difficult to recreate fully. By punishing diversity, the perceptual loss encourages generating images with more standard semantic features This facilitates the task of external pretrained segmenters, and consequently helps to raise the mIoU metric." }, { "heading": "A.2 THE INFLUENCE OF VGG ON TRAINING DYNAMICS", "text": "Table 1 and Figure 1 illustrate that performance of SPADE+ strongly depends on the perceptual loss. In contrast, OASIS achieves high quality without this loss (Table 1). We find the explanation in the fact, that the SPADE+ Patch-GAN discriminator does not provide a strong training signal for the generator. At the absence of strong supervision from the discriminator, the generator resorts to learning mostly from the VGG loss. The loss curves in Fig. A support this finding: throughout the training the SPADE+ model focuses on minimizing the VGG loss, keeping the adversarial generator loss more or less constant. In contrast, OASIS significantly improves adversarial generator loss during training, learning to fool the segmentation-based OASIS discriminator. That indicates a better adversarial balance, when the generator learns semantically meaningful features that the segmenter judges as real. The difference in scales of G loss for models comes from different objectives, since SPADE+ optimizes binary cross entropy, and OASIS minimizes multi-class cross entropy withN+1 classes.\n0 50 100 150 200 Training epochs\n9.4\n9.6\n9.8\n10.0\n10.2\n10.4\nVG G\nlo ss\nSPADE+ OASIS\n0 50 100 150 200 Training epochs\n0.5\n1.0\n1.5\n2.0\n2.5\n3.0\n3.5\nG ad\nve rs ar ia l l os s\nSPADE+ OASIS\nFigure A: VGG and adversarial G losses for SPADE and OASIS, trained with the perceptual loss" }, { "heading": "A.3 PER-CLASS IOU SCORES", "text": "As seen in Table 1 in the main paper, OASIS significantly outperforms previous approaches in mIoU. We found that the improvement comes mainly from the better IoU scores achieved for lessrepresented semantic classes. 
, { "heading": "A.3 PER-CLASS IOU SCORES", "text": "As seen in Table 1 in the main paper, OASIS significantly outperforms previous approaches in mIoU. We found that the improvement comes mainly from the better IoU scores achieved for less-represented semantic classes. To illustrate the gain, we report per-class IoU scores on ADE20k, COCO-Stuff and Cityscapes in Tables B, C and D. For visualization purposes, we sorted the semantic classes of all datasets by their pixel-wise frequency in the training images.\nTable B: Per-class IoU scores on ADE20k. Bold denotes the best performance.\nClasses IDs | Occupied area | SPADE+ (with VGG) | OASIS without per-class balancing (without VGG) | OASIS (without VGG)\n0 - 29 | 86.4% | 63.7 | 69.1 | 68.8\n30 - 59 | 7.2% | 47.4 | 52.4 | 56.6\n60 - 89 | 3.5% | 45.3 | 47.0 | 51.5\n90 - 119 | 1.8% | 29.3 | 36.2 | 41.5\n120 - 149 | 1.0% | 26.2 | 31.2 | 39.7\n0 - 149 (all classes) | 100% | 42.4 | 47.2 | 51.6\nTable C: Per-class IoU scores on COCO-Stuff. Bold denotes the best performance.\nClasses IDs | Area | SPADE+ | OASIS\n0 - 35 | 69.3% | 51.1 | 59.0\n36 - 69 | 15.9% | 43.9 | 50.3\n70 - 103 | 8.7% | 40.5 | 40.9\n104 - 137 | 4.5% | 35.9 | 36.6\n138 - 171 | 1.4% | 22.1 | 40.6\n0 - 171 (all classes) | 100% | 38.8 | 45.5\nTable D: Per-class IoU scores on Cityscapes. Bold denotes the best performance.\nClasses IDs | Area | SPADE+ | OASIS\n0 - 2 | 75.6% | 91.6 | 89.6\n3 - 6 | 18.3% | 75.7 | 74.9\n7 - 10 | 3.9% | 60.0 | 66.9\n11 - 14 | 1.4% | 60.3 | 66.0\n15 - 18 | 0.6% | 38.1 | 55.1\n0 - 18 (all classes) | 100% | 63.8 | 69.3\nTaking ADE20k as an example, Table B highlights that the relative gain in mIoU is especially high for the group of less-represented semantic classes, which cover less than 3% of all the images. For these rare classes the relative gain over the baseline exceeds 40%. We found that the gain comes mainly from the per-class balancing applied in the OASIS loss function. To illustrate this effect, we train OASIS without the proposed balancing. Table B reveals that this baseline reaches a slightly higher score for frequent classes, but shows worse performance for the rarely occurring ones. This is expected, as the balancing down-weights frequently occurring objects while up-weighting infrequent classes. We thus conclude that the balancing draws the attention of the discriminator to rarely occurring semantic classes, which results in a much higher quality of the generation." }, { "heading": "A.4 ABLATION ON LABELMIX", "text": "Consistency regularization for the segmentation output of the discriminator requires a method of generating binary masks. Therefore, we compare the effectiveness of CutMix (Yun et al., 2019) and our proposed LabelMix. Both methods produce binary masks, but only LabelMix respects the boundaries between semantic classes in the label map. Table E compares the FID and mIoU scores of OASIS trained with both methods on the Cityscapes dataset.\nTable E: Ablation study on the impact of LabelMix and CutMix for consistency regularization (CR) in OASIS on Cityscapes. Bold denotes the best performance.\nTransformation | FID↓ | mIoU↑\nNo CR | 51.5 | 66.3\nCutMix | 52.1 | 67.4\nLabelMix | 47.7 | 69.3\nIt can be seen that LabelMix improves both FID (51.5 vs. 47.7) and mIoU (66.3 vs. 69.3), in comparison to OASIS without consistency regularization. CutMix-based consistency regularization only improves the mIoU (66.3 vs. 67.4), but not as much as LabelMix (69.3). We suspect that, since the images are already partitioned through the label map, an additional partition through CutMix results in a dense patchwork of areas that differ by semantic class and real-fake class identity. This may introduce additional label noise during training for the discriminator. To avoid such inconsistency between semantic classes and real-fake identity, the mask of LabelMix is generated according to the label map, providing natural borders between semantic regions, so that the real and fake objects are placed side-by-side without interfering with each other. Under LabelMix regularization, the generator is encouraged to respect the natural semantic class boundaries, improving pixel-level realism while also considering the class segment shapes." }
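A minimal sketch of LabelMix mask generation, under our reading of "the mask is generated according to the label map": each semantic class is randomly assigned as a whole to the real or the fake image, so the mask borders coincide with class boundaries. The authors' exact procedure may differ:

```python
import torch

def labelmix(real, fake, label_map):
    """Mix real/fake images along semantic boundaries (a sketch).

    label_map: (B, N, H, W) one-hot. Each class is assigned, per sample
    and uniformly at random, either to the real or to the fake side.
    """
    B, N, _, _ = label_map.shape
    # Random real/fake assignment per class and per sample: (B, N, 1, 1).
    class_side = torch.randint(0, 2, (B, N, 1, 1), device=real.device).to(real.dtype)
    M = (label_map * class_side).sum(dim=1, keepdim=True)  # (B, 1, H, W)
    mixed = M * real + (1.0 - M) * fake
    return mixed, M
```

The consistency term of Eq. 3 then compares the discriminator's logits on `mixed` with the same mixing, via `M`, of its logits on `real` and `fake`.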
, { "heading": "A.5 ABLATION ON FEATURE MATCHING LOSS", "text": "We measure the effect of the feature matching loss (FM) in the absence and presence of the perceptual loss (VGG). Tables F and G present the results for OASIS on Cityscapes and SPADE+ on ADE20K. For both SPADE+ and OASIS we observe that the feature matching loss only notably affects the FID when no perceptual loss is used.\nTable F: OASIS on Cityscapes. Bold denotes the best performance.\nVGG | FM | FID↓ | mIoU↑\n✗ | ✗ | 47.7 | 69.3\n✗ | ✓ | 48.5 | 69.1\n✓ | ✗ | 46.1 | 72.0\n✓ | ✓ | 46.5 | 70.9\nTable G: SPADE+ on ADE20K. Bold denotes the best performance.\nVGG | FM | FID↓ | mIoU↑\n✗ | ✗ | 60.7 | 21.0\n✗ | ✓ | 49.7 | 32.5\n✓ | ✗ | 32.9 | 42.5\n✓ | ✓ | 32.6 | 42.9\nIn the case where no perceptual loss is used, we observe that the feature matching loss prolongs the time until SPADE+ collapses, resulting in a better FID score (49.7 vs. 60.7). Consequently, the mIoU also improves. Hence, the role of the FM loss in the training of SPADE+ is to stabilize the training through additional self-supervision. This is in line with the general observation that SPADE and other semantic image synthesis models require the help of additional losses because the adversarial supervision through the discriminator is not strong enough. In comparison, we did not observe any training collapses in OASIS, despite not using any extra losses. For OASIS, the feature matching loss results in a worse FID (by 0.8 points) in the absence of the perceptual loss. We also observe a degradation of 1.1 mIoU points through the FM loss in the case where the perceptual supervision is present. This indicates that the FM loss negatively affects the strong supervision from the semantic segmentation adversarial loss of OASIS." }, { "heading": "A.6 ABLATION ON USING MORE THAN ONE OASIS DISCRIMINATOR", "text": "A major difference between SPADE and OASIS is that OASIS employs only one discriminator, while SPADE uses two PatchGAN discriminators at different scales. Naturally, the question arises how OASIS performs with two discriminators at different scales, as in SPADE. For this, Table H presents the FID and mIoU performance of OASIS with two discriminators operating at scales 1 and 0.5 on Cityscapes.\nTable H: Comparison of using 1 and 2 discriminators at different scales for OASIS on Cityscapes. Bold denotes the best performance.\n# of OASIS D | FID↓ | mIoU↑\n1 discriminator | 47.7 | 69.3\n2 discriminators at different scales (1 and 0.5) | 48.7 | 68.8\nOne can see that an additional discriminator at scale 0.5 does not improve performance, but slightly worsens it. The reason that no performance gain is visible is that the OASIS discriminator already encodes multi-scale information through its U-Net structure: skip connections between encoder, decoder and individual blocks integrate information at all scales. In contrast, SPADE requires two discriminators to capture information at different scales." }, { "heading": "A.7 ABLATION ON NOISE SAMPLING STRATEGIES DURING TRAINING", "text": "Our 3D noise can contain the same sampled vector for each pixel, or different vectors for different regions. 
This allows for different noise sampling schemes during training. Table I shows the effect of using different methods of sampling 3D noise for different locations during training: image-level sampling creates one global 1D noise vector and replicates it along the height and width of the label map to create a 3D noise tensor. Region-level sampling relies on generating one 1D noise vector per label, and stacking them in 3D to match the height and width of the label map. Pixel-level sampling creates different noise for every spatial position, with no replication taking place. Mix switches between image-level and region-level sampling via a coin flip decision at every training step. With no obvious winner in performance, we choose the simplest scheme (image-level) for our experiments.
By choosing image-level sampling for training, we thus generate a single 1D latent noise vector of size 64, broadcast it to 64×H×W and concatenate it with the label map (N×H×W). This new composite tensor is used as input to the first generator layer and at all SPADE-norm layers. The noise is not ignored, for the following reasons:
(1) The noise modulates the activations directly at every layer, so it is very hard to ignore. Here, it is important to emphasize how the noise is used: for SPADE it was observed that label maps are paid more attention to if they are used for location-sensitive conditional batch normalization (CBN). Analogously, we observe that the noise is also paid more attention to when it is injected via CBN. Like label maps, which are 3D tensors of stacked one-hot vectors, we stack noise vectors into a 3D tensor of the same dimensions. Thus, in the same way that SPADE is spatially sensitive to labels, OASIS is spatially sensitive to both labels and noise.
(2) The 3D broadcasting strategy provides a spatially uniform signal, making it easy to embed semantic meaning into the latent code (see the interpolations in Figs. I and J). As noise modulates features at different scales in the generator, matching their resolution, it affects both global and local characteristics. This is why a generator trained with image-level noise can perform region-level manipulation at inference (Figs. F and H). However, more sophisticated spatial noise sampling schemes can be explored in the future.
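A minimal sketch of the two main sampling schemes described above; the tensor layouts follow the text (64-channel noise concatenated with an N-channel one-hot label map), but the function names and the (H, W) conventions are illustrative assumptions.

```python
import torch

def image_level_noise(label_onehot: torch.Tensor, z_dim: int = 64) -> torch.Tensor:
    # label_onehot: (B, N, H, W) one-hot label maps.
    # One z per image, replicated over all spatial positions, then
    # concatenated with the label map into a (B, 64+N, H, W) tensor.
    b, _, h, w = label_onehot.shape
    z = torch.randn(b, z_dim, 1, 1).expand(b, z_dim, h, w)
    return torch.cat([z, label_onehot], dim=1)

def region_level_noise(label_map: torch.Tensor, z_dim: int = 64) -> torch.Tensor:
    # label_map: (H, W) integer class ids; one z per semantic class,
    # stacked spatially so each region carries its own latent code.
    h, w = label_map.shape
    z3d = torch.empty(z_dim, h, w)
    for c in torch.unique(label_map):
        z3d[:, label_map == c] = torch.randn(z_dim, 1)
    return z3d
```

At inference, the same broadcasting is what makes local re-sampling possible: replacing the noise vector only inside a restricted spatial area changes just that region of the output.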
" }, { "heading": "A.8 ADDITIONAL EXPERIMENTS ON COCO-STUFF", "text": "" }, { "heading": "A.9 ADDITIONAL EVALUATION METRICS", "text": "Currently, the FID score is the most widely adopted metric for quantifying the image quality of GAN models. However, it is often argued that the FID score does not adequately disentangle synthesis quality and diversity (Kynkäänniemi et al., 2019). Recently, a series of metrics have been proposed to address this issue by measuring scores related to the concepts of precision and recall (Ravuri & Vinyals, 2019; Shmelkov et al., 2018; Sajjadi et al., 2018; Kynkäänniemi et al., 2019). Here, we take a closer look at the “improved precision and recall” score proposed by Kynkäänniemi et al. (2019), where precision is the probability that a generated image falls into the estimated support of the real image distribution, and recall is the probability that a real image falls into the estimated support of the generator distribution. Precision and recall can thus be interpreted as sample quality and diversity. Table K presents a comparison of precision (P) and recall (R) between SPADE+ and OASIS. It can be seen that OASIS outperforms SPADE+ both in terms of image quality and variety.
Table K: Comparison of the precision and recall metric between SPADE+ and OASIS. Bold denotes the best performance.
Model | ADE20K P↑ R↑ | ADE-outd. P↑ R↑ | Cityscapes P↑ R↑ | COCO-Stuff P↑ R↑
SPADE+ | 0.71 0.52 | 0.62 0.51 | 0.54 0.34 | 0.63 0.56
OASIS | 0.77 0.57 | 0.77 0.56 | 0.58 0.55 | 0.67 0.59
" }, { "heading": "B QUALITATIVE RESULTS", "text": "" }, { "heading": "B.1 COMPARISON TO OTHER METHODS", "text": "In this section we present a visual comparison between OASIS and other semantic image synthesis methods. Firstly, we show images generated by SPADE (Park et al., 2019), CC-FPSE (Liu et al., 2019) and OASIS on ADE20k, COCO-Stuff, and Cityscapes (in Figures B, C, and D, respectively). A further comparison of SPADE, SPADE+ and OASIS is presented in Figure E. We observed that OASIS often produces more visually plausible images than the previous methods. Our method commonly produces finer textures, especially for complex and large semantic objects, e.g. building facades, mountains, and water.
We also note that OASIS usually generates brighter and more diverse colors compared to other methods. As we showed in Section 4 in the main paper, the diversity in colors partially comes from the fact that the feature space of the OASIS generator is not constrained by the VGG loss. We observed that images generated by SPADE and CC-FPSE typically have similar color tones, while OASIS frequently generates objects with completely different ones. This also forms one of the failure modes of our approach, when the colors of objects fall out of distribution and seem unnatural (see Figure G)." }, { "heading": "B.2 MULTI-MODAL IMAGE SYNTHESIS", "text": "Multi-modal image synthesis for a given label map is easy for OASIS: we simply re-sample noise as in a conventional unconditional GAN model. Since OASIS employs a 3D noise tensor (64 channels × height × width), the noise can be re-sampled entirely (“globally”) or only for specific regions in the 2D image plane (“locally”). For our visualizations, we replicate a single 64-dimensional noise vector along the spatial dimensions for global sampling. For local sampling, we re-sample a new noise vector and use it to replace the global noise vector at every spatial position within a restricted area of interest. The results are shown in Figure F. The generated images are diverse and of high quality. We observe different degrees of variety for different object classes. For example, buildings change drastically in appearance and often change their spatial orientation with respect to the road. On the other hand, many common objects (like tables) vary in color, texture, and illumination, but do not change shape, as they are restricted by the fine details of the region that is outlined for them in the label map.
Local noise re-sampling does not have to be restricted to semantic class areas only: in Figure H we sample a different noise vector for the left and right half of the image, as well as for arbitrarily shaped regions. In effect, the two areas can differ substantially. However, often a bridging element is found between two areas, such as clouds extending partly from one region to the other region of the image." }, { "heading": "B.3 LATENT SPACE INTERPOLATIONS", "text": "In Figure I we present images that are the results of linear interpolations in the latent space, using an OASIS model trained on the ADE20K dataset. To generate the images, we sample two noise vectors z ∈ R^64 and interpolate them with three intermediate points.
The images are synthesized for these five different noise inputs while the label map is held fixed. Note that in Figure I we only vary the noise globally, not locally (see Section 3.3 in the main paper). In contrast, Figure J shows local interpolations. For this, we only re-sample the 3D noise in the area corresponding to a single semantic class. The effect is that only the appearance of the selected semantic class varies, while the rest of the image remains fixed. It can be observed that strong changes in a local area can slightly affect the surroundings if the local area is very large. As such, the clouds are slightly different in the first and last panel of the mountain row and tree row in Figure J.
We see from Figures I and J that the trajectories in the latent space are smooth and semantically meaningful. For example, we observe transitions from winter to summer, day to night, green trees to leafless trees, shiny parquet to matt carpet, as well as smooth transitions between buildings with different architectural styles." }, { "heading": "B.4 APPLICATION TO UNLABELLED DATA", "text": "OASIS has the unique property that its discriminator is trained to be an image segmenter. We observed that it shows good performance on this task, reaching an mIoU of 40.0 on the ADE20K validation set. For comparison, the current state of the art on ADE20K is an mIoU of 46.91, achieved by ResNeSt (Zhang et al., 2020). Such good segmentation performance allows OASIS to be applied to unlabelled images: given an unseen image without a ground truth annotation, OASIS can predict a label map via the discriminator. Subsequently feeding this prediction to the generator allows synthesizing a scene with the same layout but a different style. This property is shown in Fig. K. Due to the good segmentation performance, the recreated scenes closely follow the ground truth label map of the original image. The high sensitivity of OASIS to the 3D noise enforces good variability, so the recreations are different from each other. We believe that creating multiple versions of one image while retaining the layout can be useful for data augmentation." }, { "heading": "B.5 LABELMIX", "text": "Figure L shows additional visual examples of LabelMix regularization, as described in Section 3.2 in the main paper. It can be seen that the discriminator prediction on the mixed images often differs from the mix of individual predictions on real and fake images. In particular, regions that are classified as real in the latter are classified as fake when the images are mixed. This means that the discriminator takes the global context into account for local predictions and thereby often bases the prediction on arbitrary details that should not affect the real-fake class identity. In return, the consistency regularization helps to minimize the difference between these two predictions.
Figure B: Qualitative comparison of OASIS with other methods on ADE20K (columns: label map, ground truth, SPADE, CC-FPSE, OASIS).
Figure C: Qualitative comparison of OASIS with other methods on COCO-Stuff (columns: label map, ground truth, SPADE, CC-FPSE, OASIS).
Figure D: Qualitative comparison of OASIS with other methods on Cityscapes (columns: label map, ground truth, Pix2pixHD (Wang et al., 2018), SPADE (Park et al., 2019), CC-FPSE (Liu et al., 2019), OASIS).
Figure E: Qualitative comparison of OASIS with SPADE and SPADE+ using ADE20K (rows 1-3), COCO-Stuff (rows 4-6) and Cityscapes (rows 7-9) (columns: label map, ground truth, SPADE, SPADE+, OASIS).
Figure F: Images generated by OASIS on ADE20K at 256 × 256 resolution using different 3D noise inputs. For each label map the noise is re-sampled globally (first row) or locally in the areas marked in red (second row). Note that the images are not stitched together but generated in single forward passes.
Figure G: Failure mode of OASIS. Our model generates diverse images, sometimes producing objects with outlier colors and textures. We compare to Park et al. (2019) and Liu et al. (2019).
Figure H: Images generated by OASIS in one forward pass (no collage), with different noise vectors for different image regions.
Figure J: Latent space interpolations in local regions of the 3D noise, corresponding to a single semantic class (rows: mountain, sky, tree, water, building, floor, wall). The noise is only changed within the restricted area. Trained on the ADE20K dataset at resolution 256 × 256.
Figure K: After training, the OASIS discriminator can be used to segment images. Columns 1-3 show the ground truth label map, real image, and segmentation of the discriminator. Using the predicted label map, the generator can produce multiple versions of the original image by re-sampling noise (Recreations 1-3). Note that this alleviates the need for ground truth maps during inference.
Figure L: Visual examples of LabelMix regularization. Real x and fake x̂ images are mixed using a binary mask M, sampled based on the label map, resulting in LabelMix(x, x̂). The consistency regularization then minimizes the distance between the logits of D(LabelMix(x, x̂)) and LabelMix(D(x), D(x̂)). In this visualization, black corresponds to the fake class in the N+1 segmentation output." }, { "heading": "C ARCHITECTURAL AND TRAINING DETAILS", "text": "The architecture of OASIS builds upon SPADE (Park et al., 2019). In the following, we describe our proposed changes to the discriminator and the generator in detail." }, { "heading": "C.1 DISCRIMINATOR ARCHITECTURE", "text": "The OASIS discriminator follows a U-Net architecture and is built from ResNet blocks, inspired in their design by Brock et al. (2019). The architecture of the OASIS discriminator is outlined in Table L. It has in total 22M learnable parameters and is bigger than the multi-scale PatchGAN discriminator (5.5M) used by SPADE (Park et al., 2019). The increased capacity of the OASIS discriminator allows it to learn a more powerful representation and provide more informative feedback to the generator.
Table L: The OASIS discriminator.
N refers to the number of semantic classes.
Operation | Input | Input Size | Output | Output Size
ResBlock-Down | image | (3,256,256) | down 1 | (128,128,128)
ResBlock-Down | down 1 | (128,128,128) | down 2 | (128,64,64)
ResBlock-Down | down 2 | (128,64,64) | down 3 | (256,32,32)
ResBlock-Down | down 3 | (256,32,32) | down 4 | (256,16,16)
ResBlock-Down | down 4 | (256,16,16) | down 5 | (512,8,8)
ResBlock-Down | down 5 | (512,8,8) | down 6 | (512,4,4)
ResBlock-Up | down 6 | (512,4,4) | up 1 | (512,8,8)
ResBlock-Up | cat(up 1, down 5) | (1024,8,8) | up 2 | (256,16,16)
ResBlock-Up | cat(up 2, down 4) | (512,16,16) | up 3 | (256,32,32)
ResBlock-Up | cat(up 3, down 3) | (512,32,32) | up 4 | (128,64,64)
ResBlock-Up | cat(up 4, down 2) | (256,64,64) | up 5 | (128,128,128)
ResBlock-Up | cat(up 5, down 1) | (256,128,128) | up 6 | (64,256,256)
Conv2D | up 6 | (64,256,256) | out | (N+1,256,256)
" }, { "heading": "C.2 GENERATOR ARCHITECTURE", "text": "The generator architecture is built from SPADE ResNet blocks and optionally includes a concatenation of 3D noise with the label map along the channel dimension. The generator can either be trained directly on the label maps or with 3D noise concatenated to the label maps. The latter option is shown in Table M.
The OASIS generator drops the first residual block used in Park et al. (2019), which decreases the number of learnable parameters from 96M to 72M. The optional 3D noise injection adds 2M parameters. This sampling scheme is five times lighter than the image encoder used by SPADE (10M).
Table M: The OASIS generator. N refers to the number of semantic classes, z is noise sampled from a unit Gaussian, y is the label map, and interp interpolates a given input to the spatial dimensions of the current layer.
Operation | Input | Input Size | Output | Output Size
Concatenate | z 3D (64,256,256), y (N,256,256) | – | z y | (64+N,256,256)
Conv2D | interp(z y) | (64+N,8,8) | x | (1024,8,8)
SPADE-ResBlock | x, interp(z y) (64+N,8,8) | (1024,8,8) | up 1 | (1024,16,16)
SPADE-ResBlock | up 1, interp(z y) (64+N,16,16) | (1024,16,16) | up 2 | (512,32,32)
SPADE-ResBlock | up 2, interp(z y) (64+N,32,32) | (512,32,32) | up 3 | (256,64,64)
SPADE-ResBlock | up 3, interp(z y) (64+N,64,64) | (256,64,64) | up 4 | (128,128,128)
SPADE-ResBlock | up 4, interp(z y) (64+N,128,128) | (128,128,128) | up 5 | (64,256,256)
Conv2D, LeakyReLU, TanH | up 5 | (64,256,256) | x | (3,256,256)
" }, { "heading": "C.3 LEARNING OBJECTIVE AND TRAINING DETAILS", "text": "Learning objective. We train our model with an (N+1)-class cross-entropy as the adversarial loss. Additionally, the discriminator is regularized with the proposed LabelMix consistency regularization. The full OASIS learning objective thus takes the following form:
\mathcal{L}^{\mathrm{OASIS}}_{G} = -\mathbb{E}_{(z,t)}\Big[\sum_{c=1}^{N} \alpha_c \sum_{i,j}^{H\times W} t_{i,j,c}\, \log D(G(z,t))_{i,j,c}\Big],
\mathcal{L}^{\mathrm{OASIS}}_{D} = -\mathbb{E}_{(x,t)}\Big[\sum_{c=1}^{N} \alpha_c \sum_{i,j}^{H\times W} t_{i,j,c}\, \log D(x)_{i,j,c}\Big] - \mathbb{E}_{(z,t)}\Big[\sum_{i,j}^{H\times W} \log D(G(z,t))_{i,j,c=N+1}\Big] + \lambda_{\mathrm{LM}} \big\| D_{\mathrm{logits}}(\mathrm{LabelMix}(x,\hat{x},M)) - \mathrm{LabelMix}(D_{\mathrm{logits}}(x), D_{\mathrm{logits}}(\hat{x}), M) \big\|_2^2,
where x denotes the real image and (z, t) is the noise-label map pair.
Our objective function is different from that of SPADE. Their model uses a hinge adversarial loss and adds the VGG perceptual loss and a feature matching loss to train the generator. For an easier comparison, we provide the objective function of SPADE:
\mathcal{L}^{\mathrm{SPADE}}_{G} = -\mathbb{E}_{(z,t)}[D(t, G(z,t))] + \lambda_{\mathrm{FM}}\, \mathbb{E}_{(z,t,x)} \sum_{i=1}^{T} \|D^{(i)}_k(t,x) - D^{(i)}_k(t, G(z,t))\|_1 + \lambda_{\mathrm{VGG}}\, \mathbb{E}_{(z,t,x)} \sum_{i=1}^{N} \|F^{(i)}(x) - F^{(i)}(G(z,t))\|_1,
\mathcal{L}^{\mathrm{SPADE}}_{D} = -\mathbb{E}_{(t,x)}[\min(0, -1 + D(t,x))] - \mathbb{E}_{(z,t)}[\min(0, -1 - D(t, G(z,t)))],
where F is the pre-trained VGG network.
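A minimal sketch of the (N+1)-class discriminator loss above. The per-class balancing weights α_c are not defined in this appendix; the inverse per-batch pixel frequency used below is an assumption, and the function name is illustrative.

```python
import torch
import torch.nn.functional as F

def oasis_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor,
                 label_onehot: torch.Tensor) -> torch.Tensor:
    # d_real, d_fake: discriminator logits, shape (B, N+1, H, W).
    # label_onehot:   one-hot label maps t, shape (B, N, H, W).
    B, Np1, H, W = d_real.shape
    N = Np1 - 1
    # Per-class balancing weights alpha_c; inverse per-batch pixel
    # frequency (an assumption) up-weights rarely occurring classes.
    freq = label_onehot.sum(dim=(0, 2, 3)).clamp(min=1.0)   # (N,)
    alpha = freq.sum() / (N * freq)
    log_p = F.log_softmax(d_real, dim=1)[:, :N]             # (B, N, H, W)
    loss_real = -(alpha.view(1, N, 1, 1) * label_onehot * log_p).sum(dim=1).mean()
    # Every pixel of a generated image is supervised with the extra "fake" class N+1.
    loss_fake = -F.log_softmax(d_fake, dim=1)[:, N].mean()
    return loss_real + loss_fake
```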
Training details. We follow the experimental setting of Park et al. (2019). The image resolution is set to 256×256 for ADE20K and COCO-Stuff and 256×512 for Cityscapes. The Adam optimizer (Kingma & Ba, 2015) was used with momentum parameters β = (0, 0.999) and constant learning rates of (0.0001, 0.0004) for G and D. We did not apply the GAN feature matching loss, and used the VGG perceptual loss only for ablations, with λ_VGG = 10. The LabelMix coefficient λ_LM was set to 5 for ADE20k and Cityscapes, and to 10 for COCO-Stuff. All our models use an exponential moving average (EMA) of the generator weights with 0.9999 decay (Brock et al., 2019). All experiments were run on 4 Tesla V100 GPUs, with a batch size of 20 for Cityscapes and 32 for ADE20k and COCO-Stuff. The number of training epochs is 200 on ADE20K and Cityscapes, and 100 for the larger COCO-Stuff dataset. On average, a complete forward-backward pass with batch size 32 on ADE20k takes around 0.95ms per training image." } ]
2021
SEMANTIC IMAGE SYNTHESIS
SP:f4fafd66830ad4d90a13395ac5327de33d127a73
[ "This paper addresses the problem of estimating the distribution parameters of features extracted from a set of high dimensional observations, a problem that is common in the physical sciences. To solve this problem, the authors present a deep learning approach that utilises a combination of (i) deep ensemble training, (ii) post hoc model selection, and (iii) importance weighted parameter estimation. First, a deep ensemble is trained to solve a regression task (observation -> feature). During testing, this ensemble is frozen and used to generate feature samples from unseen observations. Using these feature samples, it is possible to estimate the distribution parameters using maximum likelihood estimation. The authors evaluate their method on X-ray polarimetry, and compare it with two other approaches, one of which is also a deep learning approach. On all tasks, the presented method outperforms both baseline approaches." ]
In parametric density estimation, the parameters of a known probability density are typically recovered from measurements by maximizing the log-likelihood. Prior knowledge of measurement uncertainties is not included in this method, potentially producing degraded or even biased parameter estimates. We propose an efficient two-step, general-purpose approach for parametric density estimation using deep ensembles. Feature predictions and their uncertainties are returned by a deep ensemble and then combined in an importance weighted maximum likelihood estimation to recover parameters representing a known density along with their respective errors. To compare the bias-variance tradeoff of different approaches, we define an appropriate figure of merit. We illustrate a number of use cases for our method in the physical sciences and demonstrate state-of-the-art results for X-ray polarimetry that outperform current classical and deep learning methods.
[]
[ { "authors": [ "Christopher Bishop" ], "title": "Mixture Density Networks", "venue": null, "year": 1994 }, { "authors": [ "B. Efron" ], "title": "Bootstrap Methods: Another Look at the Jackknife", "venue": "Annals of Statistics, 7(1):1-26. doi: 10.1214/aos/1176344552", "year": 1979 }, { "authors": [ "Nikos Karampatziakis", "John Langford" ], "title": "Online importance weight aware updates", "venue": "Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence (UAI), Barcelona, Spain. AUAI Press. ISBN 978-0-9749039-7-2", "year": 2011 }, { "authors": [ "George Papamakarios" ], "title": "Neural Density Estimation and Likelihood-free Inference", "venue": "arXiv:1910.13233", "year": 2019 } ]
[ { "heading": null, "text": "In parametric density estimation, the parameters of a known probability density are typically recovered from measurements by maximizing the log-likelihood. Prior knowledge of measurement uncertainties is not included in this method, potentially producing degraded or even biased parameter estimates. We propose an efficient two-step, general-purpose approach for parametric density estimation using deep ensembles. Feature predictions and their uncertainties are returned by a deep ensemble and then combined in an importance weighted maximum likelihood estimation to recover parameters representing a known density along with their respective errors. To compare the bias-variance tradeoff of different approaches, we define an appropriate figure of merit. We illustrate a number of use cases for our method in the physical sciences and demonstrate state-of-the-art results for X-ray polarimetry that outperform current classical and deep learning methods.
1 INTRODUCTION
The majority of state-of-the-art NN applications are single (high-dimensional) input, multiple-output tasks, for instance classifying images (Krizhevsky et al., 2012), scene understanding (Redmon et al., 2015) and voice recognition (Graves et al., 2006). These tasks typically involve one input vector or image and a single output vector of predictions.
In parametric density estimation, there is a known probability density that the data (or latent features of the data) are expected to follow. The goal is to find representative distribution parameters for a given dataset. In simple cases where the likelihood is calculable, maximum likelihood estimation can be used effectively. In cases where latent features of the data follow a known distribution (e.g., heights of people in a dataset of photographs), NNs can potentially be used to directly estimate the distribution parameters. For clarity, we define this direct/end-to-end approach as parametric feature density estimation (PFDE). Such an approach requires employing entire datasets (with potentially thousands to millions of high-dimensional examples) as inputs in order to output a vector of density parameters. Furthermore, to be useful these NNs would need to generalize to arbitrarily sized dataset-inputs.
One example of NNs making sense of large dataset-inputs is found in natural language processing. Here large text corpora, converted to word vectors (Pennington et al., 2014; Devlin et al., 2019), can be input and summarized by single output vectors using recurrent neural networks (RNNs), for instance in sentiment analysis (Can et al., 2018). However, these problems and RNNs themselves contain inductive bias – there is inherent structure in text. Not all information need be given at once, and a concept of memory or attention is sufficient (Vaswani et al., 2017). The same can be said about time domain problems, such as audio processing or voice recognition. Memory is inherently imperfect – for PFDE, one ideally wants to know all elements of the ensemble at once to make the best prediction: sequential inductive bias is undesirable. Ultimately, memory and architectural constraints make training NNs for direct PFDE computationally intractable.
On the other hand, density estimation on data directly (not on its latent features) is computationally tractable.
Density estimation lets us find a complete statistical model of the data generating process. Applying deep learning to density estimation has advanced the field significantly (Papamakarios, 2019). Most of the work so far focuses on density estimation where the density is unknown a priori. This can be achieved with non-parametric methods such as neural density estimation (Papamakarios et al., 2018), or with parametric methods such as mixture density networks (Bishop, 1994). In PFDE, however, we have a known probability density over some features of the whole dataset. The features may be more difficult to predict accurately in some datapoints than others.
Typical parametric density estimation does not make use of data uncertainties when some elements in the dataset are noisier than others. Not including uncertainty information can lead to degraded or even biased parameter estimates. The simplest example of parametric density estimation using uncertainties is a weighted mean. This is the result of a maximum likelihood estimate for a multi-dimensional Gaussian. For density estimation on predicted data features, PFDE, we would like a way to quantify the predictive uncertainty. A general solution is offered by deep ensembles (Lakshminarayanan et al., 2017). While these are not strictly equivalent to a Bayesian approach (although they can be made so using appropriate regularization (Pearce et al., 2018)), they offer practical predictive uncertainties, and have been shown to generalize readily (Fort et al., 2019). Additionally, Ovadia et al. (2019) have shown deep ensembles perform the best across a number of uncertainty metrics, including dataset shift, compared to competing methods such as stochastic variational inference and Monte Carlo methods.
In this work, we propose a NN approach that circumvents large dataset-input training and recurrent architectures to predict known feature density parameters over large input datasets. We use predictive uncertainties on features of individual dataset elements as importance weights in a maximum likelihood estimation. We will show that estimating known density parameters in a two-step approach provides greater interpretability and flexibility. We are able to predict uncertainties on our density parameter estimates using bootstrap methods (Efron, 1979). Our method is widely applicable to a number of applied machine learning fields; §3 showcases a few important examples.
Contributions: Our contributions in this paper are as follows: (1) We introduce a general, flexible method for PFDE using NNs. The method can be applied to any domain requiring PFDE. We illustrate a number of varied domain examples in the physical sciences in §3. (2) In an in-depth evaluation we show that our method outperforms not only classical methods for density estimation, but also standard NN implementations, in an application to X-ray polarimetry. (3) We investigate the bias-variance tradeoff associated with our method and introduce a tuneable hyperparameter to control it. Note: In the following we focus on regression examples (since unbinned density estimation is preferable to binned).
However, a similar method can be applied to prediction examples where softmax class probabilities are used as heteroscedastic aleatoric uncertainty.
2 IMPORTANCE WEIGHTED ESTIMATION WITH DEEP ENSEMBLES
2.1 PROBLEM SETUP AND HIGH-LEVEL SUMMARY
We wish to estimate the feature density parameters of N high dimensional data points \{x\}: f(\{x_n\}_{n=1}^{N}). Here x ∈ R^D can be any high dimensional data (e.g. images, time series). N is arbitrary, although usually large, since otherwise density estimation is inaccurate. For example, consider estimating the mean and variance of human heights from a dataset consisting of photographs of people. A person's height in each photograph is the image feature, and we know this feature approximately follows a Gaussian distribution. We develop a method that can estimate the density parameters (mean and variance) and generalize to any dataset of photographs.
In general, the function f mapping the high dimensional data points to the desired density parameters is unknown, since the high dimensional data is abstracted from its features. Learning f directly is typically infeasible because an entire ensemble of inputs \{x_n\}_{n=1}^{N} must be processed simultaneously to estimate density parameters, and this approach would have to generalize to arbitrary N and density parameter values. We discuss some special cases where this is possible in §1. However, the function g mapping data features y_n to the density parameters is known.
We cast this as a supervised learning problem where we have a dataset D consisting of N data points D = \{x_n, y_n\}_{n=1}^{N_{train}} with labels y ∈ R^K, where x ∈ R^D. We want to estimate the density parameters ψ_1, ψ_2, ..., ψ_k for an unseen test set g(\{y_n\}_{n=1}^{N_{test}}) for arbitrary N_{test}.
The basic recipe that comes to mind is training a single NN to predict output labels \{y_n\}_{n=1}^{N} and then evaluate g directly. This ignores the high variance in single NN predictions (dependent on training/random initialization), that some individual examples may be more informative than others, and that an objective to predict the most accurate output labels may not be the best for predicting good density parameters (high bias may be introduced, for instance).
Our hybrid approach is as follows: (i) train a deep ensemble of M NNs to predict \{y_n, σ_n\}_{n=1}^{N}, where σ_n is the total uncertainty on each prediction y_n; (ii) use the \{σ_n\}_{n=1}^{N} as weights in an importance weighted maximum likelihood estimate. The next section, §2.2, describes procedure (i).
2.2 DEEP ENSEMBLES
Deep ensembles (Lakshminarayanan et al., 2017) return robust and accurate supervised learning predictions and predictive uncertainties, which enable the best density parameter predictions. These use an ensemble of individual NNs (with different random initializations) trained to predict features and their aleatoric uncertainties. Final predictions and their epistemic uncertainties are then recovered by combining the estimates from each of the NNs in the ensemble.
In regression, deep ensembles model heteroscedastic aleatoric uncertainty σ_a by modifying the typical mean-squared error (MSE) objective to a negative log-likelihood (NLL) (Lakshminarayanan et al., 2017),
\mathrm{Loss}(y|x) = \frac{1}{2}\log\sigma_a^2(x) + \frac{1}{2\sigma_a^2(x)}\|y - \hat{y}(x)\|_2^2. \quad (1)
Extensions using more complex distributions like mixture density networks or heavy-tailed distributions may be more applicable to certain problems with prior knowledge about the error distribution. In practice, the log-likelihood of any exponential family could be used; we find this simple Gaussian approach to be sufficient and robust for regression problems. Our results in §3.4 compare a Gaussian and a von Mises distribution.
Epistemic uncertainty σ_e is modelled using a uniformly weighted ensemble of M NNs, each trained starting from a different random initialization. The regression prediction and uncertainty are approximated by the mean and standard deviation over the M NN ensemble predictions respectively (each NN in the ensemble contributes equally), i.e. \hat{y}(x) = M^{-1}\sum_{m=1}^{M}\hat{y}_m(x) and \sigma_e^2(x) = \mathrm{Var}(\{\hat{y}_m(x)\}_{m=1}^{M}). The epistemic uncertainty is then combined with the aleatoric in quadrature to arrive at the total uncertainty: \sigma^2 = \sigma_a^2 + \sigma_e^2. Typically M ∼ 5-15.
In part (i) of our hybrid approach for PFDE, we train a deep ensemble to minimize the NLL (1) on desired features y. We follow the deep ensemble training procedure outlined in Lakshminarayanan et al. (2017) (with the recast loss function from Kendall & Gal (2017)), without using adversarial examples and using the full dataset for each NN. Since the individual density parameters over predicted features are the final desired values in PFDE, it is possible that an objective maximizing feature accuracy on the validation set is not the true objective. This is possible if the training dataset is biased or the model (1) is highly misspecified for the particular problem. The Kitaguchi et al. (2019) single CNN method in Table 1, §3.4, shows a clear case of training bias. If de-biasing the training dataset or using a more appropriate model is not possible, we have identified two potential ways of ameliorating this issue for PFDE:
1. Include terms in the individual NN objectives to penalize known sources of bias.
2. Select the top M performing NNs, as measured by a criterion that includes density parameter prediction bias on a held-out test set.
In practice both can be used simultaneously. However, the former runs into batch size problems (since one needs a large sample size to accurately estimate bias), and the source of bias is not always well understood. The latter naturally arises from the use of deep ensembles, but could include its own unwanted bias and risk underestimating the epistemic uncertainty. We compare selecting the top performing NNs for the ensemble by a domain-specific criterion against randomly selecting NNs for the ensemble in §3. We note that the NN architecture used will of course depend on the dataset domain.
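To make the training and combination steps of §2.2 concrete, the sketch below shows one way the loss (1) and the ensemble aggregation could look in PyTorch. The model interface (each network returning a prediction and a log-variance) and the reduction of the epistemic variance over output dimensions are illustrative assumptions, not prescriptions from the text.

```python
import torch

def gaussian_nll(y, y_hat, log_var):
    # Heteroscedastic Gaussian NLL of eq. (1):
    # 0.5 * log(sigma_a^2) + ||y - y_hat||_2^2 / (2 * sigma_a^2), batch-averaged.
    sq_err = (y - y_hat).pow(2).sum(dim=-1)
    return (0.5 * log_var + 0.5 * sq_err / log_var.exp()).mean()

@torch.no_grad()
def ensemble_predict(models, x):
    # Combine an ensemble of M networks; each returns (y_hat, log sigma_a^2).
    preds, alea_vars = [], []
    for model in models:
        y_hat, log_var = model(x)
        preds.append(y_hat)
        alea_vars.append(log_var.exp())
    preds = torch.stack(preds)              # (M, B, K)
    y_bar = preds.mean(dim=0)               # ensemble prediction y_hat(x)
    var_e = preds.var(dim=0).mean(dim=-1)   # epistemic: spread across members
    var_a = torch.stack(alea_vars).mean(0)  # aleatoric: mean predicted variance
    sigma = (var_a + var_e).sqrt()          # total uncertainty, in quadrature
    return y_bar, sigma
```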
2.3 IMPORTANCE WEIGHTED LOG-LIKELIHOOD
Provided a mapping between high dimensional inputs and interpretable features x_n ↦ y_n, we can calculate the density parameters ψ_1, ψ_2, ..., ψ_k by minimizing the appropriate negative log-likelihood function p(\{y_n\}|ψ_1, ψ_2, ..., ψ_k). Some feature predictions y_n will have greater total predictive uncertainties σ_n. We estimate feature density parameters by incorporating the total uncertainty into an importance weighted maximum likelihood estimate. This makes up part (ii) of our hybrid method.
An importance weight quantifies the relative importance of one example over another. Importance weighting an element should be the same as if that element were included multiple times in the dataset, proportional to its importance weight (Karampatziakis & Langford, 2011). The deep ensemble, once trained, will act as a mapping between high dimensional inputs x_n and feature-uncertainty output pairs (y_n, σ_n). For each input x_n there will be M output pairs \{\hat{y}_{nm}, (σ_a)_{nm}\}_{m=1}^{M}, one for each NN in the deep ensemble. Both the features \hat{y}_{nm} and the aleatoric uncertainty variances (σ_a)^2_{nm} can be combined by taking the appropriate mean over m; this mean may depend on the distribution used in (1), but for the simple Gaussian case the standard mean is sufficient. Taking the mean results in a single output pair (\hat{y}_n, (σ_a)_n) for each input. Epistemic uncertainties are included as in §2.2, resulting in the final output (\hat{y}_n, σ_n).
In order to use all possible information when estimating the desired density parameters ψ_1, ψ_2, ..., ψ_k, we define an importance weighted negative log-likelihood function
L_w(\{\hat{y}_n\}, \psi_1, \psi_2, \ldots, \psi_k) = -\sum_{n=1}^{N} w_n \log L(\hat{y}_n \mid \psi_1, \psi_2, \ldots, \psi_k), \quad (2)
w_n = \sigma_n^{-\lambda}. \quad (3)
Each individual prediction \hat{y}_n has an associated importance weight w_n. The \sigma_n^{-\lambda} term weights each \hat{y}_n by its predictive uncertainty. The hyperparameter λ ≥ 0 controls the importance weighting distribution. A high λ means the \hat{y}_n with the lowest (estimated) MSE will dominate the final ensemble statistic. As always in estimation problems, there is a trade-off between lower variance predictions and more bias. This can be tuned for a specific application using λ; we discuss the procedure in detail in our example application, §3. Final density parameters are found by minimizing (2) over the domain of the density parameters ψ.
Typically, the weights in weighted likelihood estimation are determined heuristically (Hu & Zidek, 2002). In this example, we choose w = σ^{-λ} since it approximates the simple functional form of the likelihood used in a weighted mean estimate (λ = 2). This weighting choice is also inspired by the dispersion parameter used in generalized linear models (GLMs) (Nelder & Wedderburn, 1972). We expect that this weighting will retain similar robustness properties in terms of model fitting, and will generalize well to many domains. However, any decreasing function f : R_+ → R_+ may be used to determine weights, with the most suitable choice of function f within a given class of functions (in our case, parameterized by λ) to be determined by either cross-validation or performance on a holdout set. In some applications it is possible to find the exact weighting function [in prep., reference deleted to maintain integrity of review process]. Further discussion of weight choice in our application is given in §3.4.
Confidence intervals on the density parameters can be calculated using the non-parametric bootstrap (Efron, 1979): select N (y_n, σ_n) pairs with replacement and minimize (2). In the limit of many trials with different random subsamples, this will give the output distribution on the density parameters.
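A minimal NumPy/SciPy sketch of the weighted estimation in eqs. (2)-(3) and the bootstrap confidence intervals described above. The Gaussian location model at the bottom is an illustrative assumption for demonstration; the function names and the use of scipy.optimize are not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def weighted_nll(psi, y_hat, sigma, log_lik, lam=2.0):
    # Eqs. (2)-(3): w_n = sigma_n^(-lambda), L_w = -sum_n w_n * log L(y_n | psi).
    w = sigma ** (-lam)
    return -(w * log_lik(y_hat, psi)).sum()

def fit(y_hat, sigma, log_lik, psi0, lam=2.0):
    return minimize(weighted_nll, psi0, args=(y_hat, sigma, log_lik, lam)).x

def bootstrap_ci(y_hat, sigma, log_lik, psi0, lam=2.0, trials=1000, seed=None):
    # Non-parametric bootstrap (Efron, 1979): resample (y_n, sigma_n) pairs
    # with replacement and refit; the spread of the refits gives the interval.
    rng = np.random.default_rng(seed)
    n = len(y_hat)
    fits = [fit(y_hat[idx], sigma[idx], log_lik, psi0, lam)
            for idx in (rng.integers(0, n, size=n) for _ in range(trials))]
    return np.percentile(fits, [16, 84], axis=0)  # ~1-sigma interval

# Illustrative example (an assumption, not from the paper): Gaussian location
# model with psi = (mu,), log L(y | mu) = -(y - mu)^2 / 2 up to constants.
log_lik = lambda y, psi: -0.5 * (y - psi[0]) ** 2
```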
2.4 DENSITY PARAMETER REGRESSION
For a special class of parameterized densities it is possible to find the global minimizer or to minimize (2) analytically (e.g. for a multivariate Gaussian). In practice, the majority of parametric densities of interest for PFDE are likely to be convex (exponential families, our application example in §3, etc.), so will fall into this special class. In the general case, minimization is performed numerically to find locally optimal solutions.
In this work, we employ Ipopt (Wächter & Biegler, 2006), an open-source interior-point solver for large-scale non-convex optimization problems, to minimize (2). This method can be used for convex or non-convex parametric density estimates, but only convex ones are guaranteed to be globally optimal. Because Ipopt finds locally optimal solutions, which are highly dependent upon the initial guess of the parameters provided to the solver, in the non-convex case we recommend nested sampling (Feroz et al., 2009) to test many initial guesses and then select the best local solution. Constraints on the density parameters, for instance if they have a finite domain, can be incorporated in both the convex and non-convex cases. Of course, any optimizer appropriate for (2) can be used, and this will depend on the problem.
The overall training and evaluation procedure is summarized in Algorithm 1.
Algorithm 1: Pseudocode for our PFDE method.
1: Identify output features y_n relevant to the desired density parameter(s) (e.g., subject height in photographs).
2: Train a deep ensemble of NNs using loss function (1) to maximise accuracy on the desired output features.
3: Evaluate the density parameter(s) using importance weights by minimizing (2).
4: Tune the λ hyperparameter for the specific application.
3 EXPERIMENTS
3.1 X-RAY POLARIMETRY
Measuring X-ray polarization has been a major goal in astrophysics for the last 40 years. X-ray polarization can provide essential measurements of magnetic fields very close to high energy sources, such as accreting black holes and astrophysical jets (Weisskopf, 2018). The recent development of photoelectron tracking detectors (Bellazzini et al., 2003) has greatly improved the prospects of doing so. X-ray polarization telescopes with photoelectron tracking detectors directly image electron tracks formed from photoelectrons scattered by the incoming X-ray photons. We describe an application of our hybrid PFDE method to X-ray polarimetry using photoelectron tracking detectors. We use data from the upcoming NASA Imaging X-ray Polarimetry Explorer (IXPE) (Sgrò & IXPE Team, 2019) as a working example. The problem of recovering polarization parameters from a dataset of (IXPE) electron track images has recently been announced as an open problem in the machine learning community (Moriakov et al., 2020).
The linear polarization of light can be fully described by two degrees of freedom: the polarization fraction 0 ≤ Π ≤ 1 (0%-100%), and the electric vector position angle −π/2 ≤ φ ≤ π/2. These can be thought of as the magnitude and direction of a vector perpendicular to the direction of propagation of the light. In imaging X-ray polarimetry, when the detector images an X-ray source, it measures individual 2D images of electron tracks excited by incoming X-ray photons. The initial directions the electrons travel follow a known probability density that depends on the source polarization, and the problem is to recover the polarization parameters Π and φ from the collected dataset of 2D track images.
In the case of IXPE, charge tracks are imaged by hexagonal pixels. Fig. 1 shows some example photoelectron tracks at different X-ray energies. Each track represents the interaction of a single photon with a single gas molecule. The initial track angle y follows the probability density
p(y \mid \Pi, \phi) = \frac{1}{2\pi}\big(1 + \Pi\cos(2(y + \phi))\big), \quad (4)
where Π and φ are fixed polarization parameters that depend on the source. By estimating y for a large number of tracks, we may recover the original polarization parameters Π and φ using parametric density estimation.
Track morphologies vary greatly with energy (and even for the same energy); this affects how difficult it is to recover an accurate initial photoelectron angle y. Low energy tracks are typically less elliptical and so more difficult to estimate. For this reason it is essential to incorporate some form of quality control in the tracks used for polarization estimates.
Current IXPE methods estimate individual track y using a moment analysis (Sgro, 2017). This calculates the first, second and third charge moments using the 2D coordinates of the hexagonal detector pixels, combining them to extract y. For each track, a single −π ≤ y ≤ π is output. The polarization parameters are then estimated using a standard (unweighted) MLE. The moment analysis additionally outputs an estimate of the track ellipticity, which can be used as a proxy for y estimation accuracy. The standard moment analysis uses a track cut to improve polarization recovery – 20% of the tracks are cut based on ellipticity. NNs have also recently been applied to this problem (Kitaguchi et al., 2019). This approach uses single CNNs for classification on y, with binned fits to y histograms to extract polarization parameters, and track quality cuts. Our hybrid method exhibits significantly improved performance over both the standard IXPE method and this basic NN approach.
3.2 PARAMETRIC FEATURE DENSITY ESTIMATION
Following §2, we define CNNs that take single track images as input and (ŷ, σ̂) as output. In this case the track angles y are the data features that follow the known density (4), the density parameters are Π ≡ ψ_1 and φ ≡ ψ_2, and the CNNs make up the deep ensemble.
To make the hexagonal track images admissible inputs to standard CNN architectures, we first convert the hexagonal images to square image arrays by shifting every other column and rescaling the distance between points, as described in Steppa & Holch (2019). Since there are two possible shifts (odd and even rows), we apply both and stack the two shifted images, similar to color channels in RGB images. We do this to more closely approximate spatial equivariance of the CNN convolution kernels in the hexagonal space. At test time, we apply the deep ensemble to the same track 3 times, each time rotated by 120° in hexagonal space. We find this reduces all relevant prediction bias on ŷ (and later Π, φ) introduced when converting from hexagonal to square coordinates.
To recover Π, φ we need to predict 2y, so we use the loss function (1) but parameterize the true angle y as a 2D vector v = (cos 2y, sin 2y) to capture the periodicity. The loss function is as follows:
\mathrm{Loss}(v, \hat{v}) = \frac{1}{2}\log\hat{\sigma}^2 + \frac{1}{2\hat{\sigma}^2}\|v - \hat{v}\|_2^2. \quad (5)
The final NN ensembles output the 3-vector (v̂, σ̂). In this case the mean over ensemble predictions is calculated using the circular mean of \{\hat{v}_m\}_{m=1}^{M}. Then \hat{y} = \frac{1}{2}\arctan\frac{\hat{v}_2}{\hat{v}_1}. To calculate the final Π, φ with an ensemble of M NNs for a given test dataset with N tracks, we minimize the importance weighted NLL (2) with likelihood
L(\hat{y}_n \mid \Pi, \phi) = \frac{1}{2\pi}\big(1 + \Pi\cos(2(\hat{y}_n + \phi))\big). \quad (6)
We can recast this as the convex optimization problem
\underset{x}{\text{minimize}}\; -\sum_{n=1}^{N} \hat{\sigma}_n^{-\lambda} \log\big(1 + v_n^{T}x\big) \quad \text{subject to}\; \|x\|_2 \le 1, \quad (7)
where v_n = (\cos 2\hat{y}_n, \sin 2\hat{y}_n) and x = (\Pi\cos 2\phi, -\Pi\sin 2\phi), so that v_n^{T}x = \Pi\cos(2(\hat{y}_n + \phi)). By recasting (2) as a convex optimization problem, we have a guaranteed globally optimal solution for (Π, φ). We can solve (7) quickly and efficiently using second order Newton methods. In practice we use the robust open source software Ipopt, §2.4.
We also consider a more domain-specific, non-Gaussian likelihood function for our loss (5). We use the log-likelihood of the von Mises distribution for the NN loss:
\mathrm{Loss}(v, \hat{v}) = \log\big(I_0(\hat{\sigma}^{-2})\big) - \frac{1}{\hat{\sigma}^2} v^{T}\hat{v}, \quad (8)
where I_0 is the modified Bessel function of the first kind. This is a close approximation of the wrapped Gaussian on the circle. It is more appropriate than the Gaussian (5) for angular estimates since it can capture the π periodicity in ŷ. For very small σ̂ this is equivalent to the Gaussian. We compare the results from both losses in §3.4 and Table 1.
3.2.1 FIGURE OF MERIT
In polarization estimation, we want a high recovered Π̂_{100%} (and accurate φ) for a known 100% polarized source (Π = 1), and a low recovered Π̂_{0%} for an unpolarized source (Π = 0). Since there is irreducible noise in the tracks, it is impossible for any method to achieve Π̂_{100%} ∼ 1, so Π̂_{meas} estimates are calibrated to get the final Π̂ for an unknown source (Π̂_{100%} is measured beforehand on a source with the same track energy distribution): Π̂ = Π̂_{meas}/Π̂_{100%}. We define a figure of merit for polarization estimation:
\mathrm{FoM} = 100 \times \hat{\Pi}_{0\%}/\hat{\Pi}_{100\%}. \quad (9)
We use the FoM to evaluate model performance: a lower FoM means better polarization estimation. This is effectively a measure of the signal to noise ratio, a simplified extension of the minimum detectable polarization (MDP) typically defined for X-ray polarization (Weisskopf et al., 2010) that does not preclude biased estimators. It is evaluated on unseen polarized and unpolarized datasets. In estimating the FoM, we take the number of tracks N ∼ 360,000 so we can compare directly to Kitaguchi et al. (2019). We average the FoM over 200 independent track dataset samples of size N. We use the FoM as the criterion to select the hyperparameter λ in (2). In this way we can trade off accuracy and bias in our Π, φ estimates.
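A minimal sketch of the weighted polarization fit (7) and the FoM (9) above. SciPy's SLSQP solver stands in for Ipopt here, and the function names and sign conventions for recovering φ are assumptions tied to the parameterization given in (7); they are not the released analysis code.

```python
import numpy as np
from scipy.optimize import minimize

def fit_polarization(y_hat, sigma, lam=2.0):
    # Weighted MLE for (Pi, phi) under eqs. (6)-(7).
    # x = (Pi cos 2phi, -Pi sin 2phi), with the convex constraint ||x||_2 <= 1.
    v = np.stack([np.cos(2 * y_hat), np.sin(2 * y_hat)], axis=1)  # (N, 2)
    w = sigma ** (-lam)
    nll = lambda x: -(w * np.log1p(v @ x)).sum()
    con = {"type": "ineq", "fun": lambda x: 1.0 - np.linalg.norm(x)}
    x = minimize(nll, x0=np.zeros(2), constraints=[con], method="SLSQP").x
    pi_hat = np.linalg.norm(x)
    phi_hat = -0.5 * np.arctan2(x[1], x[0])
    return pi_hat, phi_hat

def figure_of_merit(pi_hats_unpolarized, pi_hats_polarized):
    # Eq. (9): FoM = 100 * Pi_0% / Pi_100%, averaged over independent
    # track dataset samples (200 samples of N tracks in the paper).
    return 100.0 * np.mean(pi_hats_unpolarized) / np.mean(pi_hats_polarized)
```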
3.3 NN TRAINING AND SELECTION
Our training dataset consists of 3 million simulated tracks, examples of which are shown in Fig. 1. The track energies uniformly span 1.0-9.0 keV, IXPE's most sensitive range, and are unpolarized (uniform track angle distribution). Since we don't know a priori what energy each track is, we want NNs that can make predictions for tracks of all energies. This also makes for a more generalizable system, since some high energy tracks have similar characteristics to lower energy ones. Each track is labelled with its 2D angle vector v.
We use a ResNet-19 (He et al., 2015) convolutional NN architecture as our base NN. This particular architecture is large enough to overfit the training set, and trains in a reasonable amount of time. Before training we preprocess the training data (square track images), applying pixelwise centering and rescaling. We use stochastic gradient descent with momentum and a decaying learning rate starting at 1e-2. We choose batch sizes of 512, 1024 and 2048 (tracks per batch). We trained for 150 epochs, using early stopping to prevent overfitting. We use L2-norm regularization 5 × 10^{-5}. We train 30 NNs and compare randomly selecting M = 10 NNs against selecting the M = 10 NNs with the top MSEs on y for an unseen test dataset spanning all energies to make up our final NN ensemble. The results for both methods are shown in Table 1.
3.4 RESULTS
Table 1 shows the results of our deep ensemble PFDE method alongside the current state-of-the-art methods. The single CNN method with optimized cuts, developed in Kitaguchi et al. (2019), provides significant improvements in Π̂_{100%} over the moment analysis, but adds bias to the unpolarized measurement Π̂_{0%}, increasing its FoM and making it a worse method at all energies. We perform an ablation study over our method, testing a single NN without weighting when estimating (Π, φ) (i.e. w_n = 1 ∀n in (3)), an ensemble of NNs without weighting, a randomly selected ensemble with weighting, a top-MSE-selected ensemble with weighting, and a von Mises loss weighted ensemble. We find a single NN without weighting beats the classical moments and moments-with-cuts baselines. This result is visualized in the right panel of Fig. 3.3 for the 6.4 keV dataset: the single NN shows improved ŷ estimates and thus a density that more closely resembles the ground truth. Using an ensemble of NNs improves this result slightly, but the real power of our method comes with the importance weights. Our final importance weighted ensemble method, with λ tuned accordingly for each energy, significantly outperforms the rest, especially on the power law datasets, where there is a reduction in FoM of almost a factor of 1.5. This shows the power of a simple weighting scheme over quality cuts in PFDE: it allows our method to take advantage of the higher signal (Π̂_{100%}) at higher energies in the power law datasets. The λ tuning procedure is shown in the left panel of Fig. 3.3.
Comparing a randomly selected ensemble with a top-MSE-selected ensemble, we find the results are almost identical. Random selection should yield more accurate approximations of the epistemic uncertainty and thus better weights, while selecting the top performing NNs on MSE should improve ŷ accuracy. Since the results are identical, but selecting NNs has the potential to bias density estimation, we recommend randomly selecting NNs. We note that, although not included in the table, a single NN with importance weighting performs only slightly worse than the weighted ensemble. Since a single NN only produces aleatoric uncertainties, this suggests, as expected, that for a correctly specified model aleatoric uncertainties dominate epistemic ones. Finally, the von Mises loss shows a small improvement over the simple Gaussian. This is expected, since characterizing the predictive uncertainties by a periodic distribution is more appropriate for the polarimetry application, but the improvement is small, suggesting that the Gaussian is a robust starting point for many applications. We plan to release further results and more domain specific information for this particular application [reference deleted to maintain integrity of review process].
3.5 OTHER APPLICATIONS
There are numerous applications of PFDE with uncertainty in the physical sciences and engineering. In high energy particle physics, massive short-lived particles can be detected by fitting a Cauchy distribution to the frequencies of measured decay states. Raw sensor data from hadronic particle colliders like the LHC are very noisy with variable uncertainty, meaning our PFDE approach to estimating the Cauchy distribution parameters could be very fruitful. This is especially true given the widespread current use of deep learning in particle physics (Guest et al., 2018). Our approach is heuristically justified by the asymptotic efficiency of the maximum likelihood estimator in a Cauchy location model (Cohen Freue, 2007). In manufacturing, GLMs fit to binomial distributions are commonly used to assess product quality, or the probability of a product being defective. Today, computer vision is used for much of the inspection (Rossol, 1983), making our hybrid PFDE method a potential step forward. These are just a few application examples – our method may be useful for any GLM based method with high dimensional data.
4 DISCUSSION
We have proposed a supervised learning framework for parametric feature density estimation. Our method uses deep ensembles to predict high dimensional data features and their aleatoric and epistemic uncertainties. We estimate feature density parameters by incorporating both of these uncertainties into an importance weighted maximum likelihood estimate. We include a tuneable weighting hyperparameter λ, allowing one to control the bias-variance tradeoff for density estimation. Intuitively, in many real feature density estimation problems, some high dimensional data points may be much more informative than others due to complex noise or differing generative distributions. Our method models this explicitly, weighting datapoint features by their predictive uncertainty when estimating density parameters. This avoids throwing away valuable data with quality cuts, yielding improved density estimates. Our method is scaleable to any feature dataset size and is completely flexible for specific domain applications; most NN architectures can be used. We achieve state-of-the-art results over standard deep learning methods and classical algorithms in X-ray polarimetry – a recent open problem in ML. We expect our method would provide similar improvements in a number of PFDE application fields, including high energy particle physics and manufacturing.
We performed an ablation study comparing a single NN, a deep ensemble, and various importance weighted deep ensembles. A single NN approach or standard deep ensemble improves slightly on the classical baselines, but importance weighting by predictive uncertainty provides the main improvement of our method. Selecting NNs for the deep ensemble based on quality of density estimation provides no additional gain in performance compared to random selection – since it is possible that performance-based NN selection can degrade epistemic uncertainty estimates, we recommend randomly selecting NNs for the ensemble. Comparing the Gaussian and von Mises distributions for feature prediction, we find the standard Gaussian likelihood (1) an effective and robust approximation, although results can potentially be improved for specific applications by choosing a more appropriate distribution over the predictive uncertainties.
While our method works well for densities with convex log-likelihoods, non-convex ones will not necessarily yield globally optimal solutions and may be very time consuming to evaluate. Future Work: Future additions to the method include more complex aleatoric uncertainty modelling.
We assume a Gaussian distribution for our feature prediction (1), but for domain applications where there is an expected feature uncertainty distribution, one could use an alternative distribution, or even a mixture density network (Bishop, 1994) for more flexibility. In that case the functional form of the weighting would have to be reconsidered. Additionally, finding the optimal weighting function for specific problem applications is likely to yield significant improvements.
REFERENCES
Ronaldo Bellazzini, F. Angelini, Luca Baldini, Alessandro Brez, Enrico Costa, Giuseppe Di Persio, Luca Latronico, M. M. Massai, Nicola Omodei, Luigi Pacciani, Paolo Soffitta, and Gloria Spandre. Novel gaseous x-ray polarimeter: data analysis and simulation. In Polarimetry in Astronomy, volume 4843, pp. 383-393. International Society for Optics and Photonics, 2003.
Christopher Bishop. Mixture Density Networks. Technical report, 1994.
Ethem F. Can, Aysu Ezen-Can, and Fazli Can. Multilingual Sentiment Analysis: An RNN-Based Framework for Limited Data. arXiv:1806.04511, 2018.
Gabriela V. Cohen Freue. The Pitman estimator of the Cauchy location parameter. Journal of Statistical Planning and Inference, 137(6):1900-1913, 2007.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805, 2019.
B. Efron. Bootstrap Methods: Another Look at the Jackknife. Annals of Statistics, 7(1):1-26, 1979.
F. Feroz, M. P. Hobson, and M. Bridges. MultiNest: an efficient and robust Bayesian inference tool for cosmology and particle physics. Monthly Notices of the Royal Astronomical Society, 398(4):1601-1614, 2009.
Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan. Deep Ensembles: A Loss Landscape Perspective. arXiv:1912.02757, 2019.
Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning (ICML), pp. 369-376, 2006.
doi: 10.1146/annurev-nucl-101917-021019. URL http://arxiv.org/abs/415 1806.11484. arXiv: 1806.11484.416\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image417 Recognition. arXiv:1512.03385 [cs], December 2015. URL http://arxiv.org/abs/418 1512.03385. arXiv: 1512.03385.419\nFeifang Hu and James V. Zidek. The Weighted Likelihood. The Canadian Journal of Statistics / La420 Revue Canadienne de Statistique, 30(3):347–371, 2002. ISSN 0319-5724. doi: 10.2307/3316141.421 URL https://www.jstor.org/stable/3316141.422\nNikos Karampatziakis and John Langford. Online importance weight aware updates. In Proceedings423 of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, UAI’11, pp. 392–399,424 Barcelona, Spain, July 2011. AUAI Press. ISBN 978-0-9749039-7-2.425\nAlex Kendall and Yarin Gal. What Uncertainties Do We Need in Bayesian Deep Learning for426 Computer Vision? In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vish-427 wanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30,428 pp. 5574–5584. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/429 7141-what-uncertainties-do-we-need-in-bayesian-deep-learning-for-computer-vision.430 pdf.431\nTakao Kitaguchi, Kevin Black, Teruaki Enoto, Asami Hayato, Joanne E. Hill, Wataru B. Iwakiri,432 Philip Kaaret, Tsunefumi Mizuno, and Toru Tamagawa. A convolutional neural network ap-433 proach for reconstructing polarization information of photoelectric X-ray polarimeters. Nuclear434 Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors435 and Associated Equipment, 942:162389, October 2019. ISSN 01689002. doi: 10.1016/j.nima.436 2019.162389. URL http://arxiv.org/abs/1907.06442. arXiv: 1907.06442.437\nAlex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with438 Deep Convolutional Neural Networks. In F. Pereira, C. J. C. Burges, L. Bottou, and439 K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp.440 1097–1105. Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/441 4824-imagenet-classification-with-deep-convolutional-neural-networks.442 pdf.443\nBalaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive444 uncertainty estimation using deep ensembles. In Proceedings of the 31st International Conference445 on Neural Information Processing Systems, NIPS’17, pp. 6405–6416, Long Beach, California,446 USA, December 2017. Curran Associates Inc. ISBN 978-1-5108-6096-4.447\nNikita Moriakov, Ashwin Samudre, Michela Negro, Fabian Gieseke, Sydney Otten, and Luc Hen-448 driks. Inferring astrophysical X-ray polarization with deep learning. arXiv:2005.08126 [astro-449 ph], May 2020. URL http://arxiv.org/abs/2005.08126. arXiv: 2005.08126.450\nJ. A. Nelder and R. W. M. Wedderburn. Generalized Linear Models. Journal of the Royal Statistical451 Society. Series A (General), 135(3):370–384, 1972. ISSN 0035-9238. doi: 10.2307/2344614.452 URL https://www.jstor.org/stable/2344614. Publisher: [Royal Statistical Soci-453 ety, Wiley].454\nYaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua V.455 Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can You Trust Your Model’s Uncertainty?456 Evaluating Predictive Uncertainty Under Dataset Shift. arXiv:1906.02530 [cs, stat], December457 2019. URL http://arxiv.org/abs/1906.02530. arXiv: 1906.02530.458\nGeorge Papamakarios. 
Neural Density Estimation and Likelihood-free Inference. arXiv:1910.13233459 [cs, stat], October 2019. URL http://arxiv.org/abs/1910.13233. arXiv:460 1910.13233.461\nGeorge Papamakarios, Theo Pavlakou, and Iain Murray. Masked Autoregressive Flow for Density462 Estimation. arXiv:1705.07057 [cs, stat], June 2018. URL http://arxiv.org/abs/1705.463 07057. arXiv: 1705.07057.464\nTim Pearce, Mohamed Zaki, and Andy Neely. Bayesian Neural Network Ensembles.465 arXiv:1811.12188 [cs, stat], November 2018. URL http://arxiv.org/abs/1811.466 12188. arXiv: 1811.12188.467\nJeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global Vectors for Word468 Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan-469 guage Processing (EMNLP), pp. 1532–1543, Doha, Qatar, October 2014. Association for Com-470 putational Linguistics. doi: 10.3115/v1/D14-1162. URL https://www.aclweb.org/471 anthology/D14-1162.472\nJoseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You Only Look Once: Unified,473 Real-Time Object Detection. arXiv:1506.02640 [cs], June 2015. URL http://arxiv.org/474 abs/1506.02640. arXiv: 1506.02640.475\nLothar Rossol. Computer Vision in Industry. In Alan Pugh (ed.), Robot Vision, International476 Trends in Manufacturing Technology, pp. 11–18. Springer, Berlin, Heidelberg, 1983. ISBN 978-477 3-662-09771-7. doi: 10.1007/978-3-662-09771-7 2. URL https://doi.org/10.1007/478 978-3-662-09771-7_2.479\nCarmelo Sgro. The gas pixel detector on board the IXPE mission. In Oswald H.480 Siegmund (ed.), UV, X-Ray, and Gamma-Ray Space Instrumentation for Astron-481 omy XX, pp. 16, San Diego, United States, August 2017. SPIE. ISBN 978-1-482 5106-1251-8 978-1-5106-1252-5. doi: 10.1117/12.2273922. URL https://www.483 spiedigitallibrary.org/conference-proceedings-of-spie/10397/484 2273922/The-gas-pixel-detector-on-board-the-IXPE-mission/10.485 1117/12.2273922.full.486\nC. Sgrò and IXPE Team. The Imaging X-ray Polarimetry Explorer (IXPE). Nuclear Instru-487 ments and Methods in Physics Research A, 936:212–215, August 2019. ISSN 0168-9002. doi:488 10.1016/j.nima.2018.10.111. URL http://adsabs.harvard.edu/abs/2019NIMPA.489 936..212S.490\nConstantin Steppa and Tim Lukas Holch. HexagDLy - Processing hexagonally sampled data with491 CNNs in PyTorch. SoftwareX, 9:193–198, January 2019. ISSN 23527110. doi: 10.1016/j.softx.492 2019.02.010. URL http://arxiv.org/abs/1903.01814. arXiv: 1903.01814.493\nAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,494 Łukasz Kaiser, and Illia Polosukhin. Attention is All you Need. In I. Guyon, U. V. Luxburg,495 S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neu-496 ral Information Processing Systems 30, pp. 5998–6008. Curran Associates, Inc., 2017. URL497 http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf.498\nMartin Weisskopf. An Overview of X-Ray Polarimetry of Astronomical Sources. Galaxies, 6:33,499 March 2018. doi: 10.3390/galaxies6010033. URL http://adsabs.harvard.edu/abs/500 2018Galax...6...33W.501\nMartin C. Weisskopf, Ronald F. Elsner, and Stephen L. O’Dell. On understanding the figures of merit502 for detection and measurement of x-ray polarization. arXiv:1006.3711 [astro-ph], pp. 77320E,503 July 2010. doi: 10.1117/12.857357. URL http://arxiv.org/abs/1006.3711. arXiv:504 1006.3711.505\nAndreas Wächter and Lorenz T. Biegler. On the implementation of an interior-point filter line-506 search algorithm for large-scale nonlinear programming. 
Mathematical Programming, 106(1):507 25–57, March 2006. ISSN 1436-4646. doi: 10.1007/s10107-004-0559-y. URL https://508 doi.org/10.1007/s10107-004-0559-y.509" } ]
2020
null
SP:ff7570a39b118ef58a9bf05561824c85c5b48535
[ "This paper explores the usage of EBMs in continual learning for classification. Although the application of EBMs in continual learning is novel, the general idea is a special case of the usage of EBMs for structured prediction, which has been widely studied. For instance, multi-class classification can be considered as a special version of multi-label classification, which has been studied in Belanger and McCallum (2016) and a set of follow-up works. The main difference here is that multi-class classification is a simpler problem, and all possible classes can be enumerated in O(N), but in multi-label classification, more complicated inference such as gradient-descent based approaches must be used." ]
We motivate Energy-Based Models (EBMs) as a promising model class for continual learning problems. Instead of tackling continual learning via the use of external memory, growing models, or regularization, EBMs have a natural way to support a dynamically-growing number of tasks and classes and to cause less interference with old tasks. We show that EBMs are adaptable to a more general continual learning setting where the data distribution changes without the notion of explicitly delineated tasks. We also find that EBMs outperform the baseline methods by a large margin on several continual learning benchmarks. These observations point towards EBMs as a class of models naturally inclined towards the continual learning regime.
[]
[ { "authors": [ "Maruan Al-Shedivat", "Trapit Bansal", "Yuri Burda", "Ilya Sutskever", "Igor Mordatch", "Pieter Abbeel" ], "title": "Continuous adaptation via meta-learning in nonstationary and competitive environments", "venue": "arXiv preprint arXiv:1710.03641,", "year": 2017 }, { "authors": [ "Rahaf Aljundi", "Klaas Kelchtermans", "Tinne Tuytelaars" ], "title": "Task-free continual learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "David Belanger", "Andrew McCallum" ], "title": "Structured prediction energy networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Yilun Du", "Shuang Li", "Igor Mordatch" ], "title": "Compositional visual generation and inference with energy based models", "venue": "arXiv preprint arXiv:2004.06030,", "year": 2020 }, { "authors": [ "Sebastian Farquhar", "Yarin Gal" ], "title": "Towards robust evaluations of continual learning", "venue": "arXiv preprint arXiv:1805.09733,", "year": 2018 }, { "authors": [ "Chrisantha Fernando", "Dylan Banarse", "Charles Blundell", "Yori Zwols", "David Ha", "Andrei A Rusu", "Alexander Pritzel", "Daan Wierstra" ], "title": "Pathnet: Evolution channels gradient descent in super neural networks", "venue": "arXiv preprint arXiv:1701.08734,", "year": 2017 }, { "authors": [ "Will Grathwohl", "Kuan-Chieh Wang", "Jörn-Henrik Jacobsen", "David Duvenaud", "Mohammad Norouzi", "Kevin Swersky" ], "title": "Your classifier is secretly an energy based model and you should treat it like one", "venue": null, "year": 1912 }, { "authors": [ "Fredrik K Gustafsson", "Martin Danelljan", "Goutam Bhat", "Thomas B Schön" ], "title": "Energy-based models for deep probabilistic regression", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Michael Gygli", "Mohammad Norouzi", "Anelia Angelova" ], "title": "Deep value networks learn to evaluate and iteratively refine structured outputs", "venue": "arXiv preprint arXiv:1703.04363,", "year": 2017 }, { "authors": [ "Chen He", "Ruiping Wang", "Shiguang Shan", "Xilin Chen" ], "title": "Exemplar-supported generative reproduction for class incremental learning", "venue": "In BMVC,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Geoffrey E Hinton" ], "title": "Training products of experts by minimizing contrastive divergence", "venue": "Neural computation,", "year": 2002 }, { "authors": [ "Saihui Hou", "Xinyu Pan", "Chen Change Loy", "Zilei Wang", "Dahua Lin" ], "title": "Learning a unified classifier incrementally via rebalancing", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Wenpeng Hu", "Zhou Lin", "Bing Liu", "Chongyang Tao", "Zhengwei Tao", "Jinwen Ma", "Dongyan Zhao", "Rui Yan" ], "title": "Overcoming catastrophic forgetting for continual learning via model adaptation", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural 
networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "Christopher JC Burges" ], "title": "The mnist database of handwritten digits", "venue": "URL http://yann. lecun. com/exdb/mnist,", "year": 1998 }, { "authors": [ "Timothée Lesort", "Hugo Caselles-Dupré", "Michael Garcia-Ortiz", "Andrei Stoian", "David Filliat" ], "title": "Generative models from the perspective of continual learning", "venue": "In 2019 International Joint Conference on Neural Networks (IJCNN),", "year": 2019 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Davide Maltoni", "Vincenzo Lomonaco" ], "title": "Continuous learning in single-incremental-task scenarios", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Nicolas Y Masse", "Gregory D Grant", "David J Freedman" ], "title": "Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Martin Mundt", "Yong Won Hong", "Iuliia Pliushch", "Visvanathan Ramesh" ], "title": "A wholistic view of continual learning with deep neural networks: Forgotten lessons and the bridge to active and open world learning", "venue": "arXiv preprint arXiv:2009.01797,", "year": 2020 }, { "authors": [ "Anusha Nagabandi", "Ignasi Clavera", "Simin Liu", "Ronald S Fearing", "Pieter Abbeel", "Sergey Levine", "Chelsea Finn" ], "title": "Learning to adapt in dynamic, real-world environments through metareinforcement learning", "venue": "arXiv preprint arXiv:1803.11347,", "year": 2018 }, { "authors": [ "German I Parisi", "Ronald Kemker", "Jose L Part", "Christopher Kanan", "Stefan Wermter" ], "title": "Continual lifelong learning with neural networks: A review", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Tetiana Parshakova", "Jean-Marc Andreoli", "Marc Dymetman" ], "title": "Distributional reinforcement learning for energy-based sequential models", "venue": "arXiv preprint arXiv:1912.08517,", "year": 2019 }, { "authors": [ "Ameya Prabhu", "Philip HS Torr", "Puneet K Dokania" ], "title": "Gdumb: A simple approach that questions our progress in continual learning", "venue": null, "year": 2020 }, { "authors": [ "Jathushan Rajasegaran", "Munawar Hayat", "Salman H Khan", "Fahad Shahbaz Khan", "Ling Shao" ], "title": "Random path selection for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jathushan Rajasegaran", "Salman Khan", "Munawar Hayat", "Fahad Shahbaz Khan", "Mubarak Shah" ], "title": "itaml: An incremental task-agnostic meta-learning approach", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Alexander Kolesnikov", "Georg Sperl", "Christoph H Lampert" ], "title": "icarl: Incremental classifier and representation learning", "venue": "In Proceedings of the IEEE conference on Computer Vision 
and Pattern Recognition,", "year": 2017 }, { "authors": [ "Anthony Robins" ], "title": "Catastrophic forgetting, rehearsal and pseudorehearsal", "venue": "Connection Science,", "year": 1995 }, { "authors": [ "Amirmohammad Rooshenas", "Dongxu Zhang", "Gopal Sharma", "Andrew McCallum" ], "title": "Searchguided, lightly-supervised training of structured prediction energy networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jonathan Schwarz", "Jelena Luketina", "Wojciech M Czarnecki", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & compress: A scalable framework for continual learning", "venue": "arXiv preprint arXiv:1805.06370,", "year": 2018 }, { "authors": [ "Joan Serra", "Didac Suris", "Marius Miron", "Alexandros Karatzoglou" ], "title": "Overcoming catastrophic forgetting with hard attention to the task", "venue": "arXiv preprint arXiv:1801.01423,", "year": 2018 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehong Kim", "Jiwon Kim" ], "title": "Continual learning with deep generative replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Xiaoyu Tao", "Xinyuan Chang", "Xiaopeng Hong", "Xing Wei", "Yihong Gong" ], "title": "Topology-preserving class-incremental learning", "venue": null, "year": 2020 }, { "authors": [ "Xiaoyu Tao", "Xiaopeng Hong", "Xinyuan Chang", "Songlin Dong", "Xing Wei", "Yihong Gong" ], "title": "Fewshot class-incremental learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Lifu Tu", "Kevin Gimpel" ], "title": "Benchmarking approximate inference methods for neural structured prediction", "venue": "arXiv preprint arXiv:1904.01138,", "year": 2019 }, { "authors": [ "Lifu Tu", "Richard Yuanzhe Pang", "Sam Wiseman", "Kevin Gimpel" ], "title": "Engine: Energy-based inference networks for non-autoregressive machine translation", "venue": "arXiv preprint arXiv:2005.00850,", "year": 2020 }, { "authors": [ "Gido M van de Ven", "Andreas S Tolias" ], "title": "Three scenarios for continual learning", "venue": "arXiv preprint arXiv:1904.07734,", "year": 2019 }, { "authors": [ "Gido M van de Ven", "Hava T Siegelmann", "Andreas S Tolias" ], "title": "Brain-inspired replay for continual learning with artificial neural networks", "venue": "Nature Communications,", "year": 2020 }, { "authors": [ "Yue Wu", "Yinpeng Chen", "Lijuan Wang", "Yuancheng Ye", "Zicheng Liu", "Yandong Guo", "Yun Fu" ], "title": "Large scale incremental learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jianwen Xie", "Yang Lu", "Song-Chun Zhu", "Yingnian Wu" ], "title": "A theory of generative convnet", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Jianwen Xie", "Yang Lu", "Ruiqi Gao", "Song-Chun Zhu", "Ying Nian Wu" 
], "title": "Cooperative training of descriptor and generator networks", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Kelvin Xu", "Jimmy Ba", "Ryan Kiros", "Kyunghyun Cho", "Aaron Courville", "Ruslan Salakhudinov", "Rich Zemel", "Yoshua Bengio" ], "title": "Show, attend and tell: Neural image caption generation with visual attention", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Jaehong Yoon", "Eunho Yang", "Jeongtae Lee", "Sung Ju Hwang" ], "title": "Lifelong learning with dynamically expandable networks", "venue": "arXiv preprint arXiv:1708.01547,", "year": 2017 }, { "authors": [ "Guanxiong Zeng", "Yang Chen", "Bo Cui", "Shan Yu" ], "title": "Continual learning of context-dependent processing in neural networks", "venue": "Nature Machine Intelligence,", "year": 2019 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Chen Zeno", "Itay Golan", "Elad Hoffer", "Daniel Soudry" ], "title": "Task agnostic continual learning using online variational bayes", "venue": "arXiv preprint arXiv:1803.10123,", "year": 2018 }, { "authors": [ "Bowen Zhao", "Xi Xiao", "Guojun Gan", "Bin Zhang", "Shu-Tao Xia" ], "title": "Maintaining discrimination and fairness in class incremental learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "van de Ven" ], "title": "The model architectures", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Humans are able to rapidly learn new skills and continuously integrate them with prior knowledge. The field of Continual Learning (CL) seeks to build artificial agents with the same capabilities (Parisi et al., 2019). In recent years, CL has seen increased attention, particularly in the context of classification problems. A crucial characteristic of continual learning is the ability to learn new data without forgetting prior data. Models must also be able to incrementally learn new skills, without necessarily having a notion of an explicit task identity. However, standard neural networks (He et al., 2016; Simonyan & Zisserman, 2014; Szegedy et al., 2015) experience the catastrophic forgetting problem and perform poorly in this setting. Different approaches have been proposed to mitigate catastrophic forgetting, but many rely on the usage of external memory (Lopez-Paz & Ranzato, 2017; Li & Hoiem, 2017), additional models (Shin et al., 2017), or auxiliary objectives and regularization (Kirkpatrick et al., 2017; Schwarz et al., 2018; Zenke et al., 2017; Maltoni & Lomonaco, 2019), which can constrain the wide applicability of these methods.
In this work, we focus on classification tasks. These tasks are usually tackled by utilizing a normalized probability distribution (i.e., a softmax output layer) trained with a cross-entropy objective. In this paper, we argue that by viewing classification through the lens of training an un-normalized probability distribution, we can significantly improve continual learning performance in classification problems. In particular, we interpret classification as learning an Energy-Based Model (EBM) across separate classes (Grathwohl et al., 2019). Training becomes a wake-sleep process, where the energy of an input and its ground truth label is decreased while the energy of the input and another selected class is increased. This offers freedom to choose which classes to update in the CL process. By contrast, the cross-entropy objective reduces the likelihood of all negative classes when given a new input, creating updates that lead to forgetting.
The energy function, which maps a data and class pair to a scalar energy, also provides a way for the model to select and filter portions of the input that are relevant to the classification at hand. We show that this enables EBM training updates for new data to interfere less with previous data. In particular, our formulation of the energy function allows us to compute the energy of a data point by learning a conditional gain based on the input label, which serves as an attention filter to select the most relevant information. In the event of a new class, a new conditional gain can be learned.
These unique benefits are applicable across a range of continual learning tasks. Most existing works on continual learning (Kirkpatrick et al., 2017; Zhao et al., 2020) learn a sequence of distinct tasks with clear task boundaries (Boundary-Aware). Many of these methods depend on knowing the task boundaries, which provide proper moments to consolidate knowledge. However, this scenario is not very common in the real world; a more natural scenario is the Boundary-Agnostic setting (Zeno et al., 2018; Rajasegaran et al., 2020), in which data gradually change without a clear notion of task boundaries. This setting has also been used as a standard evaluation in continual reinforcement learning (Al-Shedivat et al., 2017; Nagabandi et al., 2018).
Many common CL methods are not applicable to the Boundary-Agnostic scenario as the task boundaries are unknown or undefined. In contrast, EBMs are readily applied to this setting without any modification and are able to support both Boundary-Aware and Boundary-Agnostic settings.
There are four primary contributions of our work. First, we introduce energy-based models for classification CL problems in both boundary-aware and boundary-agnostic regimes. Second, we use the standard contrastive divergence training procedure and show that it significantly reduces catastrophic forgetting. Third, we propose to learn new conditional gains during the training process, which makes EBM parameter updates cause less interference with old data. Finally, we show that in practice EBMs bring a significant improvement on four standard CL benchmarks: split MNIST, permuted MNIST, CIFAR-10, and CIFAR-100. These observations point towards EBMs as a class of models naturally inclined towards the CL regime." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 CONTINUAL LEARNING SETTINGS", "text": "Boundary-aware versus boundary-agnostic. In most existing continual learning studies, models are trained in a "boundary-aware" setting, in which a sequence of distinct tasks with clear task boundaries is given (e.g., Kirkpatrick et al., 2017; Zenke et al., 2017; Shin et al., 2017). Typically there are no overlapping classes between any two tasks; for example, task 1 has data with ground truth class labels "1, 2" and task 2 has data with ground truth class labels "3, 4". In this setting, models are first trained on the entire first task and then move to the second one. Moreover, models are typically told when there is a transition from one task to the next. However, it could be argued that it is more realistic for tasks to change gradually and for models to not be explicitly informed about the task boundaries. Such a boundary-agnostic setting has been explored in (Zeno et al., 2018; Rajasegaran et al., 2020; Aljundi et al., 2019). In this setting, models learn in a streaming fashion and the data distributions gradually change over time. For example, the percentage of "1"s presented to the model might gradually decrease while the percentage of "2"s increases. Importantly, most existing continual learning approaches are not applicable to the boundary-agnostic setting as they require the task boundaries to decide when to consolidate the knowledge (Zeno et al., 2018). In this paper, we will show that our proposed approach can also be applied to the boundary-agnostic setting.
Task-incremental versus class-incremental learning. Another important distinction in continual learning is between task-incremental learning and class-incremental learning (van de Ven & Tolias, 2019; Prabhu et al., 2020). In task-incremental learning, also referred to as the multi-head setting (Farquhar & Gal, 2018), models have to predict the label of an input data point by choosing only from the labels in the task where the data come from. On the other hand, in class-incremental learning, also referred to as the single-head setting, models have to choose between the classes from all tasks so far when asked to predict the label of an input data point. Class-incremental learning is substantially more challenging than task-incremental learning, as it requires models to select the correct labels from the mixture of new and old classes.
So far, only methods that store data or use replay have been shown to perform well in the class-incremental learning scenario (Rebuffi et al., 2017; Rajasegaran et al., 2019). In this paper, we try to tackle class-incremental learning without storing data or using replay." }, { "heading": "2.2 CONTINUAL LEARNING APPROACHES", "text": "In recent years, numerous methods have been proposed for CL. Here we broadly partition these methods into three categories: task-specific, regularization, and replay-based approaches.
Task-specific methods. One way to reduce interference between tasks is by using different parts of a neural network for different problems. For a fixed-size network, such specialization could be achieved by learning a separate mask for each task (Fernando et al., 2017; Serra et al., 2018), by a priori defining a different, random mask for every task to be learned (Masse et al., 2018), or by using a different set of parameters for each task (Zeng et al., 2019; Hu et al., 2019). Other methods let a neural network grow or recruit new resources when it encounters new tasks, examples of which are progressive neural networks (Rusu et al., 2016) and dynamically expandable networks (Yoon et al., 2017). Although these task-specific approaches are generally successful in reducing catastrophic forgetting, an important disadvantage is that they require knowledge of the task identity at both training and test time. These methods are therefore not suitable for class-incremental learning.
Regularization-based methods. Regularization is used in continual learning to encourage stability of those aspects of the network that are important for previously learned tasks. A popular strategy is to add a regularization loss to penalise changes to model parameters that are important for previous tasks. EWC (Kirkpatrick et al., 2017) and online EWC (Schwarz et al., 2018) evaluate the importance of each parameter using the diagonal elements of the Fisher information matrix, while SI (Zenke et al., 2017) tracks the past and current parameters and estimates their importance online. An alternative strategy is to regularize the network at the functional level. Learning without Forgetting (Li & Hoiem, 2017) uses knowledge distillation to encourage stability of the network's learned input-output mapping. However, these regularization-based approaches gradually reduce the model's capacity for learning new tasks, and they have been shown to consistently fail in class-incremental learning (Farquhar & Gal, 2018; van de Ven & Tolias, 2019).
Replay methods. To preserve knowledge, replay methods periodically rehearse previously acquired information during training (Robins, 1995). One way to do this, referred to as exact or experience replay, is to store data from previous tasks and revisit them when training on a new task. Although this might seem straightforward, critical non-trivial questions are how to select the data to be stored as well as exactly how to use them (Rebuffi et al., 2017; Lopez-Paz & Ranzato, 2017; Rajasegaran et al., 2019; Hou et al., 2019; Wu et al., 2019; Mundt et al., 2020). An alternative to storing data is to generate the data to be replayed. In such generative replay (Shin et al., 2017), a generative model is sequentially trained to generate input samples representative of those from previously seen tasks.
While both types of replay can prevent catastrophic forgetting in both task- and class-incremental learning settings, an important disadvantage is that these methods are computationally relatively expensive, as each replay event requires at least one forward and one backward pass through the model. In addition, storing data might not always be possible, while incrementally training a generative model is a challenging problem in itself (Lesort et al., 2019; van de Ven et al., 2020).
In contrast, our EBMs reduce catastrophic forgetting without requiring knowledge of task identity, without gradually restricting the model's learning capabilities, and without using stored data." }, { "heading": "3 BACKGROUND", "text": "In this section, we introduce traditional continual learning methods, which are typically built on a feed-forward classifier. We show the limitations of such structures and describe how EBMs can be applied to these problems in the next section." }, { "heading": "3.1 CONTINUAL LEARNING WITH FEED-FORWARD CLASSIFIER", "text": "Define $T$ as a classification task with inputs $x$ and associated discrete classification labels $y \in Y$, where $Y = \{1, \ldots, N\}$ contains $N$ classes. Traditional classifier approaches to continual learning typically predict the probabilities of all $N$ classes using a feed-forward network:
$$p_\theta(y|x) = \frac{\exp([f(x)]_y)}{\sum_{y_i \in Y} \exp([f(x)]_{y_i})} \quad (1)$$
The input is a data point $x \in \mathbb{R}^D$ and the output is a vector of probabilities of all classes $p_\theta \in \mathbb{R}^N$. $f(x) : \mathbb{R}^D \to \mathbb{R}^N$ is the feed-forward function that maps data $x$ into an $N$-dimensional vector. The final layer in such classifier networks is predefined to predict up to $N$ different classes.
Existing continual learning methods (Kirkpatrick et al., 2017; Masse et al., 2018) typically tackle the continual learning problem using a feed-forward classifier. However, such classifier architectures do not perform well in the Class-IL setting, where models have to predict the label of a given data point from all classes (Tao et al., 2020a; He et al., 2018). The standard feed-forward classifier computes the Softmax and cross-entropy loss over all seen classes. When training on a new task, it improves the likelihood of the ground truth classes but suppresses the likelihood of old classes, since they are all negative classes for data in the current task. This operation introduces competitive, winner-take-all dynamics that impact the strength of old classes and thus make the classifier forget past tasks." }, { "heading": "3.2 ENERGY-BASED MODELS", "text": "Energy-based models (LeCun et al., 2006) are a class of maximum likelihood models that define the likelihood of a data point $x \in \mathbb{R}^D$ using the Boltzmann distribution:
$$p_\theta(x) = \frac{\exp(-E_\theta(x))}{Z}, \qquad Z = \int_x \exp(-E_\theta(x))\,dx \quad (2)$$
where $E_\theta(x) : \mathbb{R}^D \to \mathbb{R}$, known as the energy function, maps each input to a scalar, and $Z$ is the partition function, normalizing over all data points.
EBMs are powerful generative models that have been applied to different tasks, such as structured prediction (Belanger & McCallum, 2016; Gygli et al., 2017; Rooshenas et al., 2019; Tu & Gimpel, 2019), machine translation (Tu et al., 2020), and image generation (Xie et al., 2016; 2018; Grathwohl et al., 2019). Belanger & McCallum (2016) introduce energy networks for structured prediction, proposing that structured prediction is naturally posed with energy networks since the labels exhibit rich interaction structure. Tu & Gimpel (2019) propose a structured energy function for sequence labeling tasks.
Rooshenas et al. (2019) introduce a search-guided approach to obtain training pairs using a combination of the energy function and reward function that allows the lightly-supervised training of structured prediction energy networks. Tu et al. (2020) treat the non-autoregressive translation system as an inference network that finds the translation by minimizing the energy of a pretrained autoregressive model. Xie et al. (2016) is the first paper to use ConvNet-parameterized EBMs with Langevin dynamics for image generation. They show that a generative random field model can be derived from the commonly used discriminative ConvNet. Xie et al. (2018) cooperatively train the descriptor and generator using MCMC sampling." }, { "heading": "4 CONTINUAL LEARNING WITH ENERGY-BASED MODELS", "text": "In this section, we describe how EBMs can be used for classification and discuss how they can overcome the above-mentioned problems in the continual learning setting. Note that our analysis is based on the Class-IL setting, which is one of the most natural and also the hardest settings used in CL (van de Ven & Tolias, 2019; He et al., 2018; Tao et al., 2020b)." }, { "heading": "4.1 ENERGY-BASED MODELS FOR CLASSIFICATION", "text": "Different from existing works using EBMs for structured prediction, such as multi-label classification (Belanger & McCallum, 2016; Gygli et al., 2017; Rooshenas et al., 2019), CL does not allow access to previous data while training on new data, and thus the biggest challenge is to prevent catastrophic forgetting. The main focus of this paper is to mitigate catastrophic forgetting in CL without using replay buffers to store past data or models.
To make the EBM suitable for classification, we use the Boltzmann distribution to define the likelihood of $y$ conditioned on $x$:
$$p_\theta(y|x) = \frac{\exp(-E_\theta(x,y))}{Z(y|x)}, \qquad Z(y|x) = \sum_{y_i} \exp(-E_\theta(x,y_i)) \quad (3)$$
where $E_\theta(x,y) : (\mathbb{R}^D, \mathbb{N}) \to \mathbb{R}$ is the energy function that maps $x$ and $y$ to a scalar energy value, and $Z(y|x)$ is the partition function that ensures $\sum_y p_\theta(y|x) = 1$ for any given $x$.
To minimize the negative log-likelihood of the ground truth label $y^+$, inspired by the contrastive divergence proposed by Hinton (2002), we minimize the energy of $x$ at the ground truth label $y^+$ while increasing the energy of $x$ at other labels. We use the following loss and its corresponding gradient (see Appendix B for the derivation):
$$\mathcal{L}_{ML}(\theta) = \mathbb{E}_{x \sim T}\Big[E_\theta(x, y^+) + \log \sum_{y_i \in \{\mathcal{N}, y^+\}} \exp(-E_\theta(x, y_i))\Big] \quad (4)$$
$$\frac{\partial \mathcal{L}_{ML}}{\partial \theta} = \mathbb{E}_{x \sim T}\left(\frac{\partial E_\theta(x, y^+)}{\partial \theta} - \mathbb{E}_{y_i \sim \{\mathcal{N}, y^+\}}\left[\frac{\partial E_\theta(x, y_i)}{\partial \theta}\right]\right) \quad (5)$$
where $\mathcal{N}$ is the set of 'negative classes'. Following common practice in contrastive divergence, we sample the set of negative classes $\mathcal{N}$ from the set of class labels in the current batch $Y_B$, using only a single negative class per training sample. We note, however, that it is possible to use different strategies for choosing the negative classes, and in the experiments reported in the top half of Table 2 we explore alternative strategies. If there are batches that contain only a single class label, sampling a negative class from the current batch does not work, but one could instead select whichever other class was seen most recently as a negative class, or one could sample from all other classes seen so far. Another solution might be to always use a fake label representing "not exist" as the negative sample.
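To make this concrete, below is a minimal PyTorch sketch of the loss in Equation 4 with a single negative class drawn from the labels of the current batch. The function name `ebm_loss` and the collision-resampling loop are illustrative assumptions rather than the authors' released code; `energy_fn` stands for any network computing $E_\theta(x, y)$.

```python
import torch

def ebm_loss(energy_fn, x, y):
    """Contrastive loss of Eq. (4) with one negative class per sample.

    energy_fn(x, y) -> per-sample scalar energies E_theta(x, y).
    Negatives are drawn from the labels present in the current batch;
    assumes the batch contains at least two distinct labels.
    """
    batch_labels = y.unique()
    idx = torch.randint(len(batch_labels), (y.size(0),), device=y.device)
    y_neg = batch_labels[idx]
    collision = y_neg == y
    while collision.any():  # resample any negative equal to the ground truth
        idx = torch.randint(len(batch_labels), (int(collision.sum()),),
                            device=y.device)
        y_neg[collision] = batch_labels[idx]
        collision = y_neg == y

    e_pos = energy_fn(x, y)      # E(x, y+), shape (B,)
    e_neg = energy_fn(x, y_neg)  # E(x, y-), shape (B,)
    # E(x, y+) + log( exp(-E(x, y+)) + exp(-E(x, y-)) )
    return (e_pos + torch.logsumexp(torch.stack([-e_pos, -e_neg]), dim=0)).mean()
```

Note that no term in this loss touches classes outside the batch, which is exactly the property that limits interference with previously learned classes.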
As opposed to the classifier architectures that output a fixed $N$-dimensional probability vector of all classes, this formulation thus provides freedom in the choice of which class labels to train on.
This is important because a key issue with standard classifier models is that they suppress the strength of old classes and thus make the classifier forget past tasks, as mentioned in Section 3.1. In contrast, EBMs define the probability without normalizing over all classes, but instead sample a limited number of negative classes. This operation causes less interference with old classes without using any extra information, which contributes to why EBMs suffer less from catastrophic forgetting.
Another advantage of our EBMs is that the choice of model architectures also becomes more flexible. The traditional classification models can only feed in $x$ as input, and the classification results only depend on the neural network trained on the current data $x$. Thus the models usually fail on the old data after updating network parameters trained on the new data. In contrast, EBMs have many different ways to combine $x$ and $y$ in the energy function $E_\theta(x,y)$, with the only requirement that $E_\theta(x,y) : (\mathbb{R}^D, \mathbb{N}) \to \mathbb{R}$. In EBMs, we can treat $y$ as an attention filter or gate to select the most relevant information between $x$ and $y$, as described in Section 4.2.
Since the negative sample can be sampled from the current batch, EBMs do not require knowledge of the task at hand, allowing applications across different continual learning scenarios, including those where the underlying task is unclear. More broadly speaking, EBMs can go beyond the scope of classification problems, and similar ideas can be applied to regression (Gustafsson et al., 2020), generation (Xie et al., 2016; Du et al., 2020), and reinforcement learning tasks (Parshakova et al., 2019), although we do not explore them in this work." }, { "heading": "4.2 ENERGY NETWORK", "text": "To compute the energy of any data $x$ and class label $y$ pair, we use $y$ to induce a conditional gain on $x$, which serves as an attention filter (Xu et al., 2015) to select the most relevant information between $x$ and $y$. In Figure 1, we first send $x$ into a small network to generate the feature $f(x)$. The label $y$ is mapped into a feature space $g(y)$ of the same dimension using an FC layer or a random projection. We apply the Softmax function over the channels of $g(y)$ and combine it with $f(x)$:
$$m_c(x,y) = f_c(x) \cdot \frac{\exp(g_c(y))}{\sum_j \exp(g_j(y))} \quad (6)$$
where $f_c$ and $g_c$ are the $c$th channels of $f(x)$ and $g(y)$, respectively. The output is finally sent to a fully connected layer to generate the energy value $E_\theta(x,y)$. For more details on our model architectures, see Appendix C.
Our EBMs allow any number of classes in new batches by simply training or defining a new conditional gain $g(y)$ for the new classes and generating its energy value with the data point $x$. This formulation gives us the freedom to learn new classes without pre-defining their number in advance. We note that new classes can also be dynamically added to a standard classifier, but there new class heads need to be added to the softmax output layer.
Class inference. During inference, the model must predict a class label from all classes seen so far. Let $x_k$ be one data point from a batch $B_k$ with an associated discrete label $y \in Y_k$, where $Y_k$ contains the classes in $B_k$. There are $Y = \bigcup_{k=1}^{K} Y_k$ different classes in total after seeing all the batches. The MAP estimate is
$$\hat{y} = \operatorname*{argmin}_{y} E_{\theta_K}(x_k, y), \quad y \in \bigcup_{k=1}^{K} Y_k \quad (7)$$
where $E_{\theta_K}(x_k, y)$ is the energy function with parameters $\theta_K$ resulting from training on the batches $\{B_1, \cdots, B_K\}$. The energy function can compute an energy for any discrete class input, including unseen classes. This avoids needing to predefine the number of classes in advance, as is necessary with traditional CL models.
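As a sketch of how the label-conditioned energy network of Equation 6 and the MAP inference of Equation 7 fit together, consider the following PyTorch fragment. The layer sizes, the use of linear layers instead of the paper's convolutional feature extractor, and the `max_classes` embedding capacity are illustrative assumptions (the actual architectures are in the paper's Appendix C).

```python
import torch
import torch.nn as nn

class EnergyNet(nn.Module):
    """Label-conditioned energy E_theta(x, y), following Eq. (6)."""
    def __init__(self, in_dim, hidden=256, max_classes=1000):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden))
        self.g = nn.Embedding(max_classes, hidden)  # conditional gain g(y)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, y):
        gate = torch.softmax(self.g(y), dim=-1)  # softmax over channels
        return self.out(self.f(x) * gate).squeeze(-1)

@torch.no_grad()
def predict(model, x, seen_classes):
    """MAP inference of Eq. (7): the seen class with the lowest energy."""
    ys = torch.tensor(seen_classes)
    energies = torch.stack([model(x, y.expand(x.size(0))) for y in ys], dim=1)
    return ys[energies.argmin(dim=1)]
```

Because the class label enters only through the embedding table, adding a class amounts to using a fresh row of `g`; nothing in the network's output layer has to be resized.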
" }, { "heading": "5 EXPERIMENTS", "text": "In this section, we investigate several questions: How do the proposed EBMs perform in different CL settings? Can we qualitatively understand the differences between EBMs and baselines? Is the proposed EBM training objective better than other objectives? Can we apply the EBM training objective to other baseline methods? And finally, what is the best architecture for label conditioning that causes the minimum interference with old data? To answer these questions, we first introduce experiments in the Boundary-Aware setting, which involves learning a sequence of distinct tasks with clear task boundaries, in Section 5.1. We then show that EBMs can be naturally applied to the Boundary-Agnostic setting, where the data gradually change in a streaming fashion without the notion of task boundaries, in Section 5.2." }, { "heading": "5.1 EXPERIMENTS ON BOUNDARY-AWARE SETTING", "text": "" }, { "heading": "5.1.1 DATASETS AND EVALUATION PROTOCOLS", "text": "Datasets. We evaluate the proposed EBMs on the split MNIST (Zenke et al., 2017), permuted MNIST (Kirkpatrick et al., 2017), CIFAR-10 (Krizhevsky et al., 2009), and CIFAR-100 (Krizhevsky et al., 2009) datasets. We follow existing approaches and separate the datasets into several splits. The split MNIST dataset is obtained by splitting the original MNIST dataset (LeCun et al., 1998) into 5 tasks, each with 2 classes. It has 60,000 training images and 10,000 test images. The permuted MNIST has 10 tasks, each with 10 classes. For each task, the original image pixels are randomly permuted to generate 32 × 32 images. We separate CIFAR-10 into 5 tasks, each with 2 classes. There are 50,000 training images and 10,000 test images. Similarly, CIFAR-100 is split into ten tasks, each with 10 classes. Evaluation protocols. Task-incremental Learning (Task-IL), Domain-incremental Learning (Domain-IL), and Class-incremental Learning (Class-IL) are three common evaluation metrics used by existing works (van de Ven & Tolias, 2019; Prabhu et al., 2020). Most approaches perform fairly well in the first two simpler settings, but fail in the Class-IL setting, which is considered the most natural and also the hardest setting for continual learning (Tao et al., 2020a; He et al., 2018; Tao et al., 2020b). Class-IL predicts the label of a given data point from all seen classes. In this paper, we consider Class-IL as our evaluation metric." }, { "heading": "5.1.2 COMPARISONS WITH EXISTING METHODS", "text": "Due to the difficulty of the class-incremental setting, most approaches rely on an external quota of memory. Such replay-based methods (Shin et al., 2017; Rebuffi et al., 2017) use extra memory, larger or extra models, and are computationally relatively expensive.
In this paper, we focus on continual learning without using replay and without storing data.
We compare the proposed method with available baseline models that do not use replay or stored data, including a standard classification model (CLS), EWC (Kirkpatrick et al., 2017), Online EWC (Schwarz et al., 2018), SI (Zenke et al., 2017), LwF (Li & Hoiem, 2017), MAS (Aljundi et al., 2019), and BGD (Zeno et al., 2018). The continual learning results on the four datasets are shown in Table 1. All the baselines and EBMs are based on the same model architecture. For split MNIST and permuted MNIST, we use several fully-connected layers as in (van de Ven & Tolias, 2019). For CIFAR-10 and CIFAR-100, we use a convolutional network (see Appendix C). For all the baselines and EBMs on CIFAR-100, we pre-train the convolutional layers on CIFAR-10 and only fine-tune the fully-connected layers.
Similar training regimes are used for the EBMs and baselines. On the split MNIST, permuted MNIST, and CIFAR-10 datasets, we train for 2000 iterations per task. On the CIFAR-100 dataset we train for 5000 iterations per task. For all experiments, we use the Adam optimizer with learning rate 1e-4. Each experiment in Table 1 is run 20 times with different random seeds, with results reported as the mean ± SEM. EBMs show a significant improvement on all the datasets, indicating that EBMs forget less when updating models for new tasks." }, { "heading": "5.1.3 QUALITATIVE ANALYSIS", "text": "Energy landscape. To better understand why EBMs suffer less from catastrophic forgetting, we qualitatively compare the change in energy landscapes of EBMs and baseline methods as the learning progresses. We show the energy landscapes after training on task 9 and task 10 of the permuted MNIST dataset in Figure 2. For the classifier, we show its negative probabilities on all classes. Each datapoint has 100 energy values (EBM) or probabilities (CLS) corresponding to the 100 labels in the dataset, and the values are normalized over the 100 classes for each datapoint. Dark diagonal elements indicate that the model predictions are correct for both old and new data. After training on task T9, CLS assigns high probabilities to classes from task T9 (80-90) for almost all the data from tasks T1 to T9. However, the highest probabilities shift to classes from task T10 (90-100) after learning task T10. This means the classifier tends to assign high probabilities to new classes for both old and new data, indicating forgetting. The EBM, on the other hand, tends to have low energies across the diagonal, which means that after training on new tasks, the EBM still assigns low energies to the true labels of previous data.
This indicates that the EBM is better at learning new tasks without catastrophically forgetting old tasks.
Figure 3: Predicted label distribution after learning each task on the split MNIST dataset. CLS only predicts classes from the current task while EBMs can predict classes for all seen classes.
Predicted class distribution. In Figure 3, we plot the proportional distribution of all seen classes so far on the split MNIST dataset. Taking the second figure in the first row as an example, it shows the distribution of predicted labels after training on the second task, which means there are four seen classes {1, 2, 3, 4}. For each image from the first and second task, we compute its top-1 class prediction and accumulate the predictions for each class to obtain the proportional distribution over all seen classes. Since the number of images in each class is almost the same, the ground truth proportional distribution should be uniform over all the classes. CLS predicts the first two classes uniformly after training on task 1 (first column). However, after learning new tasks, classes from the new task have larger proportions than old classes, which means the model tends to predict new classes and forget the old classes. The last figure in row one gives very high proportions to classes from the last task but very low proportions to previous classes. CLS almost forgets all previous tasks and only predicts classes from the last task, even when the input images are from old tasks. The EBM covers all seen classes, indicating that it memorizes old tasks and prevents forgetting.
We provide more analysis about EBMs and baselines from different perspectives in the Appendix, including the confusion matrix in Appendix A.2, the model capacity in Appendix A.3, and parameter importance in Appendix A.4." }, { "heading": "5.1.4 IS THE STRONG PERFORMANCE OF EBMS DUE TO THE ENERGY TRAINING OBJECTIVE OR DUE TO THE LABEL CONDITIONING?", "text": "Effect of energy training objective. We conduct an experiment on the CIFAR-100 dataset to investigate how different training objectives influence the CL results. In Equation 4, the negative labels $y_i$ are sampled from $Y_B$.
We test three different sampling strategies in Table 2. The first one uses all classes seen so far as negative labels, which is similar to the traditional feed-forward classifiers (All Neg Seen (V4)). The second one takes all the classes in the current batch as negative labels (All Neg Batch (V4)). The last one randomly selects one class from the current batch as the negative, as reported in Section 4.1 (1 Neg Batch (V4)). Note that the negative labels do not include the ground truth class in any of these three strategies. We find that using only one negative sample generates the best result and that using negatives sampled from classes in the current batch is better than sampling from all seen classes. This conclusion is consistent with our analysis in Section 4.1. Since the loss in Equation 4 aims at increasing the energy of negative samples while decreasing the energy of positive ones, sampling negatives from the current batch interferes less with previous classes than sampling from all seen classes. Using a single negative causes the minimum suppression of negative samples and thus has the best result. Overall, our results indicate that, surprisingly and counterintuitively, directly optimizing the cross-entropy loss used by existing approaches may not be the best way for continual learning.
Effect of label conditioning. Next, we test whether the label conditioning in our EBMs is important for their performance. We test this by modifying the training objective of a standard classifier to that of our proposed energy objective, which is the same as running our EBMs without the label conditioning. The results of this comparison are listed in Table 3. The new training objective does not suppress the probability of old classes when improving the probability of new classes and thus achieves better results. However, EBMs still outperform the baselines, implying that the label conditioning architecture also contributes to why EBMs suffer less from catastrophic forgetting. We also show the testing accuracy curve in Appendix A.1.
To summarize, we showed that the strong performance of our EBMs is due to both the energy training objective and the label conditioning." }, { "heading": "5.1.5 COMPARISON OF DIFFERENT ENERGY NETWORK ARCHITECTURES", "text": "EBMs allow flexibility in integrating data information and label information in the energy function. To investigate where and how to combine the information from $x$ and $y$, we conduct a series of experiments on the CIFAR-10 dataset. Table 2 shows four model architectures (V1-V4) that combine $x$ and $y$ in the early, middle, and late stages, respectively (see Appendix C for the architecture details). We find that combining $x$ and $y$ in the late stage (V4) performs the best. We note that instead of learning a feature embedding of the label $y$, we can use a fixed projection matrix that is randomly sampled from the uniform distribution U(0, 1). Even using this fixed random projection can already generate better results than most baselines in Table 1. Note further that the number of trainable parameters in the "Fix" setting is much lower than that of the baselines. Using a learned feature embedding of $y$ can further improve the performance. We may also apply different normalization methods over the feature channel of $y$. We find that Softmax (End Softmax (V4)) is better than L2 normalization (End Norm2 (V4)) and no normalization (End (V4)).
}, { "heading": "5.1.6 INTERFERENCE WITH PAST DATA", "text": "Here we test the effects of label conditioning and the energy training objective in a different way, by evaluating their ability to prevent interference with past data when learning classification on new classes. Formally, let $x_i$ and $x_j$ be two data points and $\theta_i$ be the model parameters after training on $x_i$. The model parameters change by $\Delta\theta(x_j)$ after learning the new data $x_j$. We test the difference between the losses $\mathcal{L}_{ML}(x_i \mid \theta_i)$ and $\mathcal{L}_{ML}(x_i \mid \theta_i + \Delta\theta(x_j))$. Ideally, we expect $\mathcal{L}_{ML}(x_i \mid \theta_i + \Delta\theta(x_j)) \leq \mathcal{L}_{ML}(x_i \mid \theta_i)$ for $x_i \neq x_j$, which means the update on new data has no influence or a positive influence on the old data. The first-order expansion gives $\mathcal{L}_{ML}(x_i \mid \theta_i + \Delta\theta(x_j)) \approx \mathcal{L}_{ML}(x_i \mid \theta_i) + \nabla_{\theta_i}\mathcal{L}_{ML}(x_i \mid \theta_i)^\top \Delta\theta(x_j)$. To make our desired inequality hold, the gradient at $x_i$ should satisfy $\nabla_{\theta_i}\mathcal{L}_{ML}(x_i \mid \theta_i)^\top \Delta\theta(x_j) \leq 0$. Thus $\phi = \nabla_{\theta_i}\mathcal{L}_{ML}(x_i \mid \theta_i)^\top \Delta\theta(x_j)$ can be used to measure the influence of updating models on new data on the performance on old data. The smaller the value, the less negative influence on old data, and thus the better the model is at preventing forgetting.
We compare EBMs with the standard classifier (CLS) and CLS using our EBM training objective (CLS*) (see Appendix A.5 for the implementation details). For results on split MNIST, the average values of $\phi$ over all tasks for CLS, CLS*, and EBMs are $\phi_{CLS} = 0.168$, $\phi_{CLS*} = 0.004$, and $\phi_{EBM} = -3.743\mathrm{e}{-5}$, respectively. For CIFAR-10, the results are $\phi_{CLS} = 1.104$, $\phi_{CLS*} = 0.043$, and $\phi_{EBM} = 4.414\mathrm{e}{-5}$, respectively. We find that EBMs have smaller $\phi$ than CLS and CLS*, which is consistent with our analysis that the conditional gain $g(y)$ serves as an attention filter that selects the relevant information between $x$ and $y$ and makes parameter updates cause less interference with old data, which contributes to why EBMs suffer less from catastrophic forgetting.
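As a sketch of how $\phi$ can be estimated in practice, the fragment below computes the dot product between the gradient on an old data point and the parameter update induced by a new one. Approximating $\Delta\theta(x_j)$ by a single SGD step is an illustrative assumption; the measurement protocol actually used is in Appendix A.5, which is not shown here.

```python
import torch

def interference_phi(model, loss_fn, x_old, y_old, x_new, y_new, lr=1e-4):
    """phi = grad_theta L(x_old)^T * delta_theta(x_new).

    delta_theta(x_new) is approximated by one SGD step on the new data,
    i.e. delta_theta = -lr * grad_theta L(x_new). phi <= 0 means the
    update on the new data does not increase the loss on the old data.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    g_old = torch.autograd.grad(loss_fn(model(x_old), y_old), params)
    g_new = torch.autograd.grad(loss_fn(model(x_new), y_new), params)
    phi = sum((go * (-lr * gn)).sum() for go, gn in zip(g_old, g_new))
    return phi.item()
```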
Since there is no knowledge of the number of tasks, previous continual learning methods that rely on task boundaries are generally inapplicable. One trivial adaptation is to perform each method's core update after every batch step instead of after every task. However, such an adaptation is impractical for most algorithms, such as EWC, because of the large computational complexity. We obtained results for the CLS, Online EWC, SI, and BGD baselines. Overall classification results on the four datasets are shown in Table 4.
All the baselines and EBM use the same model architecture as in the Boundary-Aware setting. Each experiment was performed 5 times with different random seeds, and the results are reported as the mean ± SEM over these runs. We observe that EBMs achieve a significant improvement on all the datasets. EBMs can be naturally applied to both the Boundary-Aware and Boundary-Agnostic settings without any modification, as EBMs have a very flexible training objective. The experiments demonstrate that EBMs generalize well across different continual learning problems: they naturally handle different numbers of new classes, new tasks, and data streams with and without clear task boundaries." }, { "heading": "6 CONCLUSION", "text": "In this paper, we found that energy-based models are a promising class of models for a variety of continual learning settings. We demonstrated that EBMs exhibit many desirable characteristics to prevent catastrophic forgetting in CL, and we experimentally showed that EBMs obtain state-of-the-art performance in the Class-IL and Boundary-Agnostic settings on multiple benchmarks." }, { "heading": "A ADDITIONAL ANALYSES", "text": "Extending the results presented in Section 5.1, here we further compare EBMs with the baseline models by providing additional quantitative analyses of their performance. We show testing accuracy curves in Section A.1, confusion matrices between the ground-truth labels and model predictions in Section A.2, model capacity comparisons in Section A.3, and parameter importance measurements in Section A.4. In Section A.5, we provide more details of the interference-with-past-data analysis of Section 5.1.6. Finally, in Section A.6 we perform the split CIFAR-100 protocol with several different numbers of tasks to test the generality of the proposed EBMs.
A.1 TESTING ACCURACY CURVE
In Section 5.1.4, we show that the proposed EBM training objective is also applicable to baseline approaches and improves their performance significantly in the Class-IL setting. Figure 4 shows the testing accuracy of each task as training progresses. We compare the standard classifier (CLS), the classifier using our training objective (CLS*), and our EBMs. The left figures show results on the split MNIST dataset while the right figures show results on the permuted MNIST dataset. We observe that the accuracy of old tasks in CLS drops sharply when learning new tasks, while the EBM training objective mitigates the forgetting problem. The curve for EBMs drops even more slowly than for CLS and CLS*, implying that the proposed energy objective mitigates the catastrophic forgetting problem.
A.2 CLASS CONFUSION MATRIX AT THE END OF LEARNING
In Section 5.1.3, we show the qualitative analysis of EBMs and baselines. Besides the energy landscape and predicted class distribution, we also generate the confusion matrices of EBMs and the baseline approaches. A confusion matrix shows the relationship between ground-truth labels and predicted labels.
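A small helper for producing such matrices is sketched below; we assume a model that returns per-class scores (for an EBM, the score of class y would be the negative energy −E(x, y)):

```python
import torch

@torch.no_grad()
def confusion_matrix(model, loader, num_classes):
    """cm[i, j] counts test samples of true class i predicted as class j.
    A strong diagonal indicates little forgetting; heavy columns for the
    last task's classes indicate catastrophic forgetting."""
    cm = torch.zeros(num_classes, num_classes, dtype=torch.long)
    model.eval()
    for x, y in loader:
        pred = model(x).argmax(dim=-1)
        for t, p in zip(y.view(-1), pred.view(-1)):
            cm[t.long(), p.long()] += 1
    return cm
```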
In Figure 5, we show the confusion matrices after training on all the tasks for the split MNIST and permuted MNIST datasets. The classifier tends to predict the classes from the last task (classes 8 and 9 for split MNIST and 90-100 for permuted MNIST). EBMs have high values along the diagonal, meaning the predicted results match the ground-truth labels for all the sequentially learned tasks. This demonstrates that EBMs are better at learning new tasks without catastrophically forgetting old ones.
A.3 MODEL CAPACITY
Another hypothesized reason why EBMs suffer less from catastrophic forgetting than standard classifiers is their potentially larger effective capacity. To analyze the effective capacity of our models, we test the model capacity of the standard classifier and EBMs on both generated images and natural images.
Model capacity on generated images. We generate a large randomized dataset of 32×32 images with each pixel value uniformly sampled from -1 to 1. Each image is then assigned a random class label between 0 and 10. We measure the model capacity by evaluating to what extent the model can fit such a dataset. For both the standard classifier and the EBM, we evaluate three different model sizes (small, medium, and large). For a fair comparison, we ensure that the EBM and classifier have a similar number of parameters. The small EBM and CLS have 2,348,545 and 2,349,032 parameters respectively. The medium models have 5,221,377 (EBM) and 5,221,352 (CLS) parameters, while the large models have 33,468,417 (EBM) and 33,465,320 (CLS) parameters. We use the model architectures in Table 5a and Table 5b for EBMs and classifiers.
The resulting training accuracies are shown in Figure 6, with the number of datapoints ranging from one to five million. Given any number of datapoints, the EBM obtains higher accuracy than the classifier, demonstrating that the EBM indeed has a larger capacity to memorize data given a similar number of parameters. The gap between EBM and CLS increases as the models become larger. The larger capacity of EBMs potentially enables them to memorize more data and mitigate the forgetting problem.
Model capacity on natural images. We also compare classifiers and EBMs on natural images from CIFAR-10. Each image is assigned a random class label between 0 and 10. We use the same network architectures as in Table 5a and Table 5b, but with a hidden unit size of h = 256. Since there are only 50,000 images in CIFAR-10, we use a small classifier and EBM and train them on the full dataset. After training for 100,000 iterations, the EBM obtains a top-1 prediction accuracy of 82.81%, while the classifier reaches 42.19%. We reach the same conclusion: the EBM has a larger capacity to memorize data given a similar number of parameters.
A.4 PARAMETER IMPORTANCE
To further understand why EBMs suffer less from catastrophic forgetting, we design an experiment to test the importance of model parameters on past data. Inspired by elastic weight consolidation (EWC) (Kirkpatrick et al., 2017), we estimate the importance of parameters for each task using the diagonal elements of the Fisher information matrix (FIM) F. Let $\theta_i$ be the model parameters after training on task $T_i$. Given one of the previous tasks $T_j$, $j < i$, we evaluate how important each parameter is for task $T_j$.
The k-th diagonal element of F is defined from the gradient of the EBM loss:

$$F_{i,k} = \mathbb{E}_{x\sim T_j}\left[\nabla_{\theta_{i,k}}\left(E_{\theta_i}(x,y^+) + \log\sum_{y'}\exp(-E_{\theta_i}(x,y'))\right)\right]^2, \qquad (8)$$

where $x$ is sampled from task $T_j$ and $E_{\theta_i}(x,y^+)$ is the energy value of the input data $x$ and the ground-truth label $y^+$. The labels $y' \in \mathcal{Y}_j$ are classes randomly selected from the current batch; here we use a single negative class. The above equation assigns high values to parameters crucial to task $T_j$, as their gradients with respect to the loss are larger. Since the diagonal elements of the Fisher information matrix measure the importance of each parameter for a given task, the density of the diagonal elements represents the proportion of important parameters over all parameters. A higher density means more parameters are important for the given task and fewer parameters can be recruited for new tasks. Ideally, we expect these values to be sparse.
In Figure 7, we show the diagonal elements for the standard classifier (CLS), the classifier using our training objective (CLS*), and our EBMs on the split MNIST dataset. For CLS and CLS*, we follow (Kirkpatrick et al., 2017) to compute their Fisher information matrices. For comparison across multiple models, we normalize the FIM diagonal elements of each method to be between 0 and 1 and report the normalized results in Figure 7. For example, “Fisher 5 on data 1” shows the diagonal elements of the Fisher information matrix obtained by Equation 8 using the model parameters $\theta_5$ (after training on task $T_5$) and data $x, y^+, y'$ from task $T_1$. The distribution for EBMs is sparser than for CLS and CLS*, indicating that EBMs have fewer important parameters for previous data. Updating parameters for a new task will therefore have less negative impact on old tasks. In addition, more parameters can be used for learning new tasks when the distribution is sparse. This may provide another explanation for why EBMs can mitigate catastrophic forgetting.
A.5 MORE DETAILS OF THE INTERFERENCE WITH PAST DATA ANALYSIS
In Section 5.1.6, we test the interference of models trained on new data with past data. We compare the proposed EBMs with a standard classifier (CLS) and CLS using our EBM training objective (CLS*). For CLS and CLS*, we use the cross-entropy loss to compute $\phi$, i.e., the cross-entropy losses on the old data before and after training on new data.
We obtain $\theta_i$ by training on 200 randomly selected data points from task 1 and compute the gradient after training. Then we perform a single optimization step on 200 data points randomly selected from task 2 and average the parameters to get $\theta_j$. We compute $\phi$ as the inner product of $\Delta\theta(x_j) = \theta_j - \theta_i$ and the gradient $\nabla_{\theta_i} L_{ML}(x_i \mid \theta_i)$. For comparison among multiple approaches, we normalize the $\phi$ of each method by dividing by the maximum element value of $\theta_i$ and scale $\phi$ based on the length of $\theta$, as CLS and EBMs have different numbers of parameters.
A.6 SPLIT CIFAR-100 WITH DIFFERENT NUMBERS OF TASKS
To test the generality of our proposed EBMs, in Table 6 we repeat the boundary-aware experiments on split CIFAR-100 (see Table 1 in the main text) for different numbers of classes per task. In Table 1 the CIFAR-100 dataset was split into 10 tasks, resulting in 10 classes per task. Here we additionally perform the split CIFAR-100 protocol with 5 tasks (i.e., 20 classes per task), with 20 tasks (i.e., 5 classes per task) and with 50 tasks (i.e., 2 classes per task). For all settings, we find that our EBM substantially outperforms the baselines."
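Returning to the Fisher-diagonal estimate of Equation 8 in Section A.4 above, a minimal sketch follows; `energy_fn` and the single-negative batch `y_neg` are assumed to be provided by the surrounding training code:

```python
import torch

def fisher_diagonal(model, energy_fn, x, y_pos, y_neg):
    """Per-parameter mean squared gradient of the EBM loss over a batch
    drawn from task T_j, i.e. the diagonal of F in Equation 8."""
    params = [p for p in model.parameters() if p.requires_grad]
    fisher = [torch.zeros_like(p) for p in params]
    for i in range(x.size(0)):
        e_pos = energy_fn(model, x[i:i+1], y_pos[i:i+1])
        e_neg = energy_fn(model, x[i:i+1], y_neg[i:i+1])
        # EBM loss with one negative: E+ + log(exp(-E+) + exp(-E-))
        loss = (e_pos + torch.logsumexp(torch.cat([-e_pos, -e_neg]), dim=0)).sum()
        for f, g in zip(fisher, torch.autograd.grad(loss, params)):
            f += g.detach() ** 2
    # sparser values indicate fewer parameters crucial to the task
    return [f / x.size(0) for f in fisher]
```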
}, { "heading": "B DERIVATION OF LOSS GRADIENT", "text": "The derivation of the loss gradient in Equation 5 is\n∂LML ∂θ = Ex∼T ∂Eθ(x,y+) ∂θ + ∂ [ log ∑ yi∈{N ,y+} exp(−Eθ(x,yi)) ] ∂θ = Ex∼T ∂Eθ(x,y+) ∂θ + 1∑\nyi∈{N ,y+} exp(−Eθ(x,yi))\n∂ [∑ yi∈{N ,y+} exp(−Eθ(x,yi)) ]\n∂θ = Ex∼T ∂Eθ(x,y+) ∂θ + 1∑ yi∈{N ,y+} exp(−Eθ(x,yi)) ∑\nyi∈{N ,y+}\n∂ [exp(−Eθ(x,yi))] ∂θ = Ex∼T ∂Eθ(x,y+) ∂θ − 1∑ yi∈{N ,y+} exp(−Eθ(x,yi)) ∑ yi∈{N ,y+} exp(−Eθ(x,yi)) ∂Eθ(x,yi) ∂θ\n = Ex∼T ∂Eθ(x,y+) ∂θ − ∑\nyi∈{N ,y+}\n1∑ yi∈{N ,y+} exp(−Eθ(x,yi)) exp(−Eθ(x,yi)) ∂Eθ(x,yi) ∂θ ∑\nyi∈{N ,y+}\nexp(−Eθ(x,yi)) is a constant and can be moved inside the summation = Ex∼T ∂Eθ(x,y+) ∂θ − ∑\nyi∈{N ,y+}\nexp(−Eθ(x,yi))∑ yi∈{N ,y+} exp(−Eθ(x,yi)) ∂Eθ(x,yi) ∂θ = Ex∼T ∂Eθ(x,y+) ∂θ − ∑\nyi∈{N ,y+}\npθ(yi|x) ∂Eθ(x,yi)\n∂θ = Ex∼T ( ∂Eθ(x,y +)\n∂θ − Eyi∼{N ,y+}\n[ ∂Eθ(x,yi)\n∂θ\n]) .\n(9)" }, { "heading": "C MODEL ARCHITECTURES", "text": "Here we provide details of the model architectures used on the different datasets.\nImages from the split MNIST and permuted MNIST datasets are grey-scale images. The baseline models for these datasets, similar as in (van de Ven & Tolias, 2019), consist of several fullyconnected layers. For the EBMs we use similar number of parameters. The model architectures of EBMs on the split MNIST dataset and permuted MNIST dataset are the same, but have different input and output dimensions and hidden sizes. The model architectures of EBMs and baseline models on the split MNIST dataset are shown in Table 7a and Table 7b respectively. The model architectures of EBMs and baseline models on the permuted MNIST dataset are shown in Table 8a and Table 8b.\nImages from the CIFAR-10 and CIFAR-100 datasets are RGB images. For CIFAR-10, we use a small convolutional network for both the baseline models and the EBMs. The model architectures of EBMs on the CIFAR-10 dataset are shown in Table 9a, Table 9b, Table 9c, Table 9d, and Table 9e. We investigate different architectures to search for the effective label conditioning on EBMs training as described in Section 5.1.5. The baseline models on CIFAR-10 are given in Table 9f.\nThe model architectures used on the CIFAR-100 dataset are detailed in Table 10." } ]
2020
null
SP:5e964be1417deb994f62cd256e24ed7cafd2bd9c
[ "This paper proposes the VarIational STructured Attention networks (VISTA-Net), which improves pervious SOTA models for dense pixel-wise prediction tasks. The proposed VISTA-Net is featured by two aspects: 1) A new structured attention is proposed, which is able to jointly model spatial-level and channel-level dependencies; 2) It incorporates the proposed structured attention with a CRF-like inference framework, which allows the probabilistic inference. Experimental studies are conducted on monocular depth estimation and semantic image segmentation, showing improved performances of VISTA-Net consistently." ]
State-of-the-art performances in dense pixel-wise prediction tasks are obtained with specifically designed convolutional networks. These models often benefit from attention mechanisms that allow better learning of deep representations. Recent works showed the importance of estimating both spatial- and channel-wise attention tensors. In this paper we propose a unified approach to jointly estimate spatial attention maps and channel attention vectors so as to structure the resulting attention tensor. Moreover, we integrate the estimation of the attention within a probabilistic framework, leading to VarIational STructured Attention networks (VISTA-Net). We implement the inference rules within the neural network, thus allowing for joint learning of the probabilistic and the CNN front-end parameters. Importantly, as demonstrated by our extensive empirical evaluation on six large-scale datasets, VISTA-Net outperforms the state-of-the-art in multiple continuous and discrete pixel-level prediction tasks, thus confirming the benefit of structuring the attention tensor and of inferring it within a probabilistic formulation.
[]
[ { "authors": [ "Anurag Arnab", "Sadeep Jayasumana", "Shuai Zheng", "Philip HS Torr" ], "title": "Higher order conditional random fields in deep neural networks", "venue": null, "year": 2016 }, { "authors": [ "Aayush Bansal", "Bryan Russell", "Abhinav Gupta" ], "title": "Marr revisited: 2d-3d alignment via surface normal prediction", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Aayush Bansal", "Xinlei Chen", "Bryan Russell", "Abhinav Gupta", "Deva Ramanan" ], "title": "Pixelnet: Representation of the pixels, by the pixels, and for the pixels", "venue": "arXiv preprint arXiv:1702.06506,", "year": 2017 }, { "authors": [ "Jiawang Bian", "Zhichao Li", "Naiyan Wang", "Huangying Zhan", "Chunhua Shen", "Ming-Ming Cheng", "Ian Reid" ], "title": "Unsupervised scale-consistent depth and ego-motion learning from monocular video", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L Yuille" ], "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "venue": "arXiv preprint arXiv:1606.00915,", "year": 2016 }, { "authors": [ "Liang-Chieh Chen", "Yi Yang", "Jiang Wang", "Wei Xu", "Alan L Yuille" ], "title": "Attention to scale: Scaleaware semantic image segmentation", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Florian Schroff", "Hartwig Adam" ], "title": "Rethinking atrous convolution for semantic image segmentation", "venue": "arXiv preprint arXiv:1706.05587,", "year": 2017 }, { "authors": [ "Yunpeng Chen", "Marcus Rohrbach", "Zhicheng Yan", "Yan Shuicheng", "Jiashi Feng", "Yannis Kalantidis" ], "title": "Graph-based global reasoning networks", "venue": null, "year": 2019 }, { "authors": [ "Bin Cheng", "Inderjot Singh Saggu", "Raunak Shah", "Gaurav Bansal", "Dinesh Bharadia" ], "title": "ŝ3 net: Semantic-aware self-supervised depth estimation with monocular videos and synthetic data", "venue": null, "year": 2020 }, { "authors": [ "Bowen Cheng", "Maxwell D Collins", "Yukun Zhu", "Ting Liu", "Thomas S Huang", "Hartwig Adam", "Liang-Chieh Chen" ], "title": "Panoptic-deeplab: A simple, strong, and fast baseline for bottom-up panoptic segmentation", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Sungha Choi", "Joanne T Kim", "Jaegul Choo" ], "title": "Cars can’t fly up in the sky: Improving urban-scene segmentation via height-driven attention networks", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Jan K Chorowski", "Dzmitry Bahdanau", "Dmitriy Serdyuk", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Attention-based models for speech recognition", "venue": "In NeurIPS,", "year": 2015 }, { "authors": [ "Marius Cordts", "Mohamed Omran", "Sebastian Ramos", "Timo Rehfeld", "Markus Enzweiler", "Rodrigo Benenson", "Uwe Franke", "Stefan Roth", "Bernt Schiele" ], "title": "The cityscapes dataset for semantic urban scene understanding", "venue": null, "year": 2016 }, { "authors": [ "Angela Dai", "Angel X Chang", "Manolis Savva", "Maciej Halber", "Thomas Funkhouser", "Matthias Nießner" ], "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Jifeng Dai", "Kaiming He", "Jian Sun" ], "title": "Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Jifeng Dai", "Kaiming He", "Jian Sun" ], "title": "Convolutional feature masking for 
joint object and stuff segmentation", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Henghui Ding", "Xudong Jiang", "Bing Shuai", "Ai Qun Liu", "Gang Wang" ], "title": "Semantic correlation promoted shape-variant context for segmentation", "venue": null, "year": 2019 }, { "authors": [ "David Eigen", "Rob Fergus" ], "title": "Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "David Eigen", "Christian Puhrsch", "Rob Fergus" ], "title": "Depth map prediction from a single image using a multi-scale deep network", "venue": "NeurIPS,", "year": 2014 }, { "authors": [ "Mark Everingham", "Luc Van Gool", "Christopher KI Williams", "John Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes (voc) challenge", "venue": null, "year": 2010 }, { "authors": [ "Huan Fu", "Mingming Gong", "Chaohui Wang", "Kayhan Batmanghelich", "Dacheng Tao" ], "title": "Deep ordinal regression network for monocular depth estimation", "venue": null, "year": 2018 }, { "authors": [ "Jun Fu", "Jing Liu", "Haijie Tian", "Yong Li", "Yongjun Bao", "Zhiwei Fang", "Hanqing Lu" ], "title": "Dual attention network for scene segmentation", "venue": null, "year": 2019 }, { "authors": [ "Shanghua Gao", "Ming-Ming Cheng", "Kai Zhao", "Xin-Yu Zhang", "Ming-Hsuan Yang", "Philip HS Torr" ], "title": "Res2net: A new multi-scale backbone", "venue": null, "year": 2019 }, { "authors": [ "Andreas Geiger", "Philip Lenz", "Christoph Stiller", "Raquel Urtasun" ], "title": "Vision meets robotics: The kitti dataset", "venue": "IJRR,", "year": 2013 }, { "authors": [ "Clément Godard", "Oisin Mac Aodha", "Michael Firman", "Gabriel J Brostow" ], "title": "Digging into selfsupervised monocular depth estimation", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Vitor Guizilini", "Rares Ambrus", "Sudeep Pillai", "Allan Raventos", "Adrien Gaidon" ], "title": "3d packing for self-supervised monocular depth estimation", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Junjun He", "Zhongying Deng", "Lei Zhou", "Yali Wang", "Yu Qiao" ], "title": "Adaptive pyramid context network for semantic segmentation", "venue": null, "year": 2019 }, { "authors": [ "Hanzhe Hu", "Deyi Ji", "Weihao Gan", "Shuai Bai", "Wei Wu", "Junjie Yan" ], "title": "Class-wise dynamic graph convolution for semantic segmentation", "venue": null, "year": 2020 }, { "authors": [ "Jingwei Huang", "Yichao Zhou", "Thomas Funkhouser", "Leonidas J Guibas" ], "title": "Framenet: Learning local canonical frames of 3d surfaces from a single rgb image", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp. 
8638–8647,", "year": 2019 }, { "authors": [ "Zilong Huang", "Xinggang Wang", "Lichao Huang", "Chang Huang", "Yunchao Wei", "Wenyu Liu" ], "title": "Ccnet: Criss-cross attention for semantic segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Adrian Johnston", "Gustavo Carneiro" ], "title": "Self-supervised monocular trained depth estimation using self-attention and discrete disparity volume", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Tsung-Wei Ke", "Jyh-Jing Hwang", "Ziwei Liu", "Stella X Yu" ], "title": "Adaptive affinity fields for semantic segmentation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yoon Kim", "Carl Denton", "Luong Hoang", "Alexander M Rush" ], "title": "Structured attention networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Marvin Klingner", "Jan-Aike Termöhlen", "Jonas Mikolajczyk", "Tim Fingscheidt" ], "title": "Self-supervised monocular depth estimation: Solving the dynamic object problem by semantic guidance", "venue": null, "year": 2020 }, { "authors": [ "Iro Laina", "Christian Rupprecht", "Vasileios Belagiannis", "Federico Tombari", "Nassir Navab" ], "title": "Deeper depth prediction with fully convolutional residual networks", "venue": "arXiv preprint arXiv:1606.00373,", "year": 2016 }, { "authors": [ "Jae-Han Lee", "Chang-Su Kim" ], "title": "Multi-loss rebalancing algorithm for monocular depth estimation", "venue": null, "year": 2020 }, { "authors": [ "Jin Han Lee", "Myung-Kyu Han", "Dong Wook Ko", "Il Hong Suh" ], "title": "From big to small: Multi-scale local planar guidance for monocular depth estimation", "venue": null, "year": 1907 }, { "authors": [ "Hanchao Li", "Pengfei Xiong", "Jie An", "Lingxue Wang" ], "title": "Pyramid attention network for semantic segmentation", "venue": "arXiv preprint arXiv:1805.10180,", "year": 2018 }, { "authors": [ "Jun Li", "Reinhard Klein", "Angela Yao" ], "title": "A two-streamed network for estimating fine-scaled depth maps from single rgb images", "venue": null, "year": 2017 }, { "authors": [ "Xia Li", "Yibo Yang", "Qijie Zhao", "Tiancheng Shen", "Zhouchen Lin", "Hong Liu" ], "title": "Spatial pyramid based graph reasoning for semantic segmentation", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Yanwei Li", "Lin Song", "Yukang Chen", "Zeming Li", "Xiangyu Zhang", "Xingang Wang", "Jian Sun" ], "title": "Learning dynamic routing for semantic segmentation", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Wang Lijun", "Zhang Jianming", "Huchuan Lu Yifan", "Wang", "Ruan Xiang" ], "title": "Cliffnet for monocular depth estimation with hierarchical embedding loss", "venue": null, "year": 2020 }, { "authors": [ "Chenxi Liu", "Liang-Chieh Chen", "Florian Schroff", "Hartwig Adam", "Wei Hua", "Alan L Yuille", "Li FeiFei" ], "title": "Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Fayao Liu", "Chunhua Shen", "Guosheng Lin" ], "title": "Deep convolutional neural fields for depth estimation from a single image", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Jonathan Long", "Evan Shelhamer", "Trevor Darrell" ], "title": "Fully convolutional networks for semantic segmentation", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Minh-Thang Luong", "Hieu Pham", "Christopher D Manning" ], "title": "Effective approaches to attentionbased neural machine 
translation", "venue": "In EMNLP,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Nicolas Heess", "Alex Graves" ], "title": "Recurrent models of visual attention", "venue": "In NeurIPS,", "year": 2014 }, { "authors": [ "Roozbeh Mottaghi", "Xianjie Chen", "Xiaobai Liu", "Nam-Gyu Cho", "Seong-Whan Lee", "Sanja Fidler", "Raquel Urtasun", "Alan Yuille" ], "title": "The role of context for object detection and semantic segmentation in the wild", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "Xiaojuan Qi", "Renjie Liao", "Zhengzhe Liu", "Raquel Urtasun", "Jiaya Jia" ], "title": "Geonet: Geometric neural network for joint depth and surface normal estimation", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Anurag Ranjan", "Varun Jampani", "Lukas Balles", "Kihwan Kim", "Deqing Sun", "Jonas Wulff", "Michael J Black" ], "title": "Competitive collaboration: Joint unsupervised learning of depth, camera motion, optical flow and motion segmentation", "venue": null, "year": 2019 }, { "authors": [ "Anirban Roy", "Sinisa Todorovic" ], "title": "Monocular depth estimation using neural regression forest", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Chang Shu", "Kun Yu", "Zhixiang Duan", "Kuiyuan Yang" ], "title": "Feature-metric loss for self-supervised learning of depth and egomotion", "venue": null, "year": 2020 }, { "authors": [ "Nathan Silberman", "Derek Hoiem", "Pushmeet Kohli", "Rob Fergus" ], "title": "Indoor segmentation and support inference from rgbd images", "venue": "In ECCV,", "year": 2012 }, { "authors": [ "Xibin Song", "Yuchao Dai", "Dingfu Zhou", "Liu Liu", "Wei Li", "Hongdong Li", "Ruigang Yang" ], "title": "Channel attention based iterative residual learning for depth map super-resolution", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Chiat-Pin Tay", "Sharmili Roy", "Kim-Hui Yap" ], "title": "Aanet: Attribute attention network for person re-identifications", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Lokender Tiwari", "Pan Ji", "Quoc-Huy Tran", "Bingbing Zhuang", "Saket Anand", "Manmohan Chandraker" ], "title": "Pseudo rgb-d for self-improving monocular slam and depth prediction", "venue": null, "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": null, "year": 2017 }, { "authors": [ "Jingdong Wang", "Ke Sun", "Tianheng Cheng", "Borui Jiang", "Chaorui Deng", "Yang Zhao", "Dong Liu", "Yadong Mu", "Mingkui Tan", "Xinggang Wang" ], "title": "Deep high-resolution representation learning for visual recognition", "venue": null, "year": 2020 }, { "authors": [ "Zhihao Xia", "Patrick Sullivan", "Ayan Chakrabarti" ], "title": "Generating and exploiting probabilistic monocular depth estimates", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Tianjun Xiao", "Yichong Xu", "Kuiyuan Yang", "Jiaxing Zhang", "Yuxin Peng", "Zheng Zhang" ], "title": "The application of two-level attention models in deep convolutional neural network for fine-grained image classification", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Saining Xie", "Zhuowen Tu" ], "title": "Holistically-nested edge detection", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Saining Xie", "Xun Huang", "Zhuowen Tu" ], "title": "Top-down learning for structured labeling with convolutional pseudoprior", "venue": null, "year": 2016 }, { "authors": [ "Dan Xu", "Wanli Ouyang", "Xavier Alameda-Pineda", "Elisa Ricci", "Xiaogang Wang", 
"Nicu Sebe" ], "title": "Learning deep structured multi-scale features using attention-gated crfs for contour prediction", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Dan Xu", "Wanli Ouyang", "Elisa Ricci", "Xiaogang Wang", "Nicu Sebe" ], "title": "Learning cross-modal deep representations for robust pedestrian detection", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Dan Xu", "Elisa Ricci", "Wanli Ouyang", "Xiaogang Wang", "Nicu Sebe" ], "title": "Multi-scale continuous crfs as sequential deep networks for monocular depth estimation", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Dan Xu", "Wanli Ouyang", "Xiaogang Wang", "Nicu Sebe" ], "title": "Pad-net: Multi-tasks guided predictionand-distillation network for simultaneous depth estimation and scene parsing", "venue": null, "year": 2018 }, { "authors": [ "Wei Yin", "Yifan Liu", "Chunhua Shen", "Youliang Yan" ], "title": "Enforcing geometric constraints of virtual normal for depth prediction", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Huangying Zhan", "Ravi Garg", "Chamara Saroj Weerasekera", "Kejie Li", "Harsh Agarwal", "Ian Reid" ], "title": "Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Fan Zhang", "Yanqin Chen", "Zhihang Li", "Zhibin Hong", "Jingtuo Liu", "Feifei Ma", "Junyu Han", "Errui Ding" ], "title": "Acfnet: Attentional class feature network for semantic segmentation", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Hang Zhang", "Kristin Dana", "Jianping Shi", "Zhongyue Zhang", "Xiaogang Wang", "Ambrish Tyagi", "Amit Agrawal" ], "title": "Context encoding for semantic segmentation", "venue": null, "year": 2018 }, { "authors": [ "Hang Zhang", "Han Zhang", "Chenguang Wang", "Junyuan Xie" ], "title": "Co-occurrent features in semantic segmentation", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Li Zhang", "Xiangtai Li", "Anurag Arnab", "Kuiyuan Yang", "Yunhai Tong", "Philip HS Torr" ], "title": "Dual graph convolutional network for semantic segmentation", "venue": "arXiv preprint arXiv:1909.06121,", "year": 2019 }, { "authors": [ "Li Zhang", "Dan Xu", "Anurag Arnab", "Philip HS Torr" ], "title": "Dynamic graph message passing networks", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Yinda Zhang", "Shuran Song", "Ersin Yumer", "Manolis Savva", "Joon-Young Lee", "Hailin Jin", "Thomas Funkhouser" ], "title": "Physically-based rendering for indoor scene understanding using convolutional neural networks", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Hengshuang Zhao", "Yi Zhang", "Shu Liu", "Jianping Shi", "Chen Change Loy", "Dahua Lin", "Jiaya Jia" ], "title": "Psanet: Point-wise spatial attention network for scene parsing", "venue": null, "year": 2018 }, { "authors": [ "Zilong Zhong", "Zhong Qiu Lin", "Rene Bidart", "Xiaodan Hu", "Ibrahim Ben Daya", "Zhifeng Li", "WeiShi Zheng", "Jonathan Li", "Alexander Wong" ], "title": "Squeeze-and-attention networks for semantic segmentation", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Bolei Zhou", "Hang Zhao", "Xavier Puig", "Sanja Fidler", "Adela Barriuso", "Antonio Torralba" ], "title": "Scene parsing through ade20k dataset", "venue": null, "year": 2017 }, { "authors": [ "Zhen Zhu", "Mengde Xu", "Song Bai", "Tengteng Huang", "Xiang Bai" ], "title": "Asymmetric non-local neural networks for semantic segmentation", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Wang" ], "title": "2020), in particular one can 
]
[ { "heading": "1 INTRODUCTION", "text": "Over the past decade, convolutional neural networks (CNNs) have become the privileged methodology to address computer vision tasks requiring dense pixel-wise prediction, such as semantic segmentation (Chen et al., 2016b; Fu et al., 2019), monocular depth prediction (Liu et al., 2015; Roy & Todorovic, 2016), contour detection (Xu et al., 2017a) and normal surface computation (Eigen et al., 2014). Recent studies provided clear evidence that attention mechanisms (Mnih et al., 2014) within deep networks are undoubtedly a crucial factor in improving the performance (Chen et al., 2016b; Xu et al., 2017a; Fu et al., 2019; Zhan et al., 2018). In particular, previous works demonstrated that deeply learned attentions acting as soft weights to interact with different deep features at each channel (Zhong et al., 2020; Zhang et al., 2018; Song et al., 2020) and at each pixel location (Li et al., 2020a; Johnston & Carneiro, 2020; Tay et al., 2019) permits to improve the pixel-wise prediction accuracy (see Fig.1.a and Fig.1.b). Recently, Fu et al. (2019) proposed the Dual Attention Network (DANet), embedding in a fully convolutional network (FCN) two complementary attention modules, specifically conceived to model separately the semantic dependencies associated to the spatial and to the channel dimensions (Fig.1.c).\nConcurrently, other approaches have considered the use of structured attention models integrated within a graph network framework (Zhang et al., 2020; Chen et al., 2019; Xu et al., 2017a), showing the empirical advantage of adopting a graphical model to effectively capture the structured information present in the hidden layers of the neural network and thus enabling the learning of better deep feature representations. Notably, Xu et al. (2017a) first introduced attention-gated conditional random fields (AG-CRFs), a convolutional neural network implementing a probabilistic graphical model that considers attention variables as gates (Minka & Winn, 2009) in order to learn improved deep features and effectively fuse multi-scale information. However, their structured attention model is only learned at the spatial-wise level, while channel-wise dependencies are not considered.\nThis paper advances the state of the art in dense pixel-wise prediction by proposing a novel approach to learn more effective deep representations by integrating a structured attention model which jointly account for spatial- and channel-level dependencies using an attention tensor (Fig.1.d) within a CRF framework. More precisely, inspired from Xu et al. (2017a) we model the attention as gates. Crucially, we address the question on how to enforce structure within these latent gates, in order to\njointly model spatial- and channel-level dependencies while learning deep features. To do so, we hypothesize that the attention tensor is nothing but the sum of T rank-1 tensors, each of them being the tensor product of a spatial attention map and a channel attention vector. This attention tensor is used as a structured latent attention gate, enhancing the feature maps. We cast the inference problem into a maximum-likelihood estimation formulation that is made computationally tractable thanks to a variational approximation. Furthermore, we implement the maximum likelihood update rules within a neural network, so that they can be jointly learned with the preferred CNN front-end. 
We call our approach, based on structured attention and variational inference, VarIational STructured Attention networks (VISTA-Net). We evaluate our method on multiple pixel-wise prediction problems, i.e. monocular depth estimation, semantic segmentation and surface normal prediction, considering six publicly available datasets, i.e. NYUD-V2 (Silberman et al., 2012), KITTI (Geiger et al., 2013), Pascal-Context (Mottaghi et al., 2014), Pascal VOC2012 (Everingham et al., 2010), Cityscapes (Cordts et al., 2016) and ScanNet (Dai et al., 2017). Our results demonstrate that VISTA-Net is able to learn rich deep representations thanks to the proposed structured attention and our probabilistic formulation, outperforming state-of-the-art methods.
Related Work. Several works have considered integrating attention models within deep architectures to improve performance in tasks such as image categorization (Xiao et al., 2015), speech recognition (Chorowski et al., 2015) and machine translation (Vaswani et al., 2017; Kim et al., 2017; Luong et al., 2015). Focusing on pixel-wise prediction, Chen et al. (2016b) first described an attention model to combine multi-scale features learned by an FCN for semantic segmentation. Zhang et al. (2018) designed EncNet, a network equipped with a channel attention mechanism to model global context. Zhao et al. (2018) proposed to account for pixel-wise dependencies by introducing relative position information in the spatial dimension within the convolutional layers. Huang et al. (2019b) described CCNet, a deep architecture that embeds a criss-cross attention module with the idea of modeling contextual dependencies using sparsely-connected graphs, so as to achieve higher computational efficiency. Fu et al. (2019) proposed to model semantic dependencies associated with spatial and channel dimensions by using two separate attention modules. Zhong et al. (2020) introduced a squeeze-and-attention network (SANet) specialized for pixel-wise prediction that takes into account spatial and channel inter-dependencies in an efficient way.
Attention was first adopted within a CRF framework by Xu et al. (2017a), who introduced gates to control the message passing between latent variables and showed that this strategy is effective for contour detection. Our work significantly departs from these previous approaches, as we introduce a novel structured attention mechanism jointly handling spatial- and channel-level dependencies within a probabilistic framework. Notably, we also prove that our model can be successfully employed on several challenging dense pixel-level prediction tasks. Our work is also closely related to previous studies on dual graph convolutional networks (Zhang et al., 2019c) and dynamic graph message passing networks (Zhang et al., 2020), which have been successfully used for pixel-level prediction tasks. However, while they also resort to message passing for learning refined deep feature representations, they lack a probabilistic formulation. Finally, previous studies (Xu et al., 2017c; Arnab et al., 2016; Chen et al., 2019) described CRF-based models for pixel-wise estimation, e.g. to learn and optimally fuse deep representations at multiple scales. However, they did not employ structured attention gates." }, { "heading": "2 VARIATIONAL STRUCTURED ATTENTION NETWORKS: VISTA-NET", "text": "As previously discussed, we aim to enhance the learned representation by structuring the attention within a probabilistic formulation.
On the one hand, inducing structure in attention mechanisms has been proven successful (Fu et al., 2019; Zhong et al., 2020). On the other hand, probabilistic formulations combined with deep architectures are appealing for pixel-level prediction tasks (Xu et al., 2017b). To the best of our knowledge, we are the first to bring together these recent advances in pixel-wise prediction by formulating a novel structured attention mechanism within a probabilistic CRF-like inference framework. Inspired by Fu et al. (2019), where full-rank spatial- and channel-wise attention tensors are computed, we opt to infer separate spatial and channel attention variables. Very differently from Fu et al. (2019), we propose to structure a generic attention tensor $\mathbf{a}$ of dimension $W \times H \times C$ (width, height, channels) as the sum of $T$ rank-1 tensors:

$$\mathbf{a} = \sum_{t=1}^{T} \mathbf{m}^t \otimes \mathbf{v}^t, \qquad \mathbf{m}^t \in \mathbb{R}^{1\times W\times H},\; \mathbf{v}^t \in \mathbb{R}^{C\times 1\times 1}, \qquad (1)$$

meaning that $\mathbf{m}^t$ can be understood as an image of $W \times H$ pixels and $\mathbf{v}^t$ as a vector of dimension $C$, and $\otimes$ denotes the tensor product, in the case above leading to a 3-way tensor of dimensions $W \times H \times C$. Each of the tensor products within the sum yields a rank-1 tensor, consequently limiting the rank of $\mathbf{a}$ to be at most $T$. Equation (1) is the algebraic expression of the proposed structured attention mechanism, and is the methodological foundation of VISTA-Net.
Moreover, we draw inspiration from the CRF formulation with gating variables proposed in (Xu et al., 2017a), and derive a new energy function and variational approximation to enable efficient learning and inference procedures. Additionally, this formulation allows us to consider the CRF kernels as latent variables and infer them from the data, together with the structured attention variables $\mathbf{m}^t$ and $\mathbf{v}^t$. We believe learning the kernels is important because it allows the CRF to weight the information flow depending on the content of the image, rather than keeping the same weights for all images.
We assume a generic CNN front-end providing a set of $S$ multi-scale feature maps $\mathbf{F} = \{\mathbf{f}_s\}_{s=1}^{S}$. To ease notation, we assume that each feature map has $P$ pixels and $C$ channels, but in practice these dimensions depend on the scale $s$. For each scale, we also consider the set of hidden variables $\mathbf{z}_s$ corresponding to $\mathbf{f}_s$, and $\mathbf{Z} = \{\mathbf{z}_s\}_{s=1}^{S}$. These hidden variables correspond to refined convolutional features that incorporate information and attention from other feature maps, so as to better represent the key information for the pixel-level task at hand. Intuitively, the structured attention tensor should help refine the hidden variables to allow better performance on various pixel-level prediction tasks.
As in (Xu et al., 2017a), for every pair of emitting $e$ and receiving $r$ scales, we consider a dedicated attention tensor $\mathbf{a}_{e,r}$. Very importantly, in our case this attention tensor is structured following (1), and so we have a set of hidden spatial attention maps $\mathbf{M} = \{\mathbf{m}^t_{e,r}\}_{e,r,t=1}^{S,S,T}$ and hidden channel attention vectors $\mathbf{V} = \{\mathbf{v}^t_{e,r}\}_{e,r,t=1}^{S,S,T}$. More precisely, $\mathbf{m}^t_{e,r} \in \{0,1\}^P$ is a binary spatial map and $\mathbf{v}^t_{e,r} \in \{0,1\}^C$ is a stochastic channel-wise vector, hence $\sum_{c=1}^{C} v^{t,c}_{e,r} = 1$. In this way, we reduce ambiguity and ease the learning. This also means that the model is conceived to pay attention to only $T$ channels of the feature map.
While this could seem limiting at first glance, we remark that: (i) the model learns which are the optimal $T$ channels, among the $C$ possible ones, to be used to refine the hidden variables, and (ii) the posterior distribution of $\mathbf{m}^t$ boils down to a convex combination of all channels, as will become clear when discussing the inference procedure." }, { "heading": "2.1 ENERGY FUNCTION AND VARIATIONAL APPROXIMATION", "text": "Our model consists of three different latent variables: the hidden features $\mathbf{Z}$, and the hidden attention maps $\mathbf{M}$ and vectors $\mathbf{V}$. In addition, we also consider inferring the CRF kernels, denoted by $\mathbf{K}$, from the data. More precisely, the energy function associated with the proposed model writes:

$$-E(\mathbf{Z},\mathbf{M},\mathbf{V},\mathbf{K},\mathbf{F},\Theta) = \sum_{s}\sum_{p,c}\phi_z(z^{p,c}_r, f^{p,c}_r) + \sum_{e,r}\sum_{p,c,p',c'}\sum_{t} m^{t,p}_{e,r} v^{t,c}_{e,r}\,\psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c}) + \phi_k(f^{p,c}_r, f^{p',c'}_e, k^{e,p',c'}_{r,p,c}), \qquad (2)$$

where $\phi_z$, $\phi_k$ and $\psi$ are potentials to be defined and $k^{e,p',c'}_{r,p,c}$ denotes the kernel value weighting the information flow from the $(p',c')$-th value of the feature map of scale $e$ to the $(p,c)$-th value of the feature map of scale $r$. Since the exact posterior distribution is not computationally tractable, we opt to approximate it with the following family of separable distributions:

$$p(\mathbf{Z},\mathbf{M},\mathbf{V},\mathbf{K}\mid\mathbf{F},\Theta) \approx q(\mathbf{Z},\mathbf{M},\mathbf{V},\mathbf{K}) = q_z(\mathbf{Z})\,q_m(\mathbf{M})\,q_v(\mathbf{V})\,q_k(\mathbf{K}). \qquad (3)$$

In that case, the optimal solution for each of the factors of the distribution is to take the expectation with respect to all the others, for instance:

$$q_z(\mathbf{Z}) \propto \exp\Big(-\mathbb{E}_{q_m(\mathbf{M})q_v(\mathbf{V})q_k(\mathbf{K})}\big\{E(\mathbf{Z},\mathbf{M},\mathbf{V},\mathbf{K},\mathbf{F},\Theta)\big\}\Big). \qquad (4)$$

It can be shown that the optimal variational factors write:

$$\begin{aligned}
q_z(z^{p,c}_r) &\propto \exp\Big(\phi_z(z^{p,c}_r, f^{p,c}_r) + \sum_{e\neq r}\sum_{t}\bar{m}^{t,p}_{e,r}\bar{v}^{t,c}_{e,r}\sum_{p',c'}\mathbb{E}_{q_z q_k}\{\psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c})\}\Big),\\
q_m(m^{t,p}_{e,r}) &\propto \exp\Big(m^{t,p}_{e,r}\sum_{c}\bar{v}^{t,c}_{e,r}\sum_{p',c'}\mathbb{E}_{q_z q_k}\{\psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c})\}\Big),\\
q_v(v^{t,c}_{e,r}) &\propto \exp\Big(v^{t,c}_{e,r}\sum_{p}\bar{m}^{t,p}_{e,r}\sum_{p',c'}\mathbb{E}_{q_z q_k}\{\psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c})\}\Big),\\
q_k(k^{e,p',c'}_{r,p,c}) &\propto \exp\Big(\phi_k(f^{p,c}_r, f^{p',c'}_e, k^{e,p',c'}_{r,p,c}) + \sum_{t}\bar{m}^{t,p}_{e,r}\bar{v}^{t,c}_{e,r}\,\mathbb{E}_{q_z}\{\psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c})\}\Big),
\end{aligned} \qquad (5)$$

where $\bar{m}^{t,p}_{e,r} = \mathbb{E}_{q_m}\{m^{t,p}_{e,r}\}$ denotes the posterior mean, and analogously for $\bar{v}^{t,c}_{e,r}$. This result also implies that, thanks to the variational approximation in (3), the posterior distributions factorize in each of the variables above, e.g. $q_z(\mathbf{Z}) = \prod_{r,p,c=1}^{S,P,C} q_z(z^{p,c}_r)$. The relation between the various hidden variables and their inference is shown in Figure 2 (left). In addition, we also show the information flow between the hidden variables using arrows. Finally, Figure 2 (right) shows the relation between the channel-wise and spatial attention variables and how the final structured attention tensor is computed." }, { "heading": "2.2 INFERENCE WITH VISTA-NET", "text": "In order to construct an operative model we need to define the potentials $\phi_z$, $\phi_k$ and $\psi$. In our case, the unary potentials correspond to:

$$\phi_z(z^{p,c}_r, f^{p,c}_r) = -\frac{b^{p,c}_r}{2}\big(z^{p,c}_r - f^{p,c}_r\big)^2, \qquad \phi_k(f^{p,c}_r, f^{p',c'}_e, k^{e,p',c'}_{r,p,c}) = -\frac{1}{2}\big(k^{e,p',c'}_{r,p,c} - f^{p,c}_r f^{p',c'}_e\big)^2, \qquad (6)$$

where $b^{p,c}_r > 0$ is a weighting factor. $\psi$ is bilinear in the hidden feature maps:

$$\psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c}) = z^{p,c}_r\, k^{e,p',c'}_{r,p,c}\, z^{p',c'}_e. \qquad (7)$$

Using the over-bar notation also for the hidden features and kernels, e.g. $\bar{z}^{p,c}_s = \mathbb{E}_{q_z}\{z^{p,c}_s\}$, and by combining the potential definitions (6) and (7) with the expression of the variational factors (5), we obtain the following update rules for the latent variables.
Z-step.
It can be seen that the posterior distribution $q_z$ is Gaussian with mean:

$$\bar{z}^{p,c}_r = \frac{1}{b^{p,c}_r}\Big(b^{p,c}_r f^{p,c}_r + \sum_{e}\sum_{t}\bar{m}^{t,p}_{e,r}\bar{v}^{t,c}_{e,r}\sum_{p',c'}\bar{k}^{e,p',c'}_{r,p,c}\,\bar{z}^{p',c'}_e\Big). \qquad (8)$$

This corresponds to the update rule obtained in (Xu et al., 2017a), with two remarkable differences. First, the posterior of the attention gate corresponds to the posterior of the structured tensor of rank $T$. Second, the impact of the neighboring features is weighted by the expected kernel value $\bar{k}^{e,p',c'}_{r,p,c}$.
M-step. The variational approximation leads to a Bernoulli distribution for $q_m(m^{t,p}_{e,r})$, which boils down to the following posterior mean value using the sigmoid function $\sigma$:

$$\bar{m}^{t,p}_{e,r} = \sigma\Big(\sum_{c}\bar{v}^{t,c}_{e,r}\sum_{p',c'}\bar{z}^{p,c}_r\,\bar{k}^{e,p',c'}_{r,p,c}\,\bar{z}^{p',c'}_e\Big). \qquad (9)$$

V-step. It can be shown that the approximate posterior distribution is categorical, and that the expected value of each dimension of $\mathbf{v}^t_{e,r}$ can be computed using the softmax operator:

$$\big(\bar{v}^{t,c}_{e,r}\big)_{c=1}^{C} = \mathrm{softmax}\Big(\sum_{p}\bar{m}^{t,p}_{e,r}\sum_{p',c'}\bar{z}^{p,c}_r\,\bar{k}^{e,p',c'}_{r,p,c}\,\bar{z}^{p',c'}_e\Big)_{c=1}^{C}. \qquad (10)$$

K-step. Finally, we need to derive the update rules for $\mathbf{K}$. By further deriving the corresponding variational posterior distribution, it can be shown that the posterior distribution of the kernels is Gaussian with mean:

$$\bar{k}^{e,p',c'}_{r,p,c} = f^{p,c}_r f^{p',c'}_e + \sum_{t}\bar{m}^{t,p}_{e,r}\bar{v}^{t,c}_{e,r}\,\bar{z}^{p,c}_r\,\bar{z}^{p',c'}_e. \qquad (11)$$

This solution is very straightforward, but since the kernels are estimated independently for each pair of receiving $(r,p,c)$ and emitting $(e,p',c')$ pixels, it has two major drawbacks. First, the kernel values are estimated without any spatial context. Second, given the large number of kernel values, one must find a very efficient way to compute them. We propose to kill two birds with one stone by learning the kernels from the features using convolutional layers. By design, they take spatial context into account, and many popular libraries have efficient implementations of the convolution operation. The estimated kernel corresponding to the input channel $c'$ of scale $e$, $\mathbf{k}^{e,c'}_r$, is computed via a convolutional operation. The input of the convolution is the concatenation of the tensor $\mathbf{f}_r + \mathbf{z}_r \odot \sum_{t=1}^{T}\bar{\mathbf{m}}^t_{e,r}\otimes\bar{\mathbf{v}}^t_{e,r}$ and the image $\bar{\mathbf{z}}^{c'}_e$ resized to the spatial size of $\mathbf{f}_r$.
Joint learning. We implement the inference procedure described above within the neural network, on top of the CNN front-end. Indeed, implementing all inference operations using available deep learning operators has two prominent advantages. First, we can perform the inference and learn the CNN front-end at the same time, within the same formalism and for the same aim. Second, this allows direct parallelization of our method, speeding up training and inference.
The precise implementation goes as follows. Regarding $\bar{\mathbf{z}}_r$, message passing from the $e$-th scale to the $r$-th scale is first performed with $\mathbf{z}_{e\to r} \leftarrow \bar{\mathbf{k}}^e_r \circledast \bar{\mathbf{z}}_e$, where $\circledast$ denotes the convolutional operation and $\bar{\mathbf{k}}^e_r$ denotes the corresponding learned convolution kernel. We then apply an element-wise product with the corresponding structured attention tensor $\sum_{t=1}^{T}\bar{\mathbf{m}}^t_{e,r}\otimes\bar{\mathbf{v}}^t_{e,r}$. Finally, we compute the element-wise sum with the other emitting scales and the feature map $\mathbf{f}_r$, see (8). Regarding $\bar{\mathbf{m}}_{e,r}$, we first compute the element-wise product between $\bar{\mathbf{z}}_r$ and $\mathbf{z}_{e\to r}$. The sum over channels weighted by $\bar{\mathbf{v}}_{e,r}$ is computed prior to applying a pixel-wise sigmoid, see (9).
Regarding $\bar{\mathbf{v}}_{e,r}$, we operate in a very similar fashion, but weighting each pixel with $\bar{\mathbf{m}}_{e,r}$ and then summing every channel independently, before applying the softmax, see (10). Regarding $\bar{\mathbf{k}}^{e,c'}_r$, as discussed before, it is computed via a convolutional operation on the concatenation of $\mathbf{f}_r + \mathbf{z}_r \odot \sum_{t=1}^{T}\bar{\mathbf{m}}^t_{e,r}\otimes\bar{\mathbf{v}}^t_{e,r}$ and the image $\bar{\mathbf{z}}^{c'}_e$ resized to the spatial size of $\mathbf{f}_r$. In terms of initialization, we draw a random guess for $\mathbf{M}$ and $\mathbf{V}$, and set $\mathbf{Z}$ to $\mathbf{F}$. This allows us to update the kernels, then the other variables.
Once the hidden variables are updated, we use them to address several different pixel-wise prediction tasks involving continuous and discrete variables, including monocular depth estimation, surface normal estimation and semantic segmentation. Following previous works, the network optimization losses for these three tasks are a standard L2 loss (Xu et al., 2017c), a cosine similarity loss (Eigen et al., 2014) and a cross-entropy loss (Chen et al., 2016a), respectively. The CNN front-end and VISTA-Net are jointly trained end-to-end." }, { "heading": "3 EXPERIMENTAL EVALUATION", "text": "" }, { "heading": "3.1 DATASETS AND EXPERIMENTAL PROTOCOL", "text": "Tasks and Datasets. We demonstrate the effectiveness of VISTA-Net on two tasks: monocular depth estimation on the NYU-v2 (Silberman et al., 2012) and KITTI (Geiger et al., 2013) datasets, and semantic segmentation on Pascal-Context (Mottaghi et al., 2014), Pascal VOC2012 (Everingham et al., 2010) and Cityscapes (Cordts et al., 2016). We also conducted experiments on the surface normal estimation task on ScanNet (Dai et al., 2017), but due to lack of space the associated results are reported in the Appendix.
For NYU-v2 and KITTI we follow the experimental settings proposed by Eigen et al. (Eigen et al., 2014). For NYU-v2 we use 120K RGB-Depth pairs with a resolution of 480×640 pixels, acquired with a Microsoft Kinect device from 464 indoor scenes, using 249 scenes for training and 215 scenes (654 images) for test. For KITTI we specifically use 22,600 frames from 32 scenes for training and 697 frames from the remaining 29 scenes for test.
For Pascal-Context we follow the works (Chen et al., 2016a; Zhang et al., 2018) and consider the 59 most frequent classes. The remaining classes are masked during training and test. Pascal VOC2012 contains 20 classes divided into 10,582 training, 1,449 validation and 1,456 test images. Our method is trained using the protocol described in (Zhong et al., 2020; Long et al., 2015). For the Cityscapes dataset, only the 5,000 finely annotated images are used in our experiments, split into 2,975/500/1,525 images for training, validation, and test.
Evaluation Metrics. To evaluate the performance on monocular depth estimation, we consider several metrics as in (Eigen & Fergus, 2015), including mean relative error (rel), root mean squared error (rms), mean log10 error (log10), and accuracy with threshold $t \in \{1.25, 1.25^2, 1.25^3\}$. As for semantic segmentation, we consider two metrics following (Zhou et al., 2017; Zhang et al., 2018), i.e. pixel accuracy (pixAcc) and mean intersection over union (mIoU), averaged over classes.
Implementation Details. VISTA-Net is implemented in PyTorch. The experiments are conducted on four Nvidia Quadro RTX 6000 GPUs, each with 24 GB of memory.
The ResNet-101 architecture pretrained on ImageNet (Deng et al., 2009) is used in all the experiments for initializing the backbone network of VISTA-Net, except for the experiments on the Cityscapes dataset, where we choose HRNetV2-W48 (whose complexity is comparable to dilated ResNet-101) for a fair comparison with previous works. Our model can be used for effective deep feature learning in both single-scale and multi-scale contexts. To boost performance, following previous works (Xie & Tu, 2015; Xu et al., 2017a), we also consider features output by different convolutional blocks of a CNN backbone (e.g. res3c, res4f, res5d of a ResNet-50). For the semantic segmentation task, we use a learning rate of 0.001 on Pascal-Context and Pascal VOC2012 and 0.01 on Cityscapes, with a momentum of 0.9 and a weight decay of 0.0001, using a polynomial learning rate scheduler as previously done in (Zhang et al., 2018; Chen et al., 2016a). For the monocular depth estimation task, the learning rate is set to $10^{-4}$ with a weight decay of 0.01. The Adam optimizer is used in all our experiments with a batch size of 8 for monocular depth estimation and 16 for semantic segmentation. The total number of training epochs is set to 50 for the depth prediction experiments, to 150 for the Pascal-Context and Pascal VOC2012 datasets, and to 500 for the Cityscapes dataset." }, { "heading": "3.2 EXPERIMENTAL RESULTS AND ANALYSIS", "text": "Monocular Depth Estimation. Comparative results on the KITTI dataset are shown in Table 1. We propose a comparison with state-of-the-art models such as (Eigen et al., 2014; Ranjan et al., 2019; Bian et al., 2019; Godard et al., 2019; Fu et al., 2018; Yin et al., 2019; Lee et al., 2019; Guizilini et al., 2020). In addition, we demonstrate the effectiveness of our VISTA-Net by comparing with MSCRF (Xu et al., 2017c), a previous approach which exploits a probabilistic framework for multi-scale feature learning but does not consider an attention mechanism. Our approach is superior, thus demonstrating the effectiveness of the proposed attention model. We also compare with AG-CRF (Xu et al., 2017a), adapting their model to the monocular depth estimation problem. Also in this case VISTA-Net outperforms the competitor, confirming the importance of having a joint structured spatial- and channel-wise attention model. Note that AG-CRF (Xu et al., 2017a) and VISTA-Net are compared using the same backbone. In order to demonstrate the competitiveness of our approach in an indoor scenario, we also report the results on the NYUD-V2 dataset in Fig. 3. Similarly to the experiments on KITTI, VISTA-Net outperforms both state-of-the-art approaches and previous methods based on attention gates and CRFs (Xu et al., 2017c;a).
Semantic Segmentation. We first compare VISTA-Net with the most recent methods on the Pascal-Context dataset, including (Zhang et al., 2018; Fu et al., 2019; Zhu et al., 2019; Ding et al., 2019; Zhang et al., 2019b; Wang et al., 2020; He et al., 2019). As for the depth estimation task, also in this case we evaluate the performance of AG-CRF (Xu et al., 2017a), adapting the original code to the semantic segmentation task. VISTA-Net, as shown in Table 2, is 0.6 points better according to the mIoU metric than the best available method, i.e. AG-CRF. Importantly, VISTA-Net outperforms EncNet (Zhang et al., 2018), which uses only channel-wise attention, as well as DANet (Fu et al., 2019)
In Table 3(right) are shown results on PASCAL VOC2012. Again, our method outperforms EncNet (Zhang et al., 2018), SANet (Zhong et al., 2020) and DANet (Fu et al., 2019). In particular, VISTA-Net is 3.7 points better according to the mIoU metric than the best available method, i.e. SANet. Finally, Table. 3(left) reports the results on Cityscape. As in the previous two datasets VISTA-Net outperforms the competitors (by nearly one point mIoU). Additional results are reported in the Appendix.\nAblation Study. We also performed an ablation study on the Pascal-context dataset to further demonstrate each proposed component’s impact. Fig. 5 (left) shows that the performance of VISTANet degrades not only when the model does not employ the structured attention mechanism, but also when only channel-wise or spatial-wise attention is used. Moreover, we can also see the advantage of using the proposed probabilistic formulation for joint modeling both spatial- and channel-wise attention in a principled manner. Interestingly, the performance achieved in each of the variants (spatial, channel, no probabilistic formulation) is similar. This leads us to believe that the proposed method’s competitive advantage is in combining structured attention with a probabilistic formulation. Notably, the feature refinement through message passing seems to be the most crucial contribution to improve the performance. For the sake of completeness, we also report the results of DANet, and of AG-CRF (which corresponds to the Multiple-scale/Spatial setting in). Finally, in Fig.5 we show the performance of VISTA-Net for different values of the tensor rank T . It is important to notice that the framework reaches better performance when T is higher. Fig. 4 clearly illustrate the perceptual improvement in segmentation masks obtained with higher values of the attention tensor rank. Additional examples are provided in the supplementary material." }, { "heading": "4 CONCLUSIONS", "text": "In this paper we proposed a novel approach to improve the learning of deep features representations for dense pixel-wise prediction tasks. Our approach seamlessly integrates a novel structured atten-\ntion model within a probabilistic framework. In particular, we proposed to structure the attention tensors as the sum of T rank-one tensors, each being the tensor-product of a spatial attention map and a channel attention vector. These two kinds of variables are jointly learned within the probabilistic formulation made tractable thanks to the variational approximation. The proposed structured attention is rich enough to capture complex spatial- and channel-level inter-dependencies, while being efficient to compute. The overall optimisation of the probabilistic model and of the CNN front-end is performed jointly. Extensive experimental evaluations show that VISTA-Net outperforms stateof-the-art methods on several datasets, thus confirming the importance of structuring the attention variables for dense pixel-level prediction tasks." }, { "heading": "A APPENDIX: ADDITIONAL EXPERIMENTAL RESULTS", "text": "In this section we report additional quantitative and qualitative results.\nCityscapes. Table 4 reports the results of VISTA-Net trained on the training+validation set and tested on the test set of Cityscapes dataset. We focus on comparing our method with recent attention networks, including (Zhang et al., 2019a; Fu et al., 2019; Zhang et al., 2020; Huang et al., 2019b; Choi et al., 2020). 
This table confirms that our structured attention module is better than the other attention methods and sets a new state of the art on this dataset.
Surface Normal Estimation. We conduct experiments on the ScanNet dataset. ScanNet is a large-scale RGB-D dataset for 3D scene understanding; we follow the protocol proposed in (Dai et al., 2017) with a split of 189,916/20,942 images for training and testing, respectively. The surface normal prediction performance is evaluated using five metrics. We compute the per-pixel angular distance between prediction and ground truth, and report the mean and median over valid pixels with a given ground-truth normal. In addition to the mean and median, we also compute the fraction of pixels with an angular difference w.r.t. the ground truth of less than t ∈ [11.25◦, 22.5◦, 30◦], as used in (Eigen et al., 2014). We compare VISTA-Net with other state-of-the-art RGB-based methods, including (Bansal et al., 2016; Zhang et al., 2017; Qi et al., 2018; Huang et al., 2019a). Quantitative results are shown in Table 5. Our method outperforms FrameNet by more than 3% on the 11.25◦ metric and leaves the other methods behind by a significant margin. In addition, we show some qualitative examples in Fig. 6. These results clearly indicate that predictions are extremely accurate also in the case of objects (i.e. waste bin, water closet, chair supports, etc.).
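To make these five evaluation metrics concrete, below is a minimal NumPy sketch of the angular-error computation; the array shapes and the validity mask are assumptions for illustration, not the authors' evaluation code:

import numpy as np

def normal_metrics(pred, gt, valid):
    # pred, gt: (H, W, 3) unit surface normals; valid: (H, W) boolean mask.
    cos = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)
    ang = np.degrees(np.arccos(cos))[valid]  # per-pixel angular error in degrees
    return {
        "mean": ang.mean(),
        "median": np.median(ang),
        "11.25": (ang < 11.25).mean(),
        "22.5": (ang < 22.5).mean(),
        "30": (ang < 30.0).mean(),
    }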
Depth estimation: qualitative results. Fig. 7 shows a qualitative comparison of our method with DORN (Fu et al., 2018). Results indicate that VISTA-Net generates better depth maps; in particular, one can appreciate the opening of the sky and the smoothness of the prediction on the sides. Fig. 8 shows a similar comparison on the NYU-D dataset. The same accuracy in the prediction is visible also in this case; objects are more distinguishable w.r.t. DORN (e.g. the bathtub in row 2 and the desks in row 5). Finally, we also provide the computed attention maps on some sample images from the KITTI dataset in Fig. 13. As expected, the final structured attention tensors manage to capture important information at different depths. For example, in the fourth row, the structured attention focuses on the farthest forest, the middle vegetation, and the close road.
Semantic segmentation: qualitative results. In Fig. 9 we propose a few qualitative results on the Pascal-Context dataset. The figure shows the importance of the attention model and the results obtained when increasing the number of iterations. The odd rows show the misclassified pixels (in black). The image clearly shows how the proposed iterative approach based on message passing is beneficial for the final prediction. Additional qualitative results on the Cityscapes dataset are shown in Fig. 10. Also in this case, the segmentation maps produced by VISTA-Net are more precise w.r.t. those of the competitors (Fu et al., 2019; Wang et al., 2020); in particular, one can notice the superior accuracy on the right-hand side of the images, where VISTA-Net shows better visual predictions.
Semantic segmentation: computational cost. Table 6 shows the results of our experiments analyzing the computational cost of our method. In particular, we perform the analysis on the Pascal-Context dataset at varying T. In the table, FPS means Frames Per Second. As expected, when the rank increases we also observe an increase in terms of parameters and a reduction in terms of speed.
Additional qualitative maps. Fig. 11 shows different visualizations of the learned structured attention on an image from the Pascal-Context dataset. The first row shows the original image, together with four slices (channels) of the overall structured attention tensor a as defined in (1). The second row shows the T = 5 spatial maps of the structured tensor mt. While the latter seem to be spread all along the dog's body with different shapes, we observe that by optimally combining the mt's using the vt's, different slices of the final structured attention tensor are able to focus on different important parts of the dog: the head, the body, the tail and the rear paws, thus allowing much more accurate pixel-level predictions for segmentation.
Meanwhile, Fig. 12 depicts segmentation maps obtained on the Pascal-Context dataset using different versions of our method. In particular, we visualize (c) VISTA-Net w/o Attention, (d) VISTA-Net w/o Spatial Attention, (e) VISTA-Net w/o Channel Attention and (f) VISTA-Net (full model). From left to right, the results become more similar to the ground truth, indicating the clear advantage of our proposed attention model.
Network detail. The overview of VISTA-Net is depicted in Fig. 14 (a). From left to right, features are extracted from the backbone after layers 2 to 4 and fed to the structured attention gate module (a detailed view is shown in Fig. 14 (b)), producing the attention maps. For each emitting and receiving scale (map), one structured attention gate is computed and used to gate the information sent from the emitting feature map to the receiving feature map. An inner view of the attention gate and its connections is shown in Fig. 14 (b). The blue cross ⊗ denotes the convolution operation, while the green cross refers to the conditional kernel prediction. The symbols ⊙ and ⊕ denote element-wise multiplication and addition, respectively, while the circled σ and c refer to the sigmoid function and feature concatenation. Finally, the color code for the arrows is green for the M-step, yellow for the V-step and red for the K-step. The algorithm is also described in Algorithm 1.
Algorithm 1: Our structured attention algorithm (VISTA-Net)
Input:
• fe – emitting feature, size [B, C, H, W]
• fr – receiving feature, size [B, C, H, W]
Output:
• f̂r – updated receiving feature, size [B, C, H, W]
1: fconcat ← concat(fe, fr)
2: L ← Conv2d(fconcat)
3: Lse→sr ← Conv2d(fe)
4: Lsr→se ← Conv2d(fr)
5: hse ← unfold(fe)
6: hsr ← unfold(fr)
7: A ← σ(L · fe + Lse→sr · hse + Lsr→se · hsr)
8: for t ← 1 to T do
9:   Ach ← randn(B, C, 1, 1)
10:  Āch ← softmax(Ach)
11:  Asp ← Σ_{c=1}^{C} (Āch · A)
12:  Āsp ← sigmoid(Asp)
13:  Ach ← Σ_{i=1}^{H} Σ_{j=1}^{W} (Āsp · A)
14:  Āch ← softmax(Ach)
15: end for
16: f̂r ← (L · hse) · (Āch · Āsp · A) + fr
17: return f̂r
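As a loose PyTorch re-implementation sketch of the alternating channel/spatial updates in Algorithm 1: the shape of the joint attention map A, the convolution that produces it, and the simplification of line 16 (the transformed emitting feature L · hse is replaced by fe itself) are all assumptions of this sketch rather than the authors' exact code:

import torch
import torch.nn.functional as F

def attention_gate(fe, fr, conv_joint, T=5):
    # fe, fr: (B, C, H, W) emitting / receiving features.
    # Joint attention logits (a simplified stand-in for line 7).
    A = torch.sigmoid(conv_joint(torch.cat([fe, fr], dim=1)))
    B, C, H, W = A.shape
    for _ in range(T):
        v = F.softmax(torch.randn(B, C, 1, 1), dim=1)                # line 9: channel init
        m = torch.sigmoid((v * A).sum(dim=1, keepdim=True))          # lines 11-12: spatial map
        v = F.softmax((m * A).sum(dim=(2, 3), keepdim=True), dim=1)  # lines 13-14: channel update
    return fe * (v * m * A) + fr                                     # line 16, simplified

C = 16
conv_joint = torch.nn.Conv2d(2 * C, C, kernel_size=3, padding=1)
fe, fr = torch.randn(2, C, 8, 8), torch.randn(2, C, 8, 8)
print(attention_gate(fe, fr, conv_joint).shape)  # torch.Size([2, 16, 8, 8])
" } ]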
2020
null
SP:0d62919086db1e43bdd5acbb80c25f82e5466cf6
[ "This work introduces manifold regularization as an approach for learning stable deep nets, towards the goal of adversarial robustness. Several regularizers are proposed: intrinsic, sparse Laplacian and Hamming regularizers. As the proposed method relies only on adding these regularization terms to the loss, it is more computationally efficient than methods that require computation of adversarial examples during training. The proposed method is evaluated on CIFAR-10 under $\\ell_2$ and $\\ell_{\\infty}$ ball attacks and shown to be state-of-the-art in terms of verifiable robustness to $\\ell_{\\infty}$ attacks at $\\epsilon = 8/255$." ]
We apply concepts from manifold regularization to develop new regularization techniques for training locally stable deep neural networks. Our regularizers encourage functions which are smooth not only in their predictions but also their decision boundaries. Empirically, our networks exhibit stability in a diverse set of perturbation models, including ℓ2, ℓ∞, and Wasserstein-based perturbations; in particular, against a state-of-the-art PGD adversary, a single model achieves both ℓ∞ robustness of 40% at ε = 8/255 and ℓ2 robustness of 48% at ε = 1.0 on CIFAR-10. We also obtain state-of-the-art verified accuracy of 21% in the same ℓ∞ setting. Furthermore, our techniques are efficient, incurring overhead on par with two additional parallel forward passes through the network; in the case of CIFAR-10, we achieve our results after training for only 3 hours, compared to more than 70 hours for standard adversarial training.
[]
[ { "authors": [ "Maksym Andriushchenko", "Francesco Croce", "Nicolas Flammarion", "Matthias Hein" ], "title": "Square attack: a queryefficient black-box adversarial attack via random search, 2019", "venue": null, "year": 2019 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples, 2018", "venue": null, "year": 2018 }, { "authors": [ "Mikhail Belkin", "Partha Niyogi" ], "title": "Towards a theoretical foundation for laplacian-based manifold methods", "venue": "Journal of Computer and System Sciences,", "year": 2008 }, { "authors": [ "Mikhail Belkin", "Partha Niyogi", "Vikas Sindhwani" ], "title": "Manifold regularization: A geometric framework for learning from labeled and unlabeled examples", "venue": "Journal of machine learning research,", "year": 2006 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks, 2017", "venue": null, "year": 2017 }, { "authors": [ "Nicholas Carlini", "Anish Athalye", "Nicolas Papernot", "Wieland Brendel", "Jonas Rauber", "Dimitris Tsipras", "Ian Goodfellow", "Aleksander Madry", "Alexey Kurakin" ], "title": "On evaluating adversarial robustness", "venue": null, "year": 1902 }, { "authors": [ "Yair Carmon", "Aditi Raghunathan", "Ludwig Schmidt", "Percy Liang", "John C. Duchi" ], "title": "Unlabeled data improves adversarial robustness, 2019", "venue": null, "year": 2019 }, { "authors": [ "Ronald R. Coifman", "Stéphane Lafon" ], "title": "Special Issue: Diffusion Maps and Wavelets", "venue": "Diffusion maps. Applied and Computational Harmonic Analysis,", "year": 2006 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Minimally distorted adversarial examples with a fast adaptive boundary attack", "venue": null, "year": 2019 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks, 2020", "venue": null, "year": 2020 }, { "authors": [ "Krishnamurthy Dvijotham", "Sven Gowal", "Robert Stanforth", "Relja Arandjelovic", "Brendan O’Donoghue", "Jonathan Uesato", "Pushmeet Kohli" ], "title": "Training verified learners with learned verifiers, 2018", "venue": null, "year": 2018 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras" ], "title": "Robustness (python library), 2019", "venue": "URL https://github.com/MadryLab/robustness", "year": 2019 }, { "authors": [ "Bo Geng", "Dacheng Tao", "Chao Xu", "Linjun Yang", "Xian-Sheng Hua" ], "title": "Ensemble manifold regularization", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2012 }, { "authors": [ "Andrew B Goldberg", "Ming Li", "Xiaojin Zhu" ], "title": "Online manifold regularization: A new learning setting and empirical study", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2008 }, { "authors": [ "Ian J. 
Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": null, "year": 2014 }, { "authors": [ "Sven Gowal", "Krishnamurthy Dvijotham", "Robert Stanforth", "Rudy Bunel", "Chongli Qin", "Jonathan Uesato", "Relja Arandjelovic", "Timothy Mann", "Pushmeet Kohli" ], "title": "On the effectiveness of interval bound propagation for training verifiably robust models, 2018", "venue": null, "year": 2018 }, { "authors": [ "Trevor Hastie", "Robert Tibshirani", "Jerome Friedman" ], "title": "The elements of statistical learning: data mining, inference, and prediction", "venue": "Springer Science & Business Media,", "year": 2009 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks, 2016", "venue": null, "year": 2016 }, { "authors": [ "Matthias Hein", "Maksym Andriushchenko" ], "title": "Formal guarantees on the robustness of a classifier against adversarial manipulation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Matthias Hein", "Jean-Yves Audibert", "Ulrike Von Luxburg" ], "title": "From graphs to manifolds–weak and strong pointwise consistency of graph laplacians", "venue": "In International Conference on Computational Learning Theory,", "year": 2005 }, { "authors": [ "Matthias Hein", "Jean-Yves Audibert", "Ulrike von Luxburg" ], "title": "Graph laplacians and their convergence on random neighborhood graphs", "venue": "Journal of Machine Learning Research,", "year": 2007 }, { "authors": [ "Dan Hendrycks", "Kimin Lee", "Mantas Mazeika" ], "title": "Using pre-training can improve model robustness and uncertainty, 2019", "venue": null, "year": 2019 }, { "authors": [ "Hongwei Hu", "Bo Ma", "Jianbing Shen", "Hanqiu Sun", "Ling Shao", "Fatih Porikli" ], "title": "Robust object tracking using manifold regularized convolutional neural networks", "venue": "IEEE Transactions on Multimedia,", "year": 2018 }, { "authors": [ "Daniel Jakubovitz", "Raja Giryes" ], "title": "Improving dnn robustness to adversarial attacks using jacobian regularization", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Taehoon Lee", "Minsuk Choi", "Sungroh Yoon" ], "title": "Manifold regularized deep neural networks using adversarial examples", "venue": null, "year": 2015 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks, 2017", "venue": null, "year": 2017 }, { "authors": [ "Pratyush Maini", "Eric Wong", "J Zico Kolter" ], "title": "Adversarial robustness against the union of multiple perturbation models", "venue": "arXiv preprint arXiv:1909.04068,", "year": 2019 }, { "authors": [ "Matthew Mirman", "Timon Gehr", "Martin Vechev" ], "title": "Differentiable abstract interpretation for provably robust neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Partha Niyogi" ], "title": "Manifold regularization and semi-supervised learning: Some theoretical analyses", "venue": "The 
Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Yinpeng Dong", "Chao Du", "Ning Chen", "Jun Zhu" ], "title": "Rethinking softmax cross-entropy loss for adversarial robustness, 2019", "venue": null, "year": 2019 }, { "authors": [ "David L Phillips" ], "title": "A technique for the numerical solution of certain integral equations of the first kind", "venue": "Journal of the ACM (JACM),", "year": 1962 }, { "authors": [ "Andrew Slavin Ross", "Finale Doshi-Velez" ], "title": "Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients", "venue": "arXiv preprint arXiv:1711.09404,", "year": 2017 }, { "authors": [ "Hadi Salman", "Greg Yang", "Huan Zhang", "Cho-Jui Hsieh", "Pengchuan Zhang" ], "title": "A convex relaxation barrier to tight robustness verification of neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Lukas Schott", "Jonas Rauber", "Matthias Bethge", "Wieland Brendel" ], "title": "Towards the first adversarially robust neural network model on mnist", "venue": "arXiv preprint arXiv:1805.09190,", "year": 2018 }, { "authors": [ "Ali Shafahi", "Mahyar Najibi", "Amin Ghiasi", "Zheng Xu", "John Dickerson", "Christoph Studer", "Larry S. Davis", "Gavin Taylor", "Tom Goldstein" ], "title": "Adversarial training for free!, 2019", "venue": null, "year": 2019 }, { "authors": [ "Vikas Sindhwani", "Partha Niyogi", "Mikhail Belkin", "Sathiya Keerthi" ], "title": "Linear manifold regularization for large scale semi-supervised learning", "venue": "In Proc. of the 22nd ICML Workshop on Learning with Partially Classified Training Data,", "year": 2005 }, { "authors": [ "David Stutz", "Matthias Hein", "Bernt Schiele" ], "title": "Disentangling adversarial robustness and generalization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": null, "year": 2013 }, { "authors": [ "Andrei Nikolaevich Tikhonov", "AV Goncharsky", "VV Stepanov", "Anatoly G Yagola" ], "title": "Numerical methods for the solution of ill-posed problems, volume 328", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Vincent Tjeng", "Kai Xiao", "Russ Tedrake" ], "title": "Evaluating robustness of neural networks with mixed integer programming", "venue": "arXiv preprint arXiv:1711.07356,", "year": 2017 }, { "authors": [ "Vikrant Singh Tomar", "Richard C Rose" ], "title": "Manifold regularized deep neural networks", "venue": "In Fifteenth Annual Conference of the International Speech Communication Association,", "year": 2014 }, { "authors": [ "Vikrant Singh Tomar", "Richard C. 
Rose" ], "title": "Graph based manifold regularized deep neural networks for automatic speech recognition, 2016", "venue": null, "year": 2016 }, { "authors": [ "Florian Tramer", "Dan Boneh" ], "title": "Adversarial training and robustness for multiple perturbations", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Florian Tramer", "Nicholas Carlini", "Wieland Brendel", "Aleksander Madry" ], "title": "On adaptive attacks to adversarial example defenses, 2020", "venue": null, "year": 2020 }, { "authors": [ "Ivor W Tsang", "James T Kwok" ], "title": "Large-scale sparsified manifold regularization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2007 }, { "authors": [ "Jonathan Uesato", "Jean-Baptiste Alayrac", "Po-Sen Huang", "Robert Stanforth", "Alhussein Fawzi", "Pushmeet Kohli" ], "title": "Are labels required for improving adversarial robustness", "venue": null, "year": 2019 }, { "authors": [ "Ulrike Von Luxburg" ], "title": "A tutorial on spectral clustering", "venue": "Statistics and computing,", "year": 2007 }, { "authors": [ "Ulrike Von Luxburg", "Mikhail Belkin", "Olivier Bousquet" ], "title": "Consistency of spectral clustering", "venue": "The Annals of Statistics,", "year": 2008 }, { "authors": [ "Xu Wang" ], "title": "Spectral convergence rate of graph laplacian", "venue": null, "year": 2015 }, { "authors": [ "Yisen Wang", "Difan Zou", "Jinfeng Yi", "James Bailey", "Xingjun Ma", "Quanquan Gu" ], "title": "Improving adversarial robustness requires revisiting misclassified examples", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Eric Wong", "Frank R. Schmidt", "Jan Hendrik Metzen", "J. Zico Kolter" ], "title": "Scaling provable adversarial defenses, 2018", "venue": null, "year": 2018 }, { "authors": [ "Eric Wong", "Frank R. Schmidt", "J. Zico Kolter" ], "title": "Wasserstein adversarial examples via projected sinkhorn", "venue": null, "year": 2019 }, { "authors": [ "Kai Xiao", "Vincent Tjeng", "Nur Muhammad Shafiullah", "Aleksander Madry" ], "title": "Training for faster adversarial robustness verification via inducing relu stability", "venue": null, "year": 2019 }, { "authors": [ "Zenglin Xu", "Irwin King", "Michael Rung-Tsong Lyu", "Rong Jin" ], "title": "Discriminative semi-supervised feature selection via manifold regularization", "venue": "IEEE Transactions on Neural networks,", "year": 2010 }, { "authors": [ "Runtian Zhai", "Chen Dan", "Di He", "Huan Zhang", "Boqing Gong", "Pradeep Ravikumar", "Cho-Jui Hsieh", "Liwei Wang" ], "title": "Macer: Attack-free and scalable robust training via maximizing certified radius", "venue": null, "year": 2001 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization, 2016", "venue": null, "year": 2016 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P. Xing", "Laurent El Ghaoui", "Michael I. 
Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy, 2019", "venue": null, "year": 2019 }, { "authors": [ "Stephan Zheng", "Yang Song", "Thomas Leung", "Ian Goodfellow" ], "title": "Improving the robustness of deep neural networks via stability training", "venue": "In Proceedings of the ieee conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Wei Zhu", "Qiang Qiu", "Jiaji Huang", "Robert Calderbank", "Guillermo Sapiro", "Ingrid Daubechies" ], "title": "Ldmnet: Low dimensional manifold regularized neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Hein" ], "title": "need to take to zero, so that in the limit we are just sampling from the unperturbed manifold. In fact, it is possible to prove convergence without taking s to zero, however the continuous operator that is recovered no longer refers to the gradient but rather the value of the kernel over the manifold (Von", "venue": "Luxburg et al", "year": 2005 }, { "authors": [ "Maini" ], "title": "train using SGD with a batch size of 128 and weight decay γK of 5e-4", "venue": null, "year": 2019 }, { "authors": [ "Croce", "Hein" ], "title": "2020), for the `2 and `∞ perturbations. We choose the attack because it is parameter-free, which reduces the possibility of misconfiguration; empirically it", "venue": null, "year": 2020 }, { "authors": [ "Wong" ], "title": "For comparing with standard PGD, we use a 20-step PGD adversary with 10 random restarts and a step size", "venue": null, "year": 2019 }, { "authors": [ "Tjeng" ], "title": "2017), with solves parallelized over 8 CPU cores and the timeout set", "venue": null, "year": 2017 }, { "authors": [ "Schott" ], "title": "namely, that a model trained using PGD on CIFAR-10 for `∞ performs poorly against `2 attacks. We also ran a 500-step `∞ PGD adversary with 20 restarts at = 8/255 against our model, yielding 40.4% robust accuracy, which is plotted as “ours (PGD+)", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent results in deep learning highlight the remarkable performance deep neural networks can achieve on tasks using data from the natural world, such as images, video, and audio. Though such data inhabits an input space of high dimensionality, the physical processes which generate the data often manifest significant biases, causing realistic inputs to be sparse in the input space.\nOne way of capturing this intuition is the manifold assumption, which states that input data is not drawn uniformly from the input space, but rather supported on some smooth submanifold(s) of much lower dimension. Starting with the work of Belkin et al. (2006), this formulation has been studied extensively in the setting of semi-supervised kernel and regression methods, where algorithms exploit the unlabelled data points to learn functions which are smooth on the input manifold (Geng et al., 2012; Goldberg et al., 2008; Niyogi, 2013; Sindhwani et al., 2005; Tsang and Kwok, 2007; Xu et al., 2010). Such techniques have seen less use in the context of deep neural networks, owing in part to the ability of such models to generalize from relatively sparse data (Zhang et al., 2016).\nContributions We apply concepts from manifold regularization to train locally stable deep neural networks. In light of recent results showing that neural networks suffer widely from adversarial inputs (Szegedy et al., 2013), our goal is to learn a function which does not vary much in the neighborhoods of natural inputs, independently of whether the network classifies correctly. We show that this definition of local stability has a natural interpretation in the context of manifold regularization, and propose an efficient regularizer based on an approximation of the graph Laplacian when the data is sparse, i.e., the pairwise distances are large. Crucially, our regularizer exploits the continuous piecewise linear nature of ReLU networks to learn a function which is smooth over the data manifold in not only its outputs but also its decision boundaries.\nWe evaluate our approach by training neural networks with our regularizers for the task of image classification on CIFAR-10 (Krizhevsky et al., 2009). Empirically, our networks exhibit robustness against a variety of adversarial models implementing ℓ2, ℓ∞, and Wasserstein-based attacks. We also achieve state-of-the-art verified robust accuracy under ℓ∞ perturbations of size ε = 8/255. Furthermore, our regularizers are cheap: we simply evaluate the network at two additional random points for each training sample, so the total computational cost is on par with three parallel forward passes through the network. Our techniques thus present a novel, regularization-only approach to learning robust neural networks, which achieves performance comparable to existing defenses while also being an order of magnitude more efficient." }, { "heading": "2 BACKGROUND", "text": "Manifold regularization The manifold assumption states that input data is not drawn uniformly from the input domain X, also known as the ambient space, but rather is supported on a submanifold M ⊂ X, called the intrinsic space.
There is thus a distinction between regularizing on the ambient space, where the learned function is smooth with respect to the entire input domain (e.g., Tikhonov regularization (Phillips, 1962; Tikhonov et al., 2013)), and regularizing over the intrinsic space, which uses the geometry of the input submanifold to determine the regularization norm.\nA common form of manifold regularization assumes the gradient of the learned function ∇_M f(x) should be small where the probability of drawing a sample is large; we call such functions "smooth". Let µ be a probability measure with support M. This leads to the following intrinsic regularizer:
||f||_I^2 := ∫_M ||∇_M f(x)||^2 dµ(x)   (1)
In general, we cannot compute this integral because M is not known, so Belkin et al. (2006) propose the following discrete approximation that converges to the integral as the number of samples grows:
||f||_I^2 ≈ (1/N^2) Σ_{i,j=1}^{N} (f(xi) − f(xj))^2 L_{i,j}   (2)
Here, the x1, ..., xN are samples drawn, by assumption, from the input manifold M according to µ, and L is a matrix of weights measuring the similarity between samples. The idea is to approximate the continuous input manifold using a discrete graph, where the vertices are samples, the edge weights are distances between points, and the Laplacian matrix L encodes the structure of this graph. A common choice of weights is a heat kernel: L_{i,j} = L(xi, xj) := exp(−||xi − xj||^2/s). To improve computational costs, weights are often truncated to the k-nearest neighbors or within some ε-ball. Note that the Laplacian can also be interpreted as a discrete matrix operator, which converges under certain conditions to the continuous Laplace operator (Belkin and Niyogi, 2008).
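A minimal NumPy sketch of the discrete approximation in (2) with heat-kernel weights; the scale s and the toy data are arbitrary choices for illustration:

import numpy as np

def intrinsic_reg(f_vals, X, s=1.0):
    # f_vals: (N,) function values; X: (N, d) samples from the manifold.
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # squared pairwise distances
    L = np.exp(-D2 / s)                                          # heat-kernel weights
    diff2 = (f_vals[:, None] - f_vals[None, :]) ** 2
    return np.sum(diff2 * L) / len(X) ** 2                       # Eq. (2)

X = np.random.randn(100, 3)
f_vals = np.sin(X[:, 0])  # stand-in for a scalar network output
print(intrinsic_reg(f_vals, X))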
ReLU networks Our development focuses on a standard architecture for deep neural networks: fully-connected feedforward networks with ReLU activations. In general, we can write the function represented by such a network with n layers and parameters θ = {Ai, bi}_{i=1,...,n−1} as
z_0 = x   (3)
ẑ_i = A_i · z_{i−1} + b_i for i = 1, ..., n − 1   (4)
z_i = σ(ẑ_i) for i = 1, ..., n − 2   (5)
f(x; θ) = ẑ_{n−1}   (6)
where the A_i are the weight matrices and the b_i are the bias vectors. We call the z_i "hidden activations", or more simply, activations, and the ẑ_i "pre-activations".\nIn this work, we consider networks in which σ(·) in (5) is the Rectified Linear Unit (ReLU):
z_i = σ(ẑ_i) := max(0, ẑ_i)   (7)
It is clear from this description that ReLU networks are a family of continuous piecewise linear functions. We denote the linear function induced by an input x as f_x(·; θ), i.e., the analytic extension of the local linear component about x over the input domain.\nAdversarial robustness One common measure of robustness for neural networks is against a norm-bounded adversary. In this model, the adversary is given an input budget ε over a norm ||·||_in, and asked to produce an output perturbation δ over a norm |·|_out. A point x′ is an ε-δ adversarial example for an input pair (x, y) if
||x′ − x||_in ≤ ε   (8)
|f(x′; θ) − y|_out ≥ δ   (9)
When the specific norm is either unimportant or clear from context, we also write the first condition as x′ ∈ N_ε(x), where N_ε(x) refers to the ε-ball or ε-neighborhood about x. If such an adversarial example does not exist, we say that the network is ε-δ robust at x. Standard examples of ||·||_in include the ℓ2 and ℓ∞ "norm", defined for vectors as ||x||∞ := max_i |x_i|. For classification tasks, the adversary is successful if it produces an example in the ε-neighborhood of x which causes the network to misclassify. In this case, we drop δ and say that the network is ε-robust at x. Note that if f(x; θ) is already incorrect, then x suffices as an adversarial example." }, { "heading": "3 RELATED WORK", "text": "Manifold regularization was first introduced by Belkin et al. (2006) in the context of semi-supervised learning, where the goal was to leverage unlabeled samples to learn a function which behaves well (e.g., is smooth, or has low complexity) over the data manifold. The use of manifold regularization for deep neural networks has been explored in several contexts (Tomar and Rose, 2014; 2016; Hu et al., 2018; Zhu et al., 2018). In particular, Lee et al. (2015) combine manifold regularization with adversarial training and show improvements in standard test accuracy. Our approach is to use manifold regularization to induce stability separately from accuracy. We note that a similar decomposition between accuracy and stability forms the basis for the TRADES algorithm (Zhang et al., 2019), though the training procedure ultimately relies on adversarial training. Hein and Andriushchenko (2017) propose a conceptually similar regularizer to minimize the difference between logits and show improved ℓ2 certified robustness. Finally, several prior works explore regularizing for robustness based on other differential forms (Ross and Doshi-Velez, 2017; Jakubovitz and Giryes, 2018), though they only report results using the weaker single-step PGD adversary. In particular, a recent work by Zhai et al. (2020) uses randomized smoothing for ℓ2 certified robustness, and claims a similar computational advantage due to avoiding adversarial training, but still takes 61 hours to train, compared to only 3 hours in our approach.\nAdversarial examples were introduced by Szegedy et al. (2013), who found that naively trained neural networks suffer almost complete degradation of performance on natural images under slight perturbations which are imperceptible to humans. A standard class of defenses is adversarial training, which is characterized by training on adversarially generated input points (Goodfellow et al., 2014). In particular, the Projected Gradient Descent (PGD) attack (Kurakin et al., 2016; Madry et al., 2017) is widely considered to be an empirically sound algorithm for both training and evaluation of robust models. However, such training methods rely on solving an inner optimization via an iterative method, effectively increasing the number of epochs by a multiplicative factor (e.g., an overhead of 5–10x for standard PGD). Achieving robustness against multiple adversarial models has also been explored previously (Schott et al., 2018; Tramer and Boneh, 2019; Maini et al., 2019; Croce and Hein, 2019b), though in most cases these works use weaker variants of the subset of standard adversaries we consider (e.g., a smaller ε or the single-step version of PGD).\nAnother approach is to train models which are provably robust. One method is to use an exact verification method, such as an MILP solver, to prove that the network is robust on given inputs (Tjeng et al., 2017). In particular, Xiao et al. (2019) use a similar loss based on ReLU pre-activations to learn stable ReLUs for efficient verification, but rely on a PGD adversary to train a robust model. Certification methods modify models to work directly with neighborhoods instead of points (Dvijotham et al., 2018; Gowal et al., 2018; Mirman et al., 2018; Wong et al., 2018).
In practice, the inference algorithms must overapproximate the neighborhoods to preserve soundness while keeping the representation compact as it passes through the network. This strategy can be interpreted as solving a convex relaxation of the exact verification problem. Though certification thus far has produced better lower bounds, verification as a technique is fully general and can be applied to any model (given sufficient time); recent work also suggests that methods using layerwise convex relaxations may face an inherent barrier to tight verification (Salman et al., 2019)." }, { "heading": "4 SETTING", "text": "We reframe the goal of learning functions that are robust using a perspective which decouples stability from accuracy. The key observation is that we would like to train networks that are locally stable around natural inputs, even if the network output is incorrect. This approach contrasts with adversarial training, which attempts to train the network to classify correctly on worst-case adversarial inputs. In particular, recall that a network is ε-robust at x if no point in the ε-neighborhood of x causes the network to misclassify. We consider the related property of ε-stability (cf. Zheng et al. (2016)):
Definition 4.1. A function f is ε-δ stable at an input x if for all x′ ∈ N_ε(x), |f(x) − f(x′)| ≤ δ. A classifier f is ε-stable at an input x if for all x′ ∈ N_ε(x), f(x) = f(x′).
As ε-stability is independent of the correct label for x, we argue that ε-stability is a property of the function with respect to the input manifold and can thus be captured using manifold regularization. For completeness, we state the following connection between robustness and stability:
Proposition 4.1. A function f is ε-δ robust at an input x iff f is ε-δ stable at x and f(x) = y. A classifier f is ε-robust at an input x iff f is ε-stable at x and f correctly classifies x." }, { "heading": "5 MANIFOLD REGULARIZATION FOR DEEP NEURAL NETWORKS", "text": "Applying the regularization term in (2) yields, in the limit, a function which is smooth on the data manifold. Unfortunately, a straightforward approach does not suffice for our goal of learning ε-stable deep neural networks. The first problem is that smoothness on the data manifold does not yield ε-stability (Stutz et al. (2019)); indeed, we are concerned with the behavior of our function in an ε-neighborhood of the manifold, not just on the manifold. The second observation is that convergence of the discrete approximation requires that the samples be dense over the input manifold; however, this assumption is almost certainly violated in most practical applications, particularly in the deep learning regime. The next two sections are dedicated to these challenges." }, { "heading": "5.1 RESAMPLING FOR LOCAL SMOOTHNESS", "text": "We write the ε-neighborhood of a manifold M as M_ε := {x : ∃y ∈ M, ||x − y|| ≤ ε}. Since ε-stability is defined over the ε-neighborhood of every input point, we might want a function which is smooth over M_ε instead of just M, e.g., by slightly perturbing every input point. This strategy does produce samples from the ε-neighborhood of the data manifold; however, note that the induced distribution is not uniform but rather the convolution of the choice of noise distribution with the uniform distribution over M.
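As a concrete illustration of this perturbation-based sampling, here is a minimal PyTorch sketch that draws points from the ε-neighborhood of each input and empirically checks ε-stability of a classifier on them; the number of draws, the uniform ℓ∞ noise model, and the stand-in network are assumptions:

import torch

def empirical_stability(model, x, eps, n_draws=8):
    # Fraction of inputs whose predicted class is unchanged under
    # n_draws uniform perturbations in the eps-ball (ell_inf).
    with torch.no_grad():
        base = model(x).argmax(dim=1)
        stable = torch.ones_like(base, dtype=torch.bool)
        for _ in range(n_draws):
            delta = (2 * torch.rand_like(x) - 1) * eps
            stable &= model(x + delta).argmax(dim=1) == base
    return stable.float().mean().item()

model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 3))
print(empirical_stability(model, torch.randn(16, 10), eps=0.1))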
Nevertheless, this procedure exhibits several properties we can exploit. The first is that for sufficiently small ε, we get nearly the same operator over M and M_ε, so that smoothness over M_ε does not sacrifice smoothness on the original data manifold M. A more subtle property is that we can actually draw as many distinct points as we would like from M_ε. We leverage these extra points to build a regularizer which yields good estimations of the local smoothness. Moreover, taking a discrete approximation of the form in Equation 2 with the resampled points from M_ε still converges to the original operator. Formally, we can state the following:
Proposition 5.1. Let ε, s > 0 be given. Let x1, ..., xn be n samples drawn uniformly at random from a submanifold M ⊂ X. For each xi, pick c new points xi,1, ..., xi,c by sampling iid perturbations δi,1, ..., δi,c and setting xi,j = xi + δi,j, where ∀i, j, ||δi,j|| < ε. Given a kernel ks(x, y) = exp(−||x − y||^2/s), let L and Lc be the Laplacian matrices defined by the n original samples and n · c new samples, respectively. Then if 2ε < s, we have that L and Lc converge to the same operator in the limit as n → ∞.
We prove this result in Appendix A. For our purposes, this result states that the resampled Laplacian enjoys the same behavior in the limit as the original Laplacian." }, { "heading": "5.2 SPARSE APPROXIMATION OF RESAMPLED LAPLACIAN", "text": "The Laplacian matrix is dense, and requires both O(N^2) time and space to compute and store. It is thus standard to employ heuristics for sparsifying Laplacians to derive efficient algorithms; in this work, we consider the ε-neighborhood sparsifier, denoted L_ε, which is defined as the Laplacian of the graph created by retaining only those edges whose lengths are at most ε (Von Luxburg, 2007).
The third inequality gives a tighter bound on the relative error in a term that vanishes unless the linear sample count N grows as the exponential of the squared separation m2, but requires that we have good control over the ratio c1/c2. Note that the dependence on c1/c2 is unavoidable in the sense that a function which is very smooth in local neighborhoods of size s will necessarily have large relative contribution from the longer edges; however, in this case we still have a bound on the absolute contribution (though we trade the ratio c1/c2 for a factor of N ). Crucially, in both cases the error bound depends on an exponential factor of the squared separation m2.\nTo provide some empirical evidence that the separation m can be made non-trivial in some settings, we offer the following statistics from CIFAR-10. We select a random subset of 10,000 (20%) training samples. For each of these points, we find the closest point within the remaining 49,999 training samples, and compute the distance. Using the `2 metric, we found a mean distance of 9.597, a median distance of 9.405, and the distance at the 10th percentile to be 5.867. Conversely, we resample points in an `∞ ball of radius = 8/255, which is contained in an `2 ball of radius√\n3 · 32 · 32 ·8/255 = 1.739. In fact, only 21 training samples, or 0.21% of the random subset, have a partner closer than twice the perturbation bounds. Thus, after the resampling procedure, setting the scale parameter to s = 2 and taking the -neighborhood Laplacian should yield a good approximation to the full Laplacian. Finally, to generalize this to larger datasets, for fixed s one would expect that the separation m √ s between points grows with the dimension of the ambient space, which gives an exponential decay in the approximation error. Indeed, this is one of the manifestations of the oft-cited curse of dimensionality (Hastie et al., 2009).\nDue to our resampling procedure, we have at least c2N points whose squared pairwise distances are less than 2, which motivates setting s ≈ 2; notice that this choice of s satisfies the assumptions of Proposition 5.1 needed for convergence. However, rather than find the -neighborhood Laplacian, we propose to compute the regularizer only using points resampled from the same -neighorhood, yielding the following sparse approximation of the intrinsic regularizer:\n||f ||2I ≈ 1\nc2N N∑ i=1 c∑ j,k=1 (f(xi,j)− f(xi,k))2L(xi,j , xi,k) (11)\nWe emphasize that this is a heuristic, motivated by computational efficiency, whose quality ultimately depends on the sparsity of the dataset and the particular choice of . In particular, this approximation should not be used when the data is dense and there are many points that are within distance of each other. However, the diagonal form of this approximation permits extremely efficient computations, particularly on vectorized hardware." }, { "heading": "5.3 HAMMING REGULARIZERS", "text": "We additionally leverage the structure of ReLU networks to induce a stronger regularization effect on the data manifold. The central observation is that not just do we want the function computed by the neural network to be constant (i.e., smooth with respect to its outputs) on the manifold, we also want the network to produce its outputs in roughly the same way.\nFirst, we identify every local linear component fx(·; θ) with its “activation pattern”, i.e., the sequence of branches taken at each ReLU. 
" }, { "heading": "5.3 HAMMING REGULARIZERS", "text": "We additionally leverage the structure of ReLU networks to induce a stronger regularization effect on the data manifold. The central observation is that not only do we want the function computed by the neural network to be constant (i.e., smooth with respect to its outputs) on the manifold, we also want the network to produce its outputs in roughly the same way.\nFirst, we identify every local linear component f_x(·; θ) with its "activation pattern", i.e., the sequence of branches taken at each ReLU. We will write f_H(x; θ) for the map that takes inputs x to their activation patterns, which live on the unit hypercube {0, 1}^N (where N is the number of ReLUs in the network). If we endow the hypercube with the standard Hamming distance d_H, this induces a pullback (pseudo-)metric in the input space: d*_H(x, y) = d_H(f_H(x; θ), f_H(y; θ)). Notice that the metric identification is exactly the set of local linear components f_x(·; θ).\nRecall that the goal of regularization is to reduce the complexity of the learned function f in the ε-neighborhood of inputs x. We argue that d*_H provides a concise measure of this complexity. Specifically, if we consider the number of distinct local linear components in N_ε(x), then since d*_H is a pseudo-metric, we have that ∀x′ ∈ N_ε(x), d*_H(x, x′; θ) = 0 if and only if N_ε(x) is a single linear component. Thus, minimizing d*_H between x and all x′ ∈ N_ε(x) reduces the number of linear components in the neighborhood of x. In fact, by the triangle inequality, the same holds in the interior of any convex polytope defined by a set of points {xi}_{i∈I} where ∀i, i′ ∈ I, d*_H(xi, xi′; θ) = 0. This makes minimizing d*_H very sample efficient.\nTreating f_H(·; θ) as an output of the network, we have the following (sparse) manifold regularizer:
||f_H(·; θ)||_I^2 := ∫_M ||∇_M f_H(x; θ)||^2 dµ(x)   (12)
≈ (1/(c^2 N)) Σ_{i=1}^{N} Σ_{j,k=1}^{c} d*_H(xi,j, xi,k; θ)^2 L(xi,j, xi,k)   (13)
which is just Equations 1 and 11 with the outputs f(x) replaced by the activation pattern map. However, this loss term is not continuous in the inputs, and furthermore, the gradients vanish almost everywhere, so it does not generate good training signals. We thus use a continuous relaxation:
Hα(ẑ, ŷ; θ) := abs(tanh(α · ẑ) − tanh(α · ŷ))   (14)
This form is differentiable everywhere except when ẑ = ŷ, and recovers the Hamming distance when we take α to infinity (after scaling). Qualitatively, sensitive activations (i.e., small |ẑ| and |ŷ|) are permitted so long as they are precise. Figure 1 presents the surface and contour plots of Hα. Note that this idea can be extended more generally to other activation functions by penalizing differing pre-activations more when the second derivative of the activation function is large (and so the first-order Taylor approximation has larger errors)." }, { "heading": "5.4 TRAINING WITH SPARSE MANIFOLD REGULARIZATION", "text": "For every sample x, we generate a new random maximal perturbation ρ ∈ {±ε}^d and produce the pair of perturbed inputs x+ := x + ρ and x− := x − ρ. We compute the standard manifold regularization term as ||f(·; θ)||_I^2 ∝ Σ_i ||f(x+_i) − f(x−_i)||_2^2. For the Hamming regularizer, we use the ℓ2 norm to combine the Hamming distances between pre-activations within a layer and normalize by twice the number of elements in each layer. We sum over the layers, then normalize by the total number of layers. Note that in both cases, the weights L(x+_i, x−_i) can be dropped since the distance between x+_i and x−_i is constant. The final optimization objective is thus
θ* = argmin_θ (1/N) Σ_{i=1}^{N} V(f(xi; θ), yi) + γ_K ||f(·; θ)||_K^2 + γ_I ||f(·; θ)||_I^2 + γ_H ||f_H(·; θ)||_I^2   (15)
where V is the loss function, ||f(·; θ)||_K^2 is the ambient regularizer (e.g., ℓ2 regularization), and γ_K, γ_I, γ_H are hyperparameters which control the relative contributions of the different regularizers.
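Putting the pieces together, a minimal PyTorch sketch of one training step with the objective in (15); the tiny network, the simplified per-layer Hamming normalization, and the use of weight decay as the ambient term are all assumptions of this sketch:

import torch
import torch.nn.functional as F

class TwoLayer(torch.nn.Module):
    # Tiny stand-in network that also returns its ReLU pre-activations.
    def __init__(self, d_in=10, d_hid=32, d_out=3):
        super().__init__()
        self.fc1 = torch.nn.Linear(d_in, d_hid)
        self.fc2 = torch.nn.Linear(d_hid, d_out)

    def forward(self, x):
        z1 = self.fc1(x)  # pre-activation of the hidden layer
        return self.fc2(torch.relu(z1)), [z1]

def train_step(model, opt, x, y, eps, gI, gH, alpha=8.0):
    rho = eps * torch.sign(torch.randn_like(x))  # rho in {+-eps}^d
    out, _ = model(x)
    out_p, pre_p = model(x + rho)
    out_m, pre_m = model(x - rho)
    intrinsic = ((out_p - out_m) ** 2).sum(dim=1).mean()  # gamma_I term
    hamming = sum(((torch.tanh(alpha * zp) - torch.tanh(alpha * zm)) ** 2).mean()
                  for zp, zm in zip(pre_p, pre_m)) / len(pre_p)  # relaxed gamma_H term (14)
    loss = F.cross_entropy(out, y) + gI * intrinsic + gH * hamming
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

model = TwoLayer()
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=5e-4)  # ambient term
x, y = torch.randn(16, 10), torch.randint(0, 3, (16,))
train_step(model, opt, x, y, eps=0.1, gI=0.8, gH=1.0)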
" }, { "heading": "5.5 DISCUSSION", "text": "Our development has focused on learning deep neural networks which are smooth when samples are sparse over the input manifold M. While this property is related to ε-stability, in general the two are not equivalent. Roughly, ε-stability is inherently a property about worst-case behavior, whereas manifold regularization is aimed at learning a function which is smooth on average. Nonetheless, we note that a function achieving zero loss with the manifold regularizer yields perfect stability (i.e., a constant function on the data manifold).\nNext, we discuss our choice to focus on the sparse setting. The basic idea is two-fold: first, the requirement that the learned function be ε-stable significantly alters the geometry of the input manifold; second, when data is sparse, the amount of information one can glean about the overall geometry of the input manifold is limited, as is evidenced by the vanishing weights in the Laplacian. Our main hypothesis is that the combination of these two observations allows one to formulate a version of the regularizer built from resampled data, which maintains the same properties in the limit but yields more local information when data is sparse.\nConversely, consider the alternative approach, namely, directly learning a smooth function when it is possible to produce samples, not necessarily labelled, that are dense over the input manifold M. Then, given the central thesis behind this work, one would expect stability to improve. In fact, the top four results for robust accuracy reported in Croce and Hein (2020) all exploit an additional dataset of unlabeled images which is orders of magnitude larger than the original training set (Carmon et al., 2019; Hendrycks et al., 2019; Uesato et al., 2019; Wang et al., 2020)." }, { "heading": "6 EXPERIMENTAL RESULTS", "text": "We report results for two models trained on CIFAR-10 image classification using our regularization techniques: a smaller CNN for exact verification, following Xiao et al. (2019); and a PreActResNet18 model (He et al., 2016) for benchmarking against a variety of adversarial models. We also include three ablation studies for the larger model, including one which does not use the sparsification scheme developed in Section 5.2 (dense regularizer), and two using the individual regularizers to isolate their individual effects on stability (intrinsic / Hamming regularizer only). Appendix B reports training details and hyperparameters; full experimental results (including MNIST, which is not discussed here) are in Appendix C." }, { "heading": "6.1 STABILITY", "text": "The main characteristic that differentiates our approach conceptually from existing methods for adversarial robustness is that we train our models using pure regularization, i.e., without reference to a particular adversary. In comparison, most methods train and test using inputs produced by the same adversary. While this procedure yields good performance against the adversary used during training, performance degrades significantly when evaluated under different perturbation models. In general, the bounds for various adversarial models are set so that maximal perturbations remain somewhat reasonable by human standards; thus, we expect a model that is stable in the general sense to be robust against a variety of adversaries, particularly those not seen during training.
In line with this observation, we test a single model trained to be stable using manifold regularization against three norm-bounded perturbation models. The first two constitute the most common settings for adversarial robustness, namely, CIFAR-10 robustness using ℓ∞ and ℓ2 PGD adversaries bounded at ε = 8/255 and ε = 0.5, respectively. We also use a Wasserstein-based PGD adversary at ε = 0.1; the Wasserstein distance is a metric for images more closely aligned with human perception. Our results indicate that a model trained using our manifold regularizer does in fact learn to be more stable on test images across a variety of different perturbation models (Table 1). We obtained our results after training for 3 hours on one GPU, compared to several days for standard PGD training (Shafahi et al., 2019).\nFor comparison, we note that standard adversarial training using PGD from Madry et al. (2017), which forms the basis of every state-of-the-art approach for ℓ∞ robustness (Croce and Hein, 2020), achieves negligible performance when evaluated in the ℓ2 setting. We also report results from Maini et al. (2019), which was the only other approach in the literature to evaluate their models for both ℓ∞ and ℓ2 robustness at the full levels of ε using a PGD adversary; their models are trained for 50 epochs with 50 steps of PGD against multiple adversaries in parallel (≈ 5000 effective epochs). Finally, we also report results from Wong et al. (2019) for comparison against the Wasserstein adversary. Their results are obtained against a PGD adversary using a 50-step inner optimization (the number of training epochs is not reported); their baseline model is trained to be certifiably robust against ℓ∞ attacks of size ε = 2/255, and achieves lower robustness than ours." }, { "heading": "6.2 ABLATION STUDIES", "text": "We run several ablation studies to better understand the behavior of our methods. The first column of Table 2 reports the verifiable robustness of our smaller model, which outperforms the previous state of the art as reported in Xiao et al. (2019). We emphasize that we provide this metric as a baseline for establishing a provable lower bound on the robustness achieved by our defense, compared to empirical robustness, which is often a brittle measure of true robustness, particularly for newly proposed defenses (Athalye et al., 2018; Carlini et al., 2019; Tramer et al., 2020).\nThe second column of Table 2 presents robust accuracy against the standard PGD adversary used in the literature. The dense regularizer has both lower robust and clean accuracy, which suggests an over-regularization effect stemming from longer edges that do not yield reliable information about the intrinsic geometry of the manifold. Furthermore, training with the dense regularizer takes around 21.5 hours, or nearly 8 times longer than the sparse method (due to both the overhead of computing additional gradients during backpropagation, as well as the smaller batch sizes needed to offset the increased memory requirements of storing the dense Laplacian). We also report the results from using the intrinsic or Hamming regularizers alone, which indicate that both terms contribute jointly and individually to our performance. For comparison, we report results for other approaches which are not variants of PGD (i.e., do not rely on solving a minimax problem at each training loop).
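For reference, a minimal PyTorch sketch of the ℓ∞ PGD evaluation used throughout this section; the 20-step count and 2.5 · ε/20 step size follow the configuration mentioned in Appendix B, while the rest (random start, no restarts, no clamping of inputs to [0, 1]) is a standard simplified implementation rather than the exact attack code:

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, steps=20, step_size=None):
    step_size = step_size or 2.5 * eps / steps
    delta = (2 * torch.rand_like(x) - 1) * eps  # random start in the eps-ball
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta.detach() + step_size * grad.sign()).clamp(-eps, eps)
    # Robust accuracy under the final perturbation.
    return (model(x + delta).argmax(dim=1) == y).float().mean().item()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(pgd_linf(model, x, y, eps=8 / 255))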
To the best of our knowledge, Pang et al. (2019) is the only other result in the literature which achieves non-trivial ℓ∞ robust accuracy on the standard benchmark of CIFAR-10 at ε = 8/255 using a regularization-only approach; their results are better than either of our regularizers used independently, but significantly worse than when the regularizers are used together, in either the sparse or the dense setting. For context, the best result in this setting is reported by Wang et al. (2020) using a PGD variant at 65.04% robust accuracy." }, { "heading": "7 CONCLUSION", "text": "We design regularizers based on manifold regularization that encourage piecewise linear neural networks to learn locally stable functions. We demonstrate this stability by showing that a single model trained using our regularizers is resilient against ℓ2, ℓ∞, and Wasserstein-based attacks. We also achieve state-of-the-art verified robustness of 21% against ℓ∞-bounded perturbations of size ε = 8/255 on CIFAR-10. Critically, computing our regularizers relies only on random sampling, and thus does not require running an inner optimization loop to find strong perturbations at training time. As such, our techniques exhibit strong scaling, since they increase batch sizes rather than epochs during training, allowing us to train an order of magnitude faster than standard adversarial training. This work thus presents the first regularization-only approach to achieve results comparable to standard adversarial training against a variety of perturbation models." }, { "heading": "A PROOFS", "text": "Proof of Proposition 5.1. Given a heat kernel ks(x, y) = exp(−||x − y||^2/s) and samples x1, ..., xN, we define the discrete operator L for an out-of-sample point x as
Lf(x) = (1/N) Σ_{i=1}^{N} ks(x, xi) f(x) − (1/N) Σ_{i=1}^{N} ks(x, xi) f(xi)   (16)
Our claim is that the original Laplacian L and the resampled Laplacian Lc converge pointwise to the same continuous operator.\nThe proof proceeds in two parts. First, we show that resampling once still yields convergence of the Laplacian. Notice that we can model the resampling procedure as a form of noise; our desired result follows immediately from the following result of Coifman and Lafon (2006):
Proposition A.1 (Criterion 5 in Coifman and Lafon (2006)). Suppose that X is a perturbed version of M, that is, there exists a perturbation function η : M → X with a small norm (the size of the perturbation) such that every point in X can be written as x + η(x) for some x ∈ M. Then the approximation is valid as long as the scale parameter s remains larger than the size of the perturbation.
Since we have that the size of our perturbations is bounded by 2ε < s, this gives us the desired result.\nNext, we show that pointwise convergence holds for arbitrary resampling c. This is a simple consequence of the linearity of the operator L. We order the samples as xi,j, where i indexes the original data points i = 1, ..., N, and j indexes the resample j = 1, ..., c. Furthermore, let L_c^j denote the Laplacian operator defined only on the j-th resampled points xi,j for i = 1, ..., N. Then
Lcf(x) = (1/(cN)) Σ_{i=1}^{cN} ks(x, xi) f(x) − (1/(cN)) Σ_{i=1}^{cN} ks(x, xi) f(xi)   (17)
= (1/c) Σ_{j=1}^{c} [ (1/N) Σ_{i=1}^{N} ks(x, xi,j) f(x) − (1/N) Σ_{i=1}^{N} ks(x, xi,j) f(xi,j) ]   (18)
= (1/c) Σ_{j=1}^{c} L_c^j f(x)   (19)
→ Lf(x) as n → ∞   (20)
where the last line follows from the first half of the proof.\nRemarks. We have shown that Lc converges pointwise to the discrete operator L.
However, for L to converge to its continuous counterpart in Equation 1 (i.e., the Laplace-Beltrami operator), we actually need to take the scale of the kernel s to zero (see, e.g., the proofs of convergence in Belkin and Niyogi (2008); Hein et al. (2005; 2007)). Then Proposition 5.1 implies that we also need to take ε to zero, so that in the limit we are just sampling from the unperturbed manifold. In fact, it is possible to prove convergence without taking s to zero; however, the continuous operator that is recovered no longer refers to the gradient but rather the value of the kernel over the manifold (Von Luxburg et al. (2008) analyze this case). The spectra of these operators are often used in various clustering algorithms; in the context of deep neural networks, these operators yield similar regularizers for "smoothness", though for a different definition than the one used in this work. Finally, it should also be noted that the convergence of these operators depends on the intrinsic dimension of the manifold; as we consider here an ε-neighborhood of the manifold, which has the same dimension as the ambient space, our operators will converge more slowly (Wang (2015)).
Proof of Proposition 5.2. We introduce L̄_ε := L − L_ε, which is just the Laplacian on the complement of the ε-neighborhood subgraph, i.e., the subgraph whose edges are at least ε. Clearly R(L) = R(L̄_ε) + R(L_ε). Then we have the following bounds:
R(L̄_ε) = Σ_{i,j ∉ S} (f(xi) − f(xj))^2 L_{i,j} ≤ c2 N^2 exp(−m^2)   (21)
R(L_ε) = Σ_{i,j ∈ S} (f(xi) − f(xj))^2 L_{i,j} ≥ a c1 N exp(−1)   (22)
where a is a constant introduced for the O(N) edges in the graph of L_ε.
Take b = (a c1 / c2)(exp(m^2 − 1)/N). A simple rearrangement gives c2 = (b^{−1} a c1)(exp(m^2 − 1)/N); thus we have that
R(L) = R(L_ε) + R(L̄_ε)   (23)
≤ R(L_ε) + c2 N^2 exp(−m^2)   (24)
≤ R(L_ε) + [(b^{−1} a c1)(exp(m^2 − 1)/N)] N^2 exp(−m^2)   (25)
= R(L_ε) + b^{−1} a c1 N exp(−1)   (26)
≤ R(L_ε) + b^{−1} R(L_ε)   (27)
= (1 + b^{−1}) R(L_ε)   (28)
Then the first inequality follows from (24), the second inequality is trivial, and the third inequality follows from (23)–(28).
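As an illustrative numerical check of these bounds (not part of the original proofs), the following NumPy sketch builds well-separated clusters of resampled points, compares the full regularizer R(L) with the sparse R(Lε), and prints their ratio; all constants are arbitrary choices:

import numpy as np

rng = np.random.default_rng(0)
N, c, eps = 50, 2, 0.05
centers = rng.normal(size=(N, 10)) * 10.0  # well-separated cluster centers
X = (centers[:, None, :] + rng.uniform(-eps, eps, size=(N, c, 10))).reshape(N * c, 10)
f = np.sin(X[:, 0])  # a smooth test function

D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared pairwise distances
s = (2 * eps) ** 2
L = np.exp(-D2 / s)
diff2 = (f[:, None] - f[None, :]) ** 2
R_dense = (diff2 * L).sum()
mask = D2 <= (2 * eps) ** 2  # keep only short (same-cluster) edges
R_sparse = (diff2 * L * mask).sum()
print(R_sparse / R_dense)  # close to 1 when the clusters are far apart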
We use the same schedule for the learning rate and ε as in the PreActResNet18 model. The weight $\gamma_I$ starts at 0.4 and the weight $\gamma_H$ starts at 9,000; these increase linearly up to a factor of 10 from epochs 20 to 80. We use the same hyperparameter α as for the PreActResNet18 model.\nWe use the CNN with four convolutional layers plus three fully-connected layers from Carlini and Wagner (2017) for the MNIST robustness experiments. We use the same schedule for the learning rate, ε, $\gamma_I$, and $\gamma_H$ as in the PreActResNet18 model (except that ε scales to 0.3). We use the same hyperparameter α as for the PreActResNet18 model.\nTo set the hyperparameters $\gamma_I$ and $\gamma_H$ on the PreActResNet18 model, we ran a grid search over $\gamma_I$ = 0.2, ..., 1.0 and $\gamma_H/\gamma_I$ = 100, 200, ..., 500 and selected the settings which yielded the best robust accuracy against a 20-step ℓ∞ PGD adversary with 10 restarts at ε = 8/255 on the full training set; the range of $\gamma_H/\gamma_I$ was set such that the corresponding losses were roughly equal for a randomly initialized, untrained network. For reporting results, we train five models using the selected hyperparameters and report results using the one with the median performance on the test set against a 20-step ℓ∞ PGD adversary at ε = 8/255 on CIFAR-10 or ε = 0.3 on MNIST. For the PreActResNet18 model, robust accuracy over the 5 runs ranged from 40.1% to 41.5% with median 40.5%; clean accuracy ranged from 66.9% to 72.4% with median 70.0%.\nFor our stability results, we use the full version of AutoAttack+, an ensemble of attacks proposed by Croce and Hein (2020), for the ℓ2 and ℓ∞ perturbations. We choose this attack because it is parameter-free, which reduces the possibility of misconfiguration; empirically it is at least as strong as standard PGD, and it has been successful in breaking many proposed defenses. For the Wasserstein adversary, we use the implementation by Wong et al. (2019). For comparing with standard PGD, we use a 20-step PGD adversary with 10 random restarts and a step size of 2.5 · ε/20, as implemented by Engstrom et al. (2019). For verification, we adopt the setup of Xiao et al. (2019), using the MIP verifier of Tjeng et al. (2017), with solves parallelized over 8 CPU cores and the timeout set to 120 seconds." }, { "heading": "C ADDITIONAL EXPERIMENTS", "text": "We plot robustness curves for both CIFAR-10 and MNIST against a standard 20-step PGD adversary with 10 restarts using both ℓ2 and ℓ∞ bounds in Figure 2. These results show that, compared with models trained using standard PGD against ℓ∞ perturbations, our methods perform on par (or substantially better, in the case of the ℓ2 adversary on CIFAR-10) in a variety of settings, despite the disadvantage of not using the adversary during training. We also used the stronger AutoAttack+ adversary for the CIFAR-10 results in Table 1, which we plot for reference as "ours (AA+)".\nFor the CIFAR-10 results, we use a PreActResNet18 model trained with manifold regularization on CIFAR-10 at ε = 8/255. The curves for PGD are taken from Madry et al. (2017). Our findings mirror those in Schott et al. (2018), namely, that a model trained using PGD on CIFAR-10 for ℓ∞ performs poorly against ℓ2 attacks. We also ran a 500-step ℓ∞ PGD adversary with 20 restarts at ε = 8/255 against our model, yielding 40.4% robust accuracy, which is plotted as "ours (PGD+)". (A generic sketch of the PGD adversary used in these evaluations follows.)
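Below is a generic sketch of the ℓ∞ PGD adversary with random restarts; it follows the standard formulation rather than the exact Engstrom et al. (2019) implementation, and the per-example bookkeeping of successful perturbations is our own simplification.

```python
import torch

def pgd_linf(model, x, y, eps=8/255, steps=20, restarts=10):
    """Standard l_inf PGD with random restarts; step size 2.5 * eps / steps follows
    the convention quoted in the text. A generic sketch, not the exact
    implementation used in the paper."""
    loss_fn = torch.nn.CrossEntropyLoss()
    alpha = 2.5 * eps / steps
    best_adv = x.clone()
    for _ in range(restarts):
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            loss = loss_fn(model((x + delta).clamp(0, 1)), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
        adv = (x + delta).clamp(0, 1).detach()
        # Keep, per example, any perturbation that flips the prediction; otherwise
        # the clean input is returned unchanged.
        wrong = model(adv).argmax(dim=1) != y
        best_adv[wrong] = adv[wrong]
    return best_adv
```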
For the MNIST results, we use the CNN architecture from Carlini and Wagner (2017), trained with manifold regularization on MNIST at ε = 0.3. The curves for PGD are taken from Madry et al. (2017). The difference in performance is much less dramatic in this case, which we attribute to the lower complexity of the task. We note in particular that both methods experience a sharp drop in performance in the ℓ∞ case for ε larger than the value used during training." } ]
2,020
null
SP:a1bb6da48c8ed54c0bbc88d2109a17276a529c5f
[ "The submission focuses on a variant of inverse reinforcement learning, where the learner knows the task reward but is unaware of hard constraints that need to be respected while completing the task. The authors provide an algorithm to recover these constraints from expert demonstrations. The proposed algorithm builds upon a recent technique (Scobee & Sastry 2020) and addresses problems with large and continuous state spaces." ]
Standard reinforcement learning (RL) algorithms train agents to maximize given reward functions. However, many real-world applications of RL require agents to also satisfy certain constraints which may, for example, be motivated by safety concerns. Constrained RL algorithms approach this problem by training agents to maximize given reward functions while respecting explicitly defined constraints. However, in many cases, manually designing accurate constraints is a challenging task. In this work, given a reward function and a set of demonstrations from an expert that maximizes this reward function while respecting unknown constraints, we propose a framework to learn the most likely constraints that the expert respects. We then train agents to maximize the given reward function subject to the learned constraints. Previous works in this regard have mainly been restricted to tabular settings or specific types of constraints, or have assumed knowledge of the transition dynamics of the environment. In contrast, we empirically show that our framework is able to learn arbitrary Markovian constraints in high dimensions in a model-free setting.
[]
[ { "authors": [ "Joshua Achiam", "David Held", "Aviv Tamar", "Pieter Abbeel" ], "title": "Constrained policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Eitan Altman" ], "title": "Constrained Markov Decision Processes", "venue": null, "year": 1999 }, { "authors": [ "Dario Amodei", "Jack Clark" ], "title": "Faulty reward functions in the wild", "venue": "https://openai.com/ blog/faulty-reward-functions/,", "year": 2016 }, { "authors": [ "Dario Amodei", "Chris Olah", "Jacob Steinhardt", "Paul F. Christiano", "John Schulman", "Dan Mané" ], "title": "Concrete problems in AI safety, 2016", "venue": null, "year": 2016 }, { "authors": [ "Shalabh Bhatnagar" ], "title": "An actor–critic algorithm with function approximation for discounted cost constrained markov decision processes", "venue": "Systems and Control Letters,", "year": 2010 }, { "authors": [ "Lukas Biewald" ], "title": "Experiment tracking with weights and biases, 2020", "venue": "URL https://www. wandb.com/. Software available from wandb.com", "year": 2020 }, { "authors": [ "Greg Brockman", "Vicki Cheung", "Ludwig Pettersson", "Jonas Schneider", "John Schulman", "Jie Tang", "Wojciech Zaremba" ], "title": "OpenAI Gym, 2016", "venue": null, "year": 2016 }, { "authors": [ "Glen Chou", "Dmitry Berenson", "Necmiye Ozay" ], "title": "Learning constraints from demonstrations, 2018", "venue": null, "year": 2018 }, { "authors": [ "Glen Chou", "Necmiye Ozay", "Dmitry Berenson" ], "title": "Learning constraints from locally-optimal demonstrations under cost function uncertainty", "venue": "IEEE Robotics Autom. Lett.,", "year": 2020 }, { "authors": [ "Yinlam Chow", "Ofir Nachum", "Edgar Duenez-Guzman", "Mohammad Ghavamzadeh" ], "title": "A lyapunovbased approach to safe reinforcement learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Paul F. Christiano", "Jan Leike", "Tom B. Brown", "Miljan Martic", "Shane Legg", "Dario Amodei" ], "title": "Deep reinforcement learning from human preferences", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Christian Daniel", "Malte Viering", "Jan Metz", "Oliver Kroemer", "Jan Peters" ], "title": "Active reward learning", "venue": "In Proceedings of Robotics: Science and Systems,", "year": 2014 }, { "authors": [ "Claudia Pérez D’Arpino", "Julie A. Shah" ], "title": "C-learn: Learning geometric constraints from demonstrations for multi-step manipulation in shared autonomy", "venue": "In IEEE International Conference on Robotics and Automation (ICRA),", "year": 2017 }, { "authors": [ "Chelsea Finn", "Sergey Levine", "Pieter Abbeel" ], "title": "Guided cost learning: Deep inverse optimal control via policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Wonseok Jeon", "Seokin Seo", "Kee-Eung Kim" ], "title": "A bayesian approach to generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Hoang Le", "Cameron Voloshin", "Yisong Yue" ], "title": "Batch policy learning under constraints", "venue": "In International Confernce on Machine Learning,", "year": 2019 }, { "authors": [ "Jan Leike", "Miljan Martic", "Victoria Krakovna", "Pedro A. Ortega", "Tom Everitt", "Andrew Lefrancq", "Laurent Orseau", "Shane Legg" ], "title": "AI safety gridworlds", "venue": null, "year": 2017 }, { "authors": [ "Jan Leike", "David Krueger", "Tom Everitt", "Miljan Martic", "Vishal Maini", "Shane Legg" ], "title": "Scalable agent alignment via reward modeling: A research direction, 2018", "venue": null, "year": 2018 }, { "authors": [ "James MacGlashan", "Mark K. Ho", "Robert Loftin", "Bei Peng", "Guan Wang", "David L. Roberts", "Matthew E. Taylor", "Michael L. Littman" ], "title": "Interactive learning from policy-dependent human feedback", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Bernard Michini", "Jonathan P. How" ], "title": "Bayesian nonparametric inverse reinforcement learning", "venue": "In Machine Learning and Knowledge Discovery in Databases,", "year": 2012 }, { "authors": [ "Sobhan Miryoosefi", "Kianté Brantley", "Hal Daume III", "Miro Dudik", "Robert E Schapire" ], "title": "Reinforcement learning with convex constraints", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Michael Pardowitz", "Raoul Zöllner", "Rüdiger Dillmann" ], "title": "Learning sequential constraints of tasks from user demonstrations", "venue": "In 5th IEEE-RAS International Conference on Humanoid Robots,", "year": 2005 }, { "authors": [ "Deepak Ramachandran", "Eyal Amir" ], "title": "Bayesian inverse reinforcement learning", "venue": "In 20th International Joint Conference on Artifical Intelligence,", "year": 2007 }, { "authors": [ "Alex Ray", "Joshua Achiam", "Dario Amodei" ], "title": "Benchmarking safe exploration in deep reinforcement learning", "venue": "https://cdn.openai.com/safexp-short.pdf", "year": 2019 }, { "authors": [ "Dorsa Sadigh", "Anca D. Dragan", "Shankar Sastry", "Sanjit A. Seshia" ], "title": "Active preference-based learning of reward functions", "venue": "In Robotics: Science and Systems XIII,", "year": 2017 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "Dexter R.R. Scobee", "S. Shankar Sastry" ], "title": "Maximum likelihood constraint inference for inverse reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Rohin Shah", "Dmitrii Krasheninnikov", "Jordan Alexander", "Pieter Abbeel", "Anca D. Dragan" ], "title": "Preferences implicit in the state of the world", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Guru Subramani", "Michael Zinn", "Michael Gleicher" ], "title": "Inferring geometric constraints in human demonstrations", "venue": "In 2nd Conference on Robot Learning (CoRL),", "year": 2018 }, { "authors": [ "Chen Tessler", "Daniel J. 
Mankowitz", "Shie Mannor" ], "title": "Reward constrained policy optimization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ruiyi Zhang", "Tong Yu", "Yilin Shen", "Hongxia Jin", "Changyou Chen" ], "title": "Text-based interactive recommendation via constraint-augmented reinforcement learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Brian D. Ziebart", "Andrew Maas", "J. Andrew Bagnell", "Anind K. Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In 23rd National Conference on Artificial Intelligence (AAAI),", "year": 2008 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reward functions are a critical component in reinforcement learning settings. As such, it is important that reward functions are designed accurately and are well-aligned with the intentions of the human designer. This is known as agent (or value) alignment (see, e.g., Leike et al. (2018; 2017); Amodei et al. (2016)). Misspecified rewards can lead to unwanted and unsafe situations (see, e.g, Amodei & Clark (2016)). However, designing accurate reward functions remains a challenging task. Human designers, for example, tend to prefer simple reward functions that agree well with their intuition and are easily interpretable. For example, a human designer might choose a reward function that encourages an RL agent driving a car to minimize its traveling time to a certain destination. Clearly, such a reward function makes sense in the case of a human driver since inter-human communication is contextualized within a framework of unwritten and unspoken constraints, often colloquially termed as ‘common-sense’. That is, while a human driver will try to minimize their traveling time, they will be careful not to break traffic rules, take actions that endanger passersby, and so on. However, we cannot assume such behaviors from RL agents since they are are not imbued with common-sense constraints.\nConstrained reinforcement learning provides a natural framework for maximizing a reward function subject to some constraints (we refer the reader to Ray et al. (2019) for a brief overview of the field). However, in many cases, these constraints are hard to specify explicitly in the form of mathematical functions. One way to address this issue is to automatically extract constraints by observing the behavior of a constraint-abiding agent. Consider, for example, the cartoon in Figure 1. Agents start at the bottom-left corner and are rewarded according to how quickly they reach the goal at the bottom-right corner. However, what this reward scheme misses out is that in the real world the lower bridge is occupied by a lion which attacks any agents attempting to pass through it. Therefore, agents that are naı̈vely trained to maximize the reward function will end up performing poorly in the real world. If, on the other hand, the agent had observed that the expert (in Figure 1(a)) actually performed suboptimally with respect to the stipulated reward scheme by taking a longer route to the goal, it could have concluded that (for some unknown reason) the lower bridge must be avoided and consequently would have not been eaten by the lion!\nScobee & Sastry (2020) formalizes this intuition by casting the problem of recovering constraints in the maximum entropy framework for inverse RL (IRL) (Ziebart et al., 2008) and proposes a greedy\nalgorithm to infer the smallest number of constraints that best explain the expert behavior. However, Scobee & Sastry (2020) has two major limitations: it assumes (1) tabular (discrete) settings, and (2) the environment’s transition dynamics. In this work, we aim to address both of these issues by learning a constraint function instead through a sample-based approximation of the objective function of Scobee & Sastry. Consequently, our approach is model-free, admits continuous states and actions and can learn arbitrary Markovian constraints1. Further, we empirically show that it scales well to high-dimensions.\nTypical inverse RL methods only make use of expert demonstrations and do not assume any knowledge about the reward function at all. 
However, most reward functions can be expressed in the form "do this task while not doing these other things", where the other things are generally constraints that a designer wants to impose on an RL agent. The main task ("do this") is often quite easy to encode in the form of a simple nominal reward function. In this work, we focus on learning the constraint part ("do not do that") from provided expert demonstrations and using it in conjunction with the nominal reward function to train RL agents. From this perspective, our work can be seen as a principled way to inculcate prior knowledge about the agent's task in IRL. This is a key advantage over other IRL methods, which also often end up making assumptions about the agent's task in the form of regularizers such as in Finn et al. (2016).\nThe main contributions of our work are as follows:\n• We formulate the problem of inferring constraints from a set of expert demonstrations as a learning problem, which allows it to be used in continuous settings. To the best of our knowledge, this is the first work in this regard.\n• We eliminate the need to assume, as Scobee & Sastry do, the environment's transition dynamics.\n• We demonstrate the ability of our method to train constraint-abiding agents in high dimensions and show that it can also be used to prevent reward hacking." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 UNCONSTRAINED RL", "text": "A finite-horizon Markov Decision Process (MDP) $\mathcal{M}$ is a tuple $(\mathcal{S}, \mathcal{A}, p, r, \gamma, T)$, where $\mathcal{S} \in \mathbb{R}^{|\mathcal{S}|}$ is a set of states, $\mathcal{A} \in \mathbb{R}^{|\mathcal{A}|}$ is a set of actions, $p : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \mapsto [0, 1]$ is the transition probability function (where $p(s'|s, a)$ denotes the probability of transitioning to state $s'$ from state $s$ by taking action $a$), $r : \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$ is the reward function, $\gamma$ is the discount factor, and $T$ is the time-horizon. A trajectory $\tau = \{s_1, a_1, \ldots, s_T, a_T\}$ denotes a sequence of state-action pairs such that $s_{t+1} \sim p(\cdot|s_t, a_t)$. A policy $\pi : \mathcal{S} \mapsto \mathcal{P}(\mathcal{A})$ is a map from states to probability distributions over actions, with $\pi(a|s)$ denoting the probability of taking action $a$ in state $s$. We will sometimes abuse notation to write $\pi(s, a)$ to mean the joint probability of visiting state $s$ and taking action $a$ under the policy $\pi$, and similarly $\pi(\tau)$ to mean the probability of the trajectory $\tau$ under the policy $\pi$.\nDefine $r(\tau) = \sum_{t=1}^{T} \gamma^t r(s_t, a_t)$ to be the total discounted reward of a trajectory. Forward RL algorithms try to find an optimal policy $\pi^*$ that maximizes the expected total discounted reward $J(\pi) = \mathbb{E}_{\tau \sim \pi}[r(\tau)]$. On the other hand, given a set of trajectories sampled from the optimal (also referred to as expert) policy $\pi^*$, inverse RL (IRL) algorithms aim to recover the reward function $r$, which can then be used to learn the optimal policy $\pi^*$ via some forward RL algorithm." }, { "heading": "2.2 CONSTRAINED RL", "text": "While normal (unconstrained) RL tries to find a policy that maximizes $J(\pi)$, constrained RL instead focuses on finding a policy that maximizes $J(\pi)$ while respecting explicitly-defined constraints. A popular framework in this regard is the one presented in Altman (1999), which introduces the notion of a constrained MDP (CMDP). A CMDP $\mathcal{M}^c$ is a simple MDP augmented with a cost function $c : \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$ and a budget $\alpha \ge 0$.
Define $c(\tau) = \sum_{t=1}^{T} \gamma^t c(s_t, a_t)$ to be the total discounted cost of the trajectory $\tau$ and $J^c(\pi) = \mathbb{E}_{\tau \sim \pi}[c(\tau)]$ to be the expected total discounted cost. The forward constrained RL problem is to find the policy $\pi^*_c$ that maximizes $J(\pi)$ subject to $J^c(\pi) \le \alpha$. In this work, given a set $\mathcal{D}$ of trajectories sampled from $\pi^*_c$, the corresponding unconstrained MDP $\mathcal{M}$ (i.e., $\mathcal{M}^c$ without the cost function $c$) and a budget $\alpha$, we are interested in recovering a cost function which, when augmented with $\mathcal{M}$, has an optimal policy that generates the same set of trajectories as in $\mathcal{D}$. We call this the inverse constrained reinforcement learning (ICRL) problem. If the budget $\alpha$ is strictly greater than 0, then the cost function $c$ defines soft constraints over all possible state-action pairs. In other words, a policy is allowed to visit states and take actions that have non-zero costs as long as the expected total discounted cost remains less than $\alpha$. If, however, $\alpha$ is 0, then the cost function translates into hard constraints over all state-action pairs that have a non-zero cost associated with them. A policy can thus never visit these state-action pairs. In this work, we restrict ourselves to this hard constraint setting. Note that this is not particularly restrictive since, for example, safety constraints are often hard constraints, as are constraints imposed by physical laws.\nSince we restrict ourselves to hard constraints, we can rewrite the constrained RL problems as follows: define $\mathcal{C} = \{(s, a) \mid c(s, a) \neq 0\}$ to be the constraint set induced by $c$. The forward constrained RL problem is to find the optimal constrained policy $\pi^*_{\mathcal{C}}$ that maximizes $J(\pi)$ subject to $\pi^*_{\mathcal{C}}(s, a) = 0 \ \forall (s, a) \in \mathcal{C}$. The inverse constrained RL problem is to recover the constraint set $\mathcal{C}$ from trajectories sampled from $\pi^*_{\mathcal{C}}$.\nFinally, we will refer to our unconstrained MDP as the nominal MDP hereinafter. The nominal MDP represents the nominal environment (simulator) in which we train our agent." }, { "heading": "3 FORMULATION", "text": "" }, { "heading": "3.1 MAXIMUM LIKELIHOOD CONSTRAINT INFERENCE", "text": "We take Scobee & Sastry as our starting point. Suppose that we have a set of trajectories $\mathcal{D} = \{\tau^{(i)}\}_{i=1}^{N}$ sampled from an expert $\pi^*_{\mathcal{C}}$ navigating in a constrained MDP $\mathcal{M}^{\mathcal{C}^*}$, where $\mathcal{C}^*$ denotes the (true) constraint set. Furthermore, we are also given the corresponding nominal MDP $\mathcal{M}$.² Our goal is to recover a constraint set which, when augmented with $\mathcal{M}$, results in a CMDP that has an optimal policy that respects the same set of constraints as $\pi^*_{\mathcal{C}}$ does. Scobee & Sastry pose this as a maximum likelihood problem. That is, if we let $p_{\mathcal{M}}$ denote probabilities given that we are considering MDP $\mathcal{M}$ and assume a uniform prior on all constraint sets, then we can choose $\mathcal{C}^*$ according to\n$$\mathcal{C}^* \leftarrow \arg\max_{\mathcal{C}} \; p_{\mathcal{M}}(\mathcal{D}|\mathcal{C}). \qquad (1)$$\n²Availability of a transition dynamics model of the nominal MDP is not necessary.\nUnder the maximum entropy (MaxEnt) model presented in Ziebart et al. (2008), the probability of a trajectory under a deterministic MDP $\mathcal{M}$ can be modelled as\n$$\pi_{\mathcal{M}}(\tau) = \frac{\exp(\beta r(\tau))}{Z_{\mathcal{M}}} \mathbb{1}^{\mathcal{M}}(\tau), \qquad (2)$$\nwhere $Z_{\mathcal{M}} = \int \exp(\beta r(\tau)) \mathbb{1}^{\mathcal{M}}(\tau)\, d\tau$ is the partition function, $\beta \in [0, \infty)$ is a parameter describing how close the agent is to the optimal distribution (as $\beta \to \infty$ the agent becomes a perfect optimizer, and as $\beta \to 0$ the agent simply takes random actions), and $\mathbb{1}$ is an indicator function that is 1 for trajectories feasible under the MDP $\mathcal{M}$ and 0 otherwise. Assume that all trajectories in $\mathcal{D}$ are i.i.d. and sampled from the MaxEnt distribution. We have\n$$p(\mathcal{D}|\mathcal{C}) = \frac{1}{(Z_{\mathcal{M}^{\mathcal{C}}})^N} \prod_{i=1}^{N} \exp(\beta r(\tau^{(i)}))\, \mathbb{1}^{\mathcal{M}^{\mathcal{C}}}(\tau^{(i)}). \qquad (3)$$
Note that $\mathbb{1}^{\mathcal{M}^{\mathcal{C}}}(\tau^{(i)})$ is 0 for all trajectories that contain any state-action pair that belongs to $\mathcal{C}$. To maximize this, Scobee & Sastry propose a greedy strategy wherein they start with an empty constraint set and incrementally add state-action pairs that result in the maximal increase in $p(\mathcal{D}|\mathcal{C})$." }, { "heading": "3.2 SAMPLE-BASED APPROXIMATION", "text": "Since we are interested in more realistic settings where the state and action spaces can be continuous, considering all possible state-action pairs individually usually becomes intractable. Instead, we propose a learning-based approach wherein we approximate $\mathbb{1}^{\mathcal{M}^{\mathcal{C}}}(\tau)$ using a neural network. Consider the log-likelihood\n$$\mathcal{L}(\mathcal{C}) = \frac{1}{N} \log p(\mathcal{D}|\mathcal{C}) = \frac{1}{N} \sum_{i=1}^{N} \left[ \beta r(\tau^{(i)}) + \log \mathbb{1}^{\mathcal{M}^{\mathcal{C}}}(\tau^{(i)}) \right] - \log Z_{\mathcal{M}^{\mathcal{C}}} = \frac{1}{N} \sum_{i=1}^{N} \left[ \beta r(\tau^{(i)}) + \log \mathbb{1}^{\mathcal{M}^{\mathcal{C}}}(\tau^{(i)}) \right] - \log \int \exp(\beta r(\tau)) \mathbb{1}^{\mathcal{M}^{\mathcal{C}}}(\tau)\, d\tau. \qquad (4)$$\nNote that $\mathbb{1}^{\mathcal{M}^{\mathcal{C}}}(\tau) = \prod_{t=0}^{T} \mathbb{1}^{\mathcal{M}^{\mathcal{C}}}(s_t, a_t)$ merely tells us whether the trajectory $\tau$ is feasible under the constraint set $\mathcal{C}$ or not. Let us have a binary classifier $\zeta_\theta$, parameterized by $\theta$, do this for us instead, i.e., we want $\zeta_\theta(s_t, a_t)$ to be 1 if $(s_t, a_t)$ is not in $\mathcal{C}^*$ and 0 otherwise. Using $\zeta_\theta(\tau)$ as a shorthand for $\prod_{t=0}^{T} \zeta_\theta(s_t, a_t)$, we have\n$$\mathcal{L}(\mathcal{C}) = \mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \left[ \beta r(\tau^{(i)}) + \log \zeta_\theta(\tau^{(i)}) \right] - \log \int \exp(\beta r(\tau)) \zeta_\theta(\tau)\, d\tau. \qquad (5)$$\nLet $\mathcal{M}^{\bar{\zeta}_\theta}$ denote the MDP obtained after augmenting $\mathcal{M}$ with the cost function $\bar{\zeta}_\theta := 1 - \zeta_\theta$,³ and let $\pi_{\mathcal{M}^{\bar{\zeta}_\theta}}$ denote the corresponding MaxEnt policy. Taking gradients of (5) with respect to $\theta$ gives us (see Appendix A.1 for the derivation)\n$$\nabla_\theta \mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta \log \zeta_\theta(\tau^{(i)}) - \mathbb{E}_{\tau \sim \pi_{\mathcal{M}^{\bar{\zeta}_\theta}}} \left[ \nabla_\theta \log \zeta_\theta(\tau) \right]. \qquad (6)$$\nUsing a sample-based approximation for the right-hand term, we can rewrite the gradient as\n$$\nabla_\theta \mathcal{L}(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta \log \zeta_\theta(\tau^{(i)}) - \frac{1}{M} \sum_{j=1}^{M} \nabla_\theta \log \zeta_\theta(\hat{\tau}^{(j)}), \qquad (7)$$\nwhere the $\hat{\tau}$ are sampled from $\pi_{\mathcal{M}^{\bar{\zeta}_\theta}}$ (discussed in the next section). Notice that making $\nabla_\theta \mathcal{L}(\theta)$ zero essentially requires matching the expected gradient of $\log \zeta_\theta$ under the expert (left-hand term) and nominal (right-hand term) trajectories. For brevity, we will write $\pi_{\mathcal{M}^{\bar{\zeta}_\theta}}$ as $\pi_\theta$ from now on. We can choose $\zeta_\theta$ to be a neural network with parameters $\theta$ and a sigmoid at the output. We train our neural network via gradient descent using the expression for the gradient given above.\n³Note that since we are assuming that $\alpha$ is 0, we can assign any non-zero (positive) cost to state-action pairs that we want to constrain. Here $1 - \zeta_\theta$ assigns a cost of 1 to all such pairs.\nIn practice, since we have a limited amount of data, $\zeta_\theta$, parameterized as a neural network, will tend to overfit. To mitigate this, we incorporate the following regularizer into our objective function:\n$$R(\theta) = \delta \sum_{\tau \sim \{\mathcal{D}, \mathcal{S}\}} \left[ \zeta_\theta(\tau) - 1 \right], \qquad (8)$$\nwhere $\mathcal{S}$ denotes the set of trajectories sampled from $\pi_\theta$ and $\delta \in [0, 1)$ is a fixed constant. $R$ incentivizes $\zeta_\theta$ to predict values close to 1, thus encouraging $\zeta_\theta$ to choose the smallest number of constraints that best explain the expert data. (A minimal code sketch of this classifier update is given below.)" }
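Before describing the forward step, here is a minimal PyTorch sketch of the backward (classifier) update implied by (7) and (8): ascend the log-likelihood gradient by treating expert trajectories as positives and policy rollouts as negatives, plus the regularizer R(θ). The interface of ζθ (a network mapping a trajectory's stacked state-action pairs to per-step probabilities) and the trajectory representation are our assumptions, not the paper's code.

```python
import torch

def zeta_update(zeta, optimizer, expert_trajs, nominal_trajs, delta=0.5):
    """One ascent step on L(theta) + R(theta): maximize sum_t log zeta(s_t, a_t) on
    expert trajectories, minimize it on trajectories sampled from the current policy
    (Eq. 7), with the R(theta) term of Eq. 8 pushing outputs towards 1.
    `zeta` maps a (T, state_dim + action_dim) tensor to per-step values in (0, 1)."""
    eps = 1e-8  # numerical floor for the log
    expert_term = torch.stack(
        [zeta(traj).clamp_min(eps).log().sum() for traj in expert_trajs]).mean()
    nominal_term = torch.stack(
        [zeta(traj).clamp_min(eps).log().sum() for traj in nominal_trajs]).mean()
    # R(theta) = delta * sum over both trajectory sets of (zeta(tau) - 1),
    # where zeta(tau) is the product of the per-step outputs.
    reg = delta * sum(zeta(traj).prod() - 1.0 for traj in expert_trajs + nominal_trajs)
    loss = -(expert_term - nominal_term) - reg  # descend -(L + R)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```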
, { "heading": "3.3 FORWARD STEP", "text": "To evaluate (7) we need to sample from $\pi_\theta$. Recall that $\pi_\theta$ needs to maximize $J(\pi)$ subject to $\pi_\theta(s, a) = 0 \ \forall (s, a) \in \mathcal{Z}$, where $\mathcal{Z}$ is the constraint set induced by $\bar{\zeta}_\theta$. However, since $\zeta_\theta$ outputs continuous values in the range (0, 1), we instead solve the soft constrained RL version, wherein we try to find a policy $\pi$ that maximizes $J(\pi)$ subject to $\mathbb{E}_{\tau \sim \pi}[\bar{\zeta}_\theta(\tau)] \le \alpha$. In our experiments, we set $\alpha$ to a very small value. Note that if $\alpha$ is strictly set to 0, our optimization program will have an empty feasible set. Please refer to Appendix A.5 for some more discussion on $\alpha$.\nWe represent our policy as a neural network with parameters $\phi$ and train it by solving the following equivalent unconstrained min-max problem on the Lagrangian of our objective function\n$$\min_{\lambda \ge 0} \max_{\phi} \; \mathcal{L}_F(\phi, \lambda) = J(\pi_\phi) + \frac{1}{\beta} H(\pi_\phi) - \lambda \left( \mathbb{E}_{\tau \sim \pi_\phi}[\bar{\zeta}_\theta(\tau)] - \alpha \right) \qquad (9)$$\nby gradient ascent on $\phi$ (via the Proximal Policy Optimization (PPO) algorithm (Schulman et al., 2017)) and gradient descent on the Lagrange multiplier $\lambda$. Note that we also add the entropy $H(\pi_\phi) = -\mathbb{E}_{\tau \sim \pi_\phi}[\log \pi_\phi(\tau)]$ of $\pi_\phi$ to our objective function. Maximizing the entropy ensures that we recover the MaxEnt distribution in (2) at convergence (see Appendix A.3 for the proof)." }, { "heading": "3.4 INCORPORATING IMPORTANCE SAMPLING", "text": "Running the forward step in each iteration is typically very time consuming. To mitigate this problem, instead of approximating the expectation in (6) with samples from $\pi_\theta$, we approximate it with samples from an older policy $\pi_{\bar{\theta}}$, where $\bar{\theta}$ were the parameters of $\zeta$ at some previous iteration. We therefore only need to learn a new policy after a fixed number of iterations. To correct for the bias introduced in the approximation because of using a different sampling distribution, we add importance sampling weights to our expression for the gradient. In this case, the importance sampling weights can be shown to be given by (see Appendix A.2 for the derivation)\n$$s(\tau) = \frac{\zeta_\theta(\tau)}{\zeta_{\bar{\theta}}(\tau)}. \qquad (10)$$\nThe gradient can thus be rewritten as\n$$\nabla_\theta \mathcal{L}(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta \log \zeta_\theta(\tau^{(i)}) - \frac{1}{\sum_{\hat{\tau} \sim \pi_{\bar{\theta}}} s(\hat{\tau})} \sum_{\hat{\tau} \sim \pi_{\bar{\theta}}} s(\hat{\tau})\, \nabla_\theta \log \zeta_\theta(\hat{\tau}). \qquad (11)$$\nAlgorithm 1 summarizes our training procedure.\nAlgorithm 1: ICRL with Importance Sampling\nInput: Expert trajectories $\mathcal{D}$, iterations $N$, number of backward iterations $B$.\nInitialize $\theta$ and $\phi$ randomly.\nfor $i = 1, \ldots, N$ do\n  Learn $\pi_\phi$ by solving (9) using the current $\zeta_\theta$.\n  for $j = 1, \ldots, B$ do\n    Sample a set of trajectories $\mathcal{S} = \{\tau^{(k)}\}_{k=1}^{M}$ using $\pi_\phi$.\n    Compute importance sampling weights $s(\tau^{(k)})$ using (10) for $k = 1, \ldots, M$.\n    Use $\mathcal{S}$ and $\mathcal{D}$ to update $\theta$ via SGD using the gradient in (11).\n  end\nend\nreturn $\pi_\phi$" }, { "heading": "4 EXPERIMENTS", "text": "Learning a single constraint: Consider the TwoBridges environment in Figure 1. The agent starts at the bottom-left corner and can take one of the following 4 actions at each step: right, left, up, down. The reward in the left half is negative and in the right half is positive (and proportional to how close the agent is to the goal). This incentivizes the agent to cross over into the right half as quickly as possible, which is obviously via the lower bridge. However, since the lower bridge is occupied by a lion, the expert agent avoids it, whereas the nominal agent does not.\nLearning multiple constraints: For this experiment we design the ThreeBridges environment shown in Figure 2(a). The agent starts at either the bottom or top-left corner with equal probability and, as in the TwoBridges case, is incentivized to cross over into the right half as quickly as possible. The expert, on the other hand, always takes the middle bridge, since both the upper and lower bridges are actually constrained.\nPreventing reward hacking: Figure 2(b) shows the LapGridWorld environment⁴ which we use to test if our method can prevent reward hacking. The agent is intended to sail clockwise around the track. Each time it drives onto a golden dollar tile, it gets a reward of 3.
However, the nominal agent “cheats” by stepping back and forth on one dollar tile, rather than going around the track, and ends up getting more reward than the expert (which goes around the track, as intended).\nScaling to high dimensions: For this experiment, we use a simulated robot called HalfCheetah from OpenAI Gym (Brockman et al., 2016). The state and action spaces are of 18 and 6 dimensions respectively. The robot can move both in forward and backward directions and is rewarded proportionally to the distance it covers. For the constrained environment, shown in Figure 2(c), we add a solid wall at a distance of 5 units from the origin to prevent the robot from moving forwards. Consequently, the expert always moves backwards, whereas the nominal agent moves in both directions.\nTransferring constraints: In many cases, constraints are actually part of the environment and are the same for different agents (for example, all vehicles must adhere to the same speed limit). In such instances, it is useful to first learn the constraints using only one agent and then transfer them onto other agents. As a proof of concept, we transfer the constraints learnt on the HalfCheetah agent from the previous paragraph on a Walker2d agent. Note that in this case ζθ must only be fed a subset of the state and action spaces that are common across all agents. As such, we only train ζθ on the\n4This environment is similar to the boat race environment in Leike et al. (2017).\n(x, y) position coordinates of the HalfCheetah agent, since the rest of elements in the state space are specific to agent.\nFigures 3 and 4 show the results of these experiments. The rewards shown are the actual rewards that the agent gets in the constrained (true) environment. In the case of TwoBridges and ThreeBridges, we add solid obstacles on the constrained bridges to prevent the agent from passing through them. In the constrained LapGridWorld environment, we award the agent 12 points whenever it completes a lap (rather than awarding 3 points each time it lands on a golden dollar tile, as in the nominal environment). Finally, in the case of HalfCheetah and Walker2d, we terminate the episode whenever the agent moves beyond the wall. The bottom row in Figure 3 shows the average number of constraints that the agent violates per trajectory when rolled out\nin the nominal environment. (For the LapGridWorld this is the number of times the agent attempts to move in the anti-clockwise direction.) For the Walker2d experiment, we observed 0 constraint violations throughout training. This is because, in practice, ζθ usually acts conservatively compared to the true constraint function (by also constraining state-action pairs that are close to the true constrained ones). Additional details on these experiments, including hyperparameters, can be found in Appendix A.4. As can be seen, over the course of training, the agent’s true reward increases and its number of constraint violations go down." }, { "heading": "5 ABLATION STUDIES", "text": "We conduct experiments on the TwoBridges environment to answer the following questions:\n(a) Can we learn constraints even when we have only one expert rollout? (b) Does importance sampling speedup convergence? (c) Does the regularization term in (8) encourage ζθ to choose a minimal set of constraints?\nTo answer (a) we run our algorithm for different number of expert rollouts. 
As shown in Figure 5(a), we are able to achieve expert performance even with only one expert rollout.\nFor (b) we run experiments with and without importance sampling for different values of $B$ (the number of backward iterations). Figures 5(b) and 5(c) show the results. Using $B > 1$ without importance sampling results in a failure of the training procedure (usually resulting in NaNs) or low reward, whereas using $B > 1$ with importance sampling accelerates convergence.\nFor (c) we run experiments with different values of $\delta$, which controls the extent of regularization (see (8)). Figure 6 shows the results. Note that as $\delta$ increases, $\zeta_\theta$ learns fewer constraints. Also note that when $\delta = 1$, $\zeta_\theta$ fails to constrain any state." }, { "heading": "6 RELATED WORK", "text": "Forward Constrained RL: Several approaches have been proposed in the literature to solve the forward constrained RL problem in the context of CMDPs (Altman, 1999). Achiam et al. (2017) analytically solves trust region policy optimization problems at each policy update to enforce the constraints. Chow et al. (2018) uses a Lyapunov approach and also provides theoretical guarantees. Le et al. (2019) proposes an algorithm for cases when there are multiple constraints. Finally, a well-known approach centers around rewriting the constrained RL problem as an equivalent unconstrained min-max problem by using Lagrange multipliers (Zhang et al., 2019; Tessler et al., 2019; Bhatnagar, 2010) (see Section 3.3 for further details).\nConstraint Inference: Previous work on inferring constraints from expert demonstrations has either focused on inferring specific types of constraints, such as geometric (D'Arpino & Shah, 2017; Subramani et al., 2018), sequential (Pardowitz et al., 2005) or convex (Miryoosefi et al., 2019) constraints, or is restricted to tabular settings (Scobee & Sastry, 2020; Chou et al., 2018), or assumes known transition dynamics (Chou et al., 2020).\nPreference Learning: Constraint inference also links to preference learning, which aims to extract user preferences (constraints imposed by an expert on itself, in our case) from different sources such as ratings (Daniel et al., 2014), comparisons (Christiano et al., 2017; Sadigh et al., 2017), human reinforcement signals (MacGlashan et al., 2017), or the initial state of the agent's environment (Shah et al., 2019). Preference learning also includes inverse RL, which aims to recover an agent's reward function by using its trajectories. To deal with the inherent ill-posedness of this problem, inverse RL algorithms often incorporate regularizers (Ho & Ermon, 2016; Finn et al., 2016) or assume a prior distribution over the reward function (Jeon et al., 2018; Michini & How, 2012; Ramachandran & Amir, 2007)." }, { "heading": "7 CONCLUSION AND FUTURE WORK", "text": "We have presented a method to learn constraints from an expert's demonstrations. Unlike previous works, our method both learns arbitrary constraints and can be used in continuous settings.\nWhile we consider our method to be an important first step towards learning arbitrary constraints in real-world continuous settings, there is still considerable room for improvement. For example, as is the case with Scobee & Sastry, our formulation is also based on (2), which only holds for deterministic MDPs. Secondly, we only consider hard constraints.
Lastly, one very interesting extension of this method can be to learn constraints only from logs data in an offline way to facilitate safe-RL in settings where it is difficult to even build nominal simulators such as is the case for plant controllers." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 GRADIENT OF LOG LIKELIHOOD", "text": "The gradient of (5) is\n∇θL(θ) = 1\nN M∑ i=1 [ 0 +∇θ log ζθ(τ (i)) ] − 1∫ exp(βr(τ))ζθ(τ)dτ ∫ exp(βr(τ))∇θζθ(τ)dτ,\n= 1\nN N∑ i=1 ∇θ log ζθ(τ (i))− ∫ exp(βr(τ))ζθ(τ)∫ exp(βr(τ ′))ζθ(τ ′)dτ ′ ∇θ log ζθ(τ)dτ,\n= 1\nN N∑ i=1 ∇θ log ζθ(τ (i))− ∫ pMζ̄θ (τ)∇θ log ζθ(τ)dτ,\n= 1\nN N∑ i=1 ∇θ log ζθ(τ (i))− Eτ∼πMζθ [∇θ log ζθ(τ)] ,\n(12)\nwhere the second line follows from the identity ∇θcθ(τ) ≡ cθ(τ)∇θ log cθ(τ) and the fourth line from the MaxEnt assumption." }, { "heading": "A.2 DERIVING THE IMPORTANCE SAMPLING WEIGHTS", "text": "Suppose that at some iteration of our training procedure we are interested in approximating the gradient of the log of the partition function ∇θ logZθ (where θ are the current parameters of our classifier) using an older policy πζθ̄ (where θ̄ were the parameters of the classifier which induced the constraint set that this policy respects). We can do so by noting that\nZθ = ∫ exp(r(τ))ζθ(τ)dτ,\n= ∫ πζθ̄ (τ) [ exp(r(τ))cθ(τ)\nπζθ̄ (τ)\n] dτ,\n= Eτ∼πζ θ̄ (τ)\n[ exp(r(τ))cθ(τ)\nπζθ̄ (τ)\n] ,\n= Zθ̄ · Eτ∼πζθ̄ (τ) [ ζθ(τ)\nζθ̄(τ)\n] ,\n≈ Zθ̄ · 1\nM M∑ i=1 τ∼πζθ̄ ζθ(τ (i)) ζθ̄(τ (i)) ,\n(13)\nwhere the fourth lines follows from our MaxEnt assumption, i.e., πζθ̄ (τ) = exp(r(τ))ζθ̄(τ)/Zθ̄.\nTherefore\n∇θ logZθ = 1\nZθ ∇θZθ,\n= 1 Zθ̄ · 1M ∑ τ∼q(τ) ζθ(τ) ζθ̄(τ) Zθ̄ · 1M ∑ τ∼q(τ) ∇θζθ(τ) ζθ̄(τ) , =\n1∑ τ∼q(τ) ζθ(τ) ζθ̄(τ) ∑ τ∼q(τ) ζθ(τ) ζθ̄(τ) ∇θ log ζθ(τ) . (14)" }, { "heading": "A.3 RATIONALE FOR (9)", "text": "Consider a constrained MDP MC as defined in Section 2.2. We are interested in recovering the following policy\nπMC (τ) = exp(βr(τ))\nZMC 1 C(τ), (15)\nwhere ZMC = ∫\nexp(βr(τ))1C(τ)dτ is the partition function and 1C is an indicator function that is 0 if τ ∈ C and 1 otherwise. Lemma: The Boltzmann policy πB(τ) = exp(βr(τ))/Z maximizesL(π) = Eτ∼π[r(τ)]+ 1βH(π), where H(π) denotes the entropy of π.\nProof: Note that the KL-divergence, DKL, between a policy π and πB can be written as\nDKL(π||πB) = Eτ∼π[log π(τ)− log πB(τ)], = Eτ∼π[log π(τ)− βr(τ) + logZ], = −Eτ∼π[βr(τ)]−H(π) + logZ, = −βL(π) + logZ.\n(16)\nSince logZ is constant, minimizingDKL(π||πB) is equivalent to maximizing L(π). Also, we know that DKL(π||πB) is minimized when π = πB . Therefore, πB maximizes L. Proposition: The policy in (15) is a solution of\nminimize λ≥0 max π\nEτ∼π[r(τ)] + 1\nβ H(πφ)− λ(Eτ∼πφ [ζ̄θ(τ)]− α). (17)\nProof: Let us rewrite the inner optimization problem as\nmax π\nEτ∼π[r(τ)− λ(ζ̄θ(τ)− α)] + 1\nβ H(π). (18)\nFrom the Lemma we know that the solution to this is\nπ(τ, λ) = g(τ, λ)∫ g(τ ′, λ)dτ ′ , (19)\nwhere g(τ, λ) = exp(β(r(τ)− λ(ζ̄θ(τ)− α))). To find π∗(τ) = minλ π(τ, λ), note that:\n1. When ζ̄θ(τ) ≤ α, then λ∗ = 0 minimizes π. In this case g(τ, λ∗) = exp(βr(τ)). 2. When ζ̄θ(τ) > α, then λ∗ →∞ minimizes π. In this case g(τ, λ∗) = 0.\nWe can combine both of these cases by writing\nπ∗(τ) = exp(r(τ))∫\nexp(r(τ ′))1ζ̄θ (τ ′)dτ ′ 1 ζ̄θ (τ), (20)\nwhere 1ζ̄θ (τ) is 1 if ζ̄θ(τ) ≤ α and 0 otherwise. (Note that the denominator is greater than 0 as long as we have at least one τ for which ζ̄θ(τ) ≤ α, i.e., we have at least one feasible solution.) QED." 
}, { "heading": "A.4 EXPERIMENTAL SETTINGS", "text": "We used W&B (Biewald, 2020) to manage our experiments and conduct sweeps on hyperparameters. We used Adam (Kingma & Ba, 2015) to optimize all of our networks. All important hyperparameters are listed in Table 1. Details on the environments can be found below." }, { "heading": "A.4.1 TWOBRIDGES", "text": "In this environment, the agent’s state consists of its (x, y) position coordinates. Agents start at (0, 0) and the goal is at (20, 0). Episdoes terminate when the agent is within one unit circle of the goal or when the number of timesteps exceeds 200. The bottom left corners of the bridges are at (4, 5) and\n(4, 14). Each bridge is 4 units long and 1 unit wide. Agents can take one of the following actions: right, left, up and down. Each action moves the agent 0.7 units in the respective direction. Agents attempting to move outside the 20× 20 simulator or into the water (in between the bridges) end up in the same position and receive a reward of −2. The reward in the regions left of the bridge is fixed to −1 and on and to the right of the bridges is equal 10/d where d is the Euclidean distance of the agent to the goal. Additionally, the agent’s reward is scaled by 20 if it is to the right of bridges and at or below the lower bridge (i.e. y < 6). Finally, the reward within one unit circle of the goal is fixed to 250." }, { "heading": "A.4.2 THREEBRIDGES", "text": "This is similar to the ThreeBridges environment except that we now have three bridges. The bottomleft corners of each of the bridges are at (4, 1), (4, 9) and (4, 17.5). The middle bridge is 4 units long and 2 units wide while the other two bridges are 4 units long and 1.5 units wide. Agents attempting to move outside the simulator or into water receive a reward of−2. The reward in regions left of the bridges is fixed to −5 and on and to the right of the bridges is 200/d where d is Euclidean distance of the agent to the goal. The reward within one unit circle of the goal is fixed to 250. Finally, agents randomly start at either the bottom-left of top-left corners with equal probability." }, { "heading": "A.4.3 LAPGRIDWORLD", "text": "Here, agents move on a 11×11 grid by taking either clockwise or anti-clockwise actions. The agent is awarded a reward 3 each time it moves onto a bridge with a dollar (see Figure 2). The agent’s state is the number of the grid it is on." }, { "heading": "A.4.4 HALFCHEETAH AND WALKER2D", "text": "The original reward schemes for HalfCheetah and Walker2d in OpenAI Gym (Brockman et al., 2016), reward the agents proportional to the distance they cover in the forward direction. We modify this and instead simply reward the agents according to the amount of distance they cover (irrespective of the direction they move in)." }, { "heading": "A.5 ON THE BUDGET SIZE", "text": "As noted in Section 3.3, we set the budget size α to a very small value, typically around 0.01. α controls the extent to which the agent respects the constraints imposed by ζθ. In this section, we study the effect of α on the agent’s performance. We train an agent using our algorithm on the TwoBridges environment for different values of α. Figure 7 shows the results. As can be seen, the agent’s performance drops at higher values of α. For example when α is 1, the agent fails to achieve any meaningful reward." } ]
2,020
null
SP:cd6f5c3ee37991ff572589467b2216ba364275ba
[ "This paper studies the Lipschitz continuity properties of self-attention. It is proved that the widely-used dot-product self-attention is not Lipschitz continuous. A novel L2 self-attention is proposed and proven to be Lipschitz continuous. Experiments show that the upper bound of Lipschitz constant for L2 self-attention is asymptotically tight. Invertibility of MHA residual map is investigated to prove the Lipschitz continuity of L2 self-attention. Finally, experiments on Transformers with L2 self-attention are studied." ]
Lipschitz constants of neural networks have been explored in various contexts in deep learning, such as provable adversarial robustness, estimating Wasserstein distance, stabilising training of GANs, and formulating invertible neural networks. Such works have focused on bounding the Lipschitz constant of fully connected or convolutional networks, composed of linear maps and pointwise non-linearities. In this paper, we investigate the Lipschitz constant of self-attention, a non-linear neural network module widely used in sequence modelling. We prove that the standard dot-product self-attention is not Lipschitz, and propose an alternative L2 self-attention that is Lipschitz. We derive an upper bound on the Lipschitz constant of L2 self-attention and provide empirical evidence for its asymptotic tightness. To demonstrate the practical relevance of our theoretical work, we formulate invertible self-attention and use it in a Transformer-based architecture for a character-level language modelling task.
[]
[ { "authors": [ "M. Abadi", "A. Agarwal", "P. Barham", "E. Brevdo", "Z. Chen", "C. Citro", "G.S. Corrado", "A. Davis", "J. Dean", "M Devin" ], "title": "Tensorflow: Large-scale machine learning on heterogeneous distributed systems", "venue": "arXiv preprint arXiv:1603.04467,", "year": 2016 }, { "authors": [ "C. Anil", "J. Lucas", "R. Grosse" ], "title": "Sorting out Lipschitz function approximation", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "A. Bapna", "M.X. Chen", "O. Firat", "Y. Cao", "Y. Wu" ], "title": "Training deeper neural machine translation models with transparent attention", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "J. Behrmann", "W. Grathwohl", "R.T.Q. Chen", "D. Duvenaud", "Jacobsen", "J.-H" ], "title": "Invertible residual networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "T.B. Brown", "B. Mann", "N. Ryder", "M. Subbiah", "J. Kaplan", "P. Dhariwal", "A. Neelakantan", "P. Shyam", "G. Sastry", "A Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "R.T.Q. Chen", "Y. Rubanova", "J. Bettencourt", "D. Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "R.T.Q. Chen", "J. Behrmann", "D. Duvenaud", "Jacobsen", "J.-H" ], "title": "Residual flows for invertible generative modeling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "M. Cisse", "P. Bojanowski", "E. Grave", "Y. Dauphin", "N. Usunier" ], "title": "Parseval networks: Improving robustness to adversarial examples", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "J. Devlin", "Chang", "M.-W", "K. Lee", "K. Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "M. Fazlyab", "A. Robey", "H. Hassani", "M. Morari", "G. Pappas" ], "title": "Efficient and accurate estimation of Lipschitz constants for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "H. Federer" ], "title": "Geometric Measure Theory. Classics in Mathematics", "venue": null, "year": 1969 }, { "authors": [ "X. Glorot", "Y. Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "W. Grathwohl", "R.T.Q. Chen", "J. Betterncourt", "I. Sutskever", "D. Duvenaud" ], "title": "FFJORD: Free-form continuous dynamics for scalable reversible generative models", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "S. Hochreiter" ], "title": "The vanishing gradient problem during learning recurrent neural nets and problem solutions", "venue": "International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems,", "year": 1998 }, { "authors": [ "D.P. Kingma", "J. 
Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "F. Latorre", "P. Rolland", "V. Cevher" ], "title": "Lipschitz constant estimation of neural networks via sparse polynomial optimization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "M.P. Marcus", "M.A. Marcinkiewicz", "B. Santorini" ], "title": "Building a large annotated corpus of English: The Penn Treebank", "venue": "Computational Linguistics,", "year": 1993 }, { "authors": [ "R. Mises", "H. Pollaczek-Geiringer" ], "title": "Praktische verfahren der gleichungsauflösung", "venue": "ZAMM-Journal of Applied Mathematics and Mechanics/Zeitschrift für Angewandte Mathematik und Mechanik,", "year": 1929 }, { "authors": [ "T. Miyato", "T. Kataoka", "M. Koyama", "Y. Yoshida" ], "title": "Spectral normalization for Generative Adversarial Networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "E. Parisotto", "H.F. Song", "J.W. Rae", "R. Pascanu", "C. Gulcehre", "S.M. Jayakumar", "M. Jaderberg", "R.L. Kaufman", "A. Clark", "S. Noury", "M.M. Botvinick", "N. Heess", "R. Hadsell" ], "title": "Stabilizing Transformers for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "N. Parmar", "P. Ramachandran", "A. Vaswani", "I. Bello", "A. Levskaya", "J. Shlens" ], "title": "Stand-alone self-attention in vision models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "G. Peyré", "M. Cuturi" ], "title": "Computational optimal transport", "venue": "Foundations and Trends in Machine Learning,", "year": 2019 }, { "authors": [ "J. Sokolić", "R. Giryes", "G. Sapiro", "M.R. Rodrigues" ], "title": "Robust large margin deep neural networks", "venue": "IEEE Transactions on Signal Processing,", "year": 2017 }, { "authors": [ "Tsai", "Y.-H. H", "S. Bai", "M. Yamada", "Morency", "L.-P", "R. Salakhutdinov" ], "title": "Transformer dissection: An unified understanding for Transformer’s attention via the lens of kernel", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing,", "year": 2019 }, { "authors": [ "Y. Tsuzuku", "I. Sato", "M. Sugiyama" ], "title": "Lipschitz-margin training: Scalable certification of perturbation invariance for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "A. Vaswani", "N. Shazeer", "N. Parmar", "J. Uszkoreit", "L. Jones", "A.N. Gomez", "Ł. Kaiser", "I. Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "A. Virmaux", "K. Scaman" ], "title": "Lipschitz regularity of deep neural networks: analysis and efficient estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "J. Vuckovic", "A. Baratin", "R. Tachet des Combes" ], "title": "A mathematical theory of attention", "venue": "arXiv preprint arXiv:2007.02876,", "year": 2020 }, { "authors": [ "X. Wang", "R. Girshick", "A. Gupta", "K. He" ], "title": "Non-local neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "H. Zhang", "I. 
Goodfellow", "D. Metaxas", "A. Odena" ], "title": "Self-attention Generative Adversarial Networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Lipschitz continuity is a strong form of continuity for functions. Loosely speaking, a function is Lipschitz continuous if changing its input by a certain amount cannot change its output by more than K times that amount. The constant K is a hard constraint on how rapidly the function’s output can vary, and the smallest such K is known as the function’s Lipschitz constant. For example, f1(x) = √ |x| and f2(x) = exp(x) for x ∈ R are not Lipschitz continuous, because their output can change arbitrarily fast as x approaches 0 and +∞ respectively. On the other hand, g1(x) = tanh(x) and g2(x) = αx are Lipschitz continuous, because their rate of change (derivative) is bounded.\nIn deep learning, we often use Lipschitz continuity as a constraint for neural networks, to control how much a network’s output can change relative to its input. Such Lipschitz constraints are useful in several contexts. For example, Lipschitz constraints can endow models with provable robustness against adversarial pertubations (Cisse et al., 2017; Tsuzuku et al., 2018; Anil et al., 2019), and guaranteed generalisation bounds (Sokolić et al., 2017). Moreover, the dual form of the Wasserstein distance is defined as a supremum over Lipschitz functions with a given Lipschitz constant, hence Lipschitz-constrained networks are used for estimating Wasserstein distances (Peyré & Cuturi, 2019). Further, Lipschitz-constrained networks can stabilise training for GANs, an example being spectral normalisation (Miyato et al., 2018). Finally, Lipschitz-constrained networks are also used to construct invertible models and normalising flows. For example, Lipschitz-constrained networks can be used as a building block for invertible residual networks and hence flow-based generative models (Behrmann et al., 2019; Chen et al., 2019). Additionally, Neural ODEs (Chen et al., 2018; Grathwohl et al., 2019) are typically defined using vector fields parameterized via Lipschitz networks, so that the flow generated by the vector field is guaranteed to exist for all times.\nNonetheless, designing Lipschitz-continuous neural networks and computing (or even upperbounding) their Lipschitz constant is a hard problem. Previous work mostly focused on fullyconnected and convolutional networks, not only because they are common in deep learning, but also because they are relatively simple to analyze, as compositions of linear maps and pointwise nonlinearities. Even in this case however, exact evaluation of the Lipschitz constant of fully-connected and convolutional networks is NP-hard (Virmaux & Scaman, 2018) and obtaining a tight upper bound remains a challenging task (Virmaux & Scaman, 2018; Fazlyab et al., 2019; Latorre et al., 2020).\nFully-connected and convolutional networks are not the only neural networks worthy of interest. Recently, self-attention (Vaswani et al., 2017) has become a popular alternative to recurrent neural\nnetworks. Self-attention is a key component of the Transformer (Vaswani et al., 2017), that has found success as a building block in models of various data modalities, starting with natural-language processing (Vaswani et al., 2017; Devlin et al., 2019; Brown et al., 2020) and extending to computer vision (Zhang et al., 2019; Parmar et al., 2019), audio generation (Huang et al., 2019), and reinforcement learning (Parisotto et al., 2020). 
However, so far no previous work has analyzed the Lipschitz properties of self-attention, and thus it has been unclear whether self-attention is a viable option in applications that require Lipschitz constraints. In this work, we address this gap in the theory of self-attention by providing a thorough analysis of its Lipschitz properties. In particular, we make the following contributions:\n• We prove that the widely used dot-product self-attention is not Lipschitz, and is therefore not suitable for use in applications requiring Lipschitz constraints.\n• We formulate L2 self-attention as an alternative, and show that it is Lipschitz.\n• We derive a theoretical upper bound on the Lipschitz constant of L2 self-attention, and provide empirical evidence of the asymptotic tightness of the bound.\n• As a practical demonstration of the theory, we use this bound to formulate invertible self-attention, and explore its use in a Transformer architecture for a character-level language modelling task." }, { "heading": "2 LIPSCHITZ CONSTANT OF FULLY-CONNECTED/CONVOLUTIONAL LAYERS", "text": "We first define the notion of Lipschitz continuity, and proceed to define the Lipschitz constant.\nDefinition 2.1. Given two metric spaces $(\mathcal{X}, d_{\mathcal{X}})$ and $(\mathcal{Y}, d_{\mathcal{Y}})$, a function $f : \mathcal{X} \to \mathcal{Y}$ is called Lipschitz continuous (or $K$-Lipschitz) if there exists a constant $K \ge 0$ such that\n$$d_{\mathcal{Y}}(f(x), f(x')) \le K d_{\mathcal{X}}(x, x') \quad \text{for all } x, x' \in \mathcal{X}. \qquad (1)$$\nThe smallest such $K$ is the Lipschitz constant of $f$, denoted $\mathrm{Lip}(f)$.\nIn this paper, we focus on the common case where $\mathcal{X} = \mathbb{R}^n$, $\mathcal{Y} = \mathbb{R}^m$, and $d_{\mathcal{X}}$, $d_{\mathcal{Y}}$ are induced by a $p$-norm $\|x\|_p := (\sum_i |x_i|^p)^{1/p}$. We will primarily consider the cases $p = 2$ and $p = \infty$, where $\|x\|_\infty := \max_i |x_i|$. To emphasise the dependence of the Lipschitz constant on the choice of $p$-norm, we will often denote it by $\mathrm{Lip}_p(f)$. In this case, it follows directly from Definition 2.1 that the Lipschitz constant is given by\n$$\mathrm{Lip}_p(f) = \sup_{x \neq x' \in \mathbb{R}^n} \frac{\|f(x) - f(x')\|_p}{\|x - x'\|_p}. \qquad (2)$$\nNext, we outline some basic results that are useful for estimating Lipschitz constants, also covered in related works (Virmaux & Scaman, 2018; Behrmann et al., 2019). We describe how these results are used to provide bounds on the Lipschitz constant of fully-connected networks (FCN) and convolutional neural networks (CNN), using the fact that both are compositions of linear maps and pointwise non-linearities. To begin with, the following theorem suggests a way to bound $\mathrm{Lip}_p(f)$ for a differentiable Lipschitz function $f$:\nTheorem 2.1 (Federer, 1969). Let $f : \mathbb{R}^n \to \mathbb{R}^m$ be differentiable and Lipschitz continuous under a choice of $p$-norm $\|\cdot\|_p$. Let $J_f(x)$ denote its total derivative (Jacobian) at $x$. Then $\mathrm{Lip}_p(f) = \sup_{x \in \mathbb{R}^n} \|J_f(x)\|_p$, where $\|J_f(x)\|_p$ is the induced operator norm on $J_f(x)$.\nHence if $f$ is a linear map represented by a matrix $W$ then\n$$\mathrm{Lip}_p(f) = \|W\|_p := \sup_{\|x\|_p = 1} \|Wx\|_p = \begin{cases} \sigma_{\max}(W), & \text{if } p = 2 \\ \max_i \sum_j |W_{ij}|, & \text{if } p = \infty \end{cases} \qquad (3)$$\nwhere $\|W\|_p$ is the operator norm on matrices induced by the vector $p$-norm, and $\sigma_{\max}(W)$ is the largest singular value of $W$. Under this choice of norm, many common non-linearities (including relu, sigmoid, tanh, elu) are 1-Lipschitz. $\|W\|_2 = \sigma_{\max}(W)$ is usually estimated via power iteration; we provide details on how this is done in Appendix B, and a generic sketch is given below.
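For reference, here is a generic power-iteration sketch for estimating ‖W‖₂ = σmax(W); the specific variant used by the paper is described in its Appendix B, so treat this only as an illustration of the idea.

```python
import torch

def spectral_norm_power_iteration(W, iters=50):
    """Estimate ||W||_2 = sigma_max(W) by power iteration on W^T W.
    A generic sketch of the routine referenced in the text."""
    v = torch.randn(W.shape[1])
    v = v / v.norm()
    for _ in range(iters):
        u = W @ v          # apply W
        v = W.t() @ u      # apply W^T, so v tracks the top right-singular vector
        v = v / v.norm()
    return (W @ v).norm()  # ||Wv|| with ||v|| = 1 approximates sigma_max(W)

W = torch.randn(64, 32)
print(spectral_norm_power_iteration(W), torch.linalg.matrix_norm(W, ord=2))
```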
For a fully-connected network (FCN) or a convolutional neural network (CNN) f =WK ◦ ρK−1 ◦WK−1 ◦ . . . ◦ ρ1 ◦W1, we have Lipp(f) ≤ ∏ k ‖Wk‖p under a choice of p-norm with 1-Lipschitz non-linearities ρk. The above bound is not necessarily tight; there are various works that compute tighter bounds for FCN and CNN (e.g. Virmaux & Scaman, 2018; Fazlyab et al., 2019; Latorre et al., 2020)." }, { "heading": "3 LIPSCHITZ CONSTANT OF SELF-ATTENTION", "text": "3.1 DOT-PRODUCT SELF-ATTENTION IS not LIPSCHITZ\nMoving on, we investigate whether self-attention is Lipschitz. We first consider the widely used (scaled) dot-product multihead self-attention as formulated by Vaswani et al. (2017). Let x1, . . . , xN be a sequence of N elements, where xi ∈ RD for i = 1, . . . , N . We represent this sequence as a matrix X ∈ RN×D such that the ith row of X is the ith element of the sequence, i.e. Xi: = x>i . Dot-product multihead self-attention (DP-MHA) is a map from RN×D to RN×D consisting of H ‘heads’, where H is chosen to divide D. Each head is a map from RN×D to RN×D/H defined by\nDP(X) := softmax ( XWQ(XWK)>/ √ D/H ) XWV , (4)\nwhere WQ,WK ,WV ∈ RD×D/H are learnable parameters specific to each head. The input to the softmax is an N ×N matrix of dot products (hence dot-product self-attention), and the softmax is applied to each row of this matrix. Finally, the outputs of all heads are concatenated into an N ×D matrix and are right multiplied by WO ∈ RD×D, thus DP-MHA is defined by\nMHA(X) := [ DP1(X), . . . ,DPH(X) ] WO. (5)\nIn what follows, we will prove that MHA as defined above is not Lipschitz, assuming that the MHA map is non-trivial, i.e. WQ,WK ,WV ,WO 6= 0. It is sufficient to show that a single head DP is not Lipschitz, since MHA is a linear combination of the outputs of each head. Let us write Equation (4) as DP(X) = PXWV , where P ∈ RN×N is the output of the softmax (we suppress the dependence of P on X to reduce clutter below). P is a stochastic matrix, i.e. its entries are non-negative and its rows sum to 1. Since the rows of X are the xi’s, a linear transformation of each xi by some matrix A is equivalent to right multiplication of X by A>. So right multiplication of X by WV is a linear map and thus Lipschitz. Therefore, we are interested in the mapping f(X) = PX; this is not a linear mapping because P itself is a non-linear function of X . In fact, we show that f is not Lipschitz, thus proving the first main result of the paper: Theorem 3.1. DP-MHA is not Lipschitz for any vector p-norm ‖ · ‖p with p ∈ [1,∞]. Summary of Proof. We use Theorem 2.1, noting that if the supremum of the norm of the Jacobian is infinite, then the mapping is not Lipschitz. In particular, we show that when xi = 0 for some i, some elements of the Jacobian of f grow proportionally to the sample variance of x 6=i, which is unbounded. Proof. We show the proof for the case D = 1 (i.e. X ∈ RN×1, a column vector) for readability. See Appendix C for the general case, which follows the same logic.\nThe mapping f can be written as f(X) = PX = softmax ( aXX> ) X = f1(X) >\n... fN (X) > ∈ RN×1, (6) where a = WKWQ ∈ R (we assume a 6= 0 such that self-attention is non-trivial) and fi(X) =∑N\nj=1 Pijxj with P > i: = softmax (axiX). Hence f can be interpreted as a map of each xi to a point\nin the convex hull of x1, ..., xN . Since f is a map from RN×1 to RN×1, its Jacobian is\nJf = J11 . . . J1N... . . . ... JN1 . . . JNN ∈ RN×N , (7) where Jij =\n∂fi(X) ∂xj ∈ R. 
By taking partial derivatives we can show that Jij = aX>P (i) [ejiX + δijX] + PijI where eij ∈ RN×N is a binary matrix with zeros everywhere\nexcept the (i, j)th entry, δij is the Kronecker delta, and P (i) := diag(Pi:)− P>i: Pi:. So for i = j:\nJii = aX >P (i)eiiX + aX >P (i)X + Pii (8)\nLet us investigate the scalarX>P (i)X . We observe that it is in fact a variance of a discrete distribution. Specifically:\nX>P (i)X = ∑ k Pikx 2 k − ( ∑ k Pikxk) 2 = Var(X), (9)\nwhere X is a discrete distribution with support at the inputs {x1, . . . , xN} and probability mass function given by their softmax probabilities P(X = xj) = Pij . A consequence of this interpretation is that P (i) is positive semi-definite (PSD) since X>P (i)X = Var(X) ≥ 0, with equality if and only if the xj are all equal.\nWe use this observation to show that Jii is unbounded, and so ‖Jf‖p is unbounded, hence DP-MHA is not Lipschitz. Consider the case xi = 0. Then P>i: = softmax (XAxi) = 1 N 1, i.e. we have uniform attention regardless of x 6=i. The first term of Jii in Equation (8) disappears since eiiX = [0, . . . , xi, . . . , 0] = 0, and the last term becomes 1N I . Now consider the second term aX\n>P (i)X = aVar(Xl). Note X is uniformly distributed, since P(X = xj) = Pij = 1/N . Hence the second term is equal to a times the sample variance of x1, . . . , xN , which can be arbitrarily large.\nHigh-level intuition for proof. At xi = 0, fi(X) = 1N ∑ k xk, the mean of the inputs. The rate of change of fi is governed by how fast the softmax saturates when xi is perturbed, which is determined by how spread out the x 6=i are. The more spread out they are (the higher the sample variance), the greater the rate of saturation of the softmax, and the faster the rate of change of fi. Since the sample variance of x 6=i can be arbitrarily large, the rate of change of fi can also be arbitrarily large, i.e. the entries of the Jacobian (and hence its p-norm) can become arbitrarily large. In Appendix D, we show that adding bias terms to x>i W Q and x>j W K does not resolve the issue.\nThe implications of this result are the following. (1) There can be undesirable behaviour (e.g. training instabilities) for the Transformer when some inputs are close to zero. (2) Dot-product self-attention (and hence the standard Transformer) is not a suitable choice when we require a Lipschitz neural network, such as for formulating invertible residual networks (Behrmann et al., 2019). Therefore, to use self-attention and Transformers in such applications, a Lipschitz formulation of self-attention is required, together with an explicit (ideally tight) upper bound to its Lipschitz constant, to quantify how much the output can change with respect to changes in the input.\nOne method to make dot-product self-attention Lipschitz is by ensuring its inputs are bounded. Indeed, if the input space is compact, e.g. [0, 1]N×D, any continuously differentiable function is Lipschitz, including dot-product self-attention. However, as we further discuss in Section 6, such an approach has its own challenges, since it makes the Lipschitz constant depend on the input range. Instead, in the next section we formulate a version of self-attention that is provably Lipschitz on all of RN×D, allowing us to derive an upper bound that holds for any subset of RN×D." 
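To complement the proof above, the following self-contained numpy sketch (ours, not from the paper) evaluates the Jacobian of the D = 1 map f(X) = softmax(aXX^T)X from Equation (6) by central finite differences, with one input pinned to zero. The choices of N, a = 1, and the input scales are illustrative. As predicted, the diagonal Jacobian entry at the zero input tracks a·Var(x) + 1/N and grows without bound as the spread of the remaining inputs increases:

```python
import numpy as np

def f(X, a=1.0):
    # D = 1 dot-product self-attention map f(X) = softmax(a X X^T) X.
    logits = a * np.outer(X, X)
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)          # row-wise softmax
    return P @ X

def jacobian_fd(X, eps=1e-5):
    # Central finite-difference Jacobian of f at X.
    N = X.shape[0]
    J = np.zeros((N, N))
    for j in range(N):
        Xp, Xm = X.copy(), X.copy()
        Xp[j] += eps
        Xm[j] -= eps
        J[:, j] = (f(Xp) - f(Xm)) / (2 * eps)
    return J

rng = np.random.default_rng(0)
base = rng.standard_normal(16)
for scale in [1, 10, 100]:
    X = scale * base
    X[0] = 0.0                                 # the pathological input of the proof
    J = jacobian_fd(X)
    # Theory: J_00 = a * Var(x) + 1/N when x_0 = 0 (uniform attention weights).
    print(f"scale={scale:4d}  J_00={J[0, 0]:.4e}  a*Var(x)+1/N={np.var(X) + 1/16:.4e}")
```

Scaling the inputs tenfold multiplies J_00 by roughly a hundred, matching the sample-variance argument of the proof.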
}, { "heading": "3.2 L2 SELF-ATTENTION: A LIPSCHITZ FORMULATION OF SELF-ATTENTION", "text": "The pathology in dot-product self-attention arises because the softmax probabilities Pi: are constant with respect to x 6=i when xi = 0. This behaviour can be undesirable as we want Pij to vary according to xj , regardless of whether xi is zero or not. Hence we propose an alternative form of self-attention based on L2 distance:\nPij ∝ exp(Lij) := exp ( − ∥∥x>i WQ − x>j WK∥∥22 /√D/H) , (10)\nwith the normalisation constant ensuring that ∑ j Pij = 1. We will refer to it as L2 self-attention. It is reminiscent of the standard squared-exponential kernel, but with softmax normalisation that ensures that each row of the kernel matrix sums to 1. Normalisation is usually necessary to deal with inputs of varying length N (Wang et al., 2018), hence we keep the softmax for L2 self-attention. Similarly to dot-product self-attention, L2 self-attention can be computed efficiently with matrix operations; see Appendix E for details, with a comparison of wall-clock runtimes between different choices of attention.\nWe first state the mathematical formulation of L2 multihead self-attention (L2-MHA) before proving the main result — the upper bound of its Lipschitz constant with respect to ‖ · ‖p for p = 2,∞. The\nfull L2-MHA map F : RN×D → RN×D is defined as F (X) := [ f1(X)WV,1, . . . , fH(X)WV,H ] WO where fh(X) := PhXAh.\nIn the above, WV,h ∈ RD×D/H , WO ∈ RD×D, Ph is defined as in Equation (10) with WQ,h = WK,h ∈ RD×D/H , and Ah := WQ,hWQ,h > / √ D/H ∈ RD×D. There are two changes from the usual form of multihead self-attention:\n(1) We require WQ,h =WK,h for each head fh(X) to be Lipschitz. In Lemma F.1 of Appendix F we show that L2-MHA is not Lipschitz for arbitraryWQ,h, WK,h, and that tyingWQ,h =WK,h is sufficient for L2-MHA to be Lipschitz, with intuition for why tying is sufficient.\n(2) In each head of the self-attention fh(X), right multiplication by Ah has been included for the theorem below to hold (details are in the proof). In practice, there is little harm done by this extra linear transformation, since when the heads are combined together in F , each fh(X) is additionally transformed by WV,h, a free parameter.\nThe second main result of the paper is the following: Theorem 3.2. L2-MHA is Lipschitz, with the following bound on Lip∞(F ):\nLip∞(F ) ≤\n( 4φ−1(N − 1) + 1√\nD/H\n) max h ‖WQ,h‖∞‖WQ,h > ‖∞max h ‖WV,h > ‖∞ ‖WO > ‖∞\nand the following bound on Lip2(F ):\nLip2(F ) ≤ √ N√ D/H ( 4φ−1(N − 1) + 1 )(√∑ h ‖WQ,h‖22 ‖WV,h‖22 ) ‖WO‖2\nwhere φ(x) := x exp(x + 1) is an invertible univariate function on x > 0, and N is the input sequence length.\nSpecifically, φ−1(N − 1) = W0(Ne ) where W0 is the Lambert W -function, which grows sublogarithmically as O(logN − log logN) (Corless et al., 1996). Hence the above bounds can be simplified to O(logN) for p =∞ and O( √ N logN) for p = 2.\nProof. See Appendix F, which uses the key observation that X>P (i)X is a covariance matrix (c.f. Equation (9)) to bound ‖JF ‖p, the norm of the Jacobian of F . Appendix G shows how the argument can be modified to prove the analogous result for the case with masking in the self-attention.\nThese bounds are complemented by the concurrent work of Vuckovic et al. (2020), which provides a O( √ D logN) bound on Lip1(F ) using measure-theoretic tools." }, { "heading": "4 APPLICATION: INVERTIBLE SELF-ATTENTION", "text": "" }, { "heading": "4.1 INVERTIBLE RESIDUAL NETWORK", "text": "Consider the residual function g(x) := x+f(x). 
Behrmann et al. (2019) give the following sufficient condition for its invertibility: if f is a contraction with respect to some metric, i.e. if Lip(f) < 1, and the metric space on which f is defined is complete, then g is invertible. (A Euclidean space with a metric induced by a p-norm ‖·‖_p for p ∈ [1,∞] is always complete.) Specifically, the inverse g^{−1}(y) is the unique fixed point of the recursion x_{i+1} := y − f(x_i), since by the definition of the inverse we have y = g^{−1}(y) + f(g^{−1}(y)). Because f is a contraction, Banach’s Fixed Point Theorem guarantees that this fixed point exists and is unique for all y, and that the recursion converges for all initial values x_0 (often set to y in practice) exponentially fast. Hence the inverse can be computed to arbitrary accuracy (up to numerical precision in practice) by the above fixed-point iteration.\nNote that a composition of such invertible residual blocks is also invertible. Behrmann et al. (2019) use this observation to design invertible ResNets: they take f to be a CNN normalised by an upper bound on Lip(f) given by Corollary 2.1, making the resulting function contractive. For the 2-norm ‖·‖_2, a hyperparameter c < 1 is chosen and each linear map (convolution) W in the CNN is multiplied by c/‖W‖_2 if c < ‖W‖_2, where ‖W‖_2 is estimated by power iteration (cf. Appendix B). This multiplicative factor determines the scale of the Lipschitz constant of the normalised function.\n4.2 INVERTIBLE SELF-ATTENTION\nThe standard use case of self-attention is with a skip connection inside the Transformer. A Transformer block is composed of residual blocks of multihead self-attention (MHA) and fully-connected (FCN) layers (Figure 1). Hence similarly to invertible ResNets, we can normalise L2-MHA by the upper bounds given in Theorem 3.2 to obtain Contractive-L2-MHA f, with which we can obtain invertible self-attention g(x) = x + f(x).\nIn the next section, we investigate the properties of invertible self-attention and how it compares with the standard dot-product self-attention; we replace DP-MHA in the Transformer with Contractive-L2-MHA, hence replacing the residual self-attention module with invertible self-attention. We are not interested in the modified Transformer per se, but rather in comparing the properties of invertible self-attention to standard self-attention; we only use the Transformer as a testbed for this purpose, since self-attention is commonly used in a Transformer. Given the theoretical focus of the paper, we believe that a more challenging application of invertible self-attention, such as normalising flow-based modelling, would be more suitable as a separate paper focused on that particular application. In Appendix H, we show that Dropout in the residual branch is also contractive." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "5.1 ASYMPTOTIC TIGHTNESS OF THE UPPER BOUND ON Lip∞(F)\nA tight bound on the Lipschitz constant of self-attention is desirable for all listed applications in Section 1; it leads to tighter generalisation bounds, lighter constraints for provable robustness, and better expressiveness in residual flow models. Hence we investigate the tightness of our bound on the Lipschitz constant of L2-MHA. The Lipschitz constant is a supremum over the space of inputs X ∈ R^{N×D} (cf. Equation (2)) and approximating it requires solving an intractable optimisation problem. Hence it is infeasible to estimate accurately in general, especially when X is high-dimensional. 
However, we may compute a lower bound on the Lipschitz\nconstant by maximising the norm of the Jacobian ‖Jf (X)‖ with respect to X until convergence. This local optimum will form a lower bound by Theorem 2.1, and we can expect this lower bound to be fairly tight for the low-dimensional case, provided the optimisation is thorough.\nWe use this observation to provide empirical evidence for the asymptotic tightness of the upper bound on Lip∞(f) in Theorem 3.2. In Figure 2, we show the upper bound as well as the lower bound on Lip∞(f) obtained by optimising ‖Jf (X)‖∞ with respect to X for L2-MHA f with 50 different random initialisations of X , with H = D = 1 and N varying between 100 and 1000. See Appendix I for further details. Note that we use a log-scale for the x-axis, and recall that the upper bound is O(logN − log logN), dominated by the O(logN) term for large N . Hence the plot for the upper bound shows a linear trend. We also observe that the slope of the lower bound is very similar, providing empirical evidence that the O(logN − log logN) upper bound is asymptotically tight. There are at least two possible explanations for the gap between the upper and lower bounds. (1) The lower bound is only a local optimum — the true Lipschitz constant is a global optimum across inputs, which can be difficult to attain especially for high values of N . (2) The multiplicative constant of the upper bound may be loose. Assuming asymptotic tightness, it remains an open question whether the multiplicative constant can be tightened. We show the analogous plot for Lip2(F ) and discuss the results in Appendix K. Additionally in Appendix L, we show that optimising ‖Jf (X)‖∞ w.r.t. X for DP-MHA f causes the norm to diverge, providing empirical verification of Theorem 3.1, that DP-MHA is indeed not Lipschitz." }, { "heading": "5.2 NUMERICAL INVERTIBILITY OF MHA RESIDUAL MAP", "text": "Recall from Section 4.1 that g(x) = x + f(x) is invertible if f is contractive. Hence if f is Contractive-L2-MHA, g is necessarily invertible. However, technically we do not disprove the invertibility of DP-MHA, since the converse does not hold in general i.e. if f is DP-MHA, which we have shown is not Lipschitz hence not contractive, it may still be the case that g is invertible. To verify that DP-MHA (with the skip connection) is not invertible in practice, we compare the numerical invertibility of the residual map g(x) = x + cf(x) between the cases where f is L2-MHA and DP-MHA in Figure 3. For each, we take MHA with 8 heads and randomly initialised weights, and quantify the\nmaximum reconstruction error across a batch of 128 inputs whose outputs are inverted via the fixed-point iteration described in Section 4.1. We use N = 64, D = 64, and c ∈ {0.5, 0.7, 0.9} (see Appendix J for analogous results for a wider range of N and D and for DP-MHA with trained weights). To highlight the difference between the two types of self-attention, recall in the proof of Theorem 3.1 (showing that DP-MHA is not Lipschitz) that when one of the inputs xi is 0, some terms of the Jacobian grow with the sample variance of x 6=i. Hence we check numerical invertibility at a set of N inputs where xi = 0 and x 6=i are chosen uniformly at random. In Figure 3, we see that DP-MHA is not invertible whereas L2-MHA is invertible for sufficiently small c. This shows how not having the theoretical guarantee of f being contractive can cost us invertibility in practice. 
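To make the iteration concrete, here is a minimal sketch (ours, not the authors' code) that inverts g(X) = X + c·f(X) for a single-head L2 self-attention f with tied W^Q = W^K, with the pairwise logits computed via matrix operations as in Equation (16). The shapes, the scale c, and the iteration count are illustrative; for brevity the sketch omits the extra A_h factor of the full L2-MHA, and contractiveness of c·f for this particular random draw is assumed (reduce c if the iteration does not contract) rather than certified by the Theorem 3.2 bound.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 16, 8
WQ = rng.standard_normal((D, D)) / np.sqrt(D)   # tied: W^K = W^Q (Section 3.2)
WV = rng.standard_normal((D, D)) / np.sqrt(D)

def l2_attention(X):
    # Pairwise L2 logits via matrix ops, as in Equation (16) with H = 1:
    # logits_ij = -(||q_i||^2 - 2 q_i . q_j + ||q_j||^2) / sqrt(D).
    Q = X @ WQ
    sq = (Q ** 2).sum(axis=1, keepdims=True)
    logits = -(sq - 2 * Q @ Q.T + sq.T) / np.sqrt(D)
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)           # row-wise softmax
    return P @ X @ WV

c = 0.5                                          # residual scale, as in Figure 3
X_true = rng.standard_normal((N, D))
Y = X_true + c * l2_attention(X_true)            # forward pass g(X)

X = Y.copy()                                     # fixed-point iteration x <- y - c f(x)
for _ in range(100):
    X = Y - c * l2_attention(X)
print("max reconstruction error:", np.abs(X - X_true).max())
```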
We note that the figure shows local invertibility at the sampled inputs, as opposed to global invertibility across the whole input space, yet this clearly highlights the difference between the two choices of self-attention. Experiments with the globally invertible self-attention obtained by normalising with the Lipschitz upper bound are provided in the next section." }, { "heading": "5.3 EXPRESSIVENESS OF L2-MHA AND INVERTIBLE SELF-ATTENTION", "text": "A natural question to ask is: how does the expressiveness of L2-MHA and Contractive-L2-MHA (that leads to invertible self-attention with the skip connection) compare with the original DP-MHA? We expect that the Lipschitz constraint will limit the expressiveness of the Transformer, and would like to find out by how much. We investigate this by comparing the performance of the original Transformer and the Transformer with invertible self-attention (c.f. Figure 1) at character-level language modelling on the Penn Treebank dataset (Marcus et al., 1993). We compare the test negative log-likelihood (NLL) of a baseline LSTM, the original Transformer (DP-MHA), and a series of models between the original Transformer and the Transformer with invertible self-attention (Contractive-L2-MHA), making one change at a time and tuning the hyperparameters on a validation set. For Contractive-L2-MHA, we normalise L2-MHA by the bound on Lip∞(F ) as it is tighter than the bound on Lip2(F ). See Appendix I for experimental details.\nThe results are shown in Figure 4. The first plot shows the best performing LSTM reaching a test NLL of around 1.0, and the second plot shows the best performing Transformer reaching a slightly improved performance for 3–5 layers of Transformer blocks. We observe instabilities in training for a higher number of layers, requiring careful tuning of the learning rate schedule for stability at the cost of performance, a commonly observed phenomenon in the literature of deep Transformer architectures (Bapna et al., 2018; Parisotto et al., 2020). The third plot shows results for the Transformer with DP-MHA replaced with L2-MHA but without tying WQ and WK , and we observe a very similar test performance. The fourth plot shows the change when we further tie the query and key weights (making WQ =WK); we see that there is a small degradation in performance. Here the number of trainable parameters has been reduced, but in Appendix M we show that matching parameter count does not help performance, suggesting that the reduction in performance when tying queries and keys is not solely due to having fewer parameters. We note that performance saturates at around 5 layers for each Transformer model so far. On the rightmost plot we show results when further dividing self-attention in each block by the upper bound on Lip∞(F ), to obtain invertible self-attention. This does give reduced performance for the same number of layers, but we can attain similar performance with more layers, no longer saturating at 5 layers.\nThus we conclude the following. (1) Replacing the dot-product with the L2 distance incurs hardly any loss in expressiveness. (2) Tying the query and key weights to obtain Lipschitz self-attention incurs a small loss in expressiveness. (3) Dividing by the upper bound on Lip∞(F ) to obtain invertible self-attention incurs a noticeable loss in expressiveness, but also has a stabilising effect on the optimisation of the Transformer, thus allowing one to compensate for the apparent loss in expressiveness by increasing the number of layers. 
We show further experimental results that compare the training stability of DP-MHA and (Contractive)-L2-MHA in Appendix N." }, { "heading": "6 CONCLUSION AND DISCUSSION", "text": "We have shown that the widely used dot-product self-attention is not Lipschitz, and that the proposed L2 self-attention is Lipschitz, by deriving an O(logN − log logN) Lipschitz bound for p = ∞ and an O(√N(logN − log logN)) bound for p = 2, where N is the input sequence length. We also provided empirical evidence of the asymptotic tightness of the bound for p = ∞. Finally we demonstrated that Lipschitz-constrained self-attention can be used to formulate invertible self-attention, which we experimentally evaluated on a character-level language modelling task.\nOur approach to Lipschitz self-attention has been to replace the dot-product kernel with an L2 kernel. An alternative would be to constrain the inputs of self-attention to be bounded; if the input space is compact, e.g. [0, 1]^{N×D}, any continuously differentiable function is Lipschitz, including dot-product self-attention. However, while being simple to implement, this solution has its own difficulties. First, it makes the Lipschitz constant depend on the range of the input, and thus obtaining a tight bound would require non-trivial mathematical work. We stress that a guarantee that the function is Lipschitz does not tell us anything about its Lipschitz constant; without a tight Lipschitz bound, the true Lipschitz constant can be very large, at which point it is unhelpful that the function is Lipschitz. Second, since self-attention is typically applied at multiple layers within a model (e.g. Transformer), the input to each self-attention will live in a different compact set that depends on the parameters of the previous layers, complicating the analysis for subsequent layers. A solution is to constrain the inputs of each layer to be in the same compact set, e.g. by passing them through a sigmoid non-linearity. This however can have undesirable side effects such as vanishing gradients when the sigmoids are saturated (Hochreiter, 1998). Despite these difficulties, this could be a worthwhile alternative route for obtaining Lipschitz self-attention to explore in the future.\nHaving a provably Lipschitz self-attention module at our disposal makes it possible to use Transformer-based architectures in applications requiring Lipschitz constraints, while enjoying theoretical guarantees. A natural application of Lipschitz self-attention is for residual flows (Behrmann et al., 2019), and for parameterising Neural ODEs (Chen et al., 2018) where a Lipschitz vector field guarantees the existence of a unique solution to the ODE for all times. These models can be used for density estimation and generative modelling of sets. Another interesting direction for future work would be to analyse different variants of self-attention based on kernels other than dot-product and L2, as Tsai et al. (2019) do from an experimental perspective, for which we believe the mathematical tools developed in this paper may aid the analysis." }, { "heading": "A CHAIN RULE FOR VECTOR VALUED FUNCTIONS", "text": "In this section, we list some useful identities for deriving the Jacobians of the expressions in the paper.\nSuppose λ is a scalar, u, v, x are column vectors, and f(u) is a vector valued function. We use the standard convention that for a ∈ R^m, b ∈ R^n, we have ∂a/∂b ∈ R^{m×n}. 
Then we have the following chain rule identities:\n• ∂∂x [λu] = λ ∂u ∂x + u ∂λ ∂x\n• ∂f(u)∂x = ∂f(u) ∂u ∂u ∂x\n• ∂∂x [u >v] = u> ∂v∂x + v > ∂u ∂x\nNote ∂λ∂x is a row vector, so u ∂λ ∂x is a matrix." }, { "heading": "B POWER ITERATION", "text": "Although ‖W‖∞ can be computed efficiently in O(nm) time for W ∈ Rm×n, naïvely computing ‖W‖2 = σmax(W ) := √ λmax(W>W ) requires O(n3) operations. (By λmax(A) we denote the greatest eigenvalue of a symmetric matrix A.) We can however obtain an underestimate σ̃(W ) via power iteration:\nbk+1 = W>Wbk ‖W>Wbk‖2 , σ̃k(W ) =\n√ b>kW >Wbk\nb>k bk , (11)\nwith each iteration taking O(n2) time. Then using K n iterations gives us an underestimate σ̃K in O(Kn2) time. Since this is an underestimate, the resulting approximation to the Lipschitz constant of the linear map will not be an upper bound. However the number of power iterations is usually chosen so that σ̃ is accurate enough — K = 5 is shown to be sufficient in the context of fully connected networks or convolutions considered by Behrmann et al. (2019).\nThe iteration will converge if W>W has an eigenvalue that is strictly greater in magnitude than its other eigenvalues, and the starting vector b0 has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue. This happens with probability 1 if b0 is chosen at random, and the convergence is geometric with ratio |λ2/λmax| where λ2 is the eigenvalue with second largest magnitude (Mises & Pollaczek-Geiringer, 1929)." }, { "heading": "C PROOF OF THEOREM 3.1 FOR GENERAL D", "text": "Theorem 3.1. DP-MHA is not Lipschitz for any vector p-norm ‖ · ‖p with p ∈ [1,∞].\nProof. The mapping f can be written as f(X) = PX = softmax ( XA>X> ) X = f1(X) >\n... fN (X) > ∈ RN×D, (12) where A = WKWQ > / √ D/H ∈ RD×D and fi(X) = ∑N j=1 Pijxj with P > i: = softmax (XAxi). Hence f can be interpreted as a map of each xi to a point in the convex hull of x1, ..., xN . Since f is a map from RN×D to RN×D, its Jacobian is\nJf = J11 . . . J1N... . . . ... JN1 . . . JNN ∈ RND×ND, (13) where Jij =\n∂fi(X) ∂xj ∈ RD×D. By taking partial derivatives we can show that Jij = X>P (i) [ ejiXA > +XAδij ] + PijI where eij ∈ RN×N is a binary matrix with zeros everywhere\nexcept the (i, j)th entry, δij is the Kronecker delta, and P (i) := diag(Pi:)− P>i: Pi:. So for i = j:\nJii = X >P (i)eiiXA > +X>P (i)XA+ PiiI = Pii (xi − ∑ k Pikxk)x > i A > +X>P (i)XA+ PiiI. (14)\nFor the last equality, note eiiX has all rows equal to zero except for the ith row given by x>i . We can then verify that X>P (i)eiiX simplifies to Pii(xi − ∑ k Pikxk)x > i .\nFor vector p-norms, ‖Jf‖p is bounded if and only if its entries are bounded, by definition of the operator norm. The entries ofX>P (i)XA are bounded for arbitraryA only if the entries ofX>P (i)X are bounded. So let us investigate the entries of this D ×D matrix. Writing out each term of the matrix, we observe that it is in fact a covariance matrix of a discrete distribution. Specifically:\n[X>P (i)X]lm = ∑ k Pikxklxkm − ( ∑ k Pikxkl) ( ∑ k Pikxkm) = Cov(Xl,Xm), (15)\nwhere X is a discrete distribution with support at the inputs {x1, . . . , xN} and probability mass function given by their softmax probabilities P(X = xj) = Pij . A consequence of this interpretation is that P (i) is positive semi-definite (PSD) since for D = 1, Equation (15) becomes X>P (i)X = Var(X) ≥ 0, with equality if and only if the xj are all equal. We use this observation to show that the terms of Jii are unbounded, and so DP-MHA is not Lipschitz. 
Consider the case xi = 0. Then P>i: = softmax (XAxi) = 1 N 1, i.e. we have uniform attention regardless of x 6=i. The first term of Jii in Equation (14) disappears since xi = 0, and the last term becomes 1N I . For the second term, the entries [X\n>P (i)X]ll = Var(Xl) are unbounded since the latter is equal to the sample variance of x1l, . . . , xNl, which can be arbitrarily large." }, { "heading": "D BIAS TERM IN DP SELF-ATTENTION", "text": "A natural question to ask is whether we can add bias terms bQ to x>i W Q and bK to x>j W K to resolve the issue of attention weights Pi: becoming uniform when xi = 0. The answer is no in general. It can again be shown that Jii is unbounded when xi is chosen such that x>i W\nQ + bQ = 0 (such a choice is possible assuming WQ is full rank, a dense set in RD×D/H ). Then P>i: = 1N 1 again, and the diagonal entries of X>P (i)X are unbounded." }, { "heading": "E EFFICIENT COMPUTATION OF L2 SELF-ATTENTION", "text": "Dot-product self-attention only requires a few matrix multiplications to compute the logits (i.e. the inputs to the softmax) between all pairs of inputs, without having to loop over pairs, hence it can be computed efficiently. Similarly, we can show that L2 self-attention can also be computed in an efficient manner. Using the identity ‖a− b‖22 = ‖a‖22 − 2a>b+ ‖b‖22 we can compute the logits of L2 attention between all pairs via matrix multiplications and computation of row-wise L2 norms, with negligible overhead compared to dot-product self-attention. Specifically, for L2 self-attention we can show that\nP = softmax ( −‖XW\nQ‖2row1> − 2XWQ(XWK)> + 1‖XWK‖2>row√ D/H\n) , (16)\nwhere ‖A‖2row applies the squared L2 norm to each row of A, so if A ∈ Rm×n then ‖A‖2row ∈ Rm. In Table 1 we show the wall-clock training times for the Transformer models with different attention functions and a varying number of layers. It is evident that the differences between the models are rather small." }, { "heading": "F PROOF OF THEOREM 3.2", "text": "Recall the formulation of L2-MHA:\nF : RN×D → RN×D F (X) = [ f1(X)WV,1, . . . , fH(X)WV,H ] WO\nfh(X) = PhXAh\nPhij ∝ exp(Lij) := exp\n( − ‖x>i WQ,h − x>j WK,h‖22√\nD/H\n) , ∑ j Phij = 1\nwhere we have that WQ,h,WK,h,WV,h ∈ RD×D/H , WO ∈ RD×D, Ph ∈ RN×N and Ah := WQ,hWQ,h > / √ D/H ∈ RD×D, and the softmax is applied to each row of the input matrix. Recall Equation (16):\nPh = softmax ( −‖XW Q,h‖2row1> − 2XWQ,h(XWK,h)> + 1‖XWK,h‖2 >\nrow√ D/H\n) .\nF.1 L2 SELF-ATTENTION IS not LIPSCHITZ FOR GENERAL WQ,WK\nLet us first look at the case of H = 1 and suppress the index h to reduce clutter. Consider the map f̃(X) := PX , so f(X) = f̃(X)A. We need f̃ to be Lipschitz for f and hence F to be Lipschitz. Note that P is defined as:\nPij ∝ exp(Lij) := exp ( − ‖x>i WQ − x>j WK‖22√\nD/H ) and the normalisation constant satisfies ∑ j Pij = 1, for P ∈ RN×N , X ∈ RN×D.\nFor L2 self-attention, we may take partial derivatives and use the chain rule to show that the Jacobian of f̃ is:\nJf̃ = J̃11 . . . J̃1N... . . . ... J̃N1 . . . J̃NN ∈ RND×ND (17) with\nJ̃ij = X >P (i) ∂Li: ∂xj + PijI ∈ RD×D (18)\nwhere ∂Li: ∂xj = 2√ D/H [( XWK − 1x>i WQ ) WQ > δij + ( ejiXW Q − ejjXWK ) WK > ]\n(19)\nand\nP (i) := diag(Pi:)− P>i: Pi: = Pi1(1− Pi1) −Pi1Pi2 . . . −Pi1PiN −Pi2Pi1 Pi2(1− Pi2) . . . −Pi2PiN\n... ...\n. . . ...\n−PiNPi1 −PiNPi2 . . . PiN (1− PiN )\n ,\nPij = exp\n( −‖x>i WQ − x>j WK‖22 )∑ k exp ( −‖x>i WQ − x>kWK‖22\n) . Recall that eji ∈ RN×N is a binary matrix with zeros everywhere except the (j, i)th entry. 
Hence ejiX has all rows equal to zero except for the jth row given by x>i . We can then verify:\nX>P (i)ejiX = Pij(xj − ∑ k Pikxk)x > i . (20)\nAlso note P (i) is symmetric, and each row/colum sums to 0, i.e. P (i)1 = 1>P (i) = 0. Hence we may simplify the Jacobian terms as follows: J̃ii = 2√ D/H [ X>P (i)(XWK − 1xTi WQ)WQ > +X>P (i)eiiX(W Q −WK)WK > ] + PiiI\n= 2√ D/H\n[ X>P (i)(XWK − 1xTi WQ)WQ > + Pii(xi − ∑ k Pikxk)x > i (W Q −WK)WK > ] + PiiI\n= 2√ D/H\n[ X>P (i)XWKWQ > + Pii(xi −\n∑ k Pikxk)x > i (W Q −WK)WK >\n] + PiiI,\n(21)\nand for i 6= j:\nJ̃ij = 2√ D/H X>P (i)(eijXW Q − ejjXWK)WK > + PijI\n= 2√ D/H Pij(xj − ∑ k Pikxk)(x > i W Q − x>j WK)WK > + PijI. (22)\nWe are now ready to show that f̃ is not Lipschitz for general WQ,WK :\nLemma F.1. If WK ∈ RD×D/H is full rank (i.e. full column rank), and WK 6=WQ, then Jij has terms that are unbounded for i 6= j, hence f̃ is not Lipschitz.\nProof. Let us investigate the expression K̃ij := PijWK > (xj − ∑ k Pikxk)(x > i W\nQ − x>j WK) ∈ RDH×DH for i 6= j, which is related to J̃ij as follows by Equation (22):\nWK > J̃ij = ( 2√ D/H K̃ij + PijI ) WK > .\nIt suffices to show that K̃ij is unbounded to show that J̃ij is unbounded, since WK is full rank and Pij ∈ [0, 1].\nLet y>j = x > i W Q − x>j WK . Then we have:\nyj − ∑ k Pikyk =W Q>xi −WK > xj − ∑ k Pik(W Q>xi −WK > xk)\n=WQ > xi −WK > xj − (WQ > xi − ∑ k PikW K>xk) = −WK > (xj −\n∑ k Pikxk).\nHence K̃ij = −Pij(yj − ∑ k Pikyk)y > j . Note yi can take an arbitrary value in RD/H , since WK 6=WQ and WK is full-rank. For all j 6= i, let us choose xj such that yj = −yi. This is possible for any value of yi since WK is full-rank. Note yj = −yi and not yi. We then have that ‖yj‖22 is equal for all j, hence\nPij := exp(−‖yj‖22)∑ k exp(−‖yk‖22) = 1N for all j. Then for i 6= j, K̃ij simplifies to\nK̃ij = − 1\nN\n( −yi − 1\nN (N − 2)(−yi)\n) (−yi)> = −\n2N − 2 N2 yiy > i\nwhose entries are unbounded since yi can be any vector in RD/H (note we assume N ≥ 2 for self-attention to be well-defined, hence 2N − 2 6= 0).\nThe intuition for this result is as follows: a reason for DP-MHA not being Lipschitz is that for xi = 0„ the attention weights Pij become uniform regardless of the values of xj for j 6= i. A similar issue arises for L2-MHA with WQ 6= WK and full-rank WK , as shown above: given any xi, we can choose xj such that the Pij become uniform.\nF.2 L2 SELF-ATTENTION IS LIPSCHITZ FOR WQ = WK\nHence we impose the restriction that WK =WQ. With this assumption we have Pij ∝ exp ( −‖(xi − xj)> √ A‖22 ) (23)\nwhere A =WQWQ > / √ D/H ∈ RD×D and √ A is chosen such that A = √ A √ A >\n, in particular√ A :=WQ/(D/H) 1 4 . The terms in the Jacobian of f̃ simplify to:\nJ̃ii = 2X >P (i)XA+ PiiI (note P (i)1 = 0), (24) J̃ij = 2Pij(xj − ∑ k Pikxk)(xi − xj)>A+ PijI for i 6= j. (25)\nLet the Jacobian of f(X) be:\nJf = J11 . . . J1N... . . . ... JN1 . . . JNN ∈ RND×ND. (26) Since f(X) = f̃(X)A, and by the chain rule ∂∂xj [f̃i(X)A] = A > ∂f̃i(X) ∂xj = A∂f̃i(X)∂xj (by symmetry of A), we have that Jij = AJ̃ij . Hence\nJii = 2AX >P (i)XA+ PiiA (note P (i)1 = 0), (27) Jij = 2PijA(xj − ∑ k Pikxk)(xi − xj)>A+ PijA for i 6= j. (28)\nNoting Lipp(f) = supX ‖Jf (X)‖p, we would like to upper bound ‖Jf‖p.\nF.2.1 UPPER BOUND ON Lip∞(F ) FOR L2-MHA\nConsider the choice p = ∞, where ‖Jf‖∞ is the maximum absolute row sum of Jf . A key observation is that if we can bound the∞-norm of the Jacobian of fi, a single output of f (i.e. 
a single block row ‖[Ji1, ..., JiN ]‖∞ of Jf ), then this is also a bound on ‖Jf‖∞ due to permutation equivariance of self-attention; all block rows have the same maximal ‖ · ‖∞ when each is optimised over the input X . Using this, we can prove that ‖Jf‖∞ admits an upper bound that is O(logN − log logN). Below we state and prove lemmas that lead to the proof of this upper bound.\nFirst we analyse the term √ A > X>P (i)X √ A, that appears in the first term of Jii. Note that for\nY := X √ A, so that the rows of Y are y>i := x > i √ A, we have\n√ A > X>P (i)X √ A = Y >P (i)Y = Cov(Y) (29)\nwhere P(Y = yj) = Pij = exp(−‖yj − yi‖22)/ ∑ k exp(−‖yk − yi‖22). The last equality uses the observation in Equation (9).\nThe central inequality used throughout the proof of the main theorem is the following:\nLemma F.2. Tr(Cov(Y)) = ∑ j Pij‖yj− ∑ k Pikyk‖22 ≤ ∑ j Pij‖yj−yi‖22 ≤ φ−1(N −1) where φ(c) = c exp(c+ 1) is a one-dimensional invertible function on R≥0.\nProof. The first equality holds since Tr(Cov(Y)) = ∑ j Cov(Y)jj = ∑ j Var(Yj) = ∑ j E[(Yj − E[Yj ])2]. The next inequality holds since Var(Yj) = Var(Yj) = E[Y 2 j ]− E[Yj ]2 ≤ E[Y 2\nj ] where Y = Y− yi. The final inequality can be proved as follows. We would like to bound∑\nj\nPij‖yj − yi‖22 = ∑ j ‖yj − yi‖22 exp(−‖yj − yi‖22)∑\nk exp(−‖yk − yi‖22) =\n∑ j z\n2 j exp(−z2j )∑ k exp(−z2k)\n(30)\nwhere zj := ‖yj − yi‖2 (hence zi = 0). Define:\ng(z) :=\n∑ j z\n2 j exp(−z2j )∑ k exp(−z2k) =\n∑ j 6=i z 2 j exp(−z2j )\n1 + ∑ k 6=i exp(−z2k) . (31)\nFirst note that as zj →∞, exp(−z2j )→ 0 exponentially fast, causing the product z2j exp(−z2j )→ 0. Hence we expect the above quantity to be bounded and attain its maximum.\nLet h(zj) := exp(−z2j ) for notational conciseness, and note h(zj) > 0. By taking partial derivatives with the chain rule, we have that for j 6= i\n∂g(z)\n∂zj =\n2yjh(zj) ( ∑ k h(zk)) 2\n[ (1− z2j ) ∑ k h(zk) + ∑ k h(zk)z 2 k ] . (32)\nHence the derivative is 0 if and only if zj = 0 or (1− z2j ) ∑ k h(zk) + ∑ k h(zk)z 2 k = 0, the latter\nbeing equivalent to z2j = 1 + ∑ k h(zk)z 2 k∑\nk h(zk) = 1 + g(z). Hence at the maximum, the non-zero values\namong {zj}Nj=1 must be equal to one another. It is clear now that the maximum value c is attained when z2j = 1 + c for j 6= i (and recall zi = 0). So h(zj) = exp(−1 − c) for j 6= i. Substituting this into g(z), and rearranging, we obtain c exp(c + 1) = N − 1. Note φ(x) := x exp(x + 1) is increasing for x > 0 hence c = φ−1(N − 1).\nNote φ(logN) = (logN) exp(logN+1) ≥ N logN ≥ N−1 forN ≥ 3. Since φ is increasing, we have φ−1(N−1) ≤ log(N) forN ≥ 3. In fact, it is known that φ−1(N−1) = O(logN−log logN) (Corless et al., 1996).\nNote theA term in f(X) = f̃(X)A allows us to use the above inequality, since Y >P (i)Y = Cov(Y) now appears in the terms of Jf :\nJii = 2 √ A[Y >P (i)Y ] √ A > + PiiA, (33)\nJij , = 2 √ APij(yj − ∑ k Pikyk)(yi − yj)> √ A > + PijA for i 6= j. (34)\nUsing the inequalities ‖BC‖ ≤ ‖B‖‖C‖, ‖B+C‖ ≤ ‖B‖+‖C‖ and ‖[A1, . . . , AN ]‖ ≤ ∑ i ‖Ai‖, we have: ‖[Ji1, . . . , JiN ]‖∞ ≤‖Jii‖∞ +\n∑ j 6=i ‖Jij‖∞\n≤2‖ √ A‖∞‖Y >P (i)Y ‖∞‖ √ A > ‖∞ + Pii‖A‖∞\n+ 2 ∑ j 6=i ‖ √ A‖∞‖Pij(yj − ∑ k Pikyk)(yi − yj)>‖∞‖ √ A > ‖∞ + Pij‖A‖∞\n=2‖ √ A‖∞‖ √ A > ‖∞ ( ‖Y >P (i)Y ‖∞ + ∑ j 6=i ‖Pij(yj − ∑ k Pikyk)(yi − yj)>‖∞ ) + ‖A‖∞\n=2 ‖WQ‖∞‖WQ >‖∞√ D/H\n( ‖Y >P (i)Y ‖∞ + ∑ j ‖Pij(yj − ∑ k Pikyk)(yi − yj)>‖∞ ) + ‖WQWQ>‖∞√ D/H .\nFor the first equality, note that ∑ j Pij = 1. For the second equality, note that the summand for j = i is 0 because the term yi − yj = 0. 
Each of the terms in the brackets are bounded by the following lemmas: Lemma F.3. ‖Y >P (i)Y ‖∞ ≤ φ−1(N − 1) √ D/H (φ defined as in Lemma F.2).\nProof. Recall that Y >P (i)Y = Cov(Y). Let σ(Ym) denote the standard deviation of Ym. Then [Cov(Y)]lm ≤ σ(Yl)σ(Ym). Hence\n‖Cov(Y)‖∞ = max l ∑ m |[Cov(Y)]lm| ≤ max l σ(Yl) ∑ m σ(Ym)\n≤ √ D\nH ∑ m σ2(Ym) = √ D H Tr(Cov(Y))\n≤ √ D\nH φ−1(N − 1),\nsince ∑ m σ(Ym) ≤ √ D H √∑ m σ\n2(Ym) (by e.g. using the Cauchy–Schwartz inequality on [σ(Y1), . . . , σ(YD/H)] and 1) and maxl σ(Yl) ≤ √∑ m σ\n2(Ym), and the last inequality is from Lemma F.2.\nLemma F.4. ∑ j ‖Pij(yj − ∑ k Pikyk)(yi − yj)>‖∞ ≤ φ−1(N − 1) √ D/H .\nProof. Note ‖ab>‖∞ = ‖a‖∞‖b‖1 for real vectors a, b. Hence∑ j ‖Pij(yj − ∑ k Pikyk)(yi − yj)>‖∞ = ∑ j Pij‖yj − ∑ k Pikyk‖∞‖yi − yj‖1\n= a>b ≤ ‖a‖2‖b‖2, where aj = √ Pij‖yj − ∑ k Pikyk‖∞, bj = √ Pij‖yi − yj‖1.\nNote aj ≤ cj := √ Pij‖yj − ∑ k Pikyk‖2 since ‖x‖∞ ≤ ‖x‖2 for vector x. Hence ‖a‖2 ≤ ‖c‖2.\nAlso bj ≤ √ D H dj := √ D H √ Pij‖yi − yj‖1 since ‖x‖1 ≤ √ D H ‖x‖2 (e.g. by the Cauchy–Schwartz\ninequality on [|x1|, . . . , |xD/H |] and 1) for x ∈ RD/H . Hence ‖b‖2 ≤ √ D H ‖d‖2.\nNote ‖c‖22 = ∑ j Pij‖yj− ∑ k Pikyk‖22 = Tr(Cov(Y)) ≤ φ−1(N−1) from Lemma F.2, and ‖d‖22 =∑\nj Pij‖yi − yj‖22 ≤ φ−1(N − 1) also from Lemma F.2. Hence ‖a‖2‖b‖2 ≤ √\nD H ‖c‖2‖d‖2 ≤√\nD Hφ −1(N − 1).\nPutting the above lemmas altogether, with the observation supX ‖Jf (X)‖∞ = supX ‖[Ji1(X), . . . , JiN (X)]‖∞ by permutation invariance of ‖Jf‖∞ (since f is permutation equivariant and ‖ · ‖∞ is the maximum absolute row sum), we have\n‖Jf‖∞ ≤ 4‖WQ‖∞‖WQ > ‖∞φ−1(N − 1) + ‖WQWQ>‖∞√ D/H\n≤ ‖WQ‖∞‖WQ > ‖∞ ( 4φ−1(N − 1) + 1√\nD/H\n) (35)\n≤ ‖WQ‖∞‖WQ > ‖∞ ( 4 logN +\n1√ D/H\n) ,\nwhere the last inequality holds for N ≥ 3. The full multihead attention map that combines the heads fh(X) is:\nF : X 7→ [ f1(X)WV,1, . . . fH(X)WV,H ] WO = g(X)WVWO\nwhere g : X 7→ [f1(X), . . . , fH(X)], WO ∈ RD×D and\nWV = W V,1 . . . 0 ... . . . ...\n0 . . . WV,H ∈ RDH×D. Note the Jacobian Jg is a block matrix whose rows are Jfh , hence ‖Jg‖∞ = maxh ‖Jfh‖∞, and similarly ‖WV >‖∞ = maxh ‖WV,h >‖∞. Hence we have\nLip∞(F ) ≤ max h ‖Jfh‖∞max h ‖WV,h\n> ‖∞‖WO > ‖∞.\nCombining this with Inequality (35), we have:\nLip∞(F ) ≤\n( 4φ−1(N − 1) + 1√\nD/H\n) max h ‖WQ,h‖∞‖WQ,h > ‖∞max h ‖WV,h > ‖∞ ‖WO > ‖∞.\nF.2.2 UPPER BOUND ON Lip2(F ) FOR L2-MHA\nFor p = 2, we use the following lemma: Lemma F.5. Let A be a block matrix with block rows A1, . . . , AN . Then ‖A‖2 ≤ √∑\ni ‖Ai‖22, and equality holds if and only if the first right singular vectors of the Ai align.\nProof.\n‖A‖22 = ∥∥∥∥∥∥∥ A1... AN ∥∥∥∥∥∥∥ 2\n2\n= sup ‖x‖2=1 ∥∥∥∥∥∥∥ A1... AN x ∥∥∥∥∥∥∥ 2\n2\n= sup ‖x‖2=1 ∑ i ‖Aix‖22 ≤ ∑ i sup ‖x‖2=1 ‖Aix‖22 = ∑ i ‖Ai‖22.\nNote that equality holds if and only if the first right singular vectors of the Ai align.\nHence a bound on the spectral norm of each block row of Jf can give us an O( √ N) bound on ‖Jf‖2, which may be loose, and it remains an open question as to whether this bound can be tightened.\nTo bound the ‖ · ‖2 norm of each row of Jf , we use the following lemmas: Lemma F.6. ‖Y >P (i)Y ‖2 ≤ φ−1(N − 1)\nProof. ‖Y >P (i)Y ‖2 = ‖Cov(Y)‖2 = λmax(Cov(Y)) ≤ Tr(Cov(Y)) ≤ φ−1(N − 1), where the first equality holds by symmetry of Cov(Y) and the next holds by Cov(Y) being positive semidefinite, so all its eigenvalues are non-negative, and hence the maximal eigenvalue is bounded by the sum of the eigenvalues, equal to its trace. The final inequality is from Lemma F.2.\nLemma F.7. 
∑ j ‖Pij(yj − ∑ k Pikyk)(yi − yj)>‖2 ≤ φ−1(N − 1)\nProof. Directly use Cauchy–Schwartz on c and d in the proof of Lemma F.4.\nAgain using the inequalities ‖BC‖ ≤ ‖B‖‖C‖, ‖B + C‖ ≤ ‖B‖ + ‖C‖ and ‖[A1, . . . , AN ]‖ ≤∑ i ‖Ai‖, with the additional equality ‖B>‖2 = ‖B‖2, we have the bound:\n‖[Ji1, . . . , JiN ]‖2\n≤ 2‖W Q‖2‖WQ >‖2√ D/H\n( ‖Y >P (i)Y ‖2 + ∑ j ‖Pij(yj − ∑ k Pikyk)(yi − yj)>‖2 ) + ‖WQWQ>‖2√ D/H\n≤ 4φ−1(N − 1)‖W Q‖22√ D/H + ‖WQWQ>‖2√ D/H ≤ ‖W Q‖22√ D/H ( 4φ−1(N − 1) + 1 ) .\nUsing Lemma F.5, we have that\n‖Jf‖2 ≤ √ N‖WQ‖22√ D/H\n( 4φ−1(N − 1) + 1 ) (36)\n≤ √ N‖WQ‖22√ D/H (4 logN + 1).\nTo obtain the final result for the full multihead self-attention F , we need a final lemma: Lemma F.8. Let A be a block matrix with block columns A1, . . . , AN . Then ‖A‖2 ≤ √∑ i ‖Ai‖22.\nProof.\n‖A‖2 = ‖[A1, . . . , AN ]‖2 = sup∑ i ‖xi‖22=1 ∥∥∥∥∥∥∥[A1, . . . , AN ] x1... xN ∥∥∥∥∥∥∥ 2\n2\n= sup∑ i ‖xi‖22=1 ‖ ∑ i Aixi‖2\n≤ sup∑ i ‖xi‖22=1 ∑ i ‖Aixi‖2 = sup ‖ei‖2=1, ∑ i λ 2 i=1 ∑ i λi‖Aiei‖2 = sup∑ i λ 2 i=1 ∑ i λi‖Ai‖2\n≤ √∑\ni\n‖Ai‖22,\nwhere we are using the substitution xi = λiei, and the last inequality holds by e.g. Cauchy–Schwartz inequality on [λ1, . . . , λN ] and [‖A1‖2, . . . , ‖AN‖2].\nRecall that F : X 7→ [ f1(X)WV,1, . . . , fH(X)WV,H ] WO. Since ‖fh(X)WV,h‖2 ≤ ‖Jfh‖2‖WV,h‖2, by Lemma F.8 we have that∥∥[f1(X)WV,1, . . . , fH(X)WV,H ]∥∥ 2 ≤ √∑\nh\n‖Jfh‖22‖WV,h‖22\nand hence\nLip2(F ) ≤ √∑ h ‖Jfh‖22‖WV,h‖22 ‖WO‖2. (37) Combining this with Inequality (36), we have:\nLip2(F ) ≤ √ N√ D/H ( 4φ−1(N − 1) + 1 )(√∑ h ‖WQ,h‖22 ‖WV,h‖22 ) ‖WO‖2." }, { "heading": "G THE CASE WITH MASKING", "text": "Since self-attention is often used with masking, a natural question is how masking affects the derived bounds. In self-attention (for any choice of attention function), masking is implemented as follows: given a set of mask indicesM⊂ {1, . . . , N}× {1, . . . , N}, the logits (i.e. the inputs to the softmax) are set to −∞ at the mask indices. That is,\nLij = { L̃ij if (i, j) /∈M −∞ if (i, j) ∈M\nwhere L̃ij is the original logit (e.g. for L2 self-attention, L̃ij = −(xi − xj)>A(xi − xj)). Masking implies fi(X) is not a function of xj for (i, j) ∈M, hence Jij = 0 for (i, j) ∈M. Thus fi(X) is equal to the ith output for self-attention with inputs restricted to {xj : (i, j) /∈ M}, the unmasked inputs with respect to the ith output. Hence Jij will no longer contribute to the bound on ‖[Ji1, . . . , JiN ]‖, and hence the bound for the unmasked case will continue to hold as long as (i, i) ∈M i.e. xi attends to itself (this is necessary for the proof of Lemma F.2 to hold). The bound can in fact be tightened by replacing N with |{xj : (i, j) /∈ M}|, the number of unmasked inputs with respect to the ith output." }, { "heading": "H DROPOUT IS CONTRACTIVE", "text": "At test time, Dropout multiplies inputs by the dropout keep probability p < 1, so it is a contraction with Lipschitz constant p at evaluation time. At training time, Dropout amounts to setting some inputs to zero, while keeping other inputs constant. This can be expressed as right multiplication by a diagonal binary matrix M , and for such matrices we can verify ‖M‖p := sup‖x‖p=1 ‖Mx‖p ≤ 1." }, { "heading": "I EXPERIMENTAL DETAILS", "text": "For the experiment in Section 5.1, showing the asymptotic tightness of the upper bound on Lip∞(F ) where F is L2-MHA, we fix all free parameters of F (namely WQ,WV ) to be the identity, and only optimise the input X . 
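The following PyTorch sketch (our illustration, not the released code) runs one random restart of this lower-bound optimisation for the p = ∞ case with H = D = 1, so the identity-weight map reduces to f(X) = softmax(−(x_i − x_j)^2) X; the step count is arbitrary (reduce N for a quick run), and differentiating through the Jacobian requires create_graph=True:

```python
import torch

def f(X):
    # L2 self-attention with D = H = 1 and identity weights, so A = 1
    # and the logits are -(x_i - x_j)^2.
    d2 = (X - X.T) ** 2                 # broadcasting on the (N, 1) input
    P = torch.softmax(-d2, dim=1)
    return P @ X

N = 100
c = 10 * torch.rand(())                 # random input range, c ~ U[0, 10]
X = (2 * c * torch.rand(N, 1) - c).requires_grad_()
opt = torch.optim.Adam([X], lr=0.1)

for step in range(500):
    J = torch.autograd.functional.jacobian(f, X, create_graph=True).reshape(N, N)
    loss = -J.abs().sum(dim=1).max()    # negative infinity-norm (max abs row sum)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("lower bound on Lip_inf(f):", -loss.item())
```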
We use 50 random initialisations of X for each N, where Xij ∼ U[−c, c] for c ∼ U[0, 10] (we observed that having c itself be random improves optimisation). We display the top 5 results for each value of N after optimising each random initialisation till convergence using Adam (Kingma & Ba, 2015) with a learning rate of 0.1.\nFor the experiments in Section 5.3, we compare the performance of the original Transformer and the Transformer with Lipschitz/invertible self-attention at character-level language modelling on the Penn Treebank dataset (Marcus et al., 1993).1 Each training example is a sentence represented as a variable-length sequence of characters, and examples are batched according to length such that padding is minimised, with the maximum sequence length set to 288. All models are autoregressive, outputting the logits for the categorical likelihood predicting the next character, and are trained using maximum likelihood (cross-entropy loss) with a batch size of 64. The LSTM models have the dimensionality of the hidden state equal to the dimensionality D of the cell state (the usual default implementation). The Transformer models are trained with a varying number of blocks (number of layers) with H = 8 heads and D = 512, tuning hyperparameters for dropout rate in {0, 0.1, 0.2} and base learning rate γ ∈ {0.2, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0} with number of warmup iterations w ∈ {1000, 2000, 4000, 8000} for the standard custom learning rate schedule in Vaswani et al. (2017):\nε_t = (γ/√D) min(t^{−1/2}, t · w^{−3/2}),\nwhere ε_t is the learning rate at training iteration t. Hence the learning rate linearly increases from 0 to (Dw)^{−1/2} over w iterations, then decays proportionally to t^{−1/2}. We use Glorot Uniform initialisation (Glorot & Bengio, 2010) for all weights (U[−√(1/(d_in + d_out)), √(1/(d_in + d_out))]), except for weights in L2-MHA that are initialised from U[−s/√D, s/√D], where s is a hyperparameter. For D = 512, we used s = 1/24. All experiments were done in Tensorflow 1.14 (Abadi et al., 2016). The code will be released upon de-anonymisation.\nIn Table 2 we show the best Test NLL across training of Transformer models in Figure 4.\n1 We use the standard training-validation-test split, and the dataset can be found at e.g. https://github.com/harvardnlp/TextFlow/tree/master/data/ptb." }, { "heading": "J NUMERICAL INVERTIBILITY OF MHA RESIDUAL MAP", "text": "Following Section 5.2, Figure 5 confirms that numerical invertibility does not hold for trained weights for dot-product multihead self-attention (DP-MHA) (obtained from the one-layer Transformer (DP) model used for Figure 4), similar to the randomly initialised weight case. Figure 6 shows additional results for different values of N and D.\nK BEHAVIOUR OF LOWER BOUND ON Lip2(F)\nIn Figure 7, we show the lower bound on Lip_2(F) obtained by optimising ‖J_F(X)‖_2 using the same optimisation procedure as for Figure 2 of Section 5.1. Here the optimisation is more difficult, evident in the variance of the top 5 values, and the trend is less clear, but it appears that Lip_2(F) grows at a rate of O(logN). The message is less clear here, and there are at least two possibilities:\n(1) The optimisation is difficult even for small values of N, hence Figure 7 shows a loose lower bound.\n(2) If the lower bound is tight, this suggests that the O(√N logN) bound in Theorem 3.2 is not asymptotically tight, and could be improved to O(logN) (or O(logN − log logN) as for p = ∞)." }, { "heading": "L OPTIMISING THE NORM OF THE JACOBIAN OF DP-MHA", "text": "In Figure 8, we show how the norm of the Jacobian ‖J_f(X)‖_∞ for DP-MHA f keeps increasing when optimised with respect to X. This is a useful sanity check validating our theoretical result of Theorem 3.1, that DP-MHA is not Lipschitz. The oscillations are likely due to the momentum term of the Adam optimiser that was used to optimise the norm." }, { "heading": "M EXPERIMENT TYING KEYS AND QUERIES OF L2-MHA BUT PRESERVING PARAMETER COUNT", "text": "In Figure 4 of Section 5.3, we have shown that there is a clear reduction in performance when tying the keys and queries. To test whether this can be attributed to the reduction in parameter count, we tried doubling the number of columns of WQ when the keys and queries are shared (i.e. from D/H to 2D/H) so that the shared model has the same number of parameters as the unshared model. In Figure 9, the third column shows results for shared L2-MHA, but with the same number of parameters as the unshared L2-MHA, i.e. without tying the keys and queries. The performance is similar to the second column (tying with a reduced number of parameters), suggesting that there is an inherent limitation in expressiveness to tying the keys and queries, and that the reduction in the number of parameters is an insufficient explanation for this phenomenon." }, { "heading": "N STABILITY EXPERIMENTS", "text": "In Figure 10 below, we compare the output variance of trained L2-MHA against trained DP-MHA, with weights from the one-layer Transformer (L2), WQ = WK model and (DP) model used for Figure 4 respectively. We take the same distribution of inputs as used for the numerical invertibility experiment in Section 5.2, and show the histogram of inputs and outputs after flattening the input/output tensors. We see that the range of outputs remains similar to the range of inputs for Lipschitz L2-MHA, whereas for DP-MHA the outputs have a much wider range, because the Jacobian norm is large for DP-MHA at these inputs.\nIn practice, this leads to instabilities while training for DP-MHA, hence requiring careful tuning of the learning rate schedule for training deeper Transformer models: linear warmup and square root decay, as detailed in Appendix I. In Figure 11, we show how training instabilities arise for DP-MHA with deeper Transformer models if we use a fixed learning rate. We compare the training curves of DP-MHA, L2-MHA (WQ = WK) and Contractive-L2-MHA for a varying number of layers for the first 20 epochs of training with a fixed learning rate. We see that DP-MHA fails to train beyond 10 layers, whereas both L2-MHA (WQ = WK) (i.e. Lipschitz L2-MHA but not contractive) and Contractive-L2-MHA show stable training for up to 18 layers. This was the deepest model we could fit on a single GPU, and we expect to be able to train even deeper models with L2-MHA (WQ = WK) and Contractive-L2-MHA. In Table 2 we show the best Test NLL across training of Transformer models with a fixed learning rate. Note that for DP-MHA training becomes unstable beyond 10 layers, so we are only able to provide results up to 10 layers. The generalisation performance of the best model for each setting is similar." } ]
2020
null
SP:a0f8915a46a06042331002c9fe2ed47382cc25e9
[ "This paper proposes a simple way to increase the robustness of the learned representations in a network perform a series of object recognition tasks by adding a random convolution layer as a pre-processing stage, thus “filtering the image” and preserving the global shape but altering the local `texture’ of the newly transformed image. Here, the hope is that -- analogous to Geirhos et al. 2019 that induces a shape bias by transforming the image distribution into a new one with altered *global* textures that induce a shape bias and increases general robustness to o.o.d distortions -- the authors here go about doing something similar at the local level given the small size of the receptive field of the filter, thus preserving the shape and slightly altering “the texture”." ]
While successful for various computer vision tasks, deep neural networks have been shown to be vulnerable to texture style shifts and small perturbations to which humans are robust. In this work, we show that the robustness of neural networks can be greatly improved through the use of random convolutions as data augmentation. Random convolutions are approximately shape-preserving and may distort local textures. Intuitively, randomized convolutions create an infinite number of new domains with similar global shapes but random local textures. Therefore, we explore using the outputs of multi-scale random convolutions as new images, or mixing them with the original images, during training. When applying a network trained with our approach to unseen domains, our method consistently improves the performance on domain generalization benchmarks and is scalable to ImageNet. In particular, in the challenging scenario of generalizing to the sketch domain in PACS and to ImageNet-Sketch, our method outperforms state-of-the-art methods by a large margin. More interestingly, our method can benefit downstream tasks by providing a more robust pretrained visual representation.
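As a concrete reading of the augmentation sketched in this abstract (our illustration, not the authors' released implementation), the snippet below filters a batch of images with a freshly sampled convolution and optionally mixes the result with the original image; the kernel sizes, the weight scaling, and the uniform mixing weight are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def rand_conv(images, kernel_sizes=(1, 3, 5, 7)):
    """Filter a batch of images (B, C, H, W) with a freshly sampled random conv."""
    k = kernel_sizes[torch.randint(len(kernel_sizes), (1,)).item()]
    c = images.shape[1]
    # Scale weights so the output magnitude stays roughly comparable to the
    # input (an assumption on our part, in the spirit of He-style initialisation).
    weight = torch.randn(c, c, k, k) / (c * k * k) ** 0.5
    return F.conv2d(images, weight, padding=k // 2)

def randconv_augment(images, p_mix=0.5):
    filtered = rand_conv(images)
    if torch.rand(()) < p_mix:                   # mix with the original image
        alpha = torch.rand(())
        return alpha * images + (1 - alpha) * filtered
    return filtered                              # or use the filtered image alone
```

Drawing a new kernel for every batch means the network never sees the same local texture statistics twice, which is the intuition behind the "infinite number of new domains" above.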
[ { "affiliations": [], "name": "RANDOM CONVOLUTIONS" }, { "affiliations": [], "name": "Zhenlin Xu" }, { "affiliations": [], "name": "Deyi Liu" }, { "affiliations": [], "name": "Junlin Yang" }, { "affiliations": [], "name": "Colin Raffel" }, { "affiliations": [], "name": "Marc Niethammer" } ]
[ { "authors": [ "Yogesh Balaji", "Swami Sankaranarayanan", "Rama Chellappa" ], "title": "Metareg: Towards domain generalization using meta-regularization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin A Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Fabio M Carlucci", "Antonio D’Innocente", "Silvia Bucci", "Barbara Caputo", "Tatiana Tommasi" ], "title": "Domain generalization by solving jigsaw puzzles", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Joel Dapello", "Tiago Marques", "Martin Schrimpf", "Franziska Geiger", "David Cox", "James J DiCarlo" ], "title": "Simulating a primary visual cortex at the front of cnns improves robustness to image perturbations", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "John S Denker", "WR Gardner", "Hans Peter Graf", "Donnie Henderson", "Richard E Howard", "W Hubbard", "Lawrence D Jackel", "Henry S Baird", "Isabelle Guyon" ], "title": "Neural network recognizer for handwritten zip code digits", "venue": "In Advances in neural information processing systems,", "year": 1989 }, { "authors": [ "Adam Gaier", "David Ha" ], "title": "Weight agnostic neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yaroslav Ganin", "Victor Lempitsky" ], "title": "Unsupervised domain adaptation by backpropagation", "venue": "arXiv preprint arXiv:1409.7495,", "year": 2014 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A. 
Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Muhammad Ghifary", "David Balduzzi", "W Bastiaan Kleijn", "Mengjie Zhang" ], "title": "Scatter component analysis: A unified framework for domain adaptation and domain generalization", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2016 }, { "authors": [ "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "In search of lost domain generalization", "venue": "arXiv preprint arXiv:2007.01434,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kun He", "Yan Wang", "John Hopcroft" ], "title": "A powerful generative model using random weights for the deep image representation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "David J Heeger", "James R Bergen" ], "title": "Pyramid-based texture analysis/synthesis", "venue": "In Proceedings of the 22nd annual conference on Computer graphics and interactive techniques,", "year": 1995 }, { "authors": [ "Dan Hendrycks", "Steven Basart", "Norman Mu", "Saurav Kadavath", "Frank Wang", "Evan Dorundo", "Rahul Desai", "Tyler Zhu", "Samyak Parajuli", "Mike Guo" ], "title": "The many faces of robustness: A critical analysis of out-of-distribution generalization", "venue": "arXiv preprint arXiv:2006.16241,", "year": 2020 }, { "authors": [ "Dan Hendrycks", "Norman Mu", "Ekin Dogus Cubuk", "Barret Zoph", "Justin Gilmer", "Balaji Lakshminarayanan" ], "title": "Augmix: A simple method to improve robustness and uncertainty under data shift", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "William B Johnson", "Joram Lindenstrauss" ], "title": "Extensions of lipschitz mappings into a hilbert space", "venue": "Contemporary mathematics,", "year": 1984 }, { "authors": [ "Simon Kornblith", "Jonathon Shlens", "Quoc V Le" ], "title": "Do better imagenet models transfer better", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Jinwoo Shin", "Honglak Lee" ], "title": "Network randomization: A simple technique for generalization in deep reinforcement learning", "venue": "In International Conference on Learning Representations. https://openreview. 
net/forum,", "year": 2020 }, { "authors": [ "Da Li", "Yongxin Yang", "Yi-Zhe Song", "Timothy M Hospedales" ], "title": "Deeper, broader and artier domain generalization", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Da Li", "Yongxin Yang", "Yi-Zhe Song", "Timothy M Hospedales" ], "title": "Learning to generalize: Metalearning for domain generalization", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Haoliang Li", "Sinno Jialin Pan", "Shiqi Wang", "Alex C Kot" ], "title": "Domain generalization with adversarial feature learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ya Li", "Xinmei Tian", "Mingming Gong", "Yajing Liu", "Tongliang Liu", "Kun Zhang", "Dacheng Tao" ], "title": "Deep domain generalization via conditional invariant adversarial networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yawei Luo", "Liang Zheng", "Tao Guan", "Junqing Yu", "Yi Yang" ], "title": "Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Norman Mu", "Justin Gilmer" ], "title": "Mnist-c: A robustness benchmark for computer vision", "venue": "arXiv preprint arXiv:1906.02337,", "year": 2019 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "NIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Ian Osband", "John Aslanides", "Albin Cassirer" ], "title": "Randomized prior functions for deep reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xingchao Peng", "Qinxun Bai", "Xide Xia", "Zijun Huang", "Kate Saenko", "Bo Wang" ], "title": "Moment matching for multi-source domain adaptation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Xingchao Peng", "Zijun Huang", "Ximeng Sun", "Kate Saenko" ], "title": "Domain agnostic learning with disentangled representations", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Javier Portilla", "Eero P Simoncelli" ], "title": "A parametric texture model based on joint statistics of complex wavelet coefficients", "venue": "International journal of computer vision,", "year": 2000 }, { "authors": [ "Fengchun Qiao", "Long Zhao", "Xi Peng" ], "title": "Learning to learn single domain generalization", "venue": "arXiv preprint arXiv:2003.13216,", "year": 2020 }, { "authors": [ "Wullianallur Raghupathi", "Viju Raghupathi" ], "title": "Big data analytics in healthcare: promise and potential", "venue": "Health information science and systems,", "year": 2014 }, { "authors": [ "Hadi Salman", "Andrew Ilyas", "Logan Engstrom", "Ashish Kapoor", "Aleksander Madry" ], "title": "Do adversarially robust imagenet models transfer better", "venue": "arXiv preprint arXiv:2007.08489,", "year": 2020 }, { "authors": [ "Andrew M Saxe", "Pang Wei Koh", "Zhenghao Chen", "Maneesh Bhand", "Bipin Suresh", "Andrew Y Ng" ], "title": "On random weights and unsupervised feature learning", "venue": "In ICML,", "year": 2011 
}, { "authors": [ "Rui Shao", "Xiangyuan Lan", "Jiawei Li", "Pong C Yuen" ], "title": "Multi-adversarial discriminative deep domain generalization for face presentation attack detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "William B Shen", "Danfei Xu", "Yuke Zhu", "Leonidas J Guibas", "Li Fei-Fei", "Silvio Savarese" ], "title": "Situational fusion of visual representation for visual navigation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Baifeng Shi", "Dinghuai Zhang", "Qi Dai", "Zhanxing Zhu", "Yadong Mu", "Jingdong Wang" ], "title": "Informative dropout for robust representation learning: A shape-bias perspective", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Patrice Y Simard", "David Steinkraus", "John C Platt" ], "title": "Best practices for convolutional neural networks applied to visual document analysis", "venue": "In Icdar,", "year": 2003 }, { "authors": [ "Baochen Sun", "Kate Saenko" ], "title": "From virtual to reality: Fast adaptation of virtual object detectors to real domains", "venue": "In Proceedings of the British Machine Vision Conference. BMVA Press,", "year": 2014 }, { "authors": [ "Josh Tobin", "Rachel Fong", "Alex Ray", "Jonas Schneider", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Domain randomization for transferring deep neural networks from simulation to the real world", "venue": "IEEE/RSJ international conference on intelligent robots and systems (IROS),", "year": 2017 }, { "authors": [ "Nguyen Xuan Vinh", "Sarah Erfani", "Sakrapee Paisitkriangkrai", "James Bailey", "Christopher Leckie", "Kotagiri Ramamohanarao" ], "title": "Training robust models using random projection", "venue": "In 2016 23rd International Conference on Pattern Recognition (ICPR),", "year": 2016 }, { "authors": [ "Riccardo Volpi", "Hongseok Namkoong", "Ozan Sener", "John C Duchi", "Vittorio Murino", "Silvio Savarese" ], "title": "Generalizing to unseen domains via adversarial data augmentation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Haohan Wang", "Songwei Ge", "Zachary Lipton", "Eric P Xing" ], "title": "Learning robust global representations by penalizing local predictive power", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Haohan Wang", "Zexue He", "Eric P. Xing" ], "title": "Learning robust representations by projecting superficial statistics out", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "John Wieting", "Douwe Kiela" ], "title": "No training required: Exploring random encoders for sentence classification", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Xiangyu Yue", "Yang Zhang", "Sicheng Zhao", "Alberto Sangiovanni-Vincentelli", "Kurt Keutzer", "Boqing Gong" ], "title": "Domain randomization and pyramid consistency: Simulation-to-real generalization without accessing target domain data", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Tianyuan Zhang", "Zhanxing Zhu" ], "title": "Interpreting adversarially trained convolutional neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "He" ], "title": "2016a). 
Specifically, we run the PACS and ImageNet experiments with ResNet-18 as the baseline and RandConv. As Table 5 shows, RandConv improves the baseline using ResNet18 on ImageNet-sketch by 10.5% accuracy. When using a RandConv pretrained ResNet-18 on PACS, the performance of finetuning with DeepAll and RandConv", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Generalizability and robustness to out-of-distribution samples have been major pain points when applying deep neural networks (DNNs) in real world applications (Volpi et al., 2018). Though DNNs are typically trained on datasets with millions of training samples, they still lack robustness to domain shift, small perturbations, and adversarial examples (Luo et al., 2019). Recent research has shown that neural networks tend to use superficial features rather than global shape information for prediction even when trained on large-scale datasets such as ImageNet (Geirhos et al., 2019). These superficial features can be local textures or even patterns imperceptible to humans but detectable to DNNs, as is the case for adversarial examples (Ilyas et al., 2019). In contrast, image semantics often depend more on object shapes rather than local textures. For image data, local texture differences are one of the main sources of domain shift, e.g., between synthetic virtual images and real data (Sun & Saenko, 2014). Our goal is therefore to learn visual representations that are invariant to local texture and that generalize to unseen domains. While texture and color may be treated as different concepts, we follow the convention in Geirhos et al. (2019) and include color when talking about texture.\nWe address the challenging setting of robust visual representation learning from single domain data. Limited work exists in this setting. Proposed methods include data augmentation (Volpi et al., 2018; Qiao et al., 2020; Geirhos et al., 2019), domain randomization (Tobin et al., 2017; Yue et al., 2019), self-supervised learning (Carlucci et al., 2019), and penalizing the predictive power of low-level network features (Wang et al., 2019a). Following the spirit of adding inductive bias towards global shape information over local textures, we propose using random convolutions to improve the robustness to domain shifts and small perturbations. While recently Lee et al. (2020) proposed a similar technique for improving the generalization of reinforcement learning agents in\n1Code is available at https://github.com/wildphoton/RandConv.\nunseen environments, we focus on visual representation learning and examine our approach on visual domain generalization benchmarks. Our method also includes the multiscale design and a mixing variant. In addition, considering that many computer vision tasks rely on training deep networks based on ImageNet-pretrained weights (including some domain generalization benchmarks), we ask “Can a more robust pretrained model make the finetuned model more robust on downstream tasks?” Different from (Kornblith et al., 2019; Salman et al., 2020) who studied the transferability of a pretrained ImageNet representation to new tasks while focusing on in-domain generalization, we explore generalization performance on unseen domains for new tasks.\nWe make the following contributions:\n• We develop RandConv, a data augmentation technique using multi-scale random-convolutions to generate images with random texture while maintaining global shapes. We explore using the RandConv output as training images or mixing it with the original images. We show that a consistency loss can further enforce invariance under texture changes. • We provide insights and justification on why RandConv augments images with different local\ntexture but the same semantics with the shape-preserving property of random convolutions. 
• We validate RandConv and its mixing variant in extensive experiments on synthetic and real-world benchmarks as well as on the large-scale ImageNet dataset. Our methods outperform single-domain generalization approaches by a large margin on digit recognition datasets and for the challenging cases of generalizing to the Sketch domain in PACS and to ImageNet-Sketch.\n• We explore if the robustness/generalizability of a pretrained representation can transfer. We show that transferring a model pretrained with RandConv on ImageNet can further improve domain generalization performance on new downstream tasks on the PACS dataset." }, { "heading": "2 RELATED WORK", "text": "Domain Generalization (DG) aims at learning representations that perform well when transferred to unseen domains. Modern techniques range from feature fusion (Shen et al., 2019) to meta-learning (Li et al., 2018a; Balaji et al., 2018) and adversarial training (Shao et al., 2019; Li et al., 2018b). Note that most current DG work (Ghifary et al., 2016; Li et al., 2018a;b) requires a multi-source training setting to work well. However, in practice, it might be difficult and expensive to collect data from multiple sources, such as collecting data from multiple medical centers (Raghupathi & Raghupathi, 2014). Instead, we consider the stricter single-domain generalization setting,
However, RandConv avoids searching over many hyper-parameters, collecting extra data, and training other networks. It also scales to large-scale datasets since it adds minimal computation overhead.\nRandom Mapping in Machine Learning Random projections have also been effective for dimensionality reduction based on the distance-preserving property of the Johnson–Lindenstrauss lemma (Johnson & Lindenstrauss, 1984). (Vinh et al., 2016) applied random projections on entire images as data augmentation to make neural networks robust to adversarial examples. Lee et al. (2020) recently used random convolutions to help reinforcement learning (RL) agents generalize to new environments. Neural networks with fixed random weights can encode meaningful representations (Saxe et al., 2011) and are therefore useful for neural architecture search (Gaier & Ha, 2019), generative models (He et al., 2016b), natural language processing (Wieting & Kiela, 2019), and RL (Osband et al., 2018; Burda et al., 2019). In contrast, RandConv uses non-fixed randomly-sampled weights to generate images with different local texture." }, { "heading": "3 RANDCONV: RANDOMIZE LOCAL TEXTURE AT DIFFERENT SCALES", "text": "We propose using a convolution layer with non-fixed random weights as the first layer of a DNN during training. This strategy generates images with random local texture but consistent shapes, and is beneficial for robust visual representation learning. Sec. 3.1 justifies the shape-preserving property of a random convolution layer. Sec. 3.2 describes RandConv, our data augmentation algorithm using a multi-scale randomized convolution layer and input mixing." }, { "heading": "3.1 A RANDOM CONVOLUTION LAYER PRESERVES GLOBAL SHAPES", "text": "Convolution is the key building block for deep convolutional neural networks. Consider a convolution layer with filters Θ ∈ Rh×w×Cin×Cout with an input image I ∈ RH×W×Cin , where H and W are the height and width of the input and Cin and Cout are the number of feature channels for the input and output, and h and w are the height and width of the layer’s filter. The output (with appropriate input padding) will be g = I ∗Θ with g ∈ RH×W×Cout . In images, nearby pixels with similar color or texture can be grouped into primitive shapes that represent parts of objects or the background. A convolution layer linearly projects local image patches to features at corresponding locations on the output map using shared parameters. While a\nconvolution with random filters can project local patches to arbitrary output features, the output of a random linear projection approximately preserves relative similarity between input patches, proved in Appendix B. In other words, since any two locations within the same shape have similar local textures in the input image, they tend to be similar in the output feature map. Therefore, shapes that emerge in the output feature map are similar to shapes in the input image provided that the filter size is sufficiently small compared to the size of a typical shape.\nIn other words, the size of a convolution filter determines the smallest shape it can preserve. For example, 1x1 random convolutions preserve shapes at the single-pixel level and thus work as a random color mapping; large filters perturb shapes smaller than the filter size that are considered local texture of a shape at this larger scale. See Fig. 1 for examples. More discussion and a formal proof are in Appendix A and B." 
}, { "heading": "3.2 MULTI-SCALE IMAGE AUGMENTATION WITH A RANDOMIZED CONVOLUTION LAYER", "text": "Algorithm 1 Learning with Data Augmentation by Random Convolutions\n1: Input: Model Φ, task loss Ltask, training images {Ii}Ni=1 and their labels {yi}Ni=1, pool of filter sizes K = {1, ..., n}, fraction of original data p, whether to mix with original images, consistency loss weight λ 2: function RANDCONV(I, K, mix, p) 3: Sample p0 ∼ U(0, 1) 4: if p0 < p and mix is False then 5: return I . When not in mix mode, use the original image with probability p 6: else 7: Sample scale k ∼ K 8: Sample convolution weights Θ ∈ Rk×k×3×3 ∼ N(0, 1\n3k2 )\n9: Irc = I ∗Θ . Apply convolution on I 10: if mix is True then 11: Sample α ∼ U(0, 1) 12: return αI + (1− α)Irc . Mix with original images 13: else 14: return Irc 15: Learning Objective: 16: for i = 1→ N do 17: for j = 1→ 3 do 18: ŷji = Φ(RandConv(Ii)) . Predict labels for three augmented variants of the same image 19: Lcons = λ ∑3 j=1 KL(ŷ j i ||ȳi) where ȳi = ∑3 j=1 ŷ j i /3 . Consistency Loss 20: L = Ltask(ŷ1i , yi) + λLcons . Learning with the task loss and the consistency loss\nSec. 3.1 discussed how outputs of randomized convolution layers approximately maintain shape information at a scale larger than their filter sizes. Here, we develop our RandConv data augmentation technique using a randomized convolution layer with Cout = Cin to generate shape-consistent images with randomized texture (see Alg. 1). Our goal is not to use RandConv to parameterize or represent texture as in previous filter-bank based texture models (Heeger & Bergen, 1995; Portilla & Simoncelli, 2000). Instead, we only use the three-channel outputs of RandConv as new images with the same shape and different “style” (loosely referred to as \"texture\"). We also note that, a convolution layer is different from a convolution operation in image filtering. Standard image filtering applies the same 2D filter on three color channels separately. In contrast, our convolution layer applies three different 3D filters and each takes all color channels as input and generates one channel of the output. Our proposed RandConv variants are as follows:\nRCimg: Augmenting Images with Random Texture A simple approach is to use the randomized convolution layer outputs, I ∗Θ, as new images; where Θ are the randomly sampled weights and I is a training image. If the original training data is in the domain D0, a sampled weight Θk generates images with consistent global shape but random texture forming the random domain Dk. Thus, by random weight sampling, we obtain an infinite number of random domains D1, D1, . . . , D∞. Input image intensities are assumed to be a standard normal distribution N(0, 1) (which is often true in practice thanks to data whitening). As the outputs of RandConv should follow the same distribution, we sample the convolution weights from N(0, σ2) where σ = 1/ √ Cin × h× w, which is commonly applied for network initialization (He et al., 2015). We include the original images for training at a ratio p as a hyperparameter.\nRCmix: Mixing Variant As shown in Fig. 1, outputs from RCimg can vary significantly from the appearance of the original images. Although generalizing to domains with significantly different local texture distributions is useful, we may not want to sacrifice much performance on domains similar to the training domain. 
Inspired by the AugMix (Hendrycks et al., 2020b) strategy, we propose to blend the original image with the outputs of the RandConv layer via linear convex combinations αI + (1 − α)(I ∗Θ), where α is the mixing weight uniformly sampled from [0, 1].In RCmix, the RandConv outputs provide shape-consistent perturbations of the original images. Varying α, we continuously interpolate between the training domain and the randomly sampled domains of RCimg.\nMulti-scale Texture Corruption As discussed in Sec. 3.1„ image shape information at a scale smaller than a filter’s size will be corrupted by RandConv. Therefore, we can use filters of varying sizes to preserve shapes at various scales. We choose to uniformly randomly sample a filter size k from a pool K = 1, 3, ...n before sampling convolution weights Θ ∈ Rk×k×Cin×Cout from a Gaussian distribution N(0, 1k2Cin ). Fig. 1 shows examples of multi-scale RandConv outputs.\nConsistency Regularization To learn representations invariant to texture changes, we use a loss encouraging consistent network predictions for the same RandConv-augmented image for different random filter samples. Approaches for transform-invariant domain randomization (Yue et al., 2019), data augmentation (Hendrycks et al., 2020b), and semi-supervised learning (Berthelot et al., 2019) use similar strategies. We use Kullback-Leibler (KL) divergence to measure consistency. However, enforcing prediction similarity of two augmented variants may be too strong. Instead, following (Hendrycks et al., 2020b), we use RandConv to obtain 3 augmentation samples of image I: Gj = RandConvj(I) for j = 1, 2, 3 and obtain their predictions with a model Φ: yj = Φ(Gj). We then compute the relaxed loss as λ ∑3 j=1 KL(y j ||ȳ), where ȳ = ∑3 j=1 y j/3 is the sample average." }, { "heading": "4 EXPERIMENTS", "text": "Secs. 4.1 to 4.3 evaluate our methods on the following datasets: multiple digit recognition datasets, PACS, and ImageNet-sketch. Sec. 4.4 uses PACS to explore the out-of-domain generalization of a pretrained representation in transfer learning by checking if pretraining on ImageNet with our method improves the domain generalization performance in downstream tasks. All experiments are in the single-domain generalization setting where training and validation sets are drawn from one domain. Additional experiments with ResNet18 as the backbone are given in the Appendix." }, { "heading": "4.1 DIGIT RECOGNITION", "text": "The five digit recognition datasets (MNIST (LeCun et al., 1998), MNIST-M (Ganin et al., 2016), SVHN (Netzer et al., 2011), SYNTH (Ganin & Lempitsky, 2014) and USPS (Denker et al., 1989)) have been widely used for domain adaptation and generalization research (Peng et al., 2019a;b; Qiao et al., 2020). Following the setups in (Volpi et al., 2018) and (Qiao et al., 2020), we train a simple CNN with 10,000 MNIST samples and evaluate the accuracy on the test sets of the other four datasets. We also test on MNIST-C (Mu & Gilmer, 2019), a robustness benchmark with 15 common corruptions of MNIST and report the average accuracy over all corruptions.\nSelecting Hyperparameters and Ablation Study. Fig. 2(a) shows the effect of the hyperparameter p on RCimg with filter size 1. We see that adding only 10% RandConv data (p = 0.9) immediately improves the average performance (DG-Avg) on MNIST-M, SVHN, SYNTH and USPS performance from 53.53 to 69.19, outperforming all other approaches (see Tab. 1) for every dataset. We choose p = 0.5, which obtains the best DG-Avg. Fig. 
2(b) shows results for a multiscale ablation study. Increasing the pool of filter sizes up to 7 improves DG-Avg performance. Therefore we use multiscale 1-7 to study the consistency loss weight λ, shown in Fig. 2(c). Adding the consistency loss improves both RandConv variants on DG-avg: RCmix1−7 favors λ = 10 while RCimg1−7,p=0.5 performs similarly for λ = 5 and λ = 10. We choose λ = 10 for all subsequent experiments.\nResults. Tab. 1 compares the performance of RCimg1−7,p=0.5,λ=10 and RCmix1−7,λ=10 with other state-of-the-art approaches. We show results of the adversarial training based methods GUD (Volpi et al., 2018), M-ADA (Qiao et al., 2020), and PAR (Wang et al., 2019a). The baseline model is trained only on the standard classification loss. To show RandConv is more than a trivial color/contrast adjustment method, we also compare to ColorJitter2 data augmentation (which randomly changes image brightness, contrast, and saturation) and GreyScale (where images are transformed to greyscale for training and testing). We also tested data augmentation with a fixed Laplacian of Gaussian filter (Band-Pass) of size=3 and σ = 1 and the data augmentation pipeline (Multi-Aug) that was used in a recently proposed large scale study on domain generalization algorithms and datasets (Gulrajani & Lopez-Paz, 2020). RandConv and its mixing variant outperforms the best competing method (MADA) by 17% on DG-Avg and achieves the best 91.62% accuracy on MNIST-C. While the difference between the two variants of RandConv is marginal, RCmix1−7,λ=10 performs better on both DG-Avg and MNIST-C. When combined with Multi-Aug, RandConv achieves improved performance except on MNIST-C. Fig 3 shows t-SNE image feature plots for unseen domains generated by the baseline approach and RCmix1−7,λ=10. The RandConv embeddings suggest better generalization to unseen domains." }, { "heading": "4.2 PACS EXPERIMENTS", "text": "The PACS dataset (Li et al., 2018b) considers 7-class classification on 4 domains: photo, art painting, cartoon, and sketch, with very different texture styles. Most recent domain generalization work studies the multi-source domain setting on PACS and uses domain labels of the training data. Although we follow the convention to train on 3 domains and to test on the fourth, we simply pool the data from the 3 training domains as in (Wang et al., 2019a), without using domain labels during the training.\nBaseline and State-of-the-Art. Following (Li et al., 2017), we use Deep-All as the baseline, which finetunes an ImageNet-pretrained AlexNet on 3 domains using only the classification loss and tests on the fourth domain. We test our RandConv variants RCimg1-7,p=0.5 and RCmix1-7 with and without consistency loss, and ColorJitter/GreyScale/BandPass/MultiAug data augmentation as in the digit datasets. We also implemented PAR (Wang et al., 2019a) using our baseline model. RCmix1-7\n2See PyTorch documentation for implementation details; all parameters are set to 0.5.\ncombined with MultiAug is also tested. Further, we compare to the following state-of-the-art approaches: Jigen (Carlucci et al., 2019) using self-supervision, MLDG (Li et al., 2018a) using meta-learning, and the conditional invariant deep domain generalization method CIDDG (Li et al., 2018c). Note that previous methods used different Deep-All baselines which make the final accuracy not directly comparable, and MLDG and CIDDG use domain labels for training.\nResults. Tab. 2 shows significant improvements on Sketch for both RandConv variants. 
Sketch is the most challenging domain with no color and much less texture compared to the other 3 domains. The success on Sketch demonstrates that our methods can guide the DNN to learn global representations focusing on shapes that are robust to texture changes. Without using the consistency loss, RCmix1-7 achieves the best overall result improving over Deep-All by∼4% but adding MultiAug does not further improve the performance. Adding the consistency loss with λ = 10, RCmix1-7 and\nRCimg1-7,p=0.5 performs better on Sketch but degrades performance on the other 3 domains, so do GreyScale and ColorJitter. This observation will be discussed in Sec 4.4." }, { "heading": "4.3 GENERALIZING AN IMAGENET MODEL TO IMAGENET-SKETCH", "text": "ImageNet-Sketch (Wang et al., 2019a) is an out-of-domain test set for models trained on ImageNet. We trained AlexNet from scratch with RCimg1-7,p=0.5,λ=10 and RCmix1-7,λ=10. We evaluate their performance on ImageNet-Sketch. We use the AlexNet model trained without RandConv as our baseline. Tab. 3 compares PAR and its baseline model and AlexNet trained with Stylized ImageNet (SIN) (Geirhos et al., 2019) on ImageNet-Sketch. Although PAR uses a stronger baseline, RandConv achieves significant improvements over our baseline and outperforms PAR by a large margin. Our methods achieve more than a 7% accuracy improvement over the baseline and surpass PAR by 5%. SIN as an image stylization approach that can modify image texture in a hierarchical and realistic way. However, albeit its complexity, it still performs on par with RandConv. Note that image stylization techniques require additional data and heavy precomputation. Further, the images for the style source also need to be chosen. In contrast, RandConv is much easier to use: it can be applied to any dataset via a simple convolution layer. We also measure the shape-bias metric proposed by Geirhos et al. (2019) for RandConv trained AlexNet. RCimg1-7,p=0.5,λ=10 and RCmix1-7,λ=10 improve the baseline from 25.36% to 48.24% and 54.85% respectively." }, { "heading": "4.4 REVISITING PACS WITH MORE ROBUST PRETRAINED REPRESENTATIONS", "text": "A common practice for many computer vision tasks (including the PACS benchmark) is transfer learning, i.e. finetuning a backbone model pretrained on ImageNet. Recently, how the accuracy on ImageNet (Kornblith et al., 2019) and adversial robustness (Salman et al., 2020) of the pretrained model affect transfer learning has been studied in the context of domain generalization. Instead, we study how out-of-domain generalizability transfers from pretraining to downstream tasks and shed light on how to better use pretrained models.\nImpact of ImageNet Pretraining A model trained on ImageNet may be biased towards textures (Geirhos et al., 2019). Finetuning ImageNet pretrained models on PACS may inherit this texture bias, thereby benefitting generalization on the Photo domain (which is similar to ImageNet), but hurting performance on the Sketch domain. Therefore, as shown in Sec. 4.2, using RandConv to correct this texture bias improves results on Sketch, but degrades them on the Photo domain. Since pretraining has such a strong impact on transfer performance to new tasks, we ask: \"Can the generalizability of a pretrained model transfer to downstream tasks? I.e., does a pretrained model with better generalizability improve performance on unseen domains on new tasks?\" To answer this, we revisit the PACS tasks based on ImageNet-pretrained weights where our two RandConv variants of Sec. 
4.3 are used during ImageNet training. We study if this results in performance changes for the Deep-All baseline and for finetuning with RandConv.\nBetter Performance via RandConv pretrained model We start by testing the Deep-All baselines using the two RandConv-trained ImageNet models of Sec. 4.3 as initialization. Tab. 4 shows significant improvements on Sketch. Results are comparable to finetuning with RandConv on a normal pretrained model. Art is also consistently improved. Performance drops slightly on Photo as expected, since we reduced the texture bias in the pretrained model, which is helpful for the Photo domain. A similar performance improvement is observed when using the SIN-trained AlexNet as initialization. Using RandConv for both ImageNet training and PACS finetuning, we achieve 76.11% accuracy on Sketch. As far as we know, this is the best performance using an AlexNet baseline. This approach even outperforms Jigen (Carlucci et al., 2019) (71.35%) with a stronger ResNet18 baseline\nmodel. Cartoon and Art are also improved. The best average domain generalization accuracy is 73.03%, with a more than 6% improvement over our initial Deep-All baseline.\nThis experiment confirms that generalizability may transfer: removing texture bias may not only make a pretrained model more generalizable, but it may help generalization on downstream tasks. For similar target and pretraining domains like Photo and ImageNet, where learning texture bias may actually be beneficial, performance may degrade slightly." }, { "heading": "5 CONCLUSION AND DISCUSSION", "text": "Randomized convolution (RandConv) is a simple but powerful data augmentation technique for randomizing local image texture. RandConv helps focus visual representations on global shape information rather than local texture. We theoretically justified the approximate shape-preserving property of RandConv and developed RandConv techniques using multi-scale and mixing designs. We also make use of a consistency loss to encourage texture invariance. RandConv outperforms state-of-the-art approaches on the digit recognition benchmark and on the sketch domain of PACS and on ImageNet-Sketch by a large margin. By finetuning a model pretrained with RandConv on PACS, we showed that the generalizability of a pretrained model may transfer to and benefit a new downstream task. This resulted in a new state-of-art performance on PACS in the Sketch domain.\nRandConv can help computer vision tasks when a shape-biased model is helpful e.g. for object detection. RandConv can also provide a shape-biased pretrained model to improve performance on downstream tasks when generalizing to unseen domains. However, local texture features can be useful for many computer vision tasks, especially for fixed-domain fine-grained visual recognition. In such cases, visual representations that are invariant to local texture may hurt in-domain performance. Therefore, important future work includes learning representations that disentangle shape and texture features and building models to use such representations in an explainable way.\nAdversarial robustness of deep neural networks has received significant recent attention. Interestingly, Zhang & Zhu (2019) find that adversarially-trained models are more shape biased; Shi et al. (2020) show that their method for increasing shape bias also helps adversarial robustness, especially when combined with adversarial training. 
Therefore, exploring how RandConv affects the adversarial robustness of models could be interesting future work. Moreover, recent biologically inspired models for improving adversarial robustness (Dapello et al., 2020) use Gabor filters with fixed random configurations followed by a stochastic layer to add Gaussian noise to the network input, which may explain the importance of randomness in RandConv. Exploring connections between RandConv and biological mechanisms in the human visual system would be interesting future work.\nAcknowledgments We thank Zhiding Yu for discussions on initial ideas and the experimental setup. We also thank Nathan Cahill for advice on proving the properties of random convolutions." }, { "heading": "A SHAPES AND TEXTURE IN IMAGES", "text": "As discussed in the main text, we define shapes in images that are preserved by a random convolution layer as primitive shapes: spatial clusters of pixels with similar local texture. An object in a image can be a single primitive shape alone but in most cases it is the composition of multiple primitive shapes e.g. a car includes wheels, body frames, windshields. Note that the definition of texture is not necessarily opposite to shapes, since the texture of a larger shape can includes smaller shapes. For example, in Fig.4, the left occluded triangle shape has texture composed by shapes of cobble stones while cobble stones have their own texture. Random convolution can preserve those large shapes that usually define the image semantics while distorting the small shapes as local texture.\nTo formally define the shape-preserving property, we assume (x1, y1), (x2, y2) and (x3, y3) are three locations on a image and (x1, y1) has closer color and local texture with (x2, y2) than (x3, y3). For example, (x1, y1) and (x2, y2) are within the same shape while (x3, y3) is located at a neighboring shape. Then we have ‖p(x1, y1) − p(x2, y2)‖ < ‖p(x1, y1) − p(x3, y3)‖, where p(xi, yi) is the image patch at location (xi, yi). A transformation f is shape-preserving if it maintains such relative distance relations for most location triplets, i.e.\n‖f(p(xi, yi))− f(p(xj , yj))‖/‖p(xi, yi)− p(xj , yj)‖ ≈ r (1)\nfor any two spatial location (xi, yi) and (xj , yj); r ≥ 0 is a constant." }, { "heading": "B RANDOM CONVOLUTION IS SHAPE-PRESERVING AS RANDOM LINEAR", "text": "PROJECTION IS DISTANCE PRESERVING\nWe can express a convolution layer as a local linear projection:\ng(x, y) = Up(x, y) , (2)\nwhere p(x, y) ∈ Rd (d = h× w × Cin) is the vectorized image patch centerized at location (x, y), g(x, y) ∈ RCout is the output feature at location (x, y), and U ∈ RCout×d is the matrix expressing the convolution layer filters Θ. I.e., for each sliding window centered at (x, y), a convolution layer applies a linear transform f : Rd → RCout projecting the d dimensional local image patch p(x, y) to its Cout dimensional feature g(x, y). When Θ is independently randomly sampled, e.g. from\na Gaussian distribution, the convolution layer preserves global shapes since that a random linear projection is approximately distance-preserving by bounding the range of r in Eq. 1 in Theorem 1.\nTheorem 1. Suppose we have N data points z1, · · · , zN ∈ Rd. Let f(z) = Uz be a random linear projection f : Rd → Rm such that U ∈ Rm×d and Ui,j ∼ N(0, σ2). 
Then we have:\nP (\nsup i 6=j;i,j∈[N ]\n{ ri,j :=\n‖f(zi)− f(zj)‖ ‖zi − zj‖\n} > δ1 ) ≤ ,\nP (\ninf i 6=j;i,j∈[N ]\n{ ri,j :=\n‖f(zi)− f(zj)‖ ‖zi − zj‖\n} < δ2 ) ≤ ,\n(3)\nwhere δ1 := σ √ χ2 2 N(N−1) (m) and δ2 := σ √ χ2 1− 2 N(N−1) (m). Here, χ2α(m) denotes the α-upper quantile of the χ2 distribution with m degrees of freedom.\nThm. 1 tells us that for any data pair (zi, zj) in a set of N points, the distance rescaling ratio ri,j after a random linear projection is bounded by δ1 and δ2 with probability 1− . A Smaller N and a larger output dimension m give better bounds. E.g., when m = 3, N = 1, 000, σ = 1 and = 0.1, δ1 = 5.8 and δ2 = 0.01. Thm. 1 gives a theoretical bound for all the N(N − 1)/2 pairs. However, in practice, preserving distances for a majority of N(N − 1)/2 pairs is sufficient. To empirically verify this, we test the range of central 80% of {ri,j} on real image data. Using the same (m,N, σ, ), 80% of the pairs lie in [0.56, 2.87], which is significantly better than the strict bound: [0.01, 5.8]. A proof of the theorem and simulation details are given in the following.\nProof. Let Uk represent to the k-th row of U. It is easy to check that vk := 〈Uk, zi−zj〉/‖zi−zj‖ ∼ N(0, σ2). Therefore,\n‖f(zi)− f(zj)‖2\nσ2‖zi − zj‖2 =\n1 σ2 (zi − zj)>U>U(zi − zj) ‖zi − zj‖2 = m∑ k=1 v2k σ2 ∼ χ2(m).\nTherefore, for 0 < < 1, we have P (‖f(zi)− f(zj)‖2\nσ2‖zi − zj‖2 > χ2 2 N(N−1) (m) ) ≤ 2 N(N − 1) .\nFrom the above inequality, we have P (\nsupi 6=j;i,j∈[N ] { ‖f(zi)−f(zj)‖2 ‖zi−zj‖2 } > σ2χ2 2 N(N−1) (m) ) = P ( supi 6=j;i,j∈[N ] { ‖f(zi)−f(zj)‖2 σ2‖zi−zj‖2 } > χ2 2 N(N−1) (m)\n) = P\n( ⋃ i 6=j;i,j∈[N ] { ‖f(zi)−f(zj)‖2 σ2‖zi−zj‖2 > χ 2 2 N(N−1) (m) }) ≤\n∑ i 6=j;i,j∈[N ] P ( ‖f(zi)−f(zj)‖2 σ2‖zi−zj‖2 > χ 2 2 N(N−1) (m) ) ≤ ,\nwhich is equivalent to P (\nsup i 6=j;i,j∈[N ] {‖f(zi)− f(zj)‖ ‖zi − zj‖ } > σ √ χ2 2 N(N−1) (m) ) ≤ .\nSimilarly, we have P (\ninf i 6=j;i,j∈[N ] {‖f(zi)− f(zj)‖ ‖zi − zj‖ } < σ √ χ2 1− 2 N(N−1) (m) ) ≤ .\nSimulation on Real Image Data To better understand the relative distance preservation property of random linear projections in practice, we use Algorithm 2 to empirically obtain a bound for real image data. We choose m = 3, N = 1, 000, σ = 1 and = 0.1 as in computing our theoretical bounds. We use M = 1, 000 real images from the PACS dataset for this simulation. Note that the image patch size or d does not affect the bound. We use a patch size of 3 × 3 resulting in d = 27. This simulation tell us that applying linear projections with a randomly sampled U on N local images patches in every image, we have a 1− chance that 80% of ri,j is in the range [δ10%, δ90%].\nAlgorithm 2 Simulate the range of central 80% of ri,j on real image data\n1: Input: M images {Ii}Mi=1, number of data points N , projection output dimension m, standard deviation σ of normal distribution, confidence level . 2: for m = 1→M do 3: Sample images patches in Im at 1,000 locations and vectorize them as {zml }Nl=1 4: Sample a projection matrix U ∈ Rm×d and Ui,j ∼ N(0, σ2) 5: for i = 1→ N do 6: for j = i+ 1→ N do 7: Compute rmi,j = ‖f(zmi )−f(z m j )‖\n‖zmi −z m j ‖\n, where f(z) = Uz\n8: qm10% = 10% quantile of r m i,j for Im 9: qm90% = 90% quantile of r m i,j for Im . Get the central 80% of ri,j in each image 10: δ10% = quantile of all qm10% 11: δ90% = (1− ) quantile of all qm90% . 
Get the confident bound for qm10% and qm90% 12: return δ10%, δ90%" }, { "heading": "C EXPERIMENTAL DETAILS", "text": "Digits Recognition The network for our digits recognition experiments is composed of two Conv5×5ReLU-MaxPool2×2 blocks with 64/128 output channels and three fully connected layer with 1024/1024/10 output channels. We train the network with batch size 32 for 10,000 iterations. During training, the model is validated every 250 iterations and saved with the best validation score for testing. We apply the Adam optimizer with an initial learning rate of 0.0001.\nPACS We use the official data splits for training/validation/testing; no extra data augmentation is applied. We use the official PyTorch implementation and the pretrained weights of AlexNet for our PACS experiments. AlextNet is finetuned for 50,000 iterations with a batch size 128. Samples are randomly selected from the training data mixed between the three domains. We use the validation data of source domains only at every 100 iterations. We use the SGD optimizer for training with an initial learning rate of 0.001, Nesterov momentum, and weight decay set to 0.0005. We let the learning rate decay by a factor of 0.1 after finishing 80% of the iterations.\nImageNet Following the PyTorch example 3 on training ImageNet models, we set the batch size to 256 and train AlexNet from scratch for 90 epochs. We apply the SGD optimizer with an initial learning rate of 0.01, momentum 0.9, and weight decay 0.0001. We reduce the learning rate via a factor of 0.1 every 30 epochs." }, { "heading": "D MORE EXPERIMENTS WITH RESNET-18", "text": "In this section, we demonstrate that RandConv also works on other stronger backbone architectures, e.g. for a Residual Network He et al. (2016a). Specifically, we run the PACS and ImageNet experiments with ResNet-18 as the baseline and RandConv. As Table 5 shows, RandConv improves the baseline using ResNet18 on ImageNet-sketch by 10.5% accuracy. When using a RandConv pretrained ResNet-18 on PACS, the performance of finetuning with DeepAll and RandConv are both improved shown in Table 7. The best average domain generalization accuracy is 84.09%, with a more than 8% improvement over our initial Deep-All baseline. A model pretrained with RCmix1-7,λ=10 generally performs better than when pretrained with RCimg1-7,p=0.5,λ=10. We also provide the ResNet-18 performance of JiGen (Carlucci et al., 2019) on PACS as reference. Note\n3https://github.com/pytorch/examples/tree/master/imagenet\nthat JiGen uses extra data augmentation and a different data split than our approach and it only improves over its own baseline by 1.5%. In addition, we test RandConv trained ResNet-18 on ImageNet-R (Hendrycks et al., 2020a), a domain generalization benchmark that contains images of artistic renditions of 200 object classes from the original ImageNet dataset. As Table 6 shows, RandConv also improve the generalization performance on ImageNet-R and reduce the gap between the in-domain (ImageNet-200) and out-of-domain (ImageNet-R) performance." }, { "heading": "E HYPERPARAMETER SELECTIONS AND ABLATION STUDIES ON DIGITS RECOGNITION BENCHMARKS", "text": "We provide detailed experimental results for the digits recognition datasets. Table 8 shows results for different hyperameters p for RCimg1. Table 9 shows results for an ablation study on the multi-scale design for RCmix and RCimg,p=0.5. Table 10 shows results for studying the consistency loss weight λ for RCmix1-7 and RCimg1-7,p=0.5. Tables 8, 9, and 10 correspond to Fig. 
2 (a)(b)(c) in the main text respectively.\nF MORE EXAMPLES OF RANDCONV DATA AUGMENTATION\nWe provide additional examples of RandConv outputs for different convolution filter sizes in Fig. 6 and for its mixing variants at scale k = 7 with different mixing coefficients in Fig. 5. We observe that RandConv with different filter sizes retains shapes at different scales. The mixing strategy can continuously interpolate between the training domain and a randomly sampled domain." } ]
2021
null
SP:9d2df0c7b57ce7d4ee0a222ad11361172cb7cbc7
[ "The paper considers a generalization of convolutional neural networks (CNNs) to manifold-valued data such as Symmetric Positive Definite (SPD). This paper proposes a neural architecture search problem of SPD manifold networks. A SPD cell representation and corresponding candidate operation search space is introduced. They demonstrate on drone, action and emotion recognition datasets that their method is performed well compared to SPD approaches." ]
In this paper, we propose a new neural architecture search (NAS) problem of Symmetric Positive Definite (SPD) manifold networks. Unlike the conventional NAS problem, our problem requires searching for a unique computational cell called the SPD cell. This SPD cell serves as a basic building block of SPD neural architectures. An efficient solution to our problem is important to minimize the extraneous manual effort in the SPD neural architecture design. To accomplish this goal, we first introduce a geometrically rich and diverse SPD neural architecture search space for an efficient SPD cell design. Further, we model our new NAS problem using the supernet strategy, which models the architecture search problem as a one-shot training process of a single supernet. Based on the supernet modeling, we exploit a differentiable NAS algorithm on our relaxed continuous search space for SPD neural architecture search. Statistical evaluation of our method on drone, action, and emotion recognition tasks mostly provides better results than the state-of-the-art SPD networks and NAS algorithms. Empirical results show that our algorithm excels in discovering better SPD network designs and providing models that are more than 3 times lighter than those found by state-of-the-art NAS algorithms.
[]
[ { "authors": [ "P-A Absil", "Robert Mahony", "Rodolphe Sepulchre" ], "title": "Optimization algorithms on matrix manifolds", "venue": null, "year": 2009 }, { "authors": [ "Akshay Agrawal", "Brandon Amos", "Shane Barratt", "Stephen Boyd", "Steven Diamond", "J Zico Kolter" ], "title": "Differentiable convex optimization layers", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Karim Ahmed", "Lorenzo Torresani" ], "title": "Connectivity learning in multi-branch networks", "venue": "arXiv preprint arXiv:1709.09582,", "year": 2017 }, { "authors": [ "Bowen Baker", "Otkrist Gupta", "Ramesh Raskar", "Nikhil Naik" ], "title": "Accelerating neural architecture search using performance prediction", "venue": "arXiv preprint arXiv:1705.10823,", "year": 2017 }, { "authors": [ "Alexandre Barachant", "Stéphane Bonnet", "Marco Congedo", "Christian Jutten" ], "title": "Multiclass brain– computer interface classification by riemannian geometry", "venue": "IEEE Transactions on Biomedical Engineering,", "year": 2011 }, { "authors": [ "Gabriel Bender" ], "title": "Understanding and simplifying one-shot architecture", "venue": null, "year": 2019 }, { "authors": [ "Rajendra Bhatia", "John Holbrook" ], "title": "Riemannian geometry and matrix geometric means", "venue": "Linear algebra and its applications,", "year": 2006 }, { "authors": [ "Silvere Bonnabel" ], "title": "Stochastic gradient descent on riemannian manifolds", "venue": "IEEE Transactions on Automatic Control,", "year": 2013 }, { "authors": [ "Andrew Brock", "Theodore Lim", "James M Ritchie", "Nick Weston" ], "title": "Smash: one-shot model architecture search through hypernetworks", "venue": "arXiv preprint arXiv:1708.05344,", "year": 2017 }, { "authors": [ "Daniel Brooks", "Olivier Schwander", "Frédéric Barbaresco", "Jean-Yves Schneider", "Matthieu Cord" ], "title": "Riemannian batch normalization for spd neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Han Cai", "Tianyao Chen", "Weinan Zhang", "Yong Yu", "Jun Wang" ], "title": "Efficient architecture search by network transformation", "venue": "In Thirty-Second AAAI conference on artificial intelligence,", "year": 2018 }, { "authors": [ "Rudrasis Chakraborty" ], "title": "Manifoldnorm: Extending normalizations on riemannian manifolds", "venue": "arXiv preprint arXiv:2003.13869,", "year": 2020 }, { "authors": [ "Rudrasis Chakraborty", "Chun-Hao Yang", "Xingjian Zhen", "Monami Banerjee", "Derek Archer", "David Vaillancourt", "Vikas Singh", "Baba Vemuri" ], "title": "A statistical recurrent model on the manifold of symmetric positive definite matrices", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Rudrasis Chakraborty", "Jose Bouza", "Jonathan Manton", "Baba C Vemuri" ], "title": "Manifoldnet: A deep neural network for manifold-valued data with applications", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2020 }, { "authors": [ "Victor C Chen", "Fayin Li", "S-S Ho", "Harry Wechsler" ], "title": "Micro-doppler effect in radar: phenomenon, model, and simulation study", "venue": "IEEE Transactions on Aerospace and electronic systems,", "year": 2006 }, { "authors": [ "Guang Cheng", "Jeffrey Ho", "Hesamoddin Salehian", "Baba C Vemuri" ], "title": "Recursive computation of the fréchet mean on non-positively curved riemannian manifolds with applications", "venue": "In Riemannian Computing in Computer 
Vision,", "year": 2016 }, { "authors": [ "Xiangxiang Chu", "Bo Zhang", "Jixiang Li", "Qingyuan Li", "Ruijun Xu" ], "title": "Scarletnas: Bridging the gap between scalability and fairness in neural architecture", "venue": null, "year": 1908 }, { "authors": [ "Xiangxiang Chu", "Tianbao Zhou", "Bo Zhang", "Jixiang Li" ], "title": "Fair DARTS: Eliminating Unfair Advantages in Differentiable Architecture Search", "venue": "In 16th Europoean Conference On Computer Vision,", "year": 2020 }, { "authors": [ "Kalyanmoy Deb", "Amrit Pratap", "Sameer Agarwal", "TAMT Meyarivan" ], "title": "A fast and elitist multiobjective genetic algorithm: Nsga-ii", "venue": "IEEE transactions on evolutionary computation,", "year": 2002 }, { "authors": [ "Abhinav Dhall", "Roland Goecke", "Jyoti Joshi", "Karan Sikka", "Tom Gedeon" ], "title": "Emotion recognition in the wild challenge 2014: Baseline, data and protocol", "venue": "In Proceedings of the 16th international conference on multimodal interaction,", "year": 2014 }, { "authors": [ "Tingxing Dong", "Azzam Haidar", "Stanimire Tomov", "Jack J Dongarra" ], "title": "Optimizing the svd bidiagonalization process for a batch of small matrices", "venue": "In ICCS,", "year": 2017 }, { "authors": [ "Zhen Dong", "Su Jia", "Chi Zhang", "Mingtao Pei", "Yuwei Wu" ], "title": "Deep manifold learning of symmetric positive definite matrices with application to face recognition", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Thomas Elsken", "Jan-Hendrik Metzen", "Frank Hutter" ], "title": "Simple and efficient architecture search for convolutional neural networks", "venue": "arXiv preprint arXiv:1711.04528,", "year": 2017 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "arXiv preprint arXiv:1808.05377,", "year": 2018 }, { "authors": [ "Melih Engin", "Lei Wang", "Luping Zhou", "Xinwang Liu" ], "title": "Deepkspd: Learning kernel-matrix-based spd representation for fine-grained image recognition", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Zhi Gao", "Yuwei Wu", "Mehrtash Harandi", "Yunde Jia" ], "title": "A robust distance measure for similarity-based classification on the spd manifold", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2019 }, { "authors": [ "Mark Gates", "Stanimire Tomov", "Jack Dongarra" ], "title": "Accelerating the svd two stage bidiagonal reduction and divide and conquer using gpus", "venue": "Parallel Computing,", "year": 2018 }, { "authors": [ "Xinyu Gong", "Shiyu Chang", "Yifan Jiang", "Zhangyang Wang" ], "title": "Autogan: Neural architecture search for generative adversarial networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Zichao Guo", "Xiangyu Zhang", "Haoyuan Mu", "Wen Heng", "Zechun Liu", "Yichen Wei", "Jian Sun" ], "title": "Single path one-shot neural architecture search with uniform sampling", "venue": "In European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Mehrtash Harandi", "Mathieu Salzmann", "Richard Hartley" ], "title": "Dimensionality reduction on spd manifolds: The emergence of geometry-aware methods", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Mehrtash T Harandi", "Mathieu Salzmann", "Richard Hartley" ], "title": "From manifold to 
manifold: Geometryaware dimensionality reduction for spd matrices", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Zhiwu Huang", "Luc Van Gool" ], "title": "A riemannian network for spd matrix learning", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Zhiwu Huang", "Ruiping Wang", "Shiguang Shan", "Xilin Chen" ], "title": "Learning euclidean-to-riemannian metric for point-to-set classification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Zhiwu Huang", "Ruiping Wang", "Shiguang Shan", "Xianqiu Li", "Xilin Chen" ], "title": "Log-euclidean metric learning on symmetric positive definite manifold with application to image set classification", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Kirthevasan Kandasamy", "Willie Neiswanger", "Jeff Schneider", "Barnabas Poczos", "Eric P Xing" ], "title": "Neural architecture search with bayesian optimisation and optimal transport", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hermann Karcher" ], "title": "Riemannian center of mass and mollifier smoothing", "venue": "Communications on pure and applied mathematics,", "year": 1977 }, { "authors": [ "Suryansh Kumar" ], "title": "Jumping manifolds: Geometry aware dense non-rigid structure from motion", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Suryansh Kumar", "Anoop Cherian", "Yuchao Dai", "Hongdong Li" ], "title": "Scalable dense non-rigid structurefrom-motion: A grassmannian perspective", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Hanwen Liang", "Shifeng Zhang", "Jiacheng Sun", "Xingqiu He", "Weiran Huang", "Kechen Zhuang", "Zhenguo Li" ], "title": "Darts+: Improved differentiable architecture search with early stopping", "venue": null, "year": 1909 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Chenxi Liu", "Liang-Chieh Chen", "Florian Schroff", "Hartwig Adam", "Wei Hua", "Alan L Yuille", "Li FeiFei" ], "title": "Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Oriol Vinyals", "Chrisantha Fernando", "Koray Kavukcuoglu" ], "title": "Hierarchical representations for efficient architecture search", "venue": "arXiv preprint arXiv:1711.00436,", "year": 2017 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Andre Martins", "Ramon Astudillo" ], "title": "From softmax to sparsemax: A sparse model of attention and multi-label classification", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Debin Meng", "Xiaojiang Peng", "Kai Wang", "Yu Qiao" ], "title": "Frame attention networks for facial expression 
recognition in videos", "venue": "IEEE International Conference on Image Processing (ICIP),", "year": 2019 }, { "authors": [ "Maher Moakher" ], "title": "A differential geometric approach to the geometric mean of symmetric positivedefinite matrices", "venue": "SIAM Journal on Matrix Analysis and Applications,", "year": 2005 }, { "authors": [ "Meinard Müller", "Tido Röder", "Michael Clausen", "Bernhard Eberhardt", "Björn Krüger", "Andreas Weber" ], "title": "Documentation mocap database hdm05", "venue": null, "year": 2007 }, { "authors": [ "Niv Nayman", "Asaf Noy", "Tal Ridnik", "Itamar Friedman", "Rong Jin", "Lihi Zelnik" ], "title": "Xnas: Neural architecture search with expert advice", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Renato Negrinho", "Geoff Gordon" ], "title": "Deeparchitect: Automatically designing and training deep architectures", "venue": "arXiv preprint arXiv:1704.08792,", "year": 2017 }, { "authors": [ "Xavier Pennec" ], "title": "Manifold-valued image processing with spd matrices", "venue": "In Riemannian Geometric Statistics in Medical Image Analysis,", "year": 2020 }, { "authors": [ "Xavier Pennec", "Pierre Fillard", "Nicholas Ayache" ], "title": "A riemannian framework for tensor computing", "venue": "International Journal of computer vision,", "year": 2006 }, { "authors": [ "Hieu Pham", "Melody Y Guan", "Barret Zoph", "Quoc V Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "arXiv preprint arXiv:1802.03268,", "year": 2018 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the aaai conference on artificial intelligence,", "year": 2019 }, { "authors": [ "Shreyas Saxena", "Jakob Verbeek" ], "title": "Convolutional neural fabrics", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Richard Shin", "Charles Packer", "Dawn Song" ], "title": "Differentiable neural network architecture", "venue": null, "year": 2018 }, { "authors": [ "Yuan Tian", "Qin Wang", "Zhiwu Huang", "Wen Li", "Dengxin Dai", "Minghao Yang", "Jun Wang", "Olga Fink" ], "title": "Off-policy reinforcement learning for efficient and effective gan architecture search", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Oncel Tuzel", "Fatih Porikli", "Peter Meer" ], "title": "Region covariance: A fast descriptor for detection and classification", "venue": "In European conference on computer vision,", "year": 2006 }, { "authors": [ "Oncel Tuzel", "Fatih Porikli", "Peter Meer" ], "title": "Pedestrian detection via classification on riemannian manifolds", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2008 }, { "authors": [ "Tom Veniat", "Ludovic Denoyer" ], "title": "Learning time/memory-efficient deep architectures with budgeted super networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Qilong Wang", "Peihua Li", "Lei Zhang" ], "title": "G2denet: Global gaussian distribution embedding network and its application to visual recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Qilong Wang", "Jiangtao Xie", "Wangmeng Zuo", "Lei Zhang", "Peihua Li" ], "title": "Deep cnns meet 
global covariance pooling: Better representation and generalization", "venue": "arXiv preprint", "year": 2019 }, { "authors": [ "Ruiping Wang", "Huimin Guo", "Larry S Davis", "Qionghai Dai" ], "title": "Covariance discriminative learning: A natural and efficient approach to image set classification", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Bichen Wu", "Xiaoliang Dai", "Peizhao Zhang", "Yanghan Wang", "Fei Sun", "Yiming Wu", "Yuandong Tian", "Peter Vajda", "Yangqing Jia", "Kurt Keutzer" ], "title": "Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Yan Wu", "Aoming Liu", "Zhiwu Huang", "Siwei Zhang", "Luc Van Gool" ], "title": "Neural architecture search as sparse supernet", "venue": "arXiv preprint arXiv:2007.16112,", "year": 2020 }, { "authors": [ "X. Zhang", "Z. Huang", "N. Wang", "S. XIANG", "C. Pan" ], "title": "You only search once: Single shot neural architecture search via direct sparse optimization", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2020 }, { "authors": [ "Xingjian Zhen", "Rudrasis Chakraborty", "Nicholas Vogt", "Barbara B Bendlin", "Vikas Singh" ], "title": "Dilated convolutional neural networks for sequential manifold-valued data", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Xiawu Zheng", "Rongrong Ji", "Lang Tang", "Baochang Zhang", "Jianzhuang Liu", "Qi Tian" ], "title": "Multinomial distribution learning for effective neural architecture search", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "H Zhou", "M Yang", "J Wang", "W Pan" ], "title": "Bayesnas: A bayesian approach for neural architecture search", "venue": "In 36th International Conference on Machine Learning, ICML 2019,", "year": 2019 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Designing a favorable neural network architecture for a given application requires a lot of time, effort, and domain expertise. To mitigate this issue, researchers in the recent years have started developing algorithms to automate the design process of neural network architectures (Zoph & Le, 2016; Zoph et al., 2018; Liu et al., 2017; 2018a; Real et al., 2019; Liu et al., 2018b; Tian et al., 2020). Although these neural architecture search (NAS) algorithms have shown great potential to provide an optimal architecture for a given application, it is limited to handle architectures with Euclidean operations and representation. To deal with non-euclidean data representation and corresponding set of operations, researchers have barely proposed any NAS algorithms —to the best of our knowledge.\nIt is well-known that manifold-valued data representation such as symmetric positive definite (SPD) matrices have shown overwhelming accomplishments in many real-world applications such as pedestrian detection (Tuzel et al., 2006; 2008), magnetic resonance imaging analysis (Pennec et al., 2006), action recognition (Harandi et al., 2014), face recognition (Huang et al., 2014; 2015), braincomputer interfaces (Barachant et al., 2011), structure from motion (Kumar et al., 2018; Kumar, 2019), etc. Also, in applications like diffusion tensor imaging of the brain, drone imaging, samples are collected directly as SPD’s. As a result, neural network usage based on Euclidean data representation becomes inefficient for those applications. Consequently, this has led to the development of the SPD neural network (SPDNet) architectures for further improvements in these areas of research (Huang & Van Gool, 2017; Brooks et al., 2019). However, these architectures are handcrafted, so the operations or the parameters defined for these networks generally change as per the application. This motivated us to propose a new NAS problem of SPD manifold networks. A solution to this problem can reduce unwanted efforts in SPDNet design. Compared to the traditional NAS problem, our NAS problem requires a new definition of computation cell and proposal for diverse SPD candidate operation set. In particular, we model the basic architecture cell with a specific directed acyclic graph (DAG), where each node is a latent SPD representation, and each edge corresponds to a SPD candidate operation. Here, the intermediate transformations between nodes respect the geometry of the SPD manifolds.\nFor solving the suggested NAS problem, we exploit a supernet search strategy which models the architecture search problem as a one-shot training process of a supernet that comprises of a mixture\nof SPD neural architectures. The supernet modeling enables us to perform a differential architecture search on a continuous relaxation of SPD neural architecture search space, and therefore, can be solved using a gradient descent approach. Our evaluation validates that the proposed method can build a reliable SPD network from scratch. We show the results of our method on benchmark datasets that clearly show results better than handcrafted SPDNet. Our work makes the following contributions:\n• We introduce a NAS problem of SPD manifold networks that opens up a new direction of research in automated machine learning and SPD manifold learning. Based on a supernet modeling, we propose a novel differentiable NAS algorithm for SPD neural architecture search. 
Concretely, we exploit a sparsemax-based Fréchet mixture of SPD operations to introduce the sparsity that is essential for an effective differentiable search, and bi-level optimization with a manifold-based update and a convexity-based update to jointly optimize the architecture parameters and network kernel weights. • Besides the well-studied operations from existing SPDNets (Huang & Van Gool, 2017; Brooks et al., 2019; Chakraborty et al., 2020), we follow Liu et al. (2018b) to further introduce some new SPD layers, i.e., skip connection, none operation, max pooling and average pooling. Our additional set of SPD operations makes the search space more diverse for the neural architecture search algorithm to obtain more generalized SPD neural network architectures. • Evaluation on three benchmark datasets shows that our searched SPD neural architectures can outperform the existing handcrafted SPDNets (Huang & Van Gool, 2017; Brooks et al., 2019; Chakraborty et al., 2020) and the state-of-the-art NAS methods (Liu et al., 2018b; Chu et al., 2020). Notably, our searched architecture is more than 3 times lighter than those searched by the traditional NAS algorithms." }, { "heading": "2 BACKGROUND", "text": "In recent years, plenty of research work has been published in the area of NAS (Gong et al., 2019; Liu et al., 2019; Nayman et al., 2019; Guo et al., 2020). This is probably due to the success of deep learning for several applications, which has eventually led to the automation of neural architecture design. Also, improvements in the processing capabilities of machines have influenced researchers to work on this computationally expensive yet important problem. The computational cost for some of the well-known NAS algorithms is in the thousands of GPU days, which has resulted in the development of several computationally efficient methods (Zoph et al., 2018; Real et al., 2019; Liu et al., 2018a; 2017; Baker et al., 2017; Brock et al., 2017; Bender, 2019; Elsken et al., 2017; Cai et al., 2018; Pham et al., 2018; Negrinho & Gordon, 2017; Kandasamy et al., 2018; Chu et al., 2020). In this work, we propose a new NAS problem for SPD networks. We solve this problem using a supernet modeling methodology with a one-shot differentiable training process of an overparameterized supernet. Our modeling is driven by the recent progress in supernet methodology, which has shown greater potential than other NAS methodologies in terms of search efficiency. Since our work is directed towards solving a new NAS problem, we confine our discussion to the works that have greatly influenced our method, i.e., one-shot NAS methods and SPD networks.

To the best of our knowledge, there are mainly two types of one-shot NAS methods based on the architecture modeling (Elsken et al., 2018): (a) parameterized architecture (Liu et al., 2018b; Zheng et al., 2019; Wu et al., 2019; Chu et al., 2020), and (b) sampled architecture (Deb et al., 2002; Chu et al., 2019). In this paper, we adhere to the parametric modeling due to its promising results on conventional neural architectures. A majority of the previous work on NAS with a continuous search space fine-tunes the explicit features of specific architectures (Saxena & Verbeek, 2016; Veniat & Denoyer, 2018; Ahmed & Torresani, 2017; Shin et al., 2018). On the contrary, Liu et al. (2018b); Liang et al. (2019); Zhou et al. (2019); Zhang et al. (2020); Wu et al. (2020); Chu et al. (2020) provide architectural diversity for NAS with highly competitive performances.
The other part of our work focuses on SPD network architectures. There exist algorithms to develop handcrafted SPDNets (Huang & Van Gool, 2017; Brooks et al., 2019; Chakraborty et al., 2020). To automate the process of SPD network design, in this work, we choose the most promising approaches from these fields (NAS (Liu et al., 2018b), SPD networks (Huang & Van Gool, 2017)) and propose a NAS algorithm for SPD inputs. Next, we summarize the essential notions of the Riemannian geometry of SPD manifolds, followed by an introduction of some basic SPDNet operations and layers. As some of the introduced operations and layers have been well studied in the existing literature, we apply them directly to define the search space of our SPD neural architectures.

Representation and Operation: We denote the space of $n \times n$ real SPD matrices by $\mathcal{S}^n_{++}$. A real SPD matrix $X \in \mathcal{S}^n_{++}$ satisfies the property that $z^T X z > 0$ for any non-zero $z \in \mathbb{R}^n$ (Harandi et al., 2017). We denote by $T_X\mathcal{M}$ the tangent space of the manifold $\mathcal{M}$ at $X \in \mathcal{S}^n_{++}$, and $\log$ corresponds to the matrix logarithm. Let $X_1, X_2$ be any two points on the SPD manifold; then the distance between them is given by

$\delta_{\mathcal{M}}(X_1, X_2) = 0.5\,\|\log(X_1^{-\frac{1}{2}} X_2 X_1^{-\frac{1}{2}})\|_F$    (1)

There are other efficient methods to compute the distance between two points on the SPD manifold (Gao et al., 2019; Dong et al., 2017b); however, their discussion is beyond the scope of our work. Another property of the Riemannian manifold of interest to us is the local diffeomorphism of geodesics, which is a one-to-one mapping from a point on the tangent space of the manifold to the manifold (Pennec, 2020; Lackenby, 2020). To define such notions, let $X \in \mathcal{S}^n_{++}$ be the base point and $Y \in T_X\mathcal{S}^n_{++}$; then Eq. (2) associates $Y \in T_X\mathcal{S}^n_{++}$ with a point on the manifold (Pennec, 2020):

$\exp_X(Y) = X^{\frac{1}{2}} \exp(X^{-\frac{1}{2}} Y X^{-\frac{1}{2}}) X^{\frac{1}{2}} \in \mathcal{S}^n_{++}, \ \forall\, Y \in T_X$    (2)

Similarly, an inverse map is defined as $\log_X(Z) = X^{\frac{1}{2}} \log(X^{-\frac{1}{2}} Z X^{-\frac{1}{2}}) X^{\frac{1}{2}} \in T_X, \ \forall\, Z \in \mathcal{S}^n_{++}$.

1) Basic operations of SPD Network: It is well known that operations such as mean centralization, normalization, and adding bias to a batch of data are inherent performance boosters for most neural networks. In the same spirit, existing works like Brooks et al. (2019); Chakraborty (2020) use the notion of these operations for SPD or general manifold data to define analogous operations on manifolds. Below we introduce them following the work of Brooks et al. (2019).

• Batch mean, centering and bias: Given a batch of $N$ SPD matrices $\{X_i\}_{i=1}^N$, we can compute its Riemannian barycenter ($B$) as $B = \operatorname{argmin}_{X_\mu \in \mathcal{S}^n_{++}} \sum_{i=1}^N \delta^2_{\mathcal{M}}(X_i, X_\mu)$. It is sometimes referred to as the Fréchet mean (Moakher, 2005; Bhatia & Holbrook, 2006). This definition can be extended to compute the weighted Riemannian barycenter, also known as the weighted Fréchet mean (wFM). (Following Tuzel et al. (2006; 2008) and Brooks et al. (2019), we focus on the estimate of the wFM with the Karcher flow; a thorough study of the general wFM's existence and uniqueness is beyond the focus of this paper.)

$B = \operatorname{argmin}_{X_\mu \in \mathcal{S}^n_{++}} \sum_{i=1}^N w_i\,\delta^2_{\mathcal{M}}(X_i, X_\mu); \ \text{s.t.} \ w_i \ge 0 \ \text{and} \ \sum_{i=1}^N w_i = 1$    (3)

Eq. (3) can be approximated using the Karcher flow (Karcher, 1977; Bonnabel, 2013; Brooks et al., 2019) or the recursive geodesic mean (Cheng et al., 2016; Chakraborty et al., 2020).
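To make these notions concrete, the following is a minimal NumPy sketch of the AIRM distance of Eq. (1), the exponential/logarithmic maps of Eq. (2), and the Karcher-flow approximation of the wFM in Eq. (3). The helper names, the eigendecomposition-based matrix functions, and the fixed iteration count are our own illustrative choices, not part of any official implementation.

```python
import numpy as np

def sym_fn(X, fn):
    # Apply a scalar function to the eigenvalues of a symmetric matrix.
    w, V = np.linalg.eigh(X)
    return (V * fn(w)) @ V.T

def airm_distance(X1, X2):
    # delta_M(X1, X2) = 0.5 * ||log(X1^{-1/2} X2 X1^{-1/2})||_F, as in Eq. (1).
    X1_is = sym_fn(X1, lambda w: w ** -0.5)
    return 0.5 * np.linalg.norm(sym_fn(X1_is @ X2 @ X1_is, np.log), 'fro')

def exp_map(X, Y):
    # exp_X(Y) = X^{1/2} exp(X^{-1/2} Y X^{-1/2}) X^{1/2}, as in Eq. (2).
    Xs, Xis = sym_fn(X, np.sqrt), sym_fn(X, lambda w: w ** -0.5)
    return Xs @ sym_fn(Xis @ Y @ Xis, np.exp) @ Xs

def log_map(X, Z):
    # Inverse of exp_map: maps Z on the manifold to the tangent space at X.
    Xs, Xis = sym_fn(X, np.sqrt), sym_fn(X, lambda w: w ** -0.5)
    return Xs @ sym_fn(Xis @ Z @ Xis, np.log) @ Xs

def wfm_karcher(mats, weights, n_iter=10):
    # Weighted Frechet mean (Eq. 3) approximated by Karcher flow: average
    # the log-mapped samples in the tangent space of the current estimate,
    # then map the averaged tangent vector back to the manifold.
    B = mats[0]
    for _ in range(n_iter):
        T = sum(w * log_map(B, X) for w, X in zip(weights, mats))
        B = exp_map(B, T)
    return B
```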
2) Basic layers of SPD Network: Analogous to a standard CNN, methods like Huang & Van Gool (2017); Brooks et al. (2019); Chakraborty et al. (2020) design SPD layers to perform operations that respect SPD manifold constraints. Assuming $X_{k-1} \in \mathcal{S}^n_{++}$ is the input SPD matrix to the $k$-th layer, the SPD network layers are defined as follows:

• BiMap layer: This layer corresponds to a dense layer for SPD data. The BiMap layer reduces the dimension of an input SPD matrix via a transformation matrix $W_k$ as $X_k = W_k X_{k-1} W_k^T$. To ensure that $X_k$ is an SPD matrix, the matrix $W_k$ must be of full row rank.

• Batch normalization layer: To perform batch normalization after each BiMap layer, we first compute the Riemannian barycenter of the batch of SPD matrices, followed by a running-mean update step, which is a Riemannian weighted average between the batch mean and the current running mean, with weights $(1-\theta)$ and $\theta$, respectively. Once the mean is calculated, we centralize and add a bias to each SPD sample of the batch using Eq. (4) (Brooks et al., 2019), where $P$ denotes parallel transport and $I$ is the identity matrix:

Batch centering (centering around $B$): $X_i^c = P_{B \to I}(X_i) = B^{-\frac{1}{2}} X_i B^{-\frac{1}{2}}$
Biasing the batch (bias towards $G$): $X_i^b = P_{I \to G}(X_i^c) = G^{\frac{1}{2}} X_i^c G^{\frac{1}{2}}$    (4)

• ReEig layer: The ReEig layer is analogous to the ReLU-like layers present in classical ConvNets. It aims to introduce non-linearity into the SPD network. The ReEig for the $k$-th layer is defined as $X_k = U_{k-1} \max(\epsilon I, \Sigma_{k-1}) U_{k-1}^T$, where $X_{k-1} = U_{k-1} \Sigma_{k-1} U_{k-1}^T$, $I$ is the identity matrix, and $\epsilon > 0$ is a rectification threshold value. $U_{k-1}$ and $\Sigma_{k-1}$ are the orthonormal matrix and singular-value matrix, respectively, which are obtained via matrix factorization of $X_{k-1}$.

• LogEig layer: To map the manifold representation of an SPD matrix to a flat space so that Euclidean operations can be performed, the LogEig layer is introduced. The LogEig layer is defined as $X_k = U_{k-1} \log(\Sigma_{k-1}) U_{k-1}^T$, where $X_{k-1} = U_{k-1} \Sigma_{k-1} U_{k-1}^T$. The LogEig layer is used with fully connected layers to solve tasks with SPD representations.

• ExpEig layer: This layer maps the corresponding SPD representation from the flat space back to the SPD manifold space. It is defined as $X_k = U_{k-1} \exp(\Sigma_{k-1}) U_{k-1}^T$, where $X_{k-1} = U_{k-1} \Sigma_{k-1} U_{k-1}^T$.

• Weighted Riemannian pooling layer: It uses the wFM definition to compute the output of the layer. Recent methods use the recursive geodesic mean algorithm to calculate the mean (Chakraborty et al., 2020); in contrast, we use the Karcher flow algorithm (Karcher, 1977), as it is simple and widely used in practice.
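The following PyTorch-style sketch illustrates the forward passes of the layers above. We use the eigendecomposition, which coincides with the SVD for SPD inputs; parameter handling (e.g., keeping $W_k$ of full row rank during optimization) is omitted, so this is an illustration rather than the official implementation.

```python
import torch

def sym_pow(X, p):
    # Raise an SPD matrix to a real power via eigendecomposition.
    S, U = torch.linalg.eigh(X)
    return U @ torch.diag_embed(S ** p) @ U.transpose(-1, -2)

def bimap(X, W):
    # BiMap layer: X_k = W X_{k-1} W^T, with W of full row rank.
    return W @ X @ W.transpose(-1, -2)

def reeig(X, eps=1e-4):
    # ReEig layer: rectify small eigenvalues, X_k = U max(eps*I, Sigma) U^T.
    S, U = torch.linalg.eigh(X)
    return U @ torch.diag_embed(torch.clamp(S, min=eps)) @ U.transpose(-1, -2)

def batch_center_bias(X, B, G):
    # Eq. (4): parallel-transport a sample from the batch mean B to the
    # identity (centering), then from the identity to the learned bias G.
    B_is, G_s = sym_pow(B, -0.5), sym_pow(G, 0.5)
    return G_s @ (B_is @ X @ B_is) @ G_s

def logeig(X):
    # LogEig layer: project SPD matrices to the flat space.
    S, U = torch.linalg.eigh(X)
    return U @ torch.diag_embed(torch.log(S)) @ U.transpose(-1, -2)

def expeig(X):
    # ExpEig layer: map symmetric matrices back onto the SPD manifold.
    S, U = torch.linalg.eigh(X)
    return U @ torch.diag_embed(torch.exp(S)) @ U.transpose(-1, -2)
```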
" }, { "heading": "3 NEURAL ARCHITECTURE SEARCH OF SPD MANIFOLD NETWORK", "text": "As alluded to before, solving the suggested problem requires a few key changes to be introduced. Firstly, a new definition of the computation cell is required. In contrast to the computational cells designed by regular NAS algorithms like Liu et al. (2018b); Chu et al. (2020), our computational cell, which we call an SPD cell, additionally incorporates the notion of SPD manifold geometry so that SPD representations can be treated properly. On the other hand, like a regular NAS cell design, our SPD cell can either be a normal cell that returns SPD feature maps of the same width and height or a reduction cell in which the SPD feature maps are reduced by a certain factor in width and height. Secondly, solving our new NAS problem requires an appropriate and diverse SPD search space that can help the NAS method optimize for an effective SPD cell, which can then be stacked and trained to build an efficient SPD neural network architecture.

Concretely, an SPD cell is modeled by a directed acyclic graph (DAG) which is composed of nodes and edges. In our DAG, each node is a latent representation of the SPD manifold-valued data, i.e., an intermediate SPD feature map, and each edge corresponds to a valid candidate operation on the SPD manifold (see Fig. 1(a)). Each edge of an SPD cell is associated with a set of candidate SPD manifold operations ($O_{\mathcal{M}}$) that transform the SPD-valued latent representation from the source node (say $X^{(i)}_{\mathcal{M}}$) to the target node (say $X^{(j)}_{\mathcal{M}}$). We define the intermediate transformation between the nodes in our SPD cell as

$X^{(j)}_{\mathcal{M}} = \operatorname{argmin}_{X^{(j)}_{\mathcal{M}}} \sum_{i<j} \delta^2_{\mathcal{M}}\big( O^{(i,j)}_{\mathcal{M}}( X^{(i)}_{\mathcal{M}} ), X^{(j)}_{\mathcal{M}} \big),$

where $\delta_{\mathcal{M}}$ denotes the geodesic distance of Eq. (1). Generally, this transformation result corresponds to the unweighted Fréchet mean of the operations based on the predecessors, such that the mixture of all operations still resides on the SPD manifold. Note that our definition of the SPD cell ensures that each computational graph preserves the appropriate geometric structure of the SPD manifold. Equipped with the notion of the SPD cell and its intermediate transformation, we are prepared to propose our search space (§3.1), followed by the solution to our SPDNet NAS problem (§3.2) and its results (§4)." }, { "heading": "3.1 SEARCH SPACE", "text": "Our search space consists of a set of valid SPD network operations defined for the supernet search. First of all, the search space includes some existing SPD operations, e.g., the BiMap, batch normalization, ReEig, LogEig, ExpEig and weighted Riemannian pooling layers, all of which are introduced in Sec. 2. (Our search space could also include other existing SPD operations like ManifoldNorm (Chakraborty, 2020); however, a comprehensive study of them is outside our focus, which is on studying a NAS algorithm on a given promising search space.) Though those individual operations (e.g., BiMap, LogEig, ExpEig) have been explored well by existing works, different aggregations of them are still understudied, and these are essential to enrich our search space. To be specific, to enrich the search space, following the traditional NAS methods of Liu et al. (2018b); Gong et al. (2019), we apply SPD batch normalization to every SPD convolution operation (i.e., BiMap), and design three variants of convolution blocks, including one without activation (i.e., ReEig), one using post-activation and one using pre-activation (see Table 1). In addition, we introduce five new operations analogous to DARTS (Liu et al., 2018b) to enrich the search space in the context of SPD networks. These are skip normal, none normal, average pooling, max pooling and skip reduced. The effect of such diverse operation choices has not been fully explored for SPD networks. All the candidate operations are illustrated in Table (1), which lists each operation alongside its definition, and the definitions of the new operations are detailed as follows:

(a) Skip normal: It preserves the input representation and is similar to a skip connection.
(b) None normal: It corresponds to the operation that returns the identity as the output, i.e., the notion of zero in the SPD space.
(c) Max pooling: Given a set of SPD matrices, the max pooling operation first projects these samples to a flat space via a LogEig operation, where a standard max pooling operation is performed. Finally, an ExpEig operation is used to map the samples back to the SPD manifold.
(d) Average pooling: Similar to max pooling, the average pooling operation first projects the samples to the flat space using a LogEig operation, where standard average pooling is employed. To map the samples back to the SPD manifold, an ExpEig operation is used (see the sketch after this list).
(e) Skip reduced: It is similar to 'skip normal', but in contrast, it decomposes the input into small matrices to reduce the inter-dependency between channels. Our definition of the reduce operation is in line with the work of Liu et al. (2018b).
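As the forward reference in item (d) indicates, the pooling operations of items (c) and (d) can be sketched by composing the logeig/expeig helpers from the previous section with standard Euclidean pooling. The 2x2 kernel and the single-matrix input handling are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def spd_pool(X, kind='max', kernel=2):
    # Project the SPD matrix to the flat space, pool as in a classical CNN,
    # then map the pooled (still symmetric) matrix back to the SPD manifold.
    Y = logeig(X)                                   # helper sketched in Sec. 2
    Y = Y.unsqueeze(0).unsqueeze(0)                 # (1, 1, n, n) for pooling
    pool = F.max_pool2d if kind == 'max' else F.avg_pool2d
    Y = pool(Y, kernel_size=kernel).squeeze(0).squeeze(0)
    return expeig(Y)                                # back on the SPD manifold
```

Note that pooling a symmetric matrix block-wise preserves symmetry, and the final ExpEig guarantees the output is SPD of the reduced dimension.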
The newly introduced operations allow us to generate a more diverse discrete search space. As presented in Table 2, a randomly selected architecture (generally consisting of the newly introduced SPD operations) shows some improvement over SPDNet and SPDNetBN, both of which only contain conventional SPD operations. This establishes the effectiveness of the introduced rich search space." }, { "heading": "3.2 SUPERNET SEARCH", "text": "To solve the suggested new NAS problem, one of the most promising NAS methodologies is supernet modeling. While we could resort to other NAS methods to solve the problem, such as a reinforcement learning based method (Zoph & Le, 2016) or an evolution based algorithm (Real et al., 2019), in general, the supernet method models the architecture search problem as a one-shot training process of a single supernet that consists of all architectures. Based on the supernet modeling, we can search for the optimal SPD neural architecture either using parameterization of architectures or sampling of single-path architectures. In this paper, we focus on the parameterization approach, which is based on a continuous relaxation of the SPD neural architecture representation. Such an approach allows for an efficient search of architectures using a gradient descent approach. Next, we introduce our supernet search method, followed by a solution to our proposed bi-level optimization problem. Fig. 1(b) and Fig. 1(c) illustrate an overview of our proposed method.

To search for an optimal SPD architecture ($\alpha$), we optimize the over-parameterized supernet. In essence, it stacks the basic computation cells with the parameterized candidate operations from our search space in a one-shot search manner. The contribution of specific subnets to the supernet helps in deriving the optimal architecture from the supernet. Since the proposed operation search space is discrete in nature, we relax the explicit choice of an operation to make the search space continuous. To do so, we use the wFM over all possible candidate operations. Mathematically,

$\bar{O}_{\mathcal{M}}(X_{\mathcal{M}}) = \operatorname{argmin}_{X^{\mu}_{\mathcal{M}}} \sum_{k=1}^{N_e} \tilde{\alpha}_k\,\delta^2_{\mathcal{M}}\big( O^{(k)}_{\mathcal{M}}( X_{\mathcal{M}} ), X^{\mu}_{\mathcal{M}} \big); \ \text{subject to:} \ \mathbf{1}^T \tilde{\alpha} = 1, \ 0 \le \tilde{\alpha} \le 1$    (5)

where $O^{(k)}_{\mathcal{M}}$ is the $k$-th candidate operation between nodes, $X^{\mu}_{\mathcal{M}}$ is the intermediate SPD manifold mean (Eq. 3) and $N_e$ denotes the number of edges. We can compute the wFM solution either using the Karcher flow (Karcher, 1977) or the recursive geodesic mean (Chakraborty et al., 2020) algorithm. Nonetheless, we adhere to the Karcher flow algorithm, as it is widely used to calculate the wFM. (In the Appendix, we provide a comparison between the Karcher flow and the recursive geodesic mean method; a comprehensive study of this is beyond the scope of our paper.)

Algorithm 1: The proposed Neural Architecture Search of SPD Manifold Nets (SPDNetNAS)
  Require: Mixed operation $\bar{O}_{\mathcal{M}}$, parameterized by $\alpha_k$ for each edge $k \in N_e$;
  while not converged do
    Step 1: Update $\alpha$ (architecture) using the solution of Eq. (8), satisfying the additional strict convex constraint. Note that the updates on $w$ and $\tilde{w}$ (Eq. (9), Eq. (10)) should follow gradient descent on the SPD manifold;
    Step 2: Update $w$ by solving $\nabla_w E_{train}(w, \alpha)$; ensure the SPD manifold gradient is used to update $w$ (Absil et al., 2009; Huang & Van Gool, 2017; Brooks et al., 2019);
  end
  Ensure: Final architecture based on $\alpha$. Decide the operation at an edge $k$ using $\operatorname{argmax}_{o \in O_{\mathcal{M}}} \{\alpha^k_o\}$.
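To illustrate Eq. (5), the sketch below forms the mixed edge output as the weighted Fréchet mean of the candidate operations' outputs, reusing the wfm_karcher helper from Sec. 2. The random_spd helper and the example operations are hypothetical and only serve to exercise the code.

```python
import numpy as np

def random_spd(n):
    # Convenience helper: a random SPD matrix for testing.
    A = np.random.randn(n, n)
    return A @ A.T + n * np.eye(n)

def mixed_operation(X, candidate_ops, alpha_tilde, n_iter=5):
    # Eq. (5): the mixed edge output is the weighted Frechet mean of the
    # candidate operations' outputs, so the mixture stays on the manifold.
    outputs = [op(X) for op in candidate_ops]
    return wfm_karcher(outputs, alpha_tilde, n_iter)  # helper from Sec. 2

# Example: mix an identity (skip-like) op with a congruence transform.
X = random_spd(4)
W = np.random.randn(4, 4)
ops = [lambda M: M, lambda M: W @ M @ W.T + 1e-6 * np.eye(4)]
Y = mixed_operation(X, ops, alpha_tilde=[0.7, 0.3])
```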
To impose the explicit convex constraint on $\tilde{\alpha}$, we project the solution onto the probability simplex as

$\operatorname{minimize}_{\alpha} \ \|\alpha - \tilde{\alpha}\|_2^2; \ \text{subject to:} \ \mathbf{1}^T \alpha = 1, \ 0 \le \alpha \le 1$    (6)

Eq. (6) enforces the explicit constraint on the weights to supply $\alpha$ for our task, and it can easily be added as a convex layer in the framework (Agrawal et al., 2019). This projection is likely to reach the boundary of the simplex, in which case $\alpha$ becomes sparse (Martins & Astudillo, 2016). Optionally, softmax, sigmoid and other regularization methods could be employed to satisfy the convex constraint. However, Chu et al. (2020) observed that the use of softmax can cause a performance collapse and may lead to an aggregation of skip connections. While Chu et al. (2020) suggested that sigmoid can overcome the unfairness problem of softmax, it may output smoothly changing values that are hard to threshold for dropping redundant operations with non-marginal contributions to the supernet. Also, the FairDARTS (Chu et al., 2020) regularization may not preserve the constraint that the weights sum to 1. Besides, Chakraborty et al. (2020) propose a recursive statistical approach to solve the wFM with a convex constraint; however, the proposed definition does not explicitly preserve the equality constraint, and it requires re-normalization of the solution. In contrast, our approach uses the sparsemax transformation for the convex Fréchet mixture of SPD operations, with the following two advantages: 1) It preserves most of the important properties of softmax; e.g., it is simple to evaluate and cheap to differentiate (Martins & Astudillo, 2016). 2) It is able to produce sparse distributions, such that the best operation associated with each edge is more likely to make a dominant contribution to the supernet, and thus a more optimal architecture can be derived (refer to Figure 2(a), 2(b) and §4).
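Eq. (6) is the Euclidean projection onto the probability simplex, for which a simple closed form via sorting exists; this is exactly the sparsemax of Martins & Astudillo (2016). The standalone NumPy version below only illustrates the constraint; in the framework itself the projection is added as a differentiable convex layer (Agrawal et al., 2019).

```python
import numpy as np

def simplex_projection(alpha_tilde):
    # Solve Eq. (6): argmin_a ||a - alpha_tilde||_2^2  s.t. 1^T a = 1, a >= 0.
    # Closed form via sorting (sparsemax); the result is typically sparse.
    v = np.asarray(alpha_tilde, dtype=float)
    u = np.sort(v)[::-1]                            # sort in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)            # shared threshold
    return np.maximum(v - tau, 0.0)

# e.g. simplex_projection([0.9, 0.5, 0.1]) -> array([0.7, 0.3, 0. ])
```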
From Eq. (5–6), the mixing of operations between nodes is determined by the weighted combination of the alphas ($\alpha_k$) and the set of operations. This relaxation makes the search space continuous, and therefore the architecture search can be achieved by learning a set of alphas ($\alpha = \{\alpha_k, \forall k \in N_e\}$). To achieve our goal, we must simultaneously learn the contribution of the several possible operations within all the mixed operations ($w$) and the corresponding architecture $\alpha$. Consequently, for a given $w$, we can find $\alpha$ and vice versa, resulting in the following bi-level optimization problem:

$\operatorname{minimize}_{\alpha} \ E^U_{val}\big( w^{opt}(\alpha), \alpha \big); \ \text{subject to:} \ w^{opt}(\alpha) = \operatorname{argmin}_{w} E^L_{train}(w, \alpha)$    (7)

The lower-level optimization $E^L_{train}$ corresponds to the optimal weight variable learned for a given $\alpha$, i.e., $w^{opt}(\alpha)$, using a training loss. The upper-level optimization $E^U_{val}$ solves for the variable $\alpha$ given the optimal $w$ using a validation loss. This bi-level search method gives an optimal mixture of multiple small architectures. To derive each node in the discrete architecture, we retain the top-$k$ operations, i.e., those with the $k$ highest weights among all the candidate operations associated with all the previous nodes.

Bi-level Optimization: The bi-level optimization problem proposed in Eq. (7) is difficult to solve. Following the work of Liu et al. (2018b), we approximate $w^{opt}(\alpha)$ in the upper-level optimization problem to skip the inner optimization as follows:

$\nabla_{\alpha} E^U_{val}\big( w^{opt}(\alpha), \alpha \big) \approx \nabla_{\alpha} E^U_{val}\big( w - \eta \nabla_w E^L_{train}(w, \alpha), \alpha \big)$    (8)

Here, $\eta$ is the learning rate and $\nabla$ is the gradient operator. Note that the gradient-based optimization for $w$ must follow the geometry of the SPD manifold to update the structured connection weight and its corresponding SPD matrix data. Applying the chain rule to Eq. (8) gives

$\underbrace{\nabla_{\alpha} E^U_{val}( \tilde{w}, \alpha )}_{\text{first term}} - \underbrace{\eta\,\nabla^2_{\alpha, w} E^L_{train}(w, \alpha)\,\nabla_{\tilde{w}} E^U_{val}(\tilde{w}, \alpha)}_{\text{second term}}$    (9)

where $\tilde{w} = \Psi_r\big( w - \eta \tilde{\nabla}_w E^L_{train}(w, \alpha) \big)$ denotes the weight update on the SPD manifold for the forward model, and $\tilde{\nabla}_w$ and $\Psi_r$ symbolize the Riemannian gradient and the retraction operator, respectively. The second term in Eq. (9) involves second-order differentials with very high computational complexity; hence, using the finite approximation method, the second term of Eq. (9) reduces to

$\nabla^2_{\alpha, w} E^L_{train}(w, \alpha)\,\nabla_{\tilde{w}} E^U_{val}(\tilde{w}, \alpha) = \big( \nabla_{\alpha} E^L_{train}(w^+, \alpha) - \nabla_{\alpha} E^L_{train}(w^-, \alpha) \big) / 2\delta$    (10)

where $w^{\pm} = \Psi_r(w \pm \delta \tilde{\nabla}_{\tilde{w}} E^U_{val}(\tilde{w}, \alpha))$ and $\delta$ is a small number set to $0.01 / \|\nabla_{\tilde{w}} E^U_{val}(\tilde{w}, \alpha)\|_2$. Though the structure of the bi-level optimization is the same as in DARTS (Liu et al., 2018b), there are some key differences. Firstly, the updates on the manifold-valued kernel weights are constrained to manifolds, which ensures that the feature maps at every intermediate layer are SPDs. For concrete derivations of back-propagation for SPD network layers, refer to the work of Huang & Van Gool (2017). Secondly, the update on the aggregation weights of the involved SPD operations needs to satisfy an additional strict convex constraint, which is enforced as part of the optimization problem. The pseudo code of our method is outlined in Algorithm (1).
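The following sketch assembles Eqs. (8)–(10) into a single routine. The derivative callbacks and the retraction are assumptions standing in for autograd and the SPD manifold machinery ($\Psi_r$ and the Riemannian gradient), so this is a schematic recipe rather than a drop-in optimizer.

```python
import numpy as np

def arch_gradient(alpha, w, d_train_w, d_train_alpha, d_val_w, d_val_alpha,
                  eta, retract):
    """Approximate dE_val/dalpha following Eqs. (8)-(10).

    d_train_w(w, a), d_train_alpha(w, a), d_val_w(w, a), d_val_alpha(w, a)
    return the respective (Riemannian) gradients, and retract(w, step)
    stands in for the manifold retraction Psi_r. These callbacks are
    assumptions for the sketch.
    """
    # Eq. (8): a single virtual (Riemannian) SGD step gives w_tilde.
    w_tilde = retract(w, -eta * d_train_w(w, alpha))
    # First term of Eq. (9): direct gradient of the validation loss.
    first = d_val_alpha(w_tilde, alpha)
    # Eq. (10): finite differences replace the expensive second-order term.
    g = d_val_w(w_tilde, alpha)
    delta = 0.01 / (np.linalg.norm(g) + 1e-12)
    w_plus, w_minus = retract(w, delta * g), retract(w, -delta * g)
    second = (d_train_alpha(w_plus, alpha)
              - d_train_alpha(w_minus, alpha)) / (2 * delta)
    return first - eta * second                     # Eq. (9)
```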
" }, { "heading": "4 EXPERIMENTS AND RESULTS", "text": "To keep the experimental evaluation consistent with the previously proposed SPD networks (Huang & Van Gool, 2017; Brooks et al., 2019), we use the RADAR (Chen et al., 2006), HDM05 (Müller et al., 2007), and AFEW (Dhall et al., 2014) datasets. For SPDNetNAS, we first optimize the supernet on the training/validation sets, and then prune it with the best operation for each edge. Finally, we train the optimized architecture from scratch to document the results. For both of these stages, we consider the same normal and reduction cells. A cell receives preprocessed inputs, where the preprocessing is performed using a fixed BiMap layer to bring all inputs to the same initial dimension. All architectures are trained with a batch size of 30. The learning rate ($\eta$) for the RADAR, HDM05, and AFEW datasets is set to 0.025, 0.025 and 0.05, respectively. Besides, we conducted experiments where we select an architecture using a random search path (SPDNetNAS (R)) to justify whether our search space with the introduced SPD operations can derive meaningful architectures. We refer to SPDNet (Huang & Van Gool, 2017), SPDNetBN (Brooks et al., 2019), and ManifoldNet (Chakraborty et al., 2020) for comparison against handcrafted SPD networks. SPDNet and SPDNetBN are evaluated using their original implementations. We follow the video classification setup of Chakraborty et al. (2020) to evaluate ManifoldNet on AFEW. It is non-trivial to adapt ManifoldNet to RADAR and HDM05, as ManifoldNet requires SPD features with multiple channels, and these can hardly be obtained for either of the two datasets. For comparison against Euclidean NAS methods, we use DARTS (Liu et al., 2018b) and FairDARTS (Chu et al., 2020) by treating the logarithm maps of the SPDs as Euclidean data in their official implementations with the default setup. We observed that using raw SPDs as input to the Euclidean NAS algorithms degrades their performance.

a) Drone Recognition: For this task, we used the RADAR dataset from Chen et al. (2006). The synthetic setting for this dataset is composed of radar signals, where each signal is split into windows of length 20, resulting in a 20x20 covariance matrix for each window (one radar data point). The synthesized dataset consists of 1000 data points per class. Given the 20x20 input covariance matrices, our reduction cell reduces them to 10x10 matrices, followed by a normal cell to provide complexity to our network. Following Brooks et al. (2019), we assign 50%, 25%, and 25% of the dataset to the training, validation, and test sets, respectively. The Euclidean NAS algorithms are evaluated on the Euclidean map of the input; for direct SPD input, the performance of DARTS (95.86%) and FairDARTS (92.26%) is worse, as expected. For this dataset, our algorithm takes 1 CPU day of search time to provide the SPD architecture. Training and validation take 9 CPU hours for 200 epochs (for details on the choice of CPU rather than GPU, see the Appendix). Test results on this dataset are provided in Table (2), which clearly shows the benefit of our method. The statistics show that our NAS algorithm provides an efficient architecture with far fewer parameters (more than 140 times fewer) than state-of-the-art Euclidean NAS on SPD manifold-valued data. The normal and reduction cells obtained on this dataset are shown in Fig. 2(b).

b) Action Recognition: For this task, we used the HDM05 dataset (Müller et al., 2007), which contains 130 action classes; yet, for consistency with previous work (Brooks et al., 2019), we used 117 classes for the performance comparison. This dataset has 3D coordinates of 31 joints per frame. Following previous works (Harandi et al., 2017; Huang & Van Gool, 2017), we model an action for a sequence using a 93x93 joint covariance matrix. The dataset has 2083 SPD matrices distributed among all 117 classes. Similar to the previous task, we split the dataset into 50%, 25%, and 25% for training, validation, and testing. Here, our reduction cell is designed to reduce the matrix dimension from 93 to 30 for a legitimate comparison against Brooks et al. (2019). To search for the best architecture, we ran our algorithm for 50 epochs (3 CPU days). Figure 2(b) shows the final cell architecture that was selected based on the validation performance. The optimal architecture is trained from scratch for 100 epochs, which took approximately 16 CPU hours. The test accuracy achieved on this dataset is provided in Table (2). The statistics clearly show that our models, despite being lighter, perform better than the NAS models and the handcrafted SPDNets. The inferior results of the NAS models show that the use of SPD layers respecting SPD geometry is crucial for SPD data analysis.

c) Emotion Recognition: We used the AFEW dataset (Dhall et al., 2014) to evaluate the transferability of our searched architecture for emotion recognition. This dataset has 1345 videos of facial expressions classified into 7 distinct classes. To train on the video frames directly, we stack all the handcrafted SPDNets and our searched SPDNet on top of a convolutional network (Meng et al., 2019), using its official implementation. For ManifoldNet, we compute a 64x64 spatial covariance matrix for each frame on the intermediate CNN features of size 64x56x56 (channels, height, width); such covariance descriptors can be formed as sketched below.
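The following is a sketch of the covariance descriptors mentioned here: given T observations of dimension d (the 56x56 spatial positions of the 64 feature channels for the spatial case, or the 100-dimensional per-frame projections for the temporal case described next), the covariance is computed as below. The small ridge term is our own addition to keep the matrix strictly positive definite.

```python
import torch

def covariance_descriptor(feats, ridge=1e-4):
    # feats: (T, d) observations, e.g., 56*56 positions x 64 channels for
    # the spatial case, or T frames x 100 dims for the temporal case.
    mu = feats.mean(dim=0, keepdim=True)
    Xc = feats - mu                                  # center the observations
    cov = Xc.T @ Xc / max(feats.shape[0] - 1, 1)
    return cov + ridge * torch.eye(feats.shape[1])   # d x d SPD matrix
```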
(2020) to first apply a single wFM layer with kernel size 5, stride 3 and 8 channels, followed by three temporal wFM layers of kernel size 3 and stride 2, with the channels being 1, 4, 8 respectively. We closely follow the official implementation of ManifoldNet 5 for the wFM layers and adapt the code to our specific task. Since SPDNet, SPDNetBN and our SPDNetNAS require a single channel SPD matrix as input, we use the final 512 dimensional vector extracted from the covolutional network, project it using a dense layer to a 100 dimensional feature vector and compute a 100× 100 temporal covariance matrix. To study the transferability of our algorithm, we evaluate its searched architecture on RADAR and HDM05. In addition, we evaluate DARTS and FairDARTS directly on the video frames of AFEW. Table (3) reports the evaluations results. As we can observe, the transferred architectures can handle the new dataset quite convincingly, and their test accuracies are better than those of the existing SPDNets and the Euclidean NAS algorithms. In Appendix, we present results of competing methods and our searched models on the raw SPD features of AFEW.\n4For details on the choice of CPU rather than GPU, see appendix 5https://github.com/jjbouza/manifold-net-vision\nd) Ablation study:\nLastly, we conducted some ablation study to realize the effect of probability simplex constraint (sparsemax) on our suggested Fréchet mixture of SPD operations. Although in Fig. 2(a) we show better probability weight distribution with sparsemax, Table(4) shows that\nit performs better empirically as well on both RADAR and HDM05 compared to the softmax and sigmoid cases. Therefore, SPD architectures derived using the sparsemax is observed to be better.\ne) Statistical comparison under same model complexity: We compare the statistical performance of our method against the other competing methods under similar model sizes. Table 5 show the results obtained on the RADAR dataset. One key point to note here is that when we increase the number of parameters in SPDNet and SPDNetBN, we observe a very severe degradation in the performance accuracy —mainly because the network starts overfitting rapidly. The performance degradation is far more severe for the HDM05 dataset with SPDNet (1.047MB) performing 0.7619% and SPDNetBN (1.082MB) performing 1.45% and hence, is not reported in the table below. That further indicates the ability of SPDNetNAS to generalize better and avoid overfitting despite the larger model size.\nSimilarly, we experimented on the AFEW dataset. To have a fair comparison against the related method like ManifoldNet, whose model size is about (76MB), we must reduce the model size accordingly. ManifoldNet model size is large mainly due to multiple final dense fully connected layers. Hence, to reduce the model size, we decreased the number of FC layers. The performance result with comparable model sizes on the AFEW dataset is shown in Table 5. Again, we can infer that our SPDNetNAS achieves a significant performance improvement over the others." }, { "heading": "5 CONCLUSION AND FUTURE DIRECTION", "text": "In this work, we present a neural architecture search problem of SPD manifold networks. To solve it, a SPD cell representation and corresponding candidate operation search space is introduced. A parameterized supernet search method is employed to explore the relaxed continuous SPD search space following a bi-level optimization problem with probability simplex constraint for effective SPD network design. 
The solution to our proposed problem using back-propagation is carefully crafted so that the weight updates follow the geometry of the SPD manifold. Quantitative results on the benchmark datasets show a commendable performance gain over handcrafted SPD networks and Euclidean NAS algorithms. Additionally, we demonstrate that the learned SPD architecture is much lighter than other NAS-based architectures, and that it is transferable to other datasets as well.

Our work provides an architecture search methodology for scenarios where the acquired input data are SPDs, for example, diffusion tensor imaging for medical applications, drone recognition, etc. In addition, our method offers a paradigm to automate neural architecture design for scenarios that require second-order representations/poolings for robust visual recognition (e.g., Wang et al. (2017); Engin et al. (2018); Wang et al. (2019)). Accordingly, we encourage more future works to pursue these two directions. It would also be fairly interesting to extend our proposed method to sequential manifold-valued data (Zhen et al., 2019; Chakraborty et al., 2018)." }, { "heading": "A ADDITIONAL EXPERIMENTAL ANALYSIS", "text": "" }, { "heading": "A.1 EFFECT OF MODIFYING PREPROCESSING LAYERS FOR MULTIPLE DIMENSIONALITY REDUCTION", "text": "Unlike the work of Huang & Van Gool (2017) on the SPD network, where multiple transformation matrices are applied at multiple layers to reduce the dimension of the input data, our reduction cell presented in the main paper performs the reduction in one step. For example, for the HDM05 dataset (Müller et al., 2007), the authors of SPDNet (Huang & Van Gool, 2017) apply 93x70, 70x50, and 50x30 transformation matrices to reduce the dimension of the input matrix; on the contrary, we reduce the dimension in one step from 93 to 30, which is in line with the work of Brooks et al. (2019).

To study the behaviour of our method under a multi-step dimensionality reduction pipeline on HDM05, we use the preprocessing layers to perform dimensionality reduction. To be precise, we consider a preprocessing step that reduces the dimension from 93 to 70 to 50, and then a reduction cell that reduces the dimension from 50 to 24. This modification has the advantage that it reduces the search time from 3 CPU days to 2.5 CPU days and, in addition, provides a performance gain (see Table (6)). The normal and reduction cells for the multiple dimension reduction are shown in Figure (3).

Table 6: Results of modifying the preprocessing layers for multiple dimensionality reduction on HDM05
Preprocess dim reduction | Cell dim reduction | SPDNetNAS | Search time
NA | 93->30 | 68.74% ± 0.93 | 3 CPU days
93->70->50 | 50->24 | 69.41% ± 0.13 | 2.5 CPU days

Figure 3: (a) Normal cell and (b) reduction cell for multiple dimensionality reduction. (The selected normal cell consists of WeightedPooling_normal and Skip_normal operations; the selected reduction cell consists of AvgPooling2_reduced and MaxPooling_reduced operations.)" }, { "heading": "A.2 EFFECT OF ADDING NODES TO THE CELL", "text": "The experiments presented in the main paper use N = 5 nodes per cell, which includes two input nodes, one output node, and two intermediate nodes. To further analyze our design choice, we added nodes to the cell. Such an analysis can help us study the critical behaviour of our cell design, i.e., whether adding intermediate nodes can improve the performance, and how it affects the computational complexity of our algorithm.
To perform this experimental analysis, we used the HDM05 dataset (Müller et al., 2007). We added one extra intermediate node (N = 6) to the cell design. We observe that we converge towards an architecture design that is very similar in terms of operations (see Figure 4). The evaluation results shown in Table (7) lead us to deduce that adding more intermediate nodes increases the number of channels for the output node, subsequently leading to increased complexity and almost double the computation time." }, { "heading": "A.3 EFFECT OF ADDING MULTIPLE CELLS", "text": "In our paper, we stack 1 normal cell over 1 reduction cell for all the experiments. For a more extensive analysis of the proposed method, we conducted training experiments by stacking multiple cells, which is in line with the experiments conducted by Liu et al. (2018b). We then transfer the optimized architectures from the single-cell search directly to the multi-cell architectures for training. Hence, the search time for all our experiments is the same as for a single-cell search, i.e., 3 CPU days. Results for this experiment are provided in Table 8. The first row in the table shows the performance of the single-cell model, while the second and third rows show the performance with multi-cell stacking. Remarkably, by stacking multiple cells, our proposed SPDNetNAS outperforms SPDNetBN (Brooks et al., 2019) by a large margin (about 8% absolute, i.e., about 12% relative improvement)." }, { "heading": "A.4 AFEW PERFORMANCE COMPARISON ON RAW SPD FEATURES", "text": "In addition to the evaluation on CNN features in the main paper, we also use the raw SPD features (extracted from grayscale video frames) from Huang & Van Gool (2017); Brooks et al. (2019) to compare the competing methods. To be specific, each frame is normalized to 20x20, and each video is then represented using a 400x400 covariance matrix (Wang et al., 2012; Huang & Van Gool, 2017). Table 9 summarizes the results. As we can see, the transferred architectures can handle the new dataset quite convincingly. For the RADAR model transfer, the test accuracy is comparable to the best SPD network method; for the HDM05 model transfer, the test accuracy is much better than that of the existing SPD networks." }, { "heading": "A.5 DERIVED CELL ARCHITECTURE USING SIGMOID ON FRÉCHET MIXTURE OF SPD OPERATION", "text": "Figure 5(a) and Figure 5(b) show the cell architectures obtained using softmax and sigmoid, respectively, on the Fréchet mixture of SPD operations. It can be observed that they contain relatively more skip and pooling operations than the sparsemax case (see Figure 2(b)). In contrast to softmax and sigmoid, the SPD cell obtained using sparsemax is composed of more convolution-type operations in the architecture, which in fact is important for a better representation of the data." }, { "heading": "A.6 COMPARISON BETWEEN KARCHER FLOW AND RECURSIVE APPROACH FOR WEIGHTED FRÉCHET MEAN", "text": "The proposed NAS algorithm is based on Fréchet mean computations: both the weighted mixture of operations between nodes and the derivation of intermediate nodes compute the Fréchet mean of a set of points on the SPD manifold. It is well known that there is no closed-form solution when the number of input samples is greater than 2 (Brooks et al., 2019). We can only compute an approximation using the well-known Karcher flow algorithm (Brooks et al., 2019) or the recursive geodesic mean (Chakraborty et al., 2020). For comparison, we replace the Karcher flow algorithm we use with the recursive approach under our SPDNetNAS framework.
Table 10 summarizes the comparison between these two algorithms. We observe a considerable decrease in accuracy on both the training and test sets when using the recursive method, showing that the Karcher flow algorithm suits our proposed algorithm better." }, { "heading": "A.7 CONVERGENCE CURVE ANALYSIS", "text": "Figure 6(a) shows the validation curve, which almost saturates at 200 epochs, demonstrating the stability of our training process. The first bar of Figure 6(b) shows the test accuracy comparison when only 10% of the data is used for training our architecture, which demonstrates the effectiveness of our algorithm. Further, we study this for our SPDNetNAS architecture by taking 10%, 33%, and 80% of the data for training. Figure 6(b) clearly shows the superiority of our SPDNetNAS algorithm over the handcrafted SPD networks.

Figure 7(a) and Figure 7(b) show the convergence curves of our loss function on the RADAR and HDM05 datasets, respectively. For the RADAR dataset, the validation and training losses follow a similar trend and converge at 200 epochs. For the HDM05 dataset, we observe that the training curve plateaus after 60 epochs, whereas the validation curve takes 100 epochs to provide stable performance. Additionally, we noticed a reasonable gap between the training loss and the validation loss for the HDM05 dataset (Müller et al., 2007). A similar pattern of a convergence gap between the validation loss and the training loss has been observed in the work of Huang & Van Gool (2017)." }, { "heading": "A.8 WHY WE PREFERRED TO SIMULATE OUR EXPERIMENTS ON CPU RATHER THAN GPU?", "text": "When dealing with SPD matrices, we need to carry out complex computations. These computations are performed to make sure that our transformed representations and the corresponding operations respect the underlying manifold structure. In our study, we analyzed SPD matrices with the Affine Invariant
We first perform a LogEig map on the SPD matrices to project them to the Euclidean space. Next, we perform average and max pooling on these Euclidean matrices similar to classical convolutional neural networks. We further perform an ExpEig map to project the Euclidean matrices back on the SPD manifold. The diagram shown in Figure 9 is inspired by Huang & Van Gool (2017) work. The kernel size of AveragePooling reduced and MaxPooling reduced is set to 2 or 4 for all experiments according to the specific dimensionality reduction factors." }, { "heading": "B.3 SKIP REDUCED", "text": "Following Liu et al. (2018b), we defined an analogous of Skip operation on a single channel for the reduced cell (Figure 10). We start by using a BiMap layer —equivalent to Conv in Liu et al. (2018b), to map the input channel to an SPD whose space dimension is half of the input dimension. We further perform an SVD decomposition on the two SPDs followed by concatenating the Us, Vs and Ds obtained from SVD to block diagonal matrices. Finally, we compute the output by multiplying the block diagonal U, V and D computed before." }, { "heading": "B.4 MIXED OPERATION ON SPDS", "text": "In Figure 11 we provide an intuition of the mixed operation we have proposed in the main paper. We consider a very simple base case of three nodes, two input nodes (1 and 2) and one output\nnode (node 3). The goal is to compute the output node 3 from input nodes 1 and 2. We perform a candidate set of operations on the input node, which correspond to edges between the nodes (here two for simplicity). Each operation has a weight αi j where i corresponds to the node index and j is the candidate operation identifier. In Figure 11 below i and j ∈ {1, 2} and α1 = {α1 1, α1 2} , α2 = {α2 1, α2 2} . α’s are optimized as a part of the bi-level optimization procedure proposed in the main paper. Using these alpha’s, we perform a channel-wise weighted Fréchet mean (wFM) as depicted in the figure below. This effectively corresponds to a mixture of the candidate operations. Note that the alpha’s corresponding to all channels of a single operation are assumed to be the same. Once the weighted Fréchet means have been computed for nodes 1 and 2, we perform a channel-wise concatenation on the outputs of the two nodes, effectively doubling the number of channels in node 3.\n1 1 wFMα1wFMα1\nα 1_1\nα 2_1\nα1_2\nα2_2 wFMα2\nConcat\nwFMα2\nop1\nop1\nop2\nop2 2 2\n3 3" }, { "heading": "C DIFFERENTIABLE CONVEX LAYER FOR SPARSEMAX OPTIMIZATION", "text": "" } ]
2020
null
SP:eb8d96fd5cd18569cfa519c5a09af90ea272d533
[ "This paper explores the knowledge distillation problem in object detection. It claims that the failure of knowledge distillation in object detection is mainly caused by the imbalance between pixels of foreground and background, and the relation distillation between different pixels. The authors then propose non-local distillation to tackle the problem. Extensive experiments are conducted on MS COCO and verify the effectiveness of the proposed method. " ]
Knowledge distillation, in which a student model is trained to mimic a teacher model, has been proved as an effective technique for model compression and model accuracy boosting. However, most knowledge distillation methods, designed for image classification, have failed on more challenging tasks, such as object detection. In this paper, we suggest that the failure of knowledge distillation on object detection is mainly caused by two reasons: (1) the imbalance between pixels of foreground and background and (2) lack of distillation on the relation between different pixels. Observing the above reasons, we propose attention-guided distillation and non-local distillation to address the two problems, respectively. Attention-guided distillation is proposed to find the crucial pixels of foreground objects with attention mechanism and then make the students take more effort to learn their features. Non-local distillation is proposed to enable students to learn not only the feature of an individual pixel but also the relation between different pixels captured by non-local modules. Experiments show that our methods achieve excellent AP improvements on both one-stage and two-stage, both anchor-based and anchor-free detectors. For example, Faster RCNN (ResNet101 backbone) with our distillation achieves 43.9 AP on COCO2017, which is 4.1 higher than the baseline. Codes have been released on Github†.
[ { "affiliations": [], "name": "EFFICIENT DETECTORS" }, { "affiliations": [], "name": "Linfeng Zhang" }, { "affiliations": [], "name": "Kaisheng Ma" } ]
[ { "authors": [ "Sungsoo Ahn", "Shell Xu Hu", "Andreas Damianou", "Neil D Lawrence", "Zhenwen Dai" ], "title": "Variational information distillation for knowledge transfer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Mohammad Farhadi Bajestani", "Yezhou Yang" ], "title": "Tkd: Temporal knowledge distillation for active perception", "venue": "In The IEEE Winter Conference on Applications of Computer Vision,", "year": 2020 }, { "authors": [ "Zhaowei Cai", "Nuno Vasconcelos" ], "title": "Cascade r-cnn: High quality object detection and instance segmentation", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1939 }, { "authors": [ "Guobin Chen", "Wongun Choi", "Xiang Yu", "Tony Han", "Manmohan Chandraker" ], "title": "Learning efficient object detection models with knowledge distillation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Kai Chen", "Jiaqi Wang", "Jiangmiao Pang", "Yuhang Cao", "Yu Xiong", "Xiaoxiao Li", "Shuyang Sun", "Wansen Feng", "Ziwei Liu", "Jiarui Xu" ], "title": "Mmdetection: Open mmlab detection toolbox and benchmark", "venue": null, "year": 1906 }, { "authors": [ "Jang Hyun Cho", "Bharath Hariharan" ], "title": "On the efficacy of knowledge distillation", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR, pp", "year": 2009 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In NAACL,", "year": 2018 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "arXiv preprint arXiv:1803.03635,", "year": 2018 }, { "authors": [ "Shiming Ge", "Shengwei Zhao", "Chenyu Li", "Jia Li" ], "title": "Low-resolution face recognition in the wild via selective knowledge distillation", "venue": "IEEE Transactions on Image Processing,", "year": 2018 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Byeongho Heo", "Jeesoo Kim", "Sangdoo Yun", "Hyojin Park", "Nojun Kwak", "Jin Young Choi" ], "title": "A comprehensive overhaul of feature distillation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "In NeurIPS,", "year": 2014 }, { "authors": [ "Yuenan Hou", "Zheng Ma", "Chunxiao Liu", "Chen Change Loy" ], "title": "Learning lightweight lane detection cnns by self attention distillation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Andrew Howard", "Mark Sandler", "Grace Chu", "Liang-Chieh Chen", "Bo Chen", "Mingxing Tan", "Weijun Wang", "Yukun Zhu", "Ruoming Pang", "Vijay 
Vasudevan" ], "title": "Searching for mobilenetv3", "venue": null, "year": 1905 }, { "authors": [ "Han Hu", "Jiayuan Gu", "Zheng Zhang", "Jifeng Dai", "Yichen Wei" ], "title": "Relation networks for object detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Forrest N Iandola", "Song Han", "Matthew W Moskewicz", "Khalid Ashraf", "William J Dally", "Kurt Keutzer" ], "title": "Squeezenet: Alexnet-level accuracy with 50x fewer parameters and¡ 0.5 mb model size", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Quanquan Li", "Shengying Jin", "Junjie Yan" ], "title": "Mimicking very efficient network for object detection", "venue": "In Proceedings of the ieee conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Yifan Liu", "Ke Chen", "Chris Liu", "Zengchang Qin", "Zhenbo Luo", "Jingdong Wang" ], "title": "Structured knowledge distillation for semantic segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Zhuang Liu", "Mingjie Sun", "Tinghui Zhou", "Gao Huang", "Trevor Darrell" ], "title": "Rethinking the value of network pruning", "venue": "arXiv preprint arXiv:1810.05270,", "year": 2018 }, { "authors": [ "Xin Lu", "Buyu Li", "Yuxin Yue", "Quanquan Li", "Junjie Yan" ], "title": "Grid r-cnn", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Seyed-Iman Mirzadeh", "Mehrdad Farajtabar", "Ang Li", "Hassan Ghasemzadeh" ], "title": "Improved knowledge distillation via teacher assistant: Bridging the gap between student and teacher", "venue": null, "year": 1902 }, { "authors": [ "Hossein Mobahi", "Mehrdad Farajtabar", "Peter L Bartlett" ], "title": "Self-distillation amplifies regularization in hilbert space", "venue": "arXiv preprint arXiv:2002.05715,", "year": 2020 }, { "authors": [ "Markus Nagel", "Mart van Baalen", "Tijmen Blankevoort", "Max Welling" ], "title": "Data-free quantization through weight equalization and bias correction", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Wonpyo Park", "Dongju Kim", "Yan Lu", "Minsu Cho" ], "title": "Relational knowledge distillation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Kopf", "Edward Yang", "Zachary DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", 
"Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Adriana Romero", "Nicolas Ballas", "Samira Ebrahimi Kahou", "Antoine Chassang", "Carlo Gatta", "Yoshua Bengio" ], "title": "Fitnets: Hints for thin deep nets", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical image computing and computerassisted intervention,", "year": 2015 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Victor Sanh", "Lysandre Debut", "Julien Chaumond", "Thomas Wolf" ], "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "venue": null, "year": 1910 }, { "authors": [ "Frederick Tung", "Greg Mori" ], "title": "Similarity-preserving knowledge distillation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Tao Wang", "Li Yuan", "Xiaopeng Zhang", "Jiashi Feng" ], "title": "Distilling object detectors with fine-grained feature imitation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Canwen Xu", "Wangchunshu Zhou", "Tao Ge", "Furu Wei", "Ming Zhou" ], "title": "Bert-of-theseus: Compressing bert by progressive module replacing", "venue": "arXiv preprint arXiv:2002.02925,", "year": 2020 }, { "authors": [ "Ze Yang", "Shaohui Liu", "Han Hu", "Liwei Wang", "Stephen Lin" ], "title": "Reppoints: Point set representation for object detection", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Junho Yim", "Donggyu Joo", "Jihoon Bae", "Junmo Kim" ], "title": "A gift from knowledge distillation: Fast optimization, network minimization and transfer learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Li Yuan", "Francis EH Tay", "Guilin Li", "Tao Wang", "Jiashi Feng" ], "title": "Revisit knowledge distillation: a teacher-free framework", "venue": null, "year": 1909 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Hongkai Zhang", "Hong Chang", "Bingpeng Ma", "Naiyan Wang", "Xilin Chen" ], "title": "Dynamic R-CNN: Towards high quality object detection via dynamic training", "venue": "arXiv preprint arXiv:2004.06002,", "year": 2020 }, { "authors": [ "Linfeng Zhang", "Jiebo Song", "Anni Gao", 
"Jingwei Chen", "Chenglong Bao", "Kaisheng Ma" ], "title": "Be your own teacher: Improve the performance of convolutional neural networks via self distillation", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October", "year": 2019 }, { "authors": [ "Linfeng Zhang", "Zhanhong Tan", "Jiebo Song", "Jingwei Chen", "Chenglong Bao", "Kaisheng Ma" ], "title": "Scan: A scalable neural networks framework towards compact and efficient models", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Linfeng Zhang", "Muzhou Yu", "Tong Chen", "Zuoqiang Shi", "Chenglong Bao", "Kaisheng Ma" ], "title": "Auxiliary training: Towards accurate and robust models", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Tianyun Zhang", "Shaokai Ye", "Kaiqi Zhang", "Jian Tang", "Wujie Wen", "Makan Fardad", "Yanzhi Wang" ], "title": "A systematic dnn weight pruning framework using alternating direction method of multipliers", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Aojun Zhou", "Anbang Yao", "Yiwen Guo", "Lin Xu", "Yurong Chen" ], "title": "Incremental network quantization: Towards lossless cnns with low-precision weights", "venue": "arXiv preprint arXiv:1702.03044,", "year": 2017 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Learning deep features for discriminative localization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Chenchen Zhu", "Yihui He", "Marios Savvides" ], "title": "Feature selective anchor-free module for singleshot object detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nRecently, excellent breakthrough in various domains has been achieved with the success of deep learning (Ronneberger et al., 2015; Devlin et al., 2018; Ren et al., 2015). However, the most advanced deep neural networks always consume a large amount of computation and memory, which has limited their deployment in edge devices such as self-driving cars and mobile phones. To address this problem, abundant techniques are proposed, including pruning (Han et al., 2016; Zhang et al., 2018; Liu et al., 2018; Frankle & Carbin, 2018), quantization (Nagel et al., 2019; Zhou et al., 2017), compact model design (Sandler et al., 2018; Howard et al., 2019; Ma et al., 2018; Iandola et al., 2016) and knowledge distillation (Hinton et al., 2014; Buciluǎ et al., 2006). Knowledge distillation, which is also known as teacher-student learning, aims to transfer the knowledge of an over-parameterized teacher to a\nlightweight student. Since the student is trained to mimic the logits or features of the teacher, the student can inherit the dark knowledge from the teacher, and thus often achieves much higher accuracy. Due to its simplicity and effectiveness, knowledge distillation has become a popular technique for both model compression and model accuracy boosting. ∗Corresponding author †https://github.com/ArchipLab-LinfengZhang/Object-Detection-Knowledge-Distillation-ICLR2021\nAs one of the most crucial challenges in computer vision, object detection has an urgent requirement of both accurate and efficient models. Unfortunately, most of the existing knowledge distillation methods in computer vision are designed for image classification and usually leads to trivial improvements on object detection (Li et al., 2017). In this paper, we impute the failure of knowledge distillation on object detection to the following two issues, which will be solved later, respectively.\nImbalance between foreground and background. In an image to be detected, the background pixels are often more overwhelming than the pixels of the foreground objects. However, in previous knowledge distillation, the student is always trained to mimic the features of all pixels with the same priority. As a result, students have paid most of their attention to learning background pixels features, which suppresses student’s learning on features of the foreground objects. Since foreground pixels are more crucial in detection, the imbalance hurts the performance of knowledge distillation severely. To overcome this obstacle, we propose the attention-guided distillation which distills only the crucial foreground pixels. Since the attention map can reflect the position of the important pixels (Zhou et al., 2016), we adopt the attention map as the mask for knowledge distillation. Concretely, the pixel with a higher attention value is regarded as a pixel of a foreground object and then is learned by the student model with a higher priority. Compared with the previous binary mask method (Wang et al., 2019), the mask generated by attention maps in our methods is more fine-grained and requires no additional supervision. Compared with the previous attention-based distillation methods (Zagoruyko & Komodakis, 2017), the attention map in our methods is not only utilized as the information to be distilled but also utilized as the mask signal for feature distillation.\nLack of distillation on relation information. 
It is generally acknowledged that the relation between different objects contains valuable information in object detection. Recently, many researchers have successfully improved the performance of detectors by enabling them to capture and make use of these relations, such as non-local modules (Wang et al., 2018) and relation networks (Hu et al., 2018). However, the existing object detection knowledge distillation methods only distill the information of individual pixels but ignore the relation of different pixels. To solve this issue, we propose non-local distillation, which aims to capture the relation information of students and teachers with non-local modules and then distill it from teachers to students.\nSince the non-local modules and the attention mechanism in our methods are only required in the training period, our methods do not introduce additional computation and parameters in the inference period. Besides, our methods are feature-based distillation methods that do not depend on a specific detection algorithm, so they can be directly utilized in all kinds of detectors without any modification. On MS COCO2017, 2.9, 2.9 and 2.2 AP improvements can be observed on two-stage, one-stage, and anchor-free models on average, respectively. Experiments on Mask RCNN show that our methods can also improve the performance of instance segmentation by 2.0 AP, on average. We have conducted a detailed ablation study and sensitivity study to show the effectiveness and stability of each distillation loss. Moreover, we study the relation between teachers and students in object detection and find that knowledge distillation on object detection requires a high AP teacher, which is different from the conclusion in image classification, where a high AP teacher may harm the performance of students (Mirzadeh et al., 2019; Cho & Hariharan, 2019). We hope that these results will prompt more contemplation of knowledge distillation on tasks beyond image classification. To sum up, the contribution of this paper can be summarized as follows.\n• We propose the attention-guided distillation, which emphasizes students’ learning on the foreground objects and suppresses students’ learning on the background pixels.\n• We propose the non-local distillation, which enables the students to learn not only the information of the individual pixel but also the relation between different pixels from teachers.\n• We show that a teacher with higher AP is usually a better teacher in knowledge distillation on object detection, which is different from the conclusion in image classification." }, { "heading": "2 RELATED WORK", "text": "As an effective method for model compression and model accuracy boosting, knowledge distillation has been widely utilized in various domains and tasks, including image classification (Hinton et al., 2014; Romero et al., 2015; Zagoruyko & Komodakis, 2017), object detection (Chen et al., 2017; Li et al., 2017; Wang et al., 2019; Bajestani & Yang, 2020), semantic segmentation (Liu et al., 2019), face recognition (Ge et al., 2018), pretrained language models (Sanh et al., 2019; Xu et al., 2020), multi-exit network training (Zhang et al., 2019b;a), model robustness (Zhang et al., 2020b) and so on. Hinton et al. (2014) first propose the concept of knowledge distillation, where the students are trained to mimic the results after the softmax layers of teachers. 
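As a concrete reference point for this classic formulation, the following is a minimal PyTorch-style sketch of the temperature-softened logit-distillation objective of Hinton et al. (2014); the function name and the default temperature are our own illustrative choices, not part of any cited implementation.

```python
import torch.nn.functional as F

def logit_distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Classic KD loss: KL divergence between temperature-softened
    teacher and student class distributions (Hinton et al., 2014)."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```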
Then, abundant methods have been proposed to transfer the knowledge in teachers’ features (Romero et al., 2015) or their variants, such as attention (Zagoruyko & Komodakis, 2017; Hou et al., 2019), FSP (Yim et al., 2017), mutual information (Ahn et al., 2019), positive features (Heo et al., 2019), and the relation of samples in a batch (Park et al., 2019; Tung & Mori, 2019).\nImproving the performance of object detection has recently become a hot topic in knowledge distillation. Chen et al. (2017) design the first knowledge distillation method for object detection, which includes distillation losses on the backbone, the classification head and the regression head. Then, many researchers find that the imbalance between the foreground objects and the background is a crucial problem in detection distillation. Instead of distilling the whole features of backbone networks, Li et al. (2017) only apply an L2 distillation loss to the features sampled by the RPN. Bajestani & Yang (2020) propose temporal knowledge distillation, which introduces a hyper-parameter to balance the distillation loss between the pixels of the foreground and background. Wang et al. (2019) propose fine-grained feature imitation, which only distills the features near object anchor locations. However, although these works have tried to distill only the pixels of foreground objects, they always rely on annotations such as ground truth, anchors, and bounding boxes, and thus cannot be transferred to different kinds of detectors and tasks. In contrast, in our method, the pixels of foreground objects are found with the attention mechanism, which can be easily generated from features. As a result, it can be directly utilized in all kinds of detectors without any modification. As shown in Figure 3, the difference between the previous mask-based detection distillation method (Wang et al., 2019) and our attention-guided distillation can be summarized as follows: (i) Our method generates the mask with the attention mechanism while they generate the mask with ground truth bounding boxes and anchor priors. (ii) The mask in our method is a pixel-wise and fine-grained mask while the mask in their method is an object-wise and binary mask. (iii) The masks in our method are composed of a spatial mask and a channel mask while they only have a spatial mask. A more detailed comparison with related work can be found in Appendix E." }, { "heading": "3 METHODOLOGY", "text": "" }, { "heading": "3.1 ATTENTION-GUIDED DISTILLATION", "text": "We use $A \in \mathbb{R}^{C \times H \times W}$ to denote the feature of the backbone in an object detection model, where $C, H, W$ denote its channel number, height and width, respectively. Then, the generation of the spatial attention map and the channel attention map is equivalent to finding the mapping functions $\mathcal{G}^s: \mathbb{R}^{C \times H \times W} \rightarrow \mathbb{R}^{H \times W}$ and $\mathcal{G}^c: \mathbb{R}^{C \times H \times W} \rightarrow \mathbb{R}^{C}$, respectively. Note that the superscripts $s$ and $c$ here are utilized to discriminate ‘spatial’ and ‘channel’. Since the absolute value of each element in the feature implies its importance, we construct $\mathcal{G}^s$ by summing the absolute values across the channel dimension and construct $\mathcal{G}^c$ by summing the absolute values across the width and height dimensions, which can be formulated as $\mathcal{G}^c(A) = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W} |A_{\cdot,i,j}|$ and $\mathcal{G}^s(A) = \frac{1}{C}\sum_{k=1}^{C} |A_{k,\cdot,\cdot}|$, where $i, j, k$ denote the $i$th, $j$th, $k$th slices of $A$ in the height, width, and channel dimensions, respectively. 
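These two attention maps are straightforward to compute; the following is a minimal sketch assuming PyTorch and a single feature map of shape (C, H, W), with function names of our own choosing.

```python
import torch

def spatial_attention_map(feature):
    # G^s(A): mean absolute activation over the channel axis -> shape (H, W)
    return feature.abs().mean(dim=0)

def channel_attention_map(feature):
    # G^c(A): mean absolute activation over the spatial axes -> shape (C,)
    return feature.abs().mean(dim=(1, 2))
```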
Then, the spatial attention mask $M^s$ and the channel attention mask $M^c$ used in attention-guided distillation can be obtained by summing the attention maps from the teacher and the student detector, which can be formulated as $M^s = HW \cdot \mathrm{softmax}\big((\mathcal{G}^s(A^S) + \mathcal{G}^s(A^T))/T\big)$ and $M^c = C \cdot \mathrm{softmax}\big((\mathcal{G}^c(A^S) + \mathcal{G}^c(A^T))/T\big)$. Note that the superscripts $S$ and $T$ here are used to discriminate students and teachers. $T$ is a hyper-parameter in the softmax introduced by Hinton et al. to adjust the distribution of elements in attention masks (see Figure 4). The attention-guided distillation loss $L_{AGD}$ is composed of two components – the attention transfer loss $L_{AT}$ and the attention-masked loss $L_{AM}$. $L_{AT}$ is utilized to encourage the student model to mimic the spatial and channel attention of the teacher model, which can be formulated as\n$L_{AT} = L_2(\mathcal{G}^s(A^S), \mathcal{G}^s(A^T)) + L_2(\mathcal{G}^c(A^S), \mathcal{G}^c(A^T))$. (1)\n$L_{AM}$ is utilized to encourage the student to mimic the features of the teacher model via an L2-norm loss masked by $M^s$ and $M^c$, which can be formulated as\n$L_{AM} = \Big(\sum_{k=1}^{C}\sum_{i=1}^{H}\sum_{j=1}^{W} (A^T_{k,i,j} - A^S_{k,i,j})^2 \cdot M^s_{i,j} \cdot M^c_{k}\Big)^{\frac{1}{2}}$. (2)" }, { "heading": "3.2 NON-LOCAL DISTILLATION", "text": "The non-local module (Wang et al., 2018) is an effective method to improve the performance of neural networks by capturing global relation information. In this paper, we apply non-local modules to capture the relation between pixels in an image, which can be formulated as $r_{i,j} = \frac{1}{WH}\sum_{i'=1}^{H}\sum_{j'=1}^{W} f(A_{\cdot,i,j}, A_{\cdot,i',j'})\, g(A_{\cdot,i',j'})$, where $r$ denotes the obtained relation information, $i, j$ are the spatial indexes of an output position whose response is to be computed, and $i', j'$ are the spatial indexes that enumerate all possible positions. $f$ is a pairwise function for computing the relation of two pixels and $g$ is a unary function for computing the representation of an individual pixel. Now, we can introduce the proposed non-local distillation loss $L_{NLD}$ as the L2 loss between the relation information of the student and the teacher, which can be formulated as $L_{NLD} = L_2(r^S, r^T)$." }, { "heading": "3.3 OVERALL LOSS FUNCTION", "text": "We introduce three hyper-parameters $\alpha, \beta, \gamma$ to balance the different distillation losses in our method. The overall distillation loss can be formulated as\n$L_{Distill}(A^T, A^S) = \underbrace{\alpha \cdot L_{AT} + \beta \cdot L_{AM}}_{\text{Attention-guided distillation}} + \underbrace{\gamma \cdot L_{NLD}}_{\text{Non-local distillation}}$. (3)\nThe overall distillation loss is model-agnostic and can be added directly to the original training loss of any detection model. The sensitivity study of each hyper-parameter and the ablation study of each loss are shown in Figure 5 and Table 4, respectively." }, { "heading": "4 EXPERIMENT", "text": "" }, { "heading": "4.1 EXPERIMENT SETTINGS", "text": "The proposed knowledge distillation method is evaluated on MS COCO2017, which is a large-scale dataset that contains over 120k images spanning 80 categories (Lin et al., 2014). The benchmark detection networks are composed of both two-stage detection models, including Faster RCNN (Ren et al., 2015), Cascade RCNN (Cai & Vasconcelos, 2019), Dynamic RCNN (Zhang et al., 2020a) and Grid RCNN (Lu et al., 2019), and one-stage detection models, including RetinaNet (Lin et al., 2017) and FSAF RetinaNet (Zhu et al., 2019). Besides, we also evaluate our methods on Mask RCNN (He et al., 2017), Cascade Mask RCNN (Cai & Vasconcelos, 2019), and anchor-free models, namely RepPoints (Yang et al., 2019). We adopt ResNet50 and ResNet101 (He et al., 2016) as the backbone networks of each detection model. 
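Before turning to the training details, the following is a minimal PyTorch-style sketch that ties together the masks and losses of Eqs. (1)-(3); it assumes the student and teacher features have already been brought to the same shape (for example, by an adaptation layer as in Appendix B), and every name here is our own illustrative choice rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def attention_masks(f_s, f_t, T):
    """Spatial/channel masks M^s, M^c from summed student+teacher attention."""
    B, C, H, W = f_s.shape
    g_s = (f_s.abs().mean(1) + f_t.abs().mean(1)).view(B, -1)   # (B, H*W)
    g_c = f_s.abs().mean((2, 3)) + f_t.abs().mean((2, 3))       # (B, C)
    m_s = H * W * F.softmax(g_s / T, dim=1).view(B, H, W)
    m_c = C * F.softmax(g_c / T, dim=1)
    return m_s, m_c

def distillation_loss(f_s, f_t, r_s, r_t, T, alpha, beta, gamma):
    """L_Distill = alpha * L_AT + beta * L_AM + gamma * L_NLD (Eq. 3)."""
    # L_AT (Eq. 1): match spatial and channel attention maps with L2 penalties.
    l_at = (F.mse_loss(f_s.abs().mean(1), f_t.abs().mean(1))
            + F.mse_loss(f_s.abs().mean((2, 3)), f_t.abs().mean((2, 3))))
    # L_AM (Eq. 2): squared feature error weighted by both masks, per image.
    m_s, m_c = attention_masks(f_s, f_t, T)
    w = m_s.unsqueeze(1) * m_c.unsqueeze(-1).unsqueeze(-1)      # (B, C, H, W)
    l_am = (((f_t - f_s) ** 2) * w).sum(dim=(1, 2, 3)).sqrt().mean()
    # L_NLD: L2 distance between relation maps from the non-local modules.
    l_nld = F.mse_loss(r_s, r_t)
    return alpha * l_at + beta * l_am + gamma * l_nld
```

In training, a term of this form would simply be added to the detector's original loss, consistent with the model-agnostic formulation above.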
We pre-train the backbone model on ImageNet (Deng et al., 2009) and then finetune it on MS COCO2017. We compare our methods with three kinds of object detection knowledge distillation methods (Chen et al., 2017; Wang et al., 2019; Heo et al., 2019). All the experiments in this paper are implemented with PyTorch (Paszke et al., 2019) and the mmdetection2 framework (Chen et al., 2019). The reported fps is measured on one RTX 2080Ti GPU. We adopt the same hyper-parameter settings $\{\alpha = \gamma = 7 \times 10^{-5}, \beta = 4 \times 10^{-3}, T = 0.1\}$ for all the two-stage models and $\{\alpha = \gamma = 4 \times 10^{-4}, \beta = 2 \times 10^{-2}, T = 0.5\}$ for all the one-stage models. Cascade Mask RCNN with a ResNeXt101 backbone is utilized as the teacher for all the two-stage students and RetinaNet with a ResNeXt101 backbone is utilized as the teacher for all the one-stage students. Please refer to the code on GitHub for more details." }, { "heading": "4.2 EXPERIMENT RESULTS", "text": "In this section, we show the experiment results of the baseline detectors and our models in Table 1 and Table 2, and compare our methods with the other three knowledge distillation methods in Table 3. It is observed that: (i) A consistent and significant AP boost can be observed on all 9 kinds of detectors. On average, there are 2.9, 2.9, and 2.2 AP improvements on the two-stage, one-stage, and anchor-free detectors, respectively. (ii) With the proposed method, a student model with a ResNet50 backbone can outperform the same model with a ResNet101 backbone by 1.2 AP on average. (iii) On Mask RCNN related models, there are 2.3 improvements on bounding box AP and 2.0 improvements on mask AP on average, indicating the proposed method can be utilized not only in object detection but also in instance segmentation. (iv) Our methods achieve 2.2 higher AP than the second-best distillation method, on average. (v) There are 2.7 and 2.9 AP improvements on models with ResNet50 and ResNet101 backbones, respectively, indicating that deeper detectors benefit more from knowledge distillation." }, { "heading": "4.3 ABLATION STUDY AND SENSITIVITY STUDY", "text": "Ablation study. Table 4 shows the ablation study of the proposed attention-guided distillation ($L_{AT}$ and $L_{AM}$) and non-local distillation ($L_{NLD}$). It is observed that: (i) Attention-guided distillation and non-local distillation lead to 2.8 and 1.4 AP improvements, respectively. (ii) $L_{AT}$ and $L_{AM}$ lead to 1.2 and 2.4 AP improvements respectively, indicating that most of the benefits of attention-guided distillation are obtained from the feature loss masked by the attention maps ($L_{AM}$). (iii) There are 3.1 AP improvements with the combination of attention-guided distillation and non-local distillation. These observations indicate that each distillation loss in our method has its individual effectiveness and they can be utilized together to achieve better performance. We also give an ablation study of the spatial and channel attention in Appendix A.\nSensitivity study on hyper-parameters. Four hyper-parameters are introduced in this paper. $\alpha$, $\beta$, and $\gamma$ are utilized to balance the magnitudes of the different distillation losses and $T$ is utilized to adjust the distribution of attention masks. The hyper-parameter sensitivity study on MS COCO2017 with Faster RCNN (ResNet50 backbone) is introduced in Figure 5. 
It is observed that the worst hyper-parameters only lead to a 0.3 AP drop compared with the highest AP, which is still 2.9 higher than the baseline model, indicating that our methods are not sensitive to the choice of hyper-parameters.\nSensitivity study on the types of non-local modules. There are four kinds of non-local modules, including Gaussian, embedded Gaussian, dot product, and concatenation. Table 5 shows the performance of our methods with different types of non-local modules. It is observed that the worst non-local type (Gaussian) is only 0.2 AP lower than the best non-local types (Embedded Gaussian and Concatenation), indicating our methods are not sensitive to the choice of non-local modules.\nTable 5: Results of different types of non-local modules on Faster RCNN (ResNet50 backbone).\nNon-Local Type | AP\nEmbedded Gaussian | 41.5\nDot product | 41.4\nConcatenation | 41.5\nGaussian | 41.3" }, { "heading": "5 DISCUSSION", "text": "" }, { "heading": "5.1 ANALYSIS ON THE BENEFITS OF KNOWLEDGE DISTILLATION", "text": "Qualitative analysis. Figure 6 shows the comparison of detection results between a baseline and a distilled detector. It is observed that: (i) Our methods improve the detection ability on small objects. In the first three figures, the distilled model can correctly detect the cars, the handbag, and the person in the car, respectively. (ii) Our methods prevent models from generating multiple bounding boxes for the same object. In the last two figures, the baseline model generates multiple bounding boxes for the boat and the train while the distilled model avoids these errors.\nAnalysis on the types of detection error. We analyze the different types of detection errors in the baseline and distilled models in Figure 7. The number in the legend indicates the AUC (area under the curve). It is observed that our distillation method leads to an error reduction on all kinds of errors. In brief, our methods can improve the ability of both localization and classification." }, { "heading": "5.2 RELATION BETWEEN STUDENT DETECTORS AND TEACHER DETECTORS", "text": "There is substantial research focusing on the relation between students and teachers. Mirzadeh et al. (2019) and Cho & Hariharan (2019) show that a teacher with higher accuracy may not be a better teacher for knowledge distillation, and sometimes a teacher with too high an accuracy may harm the performance of students. Besides, Mobahi et al. (2020) and Yuan et al. (2019) show that the same model, and even a model with lower accuracy than the student model, can be utilized as the teacher model for knowledge distillation. However, all their experiments are conducted on image classification. In this section, we study whether these observations still hold in the task of object detection. As shown in Figure 8, we conduct experiments on Faster RCNN (ResNet50 backbone) and Cascade RCNN (ResNet50 backbone) students with teacher models of different AP. It is observed that: (i) In all of our experiments, the student with a higher AP teacher always achieves higher AP. (ii) When the teacher has a lower or the same AP as the student, there are very limited and even negative improvements with knowledge distillation. This observation indicates that the relation between students and teachers in object detection is opposite to that in image classification. Our experiment results suggest that there is a strong positive correlation between the AP of students and teachers. 
A high AP teacher tends to improve the performance of students significantly.\nWe think the reason why a high AP teacher model is crucial in object detection but not very necessary in image classification is that object detection is a more challenging task. As a result, a weaker teacher model may introduce more negative influence on students, which prevents students from achieving higher AP. In contrast, in image classification, most teacher models can achieve a very high training accuracy, so they do not introduce as much error." }, { "heading": "6 CONCLUSION", "text": "In this paper, we have proposed two knowledge distillation methods, attention-guided distillation and non-local distillation, to improve the performance of object detection models. Attention-guided distillation manages to find the crucial pixels and channels from the whole feature map with the attention mechanism and then enables the student to focus more on these crucial pixels and channels instead of the whole feature map. Non-local distillation enables students to learn not only the information of an individual pixel but also the relation between different pixels captured by the non-local modules. Experiments on 9 kinds of models including two-stage, one-stage, anchor-free and anchor-based models have been provided to evaluate our methods.\nBesides, we have also given a study on the relation between students and teachers in object detection. Our experiments show that there is a strong positive correlation between the AP of teachers and students. A high AP teacher detector plays an essential role in knowledge distillation. This observation is much different from the previous conclusion in image classification, where a teacher model with very high accuracy may harm the performance of knowledge distillation. We hope that our results will encourage more rethinking of knowledge distillation in tasks other than image classification." }, { "heading": "7 ACKNOWLEDGEMENT", "text": "This work was partially supported by Institute for interdisciplinary Information Core Technology." }, { "heading": "A ABLATION STUDY ON THE SPATIAL AND CHANNEL ATTENTION", "text": "Different from previous attention-based knowledge distillation methods, the attention-guided distillation in our method uses not only spatial attention but also channel attention. In this appendix, we conduct an ablation study on the two kinds of attention with Faster RCNN (ResNet50) on MS COCO2017 to show their individual effectiveness.\nIt is observed that spatial attention and channel attention lead to 2.6 and 2.3 AP improvements, respectively. In contrast, the combination of the two kinds of attention leads to 2.8 AP improvements. These results indicate that both spatial and channel attention have their individual effectiveness and they can be utilized together to achieve better performance." }, { "heading": "B ADAPTATION LAYERS IN KNOWLEDGE DISTILLATION", "text": "Adaptation layers in knowledge distillation were first proposed by Romero et al. (2015) to adjust the feature sizes of students and teachers. Then, recent research found that the adaptation layers play an important role in improving the performance of students (Chen et al., 2017). In this paper, we adopt different kinds of adaptation layers for the different distillation losses. Concretely, we adopt 1x1 convolutional layers for $L_{AM}$ and $L_{NLD}$, 3x3 convolutional layers for $L^{spatial}_{AT}$, and fully connected layers for $L^{channel}_{AT}$, as sketched below. 
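The following is a minimal PyTorch sketch of these adaptation layers; it assumes only that student and teacher channel counts may differ, and the class and attribute names are our own.

```python
import torch.nn as nn

class Adapters(nn.Module):
    """Adaptation layers used only at training time (no inference-time cost)."""
    def __init__(self, c_student, c_teacher):
        super().__init__()
        # 1x1 conv on the backbone features, applied before L_AM and L_NLD
        self.feature = nn.Conv2d(c_student, c_teacher, kernel_size=1)
        # 3x3 conv on the single-channel spatial attention map (spatial L_AT)
        self.spatial = nn.Conv2d(1, 1, kernel_size=3, padding=1)
        # fully connected layer on the channel attention vector (channel L_AT)
        self.channel = nn.Linear(c_student, c_teacher)
```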
Note that the adaptation layers are only utilized in the training period, so they do not introduce additional computation and parameters at inference time." }, { "heading": "C EXPERIMENTS ON SMALLER BACKBONES", "text": "According to the insightful comments of the reviewers, we conduct a series of experiments on models with small backbones, including ResNet18 and RegNet-800M, and compact detectors, including Yolo v3 and SSD. As shown in Table 7, our methods also achieve significant AP improvements on these compact models. Note that more experiments with small backbones will be added in the camera-ready version." }, { "heading": "D EXPERIMENTS ON CITYSCAPES", "text": "According to the insightful comments of the reviewers, as shown in Table 8, we conduct a series of experiments to show the effectiveness of our method on Cityscapes. Note that more experiments on Cityscapes will be added in the camera-ready version." }, { "heading": "E COMPARISON WITH RELATED WORK", "text": "Comparison on Methodology and Application. Feature distillation is utilized in all five methods. However, Chen et al. distill not only the features but also the classification logits and bounding box regression results, which limits the applicability of their method to one-stage and anchor-free models. Li et al. and Wang et al. distill the features in the regions of proposals and near object anchor locations, respectively. As a result, their methods rely on the supervision of anchors and ground truths and cannot be utilized in one-stage and anchor-free models. Bajestani & Yang’s method is designed for active perception on video and cannot be utilized in image-based detection. In contrast, in our method, the attention mask and relation information can be easily generated from the backbone features, with no requirements on ground truths, anchors or proposals. As a result, it can be easily used in different kinds of models and tasks without any modification.\nComparison on Motivation. Chen et al.’s method is a direct application of knowledge distillation to object detection. The other three methods and our method are motivated by the imbalance between foreground and background pixels, and these methods try to address this issue by reweighting the distillation loss. Besides, our method is also motivated by the effect of the relation among pixels in an image, which is ignored by the other methods." } ]
2021
null
SP:d5f2c31689e6b6f52bb6f21916e8acacba444f76
[ "The paper presents an approach for predicting edits in programs, by modeling the programs as trees. The approach is mainly an extension of Yin et al. (2019), with the main difference that the model is required to predict only the output **actions**, instead of generating the entire output tree as in Yin et al. (2019). This difference of predicting only output actions is shared with other previous work though." ]
While most neural generative models generate outputs in a single pass, the human creative process is usually one of iterative building and refinement. Recent work has proposed models of editing processes, but these mostly focus on editing sequential data and/or only model a single editing pass. In this paper, we present a generic model for incremental editing of structured data (i.e. “structural edits”). Particularly, we focus on tree-structured data, taking abstract syntax trees of computer programs as our canonical example. Our editor learns to iteratively generate tree edits (e.g. deleting or adding a subtree) and applies them to the partially edited data, so that the entire editing process can be formulated as consecutive, incremental tree transformations. To show the unique benefits of modeling tree edits directly, we further propose a novel edit encoder for learning to represent edits, as well as an imitation learning method that allows the editor to be more robust. We evaluate our proposed editor on two source code edit datasets, where results show that, with the proposed edit encoder, our editor significantly improves accuracy over previous approaches that generate the edited program directly in one pass. Finally, we demonstrate that training our editor to imitate experts and correct its mistakes dynamically can further improve its performance.
[ { "affiliations": [], "name": "Ziyu Yao" }, { "affiliations": [], "name": "Frank F. Xu" }, { "affiliations": [], "name": "Pengcheng Yin" }, { "affiliations": [], "name": "Huan Sun" }, { "affiliations": [], "name": "Graham Neubig" } ]
[ { "authors": [ "Miltiadis Allamanis", "Marc Brockschmidt", "Mahmoud Khademi" ], "title": "Learning to represent programs with graphs", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Marc Brockschmidt", "Miltiadis Allamanis", "Alexander L Gaunt", "Oleksandr Polozov" ], "title": "Generative code modeling with graphs", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Shaked Brody", "Uri Alon", "Eran Yahav" ], "title": "A structural model for contextual code changes", "venue": "Proceedings of the ACM on Programming Languages,", "year": 2020 }, { "authors": [ "Saikat Chakraborty", "Yangruibo Ding", "Miltiadis Allamanis", "Baishakhi Ray" ], "title": "CODIT: Code editing with tree-based neural models", "venue": "IEEE Transactions on Software Engineering,", "year": 2020 }, { "authors": [ "Zimin Chen", "Steve James Kommrusch", "Michele Tufano", "Louis-Noël Pouchet", "Denys Poshyvanyk", "Martin Monperrus" ], "title": "Sequencer: Sequence-to-sequence learning for end-to-end program repair", "venue": "IEEE Transactions on Software Engineering,", "year": 2019 }, { "authors": [ "Noam Chomsky" ], "title": "Three models for the description of language", "venue": "IRE Transactions on information theory,", "year": 1956 }, { "authors": [ "Trevor Cohn", "Phil Blunsom", "Sharon Goldwater" ], "title": "Inducing tree-substitution grammars", "venue": "The Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Elizabeth Dinella", "Hanjun Dai", "Ziyang Li", "Mayur Naik", "Le Song", "Ke Wang" ], "title": "Hoppity: Learning graph transformations to detect and fix bugs in programs", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yue Dong", "Zichao Li", "Mehdi Rezagholizadeh", "Jackie Chi Kit Cheung" ], "title": "EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Ahmed Elgohary", "Saghar Hosseini", "Ahmed Hassan Awadallah" ], "title": "Speak to your parser: Interactive text-to-SQL with natural language feedback", "venue": "In Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Dan Feblowitz", "David Kauchak" ], "title": "Sentence simplification as tree transduction", "venue": null, "year": 2013 }, { "authors": [ "António Góis", "Kyunghyun Cho", "André Martins" ], "title": "Learning non-monotonic automatic post-editing of translations from human orderings", "venue": "In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation,", "year": 2020 }, { "authors": [ "Yoav Goldberg", "Joakim Nivre" ], "title": "A dynamic oracle for arc-eager dependency parsing", "venue": "In Proceedings of COLING", "year": 2012 }, { "authors": [ "Jiatao Gu", "Yong Wang", "Kyunghyun Cho", "Victor O.K. Li" ], "title": "Search engine guided neural machine translation", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Jiatao Gu", "Qi Liu", "Kyunghyun Cho" ], "title": "Insertion-based decoding with automatically inferred generation", "venue": "order. 
Transactions of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Kelvin Guu", "Tatsunori B Hashimoto", "Yonatan Oren", "Percy Liang" ], "title": "Generating sentences by editing prototypes", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Vincent J. Hellendoorn", "Charles Sutton", "Rishabh Singh", "Petros Maniatis", "David Bieber" ], "title": "Global relational models of source code", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Thong Hoang", "Hong Jin Kang", "David Lo", "Julia Lawall" ], "title": "CC2Vec: Distributed representations of code changes", "venue": "In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering,", "year": 2020 }, { "authors": [ "Hayate Iso", "Chao Qiao", "Hang Li" ], "title": "Fact-based text editing", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Jason Lee", "Elman Mansimov", "Kyunghyun Cho" ], "title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard Zemel" ], "title": "Gated graph sequence neural networks", "venue": "arXiv preprint arXiv:1511.05493,", "year": 2015 }, { "authors": [ "Eric Malmi", "Sebastian Krause", "Sascha Rothe", "Daniil Mirylenka", "Aliaksei Severyn" ], "title": "Encode, tag, realize: High-precision text editing", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Sheena Panthaplackel", "Miltiadis Allamanis", "Marc Brockschmidt" ], "title": "Copy that! 
editing sequences by copying spans", "venue": "arXiv preprint arXiv:2006.04771,", "year": 2020 }, { "authors": [ "Sheena Panthaplackel", "Pengyu Nie", "Milos Gligoric", "Junyi Jessy Li", "Raymond Mooney" ], "title": "Learning to update natural language comments based on code changes", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Maxim Rabinovich", "Mitchell Stern", "Dan Klein" ], "title": "Abstract syntax networks for code generation and semantic parsing", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Stéphane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Richard Shin", "Illia Polosukhin", "Dawn Song" ], "title": "Towards specification-directed program repair", "venue": "Workshop Track at International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Michel Simard", "Cyril Goutte", "Pierre Isabelle" ], "title": "Statistical phrase-based post-editing", "venue": "Proceedings of the Main Conference,", "year": 2007 }, { "authors": [ "Felix Stahlberg", "Shankar Kumar" ], "title": "Seq2Edits: Sequence transduction using span-level edit operations", "venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2020 }, { "authors": [ "Mitchell Stern", "William Chan", "Jamie Kiros", "Jakob Uszkoreit" ], "title": "Insertion transformer: Flexible sequence generation via insertion operations", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Alane Suhr", "Srinivasan Iyer", "Yoav Artzi" ], "title": "Learning to map context-dependent sentences to executable formal queries", "venue": "
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers),", "year": 2018 }, { "authors": [ "Daniel Tarlow", "Subhodeep Moitra", "Andrew Rice", "Zimin Chen", "Pierre-Antoine Manzagol", "Charles Sutton", "Edward Aftandilian" ], "title": "Learning to fix build errors with graph2diff neural networks", "venue": "arXiv preprint arXiv:1911.01205,", "year": 2019 }, { "authors": [ "Thuy Vu", "Gholamreza Haffari" ], "title": "Automatic post-editing of machine translation: A neural programmer-interpreter approach", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Daniel C Wang", "Andrew W Appel", "Jeffrey L Korn", "Christopher S Serra" ], "title": "The zephyr abstract syntax description language", "venue": "In DSL,", "year": 1997 }, { "authors": [ "Sean Welleck", "Kianté Brantley", "Hal Daumé III", "Kyunghyun Cho" ], "title": "Non-monotonic sequential text generation", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yingce Xia", "Fei Tian", "Lijun Wu", "Jianxin Lin", "Tao Qin", "Nenghai Yu", "Tie-Yan Liu" ], "title": "Deliberation networks: Sequence generation beyond one-pass decoding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ziyu Yao", "Yu Su", "Huan Sun", "Wen-tau Yih" ], "title": "Model-based interactive semantic parsing: A unified framework and a text-to-SQL case study", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Ziyu Yao", "Yiqi Tang", "Wen-tau Yih", "Huan Sun", "Yu Su" ], "title": "An imitation game for learning semantic parsers from user interaction", "venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2020 }, { "authors": [ "Michihiro Yasunaga", "Percy Liang" ], "title": "Graph-based, self-supervised program repair from diagnostic feedback", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Pengcheng Yin", "Graham Neubig" ], "title": "A syntactic neural model for general-purpose code generation", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Pengcheng Yin", "Graham Neubig" ], "title": "TRANX: A transition-based neural abstract syntax parser for semantic parsing and code generation", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations,", "year": 2018 }, { "authors": [ "Pengcheng Yin", "Graham Neubig", "Miltiadis Allamanis", "Marc Brockschmidt", "Alexander L. 
Gaunt" ], "title": "Learning to represent edits", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Longkai Zhang", "Houfeng Wang" ], "title": "Go climb a dependency tree and correct the grammatical errors", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Rui Zhang", "Tao Yu", "Heyang Er", "Sungrok Shim", "Eric Xue", "Xi Victoria Lin", "Tianze Shi", "Caiming Xiong", "Richard Socher", "Dragomir Radev" ], "title": "Editing-based SQL query generation for crossdomain context-dependent questions", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Rui Zhao", "David Bieber", "Kevin Swersky", "Daniel Tarlow" ], "title": "Neural networks for modeling source code edits", "venue": "arXiv preprint arXiv:1904.02818,", "year": 2019 }, { "authors": [ "Panthaplackel" ], "title": "Graph2Tree + Seq Edit” is better than both “Seq2Seq + Seq Edit” and “Graph2Tree + Graph Edit”. As we have slightly adjusted the evaluation settings (i.e. adding Fixersgold and changing the evaluation procedure of Fixers-one shot compared with Yin et al", "venue": "Seq Edit” on both GHE-gold and Fixers-one", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Iteratively revising existing data for a certain purpose is ubiquitous. For example, researchers repetitively polish their manuscript until the writing becomes satisfactory; computer programmers keep editing existing code snippets and fixing bugs until desired programs are produced. Can we properly model such iterative editing processes with neural generative models?\nTo answer this question, previous works have examined models for editing sequential data such as natural language sentences. Some example use cases include refining results from a first-pass text generation system (Simard et al., 2007; Xia et al., 2017), editing retrieved text into desired outputs (Gu et al., 2018; Guu et al., 2018), or revising a sequence of source code tokens (Yin et al., 2019; Chen et al., 2019; Yasunaga & Liang, 2020). These examples make a single editing pass by directly generating the edited sequence. In contrast, there are also works on modeling the incremental edits of sequential data, which predict sequential edit operations (e.g. keeping, deleting or adding a token) either in a single pass (Shin et al., 2018; Vu & Haffari, 2018; Malmi et al., 2019; Dong et al., 2019; Stahlberg & Kumar, 2020; Iso et al., 2020) or iteratively (Zhao et al., 2019; Stern et al., 2019; Gu et al., 2019a;b), or modify a sequence in a non-autoregressive way (Lee et al., 2018).\nHowever, much interesting data in the world has strong underlying structure such as trees. For example, a syntactic parse can be naturally represented as a tree to indicate the compositional relations among constituents (e.g. phrases, clauses) in a sentence. A computer program inherently is also a tree defined by the programming language’s syntax. In the case that this underlying structure exists, many edits can be expressed much more naturally and concisely as transformations over the underlying trees than conversions of the tokens themselves. For example, removing a statement from a ∗Work done while interning at CMU.\ncomputer program can be easily accomplished by deleting the corresponding tree branch as opposed to deleting tokens one by one. Despite this fact, work on editing tree-structured data has been much more sparse. In addition, it has focused almost entirely on single-pass modification of structured outputs as exemplified by Yin et al. (2019); Chakraborty et al. (2020) for computer program editing.\nIn this work, we are interested in a generic model for incremental editing of structured data (“structural edits”). Particularly, we focus on tree-structured data, taking abstract syntax trees of computer programs as our canonical example. We propose a neural editor that runs iteratively. At each step, the editor generates and applies a tree edit (e.g. deleting or adding a subtree) to the partially edited tree, which deterministically transforms the tree into its modified counterpart. Therefore, the entire tree editing process can be formulated as consecutive, incremental tree transformations (Fig. 1).\nWhile recent works (Tarlow et al., 2019; Dinella et al., 2020; Brody et al., 2020) have also examined models that make changes to trees, our work is distinct from them in that: First, compared with Dinella et al. (2020), we studied a different problem of editing tree-structured data particularly triggered by an edit specification (which implies a certain edit intent such as a code refactoring rule). Second, we model structural edits via incremental tree transformations, while Tarlow et al. 
(2019) and Brody et al. (2020) predict a complete edit sequence based on the fixed input tree, without applying the edits or performing any tree transformations incrementally. Although Dinella et al. (2020) have explored a similar idea, our proposed tree editor is more general owing to the adoption of the Abstract Syntax Description Language (ASDL; Wang et al. (1997)). This offers our editor two properties: being language-agnostic and ensuring grammar validity. In contrast, Dinella et al. (2020) include a JavaScript-specific design and employ only ad-hoc grammar checking. Finally, our tree editor supports a comprehensive set of operations such as adding or deleting a tree node and copying a subtree, which can fulfill a broad range of tree editing requirements. These operations are not fully allowed by previous work, e.g., Brody et al. (2020) cannot add (or generate) a new tree node from scratch; Tarlow et al. (2019) and Dinella et al. (2020) do not support subtree copying.\nWe further propose two modeling and training improvements, specifically enabled by and tailored to our incremental editing formalism. First, we propose a new edit encoder for learning to represent the edits to be performed. Unlike existing edit encoders, which compress tree differences at the token level (Yin et al., 2019; Hoang et al., 2020; Panthaplackel et al., 2020b) or jointly encode the initial and the target tree pairs in their surface forms (Yin et al., 2019), our proposed edit encoder learns the representation by encoding the sequence of gold tree edit actions. Second, we propose a novel imitation learning (Ross et al., 2011) method to train our editor to correct its mistakes dynamically, given that it can modify any part of a tree at any time.\nWe evaluate our proposed tree editor on two source code edit datasets (Yin et al., 2019). Our experimental results show that, compared with previous approaches that generate the edited program in one pass, our editor can better capture the underlying semantics of the intended edits, which allows it to outperform existing approaches by more than 7% accuracy in a one-shot evaluation setting. With the proposed edit encoder, our editor significantly improves accuracy over existing state-of-the-art methods on both datasets. We also demonstrate that our editor can become more robust by learning to imitate expert demonstrations dynamically. Our source code is available at https://github.com/neulab/incremental_tree_edit." }, { "heading": "2 PROBLEM FORMULATION", "text": "As stated above, our goal is to create a general-purpose editor for tree-structured data. Specifically, we are interested in editing tree structures defined following an underlying grammar that, for every parent node type, delineates the allowable choices of child nodes. Such syntactic tree structures, like syntax trees of sentences or computer programs, are ubiquitous in fields like natural language processing and software engineering. In this paper, we formulate editing such tree structures as revising an input tree $C_-$ into an output tree $C_+$ according to an edit specification $\Delta$. As a concrete example, we use editing abstract syntax trees (ASTs) of C# programs, as illustrated in Fig. 1. This figure shows transforming the AST of “x=list.ElementAt(i+1)” ($C_-$) to the AST of “x=list[i+1]” ($C_+$). In this case, the edit specification $\Delta$ could be interpreted as a refactoring rule that uses the bracket operator [ · ] for accessing elements in a list (the corresponding Roslyn analyzer in C# can be found at https://github.com/JosefPihrt/Roslynator/blob/master/docs/analyzers/RCS1246.md). In practice, the edit specification is learned 
In this case, the edit specification ∆ could be interpreted as a refactoring rule that uses the bracket operator [ · ] for accessing elements in a list.1 In practice, the edit specification is learned\n1The corresponding Roslyn analyzer in C# can be found at https://github.com/JosefPihrt/ Roslynator/blob/master/docs/analyzers/RCS1246.md.\nby an edit encoder f∆ from a pair of input-output examples 〈C ′−, C ′+〉, and encoded as a real-valued edit representation, i.e. f∆(C ′−, C ′ +) ∈ Rn. The learned edit representation could then be used to modify C− in a similar way as editing C ′−. Onwards we use f∆ as a simplified notation for edit representations.\nRevising a tree into another typically involves a sequence of incremental edits. For instance, to modify the input tree in the aforementioned example, one may first delete the subtree rooted at the node MethodCall, which corresponds to the code fragment “list.ElementAt(i+1)”, and then replace it with an ElementAccess node denoting the bracket operator, etc. We formulate this editing process as a sequential decision making process (〈g1, a1〉, . . . , 〈gT , aT 〉), where for each tree gt at time step t, the editor executes a tree edit action at, deterministically transforming it into gt+1. In particular, g1 is the initial input tree C−.2 The process stops at gT when the editor predicts a special Stop action as aT . Denoting g1:t = (g1, ..., gt) as the tree history and a1:t = (a1, ..., at) the edit history until step t, then the editing can be framed as the following autoregressive process:\np(a1:T |f∆, g1) = p(a1|f∆, g1)p(a2|f∆, g1:2) · · · p(aT |f∆, g1:T ) = T∏ t=1 p(at|f∆, g1:t). (1)" }, { "heading": "3 MODEL", "text": "We will introduce our neural editor for modeling p(at|·) in § 3.1, followed by the edit representation model f∆ in § 3.2." }, { "heading": "3.1 NEURAL TREE EDITOR", "text": "Fig. 1(c) illustrates our editor architecture. At each time step, the editor first encodes the current tree gt and the tree history g1:t. It then employs a modular decoder to predict a tree edit action at. Next, we will first introduce our tree edit actions and then elaborate the model details." }, { "heading": "3.1.1 TREE EDIT ACTIONS", "text": "Our editor uses a sequence of editing actions to incrementally modify a tree-structured input. At each time step, the decoder takes an action at to update a partially-edited tree gt. Specifically, an action at consists of an operator (e.g. an operator that removes a subtree from gt) with its optional arguments (e.g. the target subtree to delete). Importantly, the space of actions is limited to maintain consistency with the underlying syntax of the language. While a number of syntactic formalisms such as context free grammar (Chomsky, 1956) or tree substitution grammar (Cohn et al., 2010) exist, in this work we choose the ASDL formalism due to its ability to flexibly handle optional and sequential fields (interested readers may reference Wang et al. (1997) and Yin & Neubig (2018) for details). Under this framework, we define four types of operators.\n2Notably, the special case of empty initial trees corresponds to code generation from scratch. Thus our formulation applies to both tasks of editing existing trees and generating new ones.\nDelete operators take a tree node nt as argument and remove nt and its descendants from gt (e.g. t = 1 in Fig. 1(b)). 
Note that removing arbitrary (child) nodes from gt might produce syntactically invalid trees, since under the grammar, a parent node type would always have a fixed set of edge types. For instance, if the node MethodCall and its incoming edge right were to be removed at t = 1, the resulting AST would be syntactically invalid under C#’s grammar, as the node AssignStmt denoting a variable assignment statement is missing a child node representing its right operand. To maintain syntactic correctness (no missing child nodes for any parent nodes), we therefore replace the to-be-deleted node with a pseudo Dummy node as a placeholder.\nNext, we define an Add operator to add new nodes to gt. The operator first locates a target position by selecting an existing tree node. We consider two cases based on the edge type (or “field”) of the target position: for single or optional fields that allow at most one child (e.g. field right in Fig. 1(b) at t = 1), the selected tree node has to be their dummy child node (e.g. node Dummy at t = 1) and the Add operator will then replace the dummy node with the new tree node; for sequential fields that accept more than one child (e.g. the field of a “statement block” that allows an arbitrary number of statements), the selected tree node can be any child node of the field (including a rightmost dummy node we append to every sequential field) and the Add operator will then insert the new node before the selected node. We elaborate this mechanism in § A.1. For our editor, adding a non-terminal node (e.g. node ElementAccess in Fig. 1(b) at t = 2) is equivalent to selecting a production rule to derive its field (e.g. AssignStmt right−−→ ElementAccess). As with Delete actions, to ensure there is no missing child node, we instantiate the set of child nodes with dummy nodes for the newly added node based on the underlying grammar, which leads to nodes Dummy1 and Dummy2 at t = 2. Add can also be used to populate empty terminal nodes with actual values (e.g. string token “list” at t = 3). This is the same as picking a token from the token vocabulary.\nAdditionally, observing that in many cases, revising a tree can be easily done by copying a subtree from the initial input g1 (e.g. subtree Expr 7→ i + 1 in Fig. 1(a)) to a new position in the updated tree gt (e.g. the right child position of node ElementAccess in Fig. 1(b) at t = 4), we introduce a high-level operator CopySubTree. This operator locates a target position similarly as the Add operator and then copies a complete subtree from g1 to the target position in a single step.\nFinally, a Stop action is used to terminate the iterative tree editing procedure, after which the remaining dummy nodes will be cleared. We note that our framework has decoupled the language grammar specifications (handled by ASDL) from the model architecture (corresponding to our languageagnostic model implementation), and thus can be applied to various languages flexibly." }, { "heading": "3.1.2 TREE AND TREE HISTORY ENCODER", "text": "Similarly to existing works in learning tree representations (Allamanis et al., 2018; Brockschmidt et al., 2018; Yin et al., 2019; Hellendoorn et al., 2020), we adopt a graph-based encoder to learn representations of each tree gt. Specifically, we follow Allamanis et al. (2018), and extend gt into a graph by adding bidirectional edges between parent and child nodes, as well as adjacent sibling nodes. We use a gated graph neural network (GGNN, Li et al. 
(2015)) to compute a vector representation nt for each node nt on tree gt, and mean-pool {nt} to represent gt, denoted gt. An LSTM encoder is used to track the tree history g1:t, i.e. st = LSTM([gt; f∆], st−1), where [·; ·] denotes vector concatenation. We will introduce how to learn the edit representation f∆ in § 3.2. The updated state st is then used to predict edit actions, as elaborated next." }, { "heading": "3.1.3 TREE EDIT DECODER", "text": "Our edit decoder predicts an action at using three components: an operator predictor, a node selector, and a value predictor. At each time step t, the decoder’s operator predictor first decides which operator opt ∈ {Delete, Add, CopySubTree, Stop} to apply. Next, for operators other than Stop, the node selector predicts a node nt from the tree to locate the target position for applying opt. Finally, if opt ∈ {Add, CopySubTree}, the value predictor further determines additional arguments of those operators (denoted as val t, e.g. the to-be-added node for Add). This is summarized as:\np(at|st) = p(opt|st)p(nt|st, opt)p(val t|st, opt, nt). (2)\nOperator Prediction: The operator prediction is a 4-class classification problem. We calculate the probability of taking operator opt as p(opt|st) = softmax(Wopst + bop).\nNode Selection: Given a tree gt, there could exist an arbitrary number of tree nodes. Therefore, we design the node selection module similar to a pointer network (Vinyals et al., 2015). To this end, we learn a hidden state hnode,t = tanh(Wnode[st; emb(opt)] + bnode) as the “pointer”, where emb(opt) embeds the previously selected operator opt. In our model, “emb(·)” denotes learnable embeddings. We then calculate the inner product of hnode,t and each node representation nt for node selection.\nValue Prediction: The value predictor predicts an argument val t for Add and CopySubTree actions. For Add actions, val t denotes the new tree node (corresponding to a production rule or a terminal token) to be added to gt. For CopySubTree actions, val t is the subtree from g1 to be copied to gt. In both cases, we only consider the candidate set {val t} allowable under the grammar constraints. Similarly to the node predictor, the distribution p(val t|·) is also given by a pointer network, with its hidden state defined as hval,t = tanh(Wval[st;nt; emb(pnt 7→ nt)]+bval), where emb(pnt 7→ nt) is the embedding of the edge type between the parent node pnt and the child nt (e.g. field right for AssignStmt right−−→ ElementAccess). Depending on the type of val t, its representation could be either a learned embedding of the production rule, a word embedding, or a subtree encoding given by the representation of its root node. We refer readers to § A.2 for details." }, { "heading": "3.2 TREE EDIT ENCODING", "text": "Given an edit pair 〈C−, C+〉, we aim to learn a real-valued vector f∆(C−, C+) to represent the intent behind the edits. This is a crucial task and has been investigated in several previous works. For example, Yin et al. (2019), Panthaplackel et al. (2020b), and Hoang et al. (2020) considered edits at the token level and used either a bag-of-edits encoder or a sequence encoder to encode the differences between C− and C+. As a result, these edit encoders have abandoned the syntactic structure of tree edits. Yin et al. (2019) further proposed a graph edit encoder, which connects the input and output trees with labeled edges such as “Removed” and “Added”, and then encodes the connected trees via a graph neural network. 
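The decoder modules above can be summarized in a short PyTorch sketch of the factorization in Eq. (2). Dimensions are illustrative, the size of the field-embedding table is an assumption, and the value scorer is simplified to a plain inner product (the paper inserts an additional bilinear matrix W).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TreeEditDecoder(nn.Module):
    """Sketch of Eq. (2): p(a_t|s_t) = p(op_t|s_t) p(n_t|s_t,op_t) p(val_t|s_t,op_t,n_t).
    Node encodings from the GGNN are assumed given; sizes are illustrative."""
    def __init__(self, d_state=256, d_node=128, n_ops=4, d_op=32, d_field=32):
        super().__init__()
        self.op_head = nn.Linear(d_state, n_ops)             # operator prediction
        self.op_emb = nn.Embedding(n_ops, d_op)
        self.node_ptr = nn.Linear(d_state + d_op, d_node)    # pointer over tree nodes
        self.field_emb = nn.Embedding(100, d_field)          # edge-type table (size assumed)
        self.val_ptr = nn.Linear(d_state + d_node + d_field, d_node)

    def forward(self, s_t, node_reps, op_id, sel_node, field_id, cand_reps):
        p_op = F.log_softmax(self.op_head(s_t), dim=-1)      # p(op_t | s_t)
        h_node = torch.tanh(self.node_ptr(torch.cat([s_t, self.op_emb(op_id)], -1)))
        p_node = F.log_softmax(node_reps @ h_node, dim=-1)   # p(n_t | s_t, op_t)
        h_val = torch.tanh(self.val_ptr(
            torch.cat([s_t, sel_node, self.field_emb(field_id)], -1)))
        p_val = F.log_softmax(cand_reps @ h_val, dim=-1)     # p(val_t | s_t, op_t, n_t)
        return p_op, p_node, p_val
```

With the decoder summarized, we return to the graph edit encoder of Yin et al. (2019) discussed above.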
Although structural tree differences have been expressed in this edit encoder, the differences are modeled rather implicitly as the edit encoder has simply treated them as additional graph features.\nIn this section, we present a novel edit encoder which instead shifts the modeling focus completely to the targeted edit actions themselves. Specifically, it learns an edit representation by directly encoding the sequence of structural edit actions (a1, a2, ..., aT ) that transforms C− to C+. The encoder first computes a representation at for each action at, depending on the type of its operator:\naStop = WStopemb(Stop) + bStop, aDelete = WDelete[emb(Delete);nt; emb(pnt 7→nt)] + bDelete,\naAdd = WAdd[emb(Add);nt; emb(pnt 7→nt); emb(val t)] + bAdd, aCopySubTree = WCopySubTree[emb(CopySubTree);nt; emb(pnt 7→nt); emb(subtreet)] + bCopySubTree.\nThe proposed edit encoder then feeds the sequence of action representations {at}Tt=1 into a bidirectional LSTM, whose last hidden state is used as the edit representation f∆(C−, C+)." }, { "heading": "3.3 TRAINING AND INFERENCE", "text": "We jointly train the proposed editor and the edit encoder in an autoencoding style, following Yin et al. (2019). Specifically, given an edit pair 〈C−, C+〉 in the training set, we assume a gold-standard edit action sequence a∗1:T which edits C− to C+ (e.g. the edit action sequence in Fig. 1). We seek to maximize the probability of p(a∗1:T |f∆(C−, C+), C−) in training.3 By decomposing the probability according to Eq. (2), this is equivalent to jointly maximizing the probability of each edit decoder module making the gold decision at each time step. In practice, we use dynamic programming (pseudo code can be found in §C.1) to calculate the shortest tree edit sequence as a∗1:T ,4 and compute a cross entropy loss for each edit decoder module.\n3f∆(C−, C+) is one real-valued vector and thus does not directly expose C+. We set it to a low dimension following Yin et al. (2019), which bottlenecks the vector’s ability to memorize the entire output.\n4We assume a left-to-right, top-down order when comparing the input/output tree. Future work can also consider other orders to improve the editing quality (Gu et al., 2019a; Welleck et al., 2019; Góis et al., 2020).\nAt inference time, given an input tree C− and an edit representation f∆ (calculated either from 〈C−, C+〉 or another edit pair 〈C ′−, C ′+〉), we generate one tree edit at each time step t by greedily deciding the operator, the node and the value. The generated edit is then applied to the tree so it transits to gt+1 deterministically. We then update the tree history representation st+1 for generating the next tree edit. The inference process ends when a Stop operator is chosen." }, { "heading": "4 ROBUST STRUCTURAL EDITING VIA IMITATION LEARNING", "text": "A unique advantage that distinguishes our editor from existing ones is its potential to fix wrong edits and iteratively refine its own output. This is achievable because our editor can revise any part of a tree at any time. We investigate this hypothesis by training the proposed editor via imitation learning, where the editor learns to imitate gold edit actions (“expert demonstrations”) under states it visits at the inference time. Here, we define a “state” st to include the current tree history g1:t and the edit representation f∆. 
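A compact sketch of the TreeDiff edit encoder of § 3.2, together with the state tuple just defined, is given below. The per-action representations are assumed to be precomputed from the operator/node/field/value embeddings described in § 3.2; dimensions follow the configurations reported later.

```python
import torch
import torch.nn as nn
from collections import namedtuple

class TreeDiffEditEncoder(nn.Module):
    """Encodes a gold action sequence (a_1, ..., a_T) into f_delta.
    A bidirectional LSTM summarizes the sequence of action representations."""
    def __init__(self, d_act=256, d_edit=512):
        super().__init__()
        self.lstm = nn.LSTM(d_act, d_edit // 2, bidirectional=True, batch_first=True)

    def forward(self, action_reps):             # (B, T, d_act), assumed precomputed
        _, (h, _) = self.lstm(action_reps)      # h: (2, B, d_edit // 2)
        return torch.cat([h[0], h[1]], dim=-1)  # last fwd/bwd states -> (B, d_edit)

# The imitation-learning "state": the tree history so far plus the edit representation.
State = namedtuple("State", ["tree_history", "f_delta"])
```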
Our learning algorithm follows DAGGER (Ross et al., 2011), where in each training iteration, for a given 〈f∆, C−, C+〉 tuple, we first run the editor to infer and apply a sequence of edits resulting in a “trajectory” of (〈s1, a1〉, ..., 〈sT , aT 〉). We then request a gold edit action π∗(st) for each state st visited by the editor. The collected state-gold edit action pairs are aggregated to retrain the editor for the next iteration. This sampling and demonstration collecting strategy (denoted as DAGGERSAMPLING) is shown in Algo. 1 (Appendix B). Note that, in practice, instead of sampling a trajectory solely from the learning editor πθ, the DAGGER algorithm samples from a mixture policy π′, with which the actual edit action at at each step t comes from either πθ with a probability of 1− β or the “expert” π∗ with a probability of β. To simulate the “expert”, we calculate “dynamic oracles” (Goldberg & Nivre, 2012) by comparing the current tree with the target output tree. For example, in Fig. 1, if our editor incorrectly takes “Add[AssignStmt 7→ Expr]” at t = 2, the dynamic oracle will produce “Delete[AssignStmt → Expr]” as the gold edit action at t = 3 to revoke the wrong edit. This thus provides a means for the editor to learn to correct mistakes that it will likely produce at inference time.\nPreliminary results showed that the editor trained following DAGGERSAMPLING may fall into a loop of repetitively deleting and adding the same component. We hypothesize that teaching the editor to imitate experts under unstable states (i.e. amid its initial full pass of editing) could be detrimental. Therefore, we propose another sampling strategy, POSTREFINESAMPLING, which samples and collects state-action pairs from the expert as a post refinement step (Algo. 2 in Appendix B). Specifically, we first run our editor to finish its sequential editing, which gives the output tree gT (Line 2). If gT is different from target C+, we run the expert policy π∗ to continue editing until it successfully reaches C+, and return state-action pairs collected from the expert as training material for the editor (Line 3-5). When gT is correct, no further training data will be collected (Line 6-8)." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "We test our methods on two source code edit datasets introduced by Yin et al. (2019), also largely following their experimental setting.\nThe GitHubEdits (GHE) dataset contains 〈C−, C+〉 pairs and their surrounding context collected from the commit logs of 54 GitHub C# projects. The dataset is split into train/dev/test sets of 91,372 / 10,176 / 10,176 samples. We jointly learn an edit representation f∆(C−, C+) while training the editor to generate C+ from C−. In evaluation, we measure the accuracy of each editor based on whether they successfully edit C− to the exact gold C+. Since the edit representation f∆ is calculated from the targeted 〈C−, C+〉 pair, we denote this setting as GHE-gold. The second dataset, C#Fixers (Fixers), is relatively small, containing 2,878 〈C−, C+〉 pairs. Unlike GHE, edit pairs in Fixers are built using 16 C# “fixers” with known semantics (e.g. removing redundant parentheses as a way to perform refactoring). As standard, we use this dataset only for testing purposes (i.e. all methods are first trained on GHE-gold). We consider a Fixers-gold setting similar as GHE-gold to evaluate the accuracy of generating C+ from 〈f∆(C−, C+), C−〉. 
Since edits in Fixers have known semantics, we also use Fixers to test methods in a one-shot setting (denoted as Fixers-one shot): For each 〈C−, C+〉 pair, we select another 〈C ′−, C ′+〉 pair from the\nsame fixer category (which bears the same edit intent as 〈C−, C+〉 but is applied to a different input) to infer the edit representation and evaluate the accuracy of generating C+ from 〈f∆(C ′−, C ′+), C−〉. We follow Panthaplackel et al. (2020a) and pick the first 100 samples at most per fixer category (the “seeds”), compute an edit representation of each one, and apply it to edit the others. We then report an average accuracy over the seeds as the score for this fixer category. Because the sample size of each fixer category is highly imbalanced, we report both macro average (treating all categories equally) and micro average (dependent on the sample size) edit accuracies over the 16 fixer categories.5 In this one-shot evaluation, higher accuracy also implies that the learned edit representation generalizes better from the specific edit pair to represent the semantics of an edit category.\nCompared with GHE/Fixers-gold, the Fixers-one shot setting is somewhat more realistic, since in practice one could only provide similar edits on other input trees as the edit specifications. However, GHE/Fixers-gold provides a more controllable learning benchmark. It investigates how well an editor can perform when its given edit representation has encoded the exact desired edits on the given input. Note that this is not a trivial task as the edit representation has been “bottlenecked” within a single continuous vector. Particularly for GHE, it covers way more diverse edit patterns than the 16 fixer categories by Fixers, making the GHE-gold evaluation also challenging. Consequently, in our experiments, we examine each model by analyzing their performance on all evaluation settings.\nBaselines: We compare our proposed neural editor (denoted as “Graph2Edit”6) with two stateof-the-art editors: (1) Graph2Tree (Yin et al., 2019), a model that, like ours, represents a program in its AST form and models the editing of tree-structured data. However, instead of generating a sequence of incremental edits, it decodes the edited tree in one pass; (2) CopySpan (Panthaplackel et al., 2020a), a model that represents programs as sequences of tokens and edits them by directly generating the edited code tokens from scratch.\nWe also experiment with two edit encoders for learning edit representations. Besides our proposed structural edit encoder (denoted as “TreeDiff Edit Encoder”), we consider a sequence edit encoder, which uses a bidirectional LSTM to compress three pieces of information: code tokens in C−, code tokens in C+, as well as their differences represented by a sequence of predefined edit tags (e.g. delete, add, or keep). This edit encoder (denoted as “Seq Edit Encoder”) was shown to offer more precise and generalizable edit representations than others tested in Yin et al. (2019).\nIn experiments, we reproduce and test baselines by using implementations kindly provided by their authors. We include all configuration and implementation details in Appendix C." }, { "heading": "5.2 MAIN RESULTS", "text": "Tab. 1 shows our experimental results, where we examine two questions:\n(1) How does our incremental editor compare with one-pass baselines? On Fixers-one shot, when all editors use the Seq Edit Encoder, our editor outperforms others substantially by more than 9% macro accuracy and 7% micro accuracy. 
This implies that our editor is better at capturing generalizable semantics underlying the edits. Given that all editors use the same architecture for the edit encoder, this also means that our editor encourages better edit repre-\nsentation learning in the edit encoder. The outstanding generalization ability of our editor demonstrates the advantage of modeling incremental edits; when our editor is trained to generate the edits rather than the edited tree from scratch, it implicitly drives its edit encoder to learn to capture the salient information about the edits (otherwise it has no means to generate the accurate edit sequence).\nIntriguingly, we observe inverse performance from the three editors when their edit representation is or is not inferred from the gold edit pair; editors performing better on GHE/Fixers-gold (CopySpan\n5Note that this one-shot evaluation procedure is different from the one used by Yin et al. (2019), so the results from Table 5 of Yin et al. (2019) are not comparable to ours. See § C.1 for details.\n6“Graph” simply indicates the use of a graph neural network to encode a tree.\n> Graph2Tree > Graph2Edit) consistently obtain worse accuracies on Fixers-one shot (CopySpan < Graph2Tree< Graph2Edit). We conjecture that when Seq Edit Encoder is jointly trained with the baseline editors, it tends to memorize the specific patterns about C+ as opposed to the generalizable information about the edits when trained with our editor, because the baseline editors are trained to decode the exact content of C+ from scratch. In comparison, this phenomenon is less prominent for Graph2Tree and more for CopySpan, since the former generates C+ in the form of an AST tree while the latter generates C+ in the form of a token sequence (which is exactly how C+ is encoded by the Seq Edit Encoder).\nCase study & expressivity of structural edits: We further showcase the generation from each editor (with Seq Edit Encoder) in the Fixers-one shot setting (Tab. 2). Example 1 illustrates typical cases where our editor Graph2Edit succeeds while the baseline editors fail. The example is about removing a redundant ToString call. Our editor learns to transfer and apply this editing pattern even when the input tree C− is very different from C ′−, while other editors behave very sensitively to the specific content of C−. This is because, from the perspective of our editor, the edits required by 〈C ′−, C ′+〉 and 〈C−, C+〉 are the same, both first deleting an InvocationExpression subtree corresponding to “VAR1.VAR2.ToString()” and then copying back its MemberAccessExpression subtree corresponding to “VAR1.VAR2”. In fact, we observe that in many cases, the actual tree edit that our editor needs to perform is irrelevant to the surface form of the input treeC−. As our editor is trained to generate the actual tree edits, together with Seq Edit Encoder, it learns a better alignment between changing at the token level (e.g. from “VAR1.VAR2.ToString()” to “VAR1.VAR2”) and performing targeted edits at the tree level.\nOn the other hand, this also means that our editor may fail when the desired edits for 〈C−, C+〉 bears a very different structure from the edits of 〈C ′−, C ′+〉, even if they are very close at the token level. Example 2 illustrates this situation where our editor fails. 
In this example, editing C ′− involves removing a redundant type cast from a MemberAccessExpression subtree (corresponding to “VAR2.ColBegin”) while the desired edits for C− require detaching the type cast from an InvocationExpression subtree (corresponding to “VAR1.ExceptionFromProto(VAR2.Cause)”). Therefore, even if our editor can precisely capture the structural edits expressed in 〈C ′−, C ′+〉, it cannot edit C− correctly. We observe that Graph2Tree could also be sensitive to this phenomenon while CopySpan runs successfully (although in many other cases, CopySpan produces ungrammatical programs, as we enumerate in § D.2). Finally, we note that our editor also performs comparably with or better than Graph2Tree when they are both equipped with TreeDiff Edit Encoder, as we will discuss next.\n(2) What is the influence of edit encoding? When replacing the Seq Edit Encoder with TreeDiff Edit Encoder, we observe significant improvement for both Graph2Tree and Graph2Edit on GHE/Fixers-gold; in the meantime, their performance on Fixers-one shot is comparable to the best accuracy. This implies that our proposed edit encoder is able to learn both more expressive and more generalizable edit representations. Particularly for our proposed Graph2Edit, it clearly outperforms Graph2Tree on GHE/Fixers-gold and is comparable to the latter on Fixers-one shot.\nHowever, we also notice that for Graph2Edit, switching to the TreeDiff Edit Encoder only helps the GHE/Fixers-gold scenario and results in a slight performance drop in the one-shot setting. This is likely because Graph2Edit has overfit to the specific edit representations during training, the same issue that CopySpan confronts, when the target outputs of the editor (ground-truth tree edits for Graph2Edit and ground-truth code tokens for CopySpan) have been exposed to the edit encoder (TreeDiff Edit Encoder for Graph2Edit and Seq Edit Encoder for CopySpan) in exactly the same format. We note that the issue comes with the fact that all models are trained on the GHE-gold training set while only tested on Fixers-one shot (see § 3.3 and § 5.1). It would be ideal to train and test a model on the Fixers-one shot setting, but the Fixers dataset is not large enough for this to be feasible. We leave such an evaluation as an important topic to explore in the future.\nFinally, in § D.3, we show the nearest neighbors of given edit pairs based on their edit representations, which qualitatively also demonstrate the superiority of TreeDiff Edit Encoder." }, { "heading": "5.3 IMITATION LEARNING EXPERIMENTS", "text": "We finally demonstrate that training our editor via imitation learning makes it more robust. We consider two data settings: 20% or full training data. In each case, we first pretrain our editor with gold edit sequences on the training set via supervised learning, equivalent to setting β to 1 in the first iteration of imitation learning, a commonly adopted strategy for DAGGER (Ross et al., 2011). We then run another iteration of imitation learning on the same training set to sample states and collect dynamic expert demonstrations, following either DAGGERSAMPLING (Algo. 1) or POSTREFINESAMPLING (Algo. 2). Empirically, we observe worse performance when setting β = 0 in DAGGERSAMPLING. This is likely because in the editing tasks we experiment on, offering one-step expert demonstrations is not enough to teach the model to complete all the remaining edits successfully. We eventually set β = 0.5. 
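For concreteness, a sketch of the mixture-policy rollout used by DAGGERSAMPLING follows. Here `expert` stands in for the dynamic oracle, `apply_action` is the assumed transition helper from the earlier sketch, and the step cap of 70 matches the maximum edit length reported in § D.4.

```python
import random

def dagger_sampling(editor, expert, f_delta, c_minus, beta=0.5, max_steps=70):
    """Roll out the beta-mixture of editor and expert (Algo. 1) and collect
    (state, expert action) pairs for retraining."""
    g, history, demos = c_minus, [c_minus], []
    for _ in range(max_steps):
        gold = expert(f_delta, history)          # dynamic oracle at the current state
        demos.append((list(history), gold))
        a = gold if random.random() < beta else editor.predict(f_delta, history)
        if a.op == "Stop":
            break
        g = apply_action(g, a)                   # deterministic tree transition
        history.append(g)
    return demos
```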
We include the experimental details and an analysis in § D.4.\nFor the base editor, we use “Graph2Edit w/ Seq Edit Encoder,” which is more prone to mistakes than “Graph2Edit w/ TreeDiff Edit Encoder” and is thus presumably a better testbed for robust learning algorithms. The experimental results are shown in Tab. 3. In the 20% training data setting, imitation learning improves supervised learning slightly with DAGGERSAMPLING and by 1.5% accuracy with POSTREFINESAMPLING. Our analysis shows that the editor trained using DAGGERSAMPLING learns to correct its previous wrong edits (e.g. Delete["VAR1"] then Add["StringX"]). However, it may also fall into a local loop of repetitively deleting and adding the same component, which makes its edit length (T) generally longer than that of other editors. This situation is neatly remedied by using POSTREFINESAMPLING to collect expert demonstrations. With this strategy, although we train the editor to correct its wrong edits as a post-refinement step, the well-trained editor indeed becomes more robust in making correct decisions in its initial full pass of editing (rather than making wrong decisions and then revoking them). This strategy also improves the base editor slightly under the full training data setting." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "This paper presented a generic model for incremental editing of tree-structured data and demonstrated its capability using program editing as an example. In the future, this model could be extended to other tasks such as syntax-based grammar error correction (Zhang & Wang, 2014) and sentence simplification (Feblowitz & Kauchak, 2013), or incorporate natural language-based edit specifications, where the editing process is triggered by natural language feedback or commands (Suhr et al., 2018; Zhang et al., 2019; Yao et al., 2019; 2020; Elgohary et al., 2020)." }, { "heading": "A MODEL ARCHITECTURE DETAILS", "text": "A.1 IMPLEMENTATION WITH ASDL\nTo implement the “dummy node” mechanism, we utilize the ASDL “field”, which ensures the grammatical correctness of every edit. In ASDL, children of each tree node are grouped under different fields, and each field has a cardinality property (single, optional ?, or sequential *) indicating the number of nodes it can accept as grammatically valid children.\nFor single-cardinality fields that require exactly one child and optional-cardinality fields that accept zero or one child, we attach one dummy node when they do not have a child. For example, at t = 1 in Fig. 1(b), when the MethodCall-rooted subtree is deleted, we automatically attach a Dummy node to its parent field “right” (which has single cardinality). Then the new node ElementAccess can be added by selecting Dummy and replacing it with the new node. Similarly, after deriving node ElementAccess at t = 2, we automatically add a Dummy node to each of its two single-cardinality fields (i.e. obj and index). Note that, to add a new tree node or copy a subtree as a child of a single/optional field, the Add or CopySubTree operator needs to be applied to a dummy node, because the dummy node of a single/optional field indicates the only vacant position that is syntactically valid to accept a child for such fields. The new tree node or the copied subtree then simply replaces the original dummy node of the field.\nFor sequential-cardinality fields that accept multiple children, we always attach one dummy node as their rightmost child.
For example, a sequential field having two children A and B will have a child list of [A, B, Dummy]. Adding a new node in this case is implemented by selecting the right sibling of the target position and then inserting the new node to its left. For example, adding a new node C to the left of A can then be achieved by selecting A and then inserting C before it, and adding to the right of B is done by selecting Dummy and then inserting C before Dummy (resulting in [A, B, C, Dummy]).\nA.2 TREE EDIT DECODER\nIn this section, we provide detailed formulations of node selection and value prediction in our proposed tree edit decoder.\nNode Selection: Given a tree gt, there could exist an arbitrary number of tree nodes. Therefore, we design the node selection module similar to a pointer network (Vinyals et al., 2015):\nhnode,t = tanh(Wnode[st; emb(opt)] + bnode),\np(nt|st, opt) = softmax(hTnode,tnt),\nwhere emb(opt) embeds the previously selected operator opt, nt is the node representation, and Wnode, bnode are model parameters. The softmax is computed over all nodes nt ∈ gt. Value Prediction: After deciding the target position (inferred from the selected node), adding a new node or subtree to the current tree can be viewed as expanding its parent node in typical tree-based generation tasks (Yin & Neubig, 2017; Rabinovich et al., 2017; Yin & Neubig, 2018). We thus adapt the tree-based semantic parsing model of Yin & Neubig (2018) as our value predictor.\nRecall that the Add operator adds a new node to the tree by either applying a production rule (val = rule) or predicting a terminal token (val = tok), and the CopySubTree operator copies a subtree (val = subtree) to expand the current tree. In all cases, we only consider candidates that satisfy the underlying grammar constraints. The prediction probability is also calculated via a pointer network in order to handle varying numbers of valid candidates in each decision situation:\nhval,t = tanh(Wval[st;nt; emb(pnt 7→ nt)] + bval), p(val t|st, opt, nt) = softmax ( hTval,tW emb(val t) ) ,\nwhere Wval, bval and W are all model parameters, emb(pnt 7→ nt) is the embedding of the edge type (or “field”; see § A.1) between the parent node pnt and the child nt (e.g. field “right” for AssignStmt\nright−−→ ElementAccess), and emb(val t) denotes the representation of the argument candidate: for production rules, it is their learned embedding; for terminal tokens, it is their word embedding; for subtree candidates, we use the representation of their root node as the encoding of the subtree.\nAlgorithm 1 DAGGERSAMPLING Require: 〈f∆, C−, C+〉 from training set, learn-\ning editor πθ, expert policy π∗, β ∈ [0, 1] 1: Let g1 = C−. 2: Let π′ = βπ∗ + (1− β)πθ. 3: Sample a trajectory from π′(f∆, g1). 4: Collect and return {〈s, π∗(s)〉} for all states s\nvisited by π′.\nAlgorithm 2 POSTREFINESAMPLING Require: 〈f∆, C−, C+〉 from training set,\nlearning editor πθ, expert policy π∗ 1: Let g1 = C−. 2: Sample a trajectory using πθ(f∆, g1). De-\nnote gT as the output tree by the editor. 3: if gT 6= C+ then 4: Sample a trajectory from π∗(f∆, gT ); 5: Return {〈st, π∗(st)〉|t ≥ T}. 6: else 7: Return empty collection. 8: end if\nB IMITATION LEARNING ALGORITHMS\nWe present DAGGERSAMPLING and POSTREFINESAMPLING in Algo. 1 and Algo. 2, respectively." }, { "heading": "C DATASETS AND CONFIGURATIONS", "text": "C.1 DATASETS\nFor all datasets, we use the preprocessed version by Yin et al. (2019) for a fair comparison. 
The preprocessing includes tokenizing each code snippet and converting it into a AST.7 For each 〈C−, C+〉, we run a dynamic programming algorithm to search for the shortest edit sequence from C− to C+. The average length of gold edit sequences is 7.375 on GitHubEdits training set and 7.348 on C#Fixers.\nOur evaluation of the Fixers-one shot setting follows Panthaplackel et al. (2020a). Specifically, for each fixer category, we pick its first 100 samples to compute 100 “seed” edit representations (for categories whose numbers of samples are smaller than 100, we use all of their N samples). We then apply each seed edit representation to edit the remaining 99 samples (or N -1 samples for categories with less than 100 total samples), calculate one accuracy score for each seed, and report an average accuracy over the 100 seeds. This gives us an average score for each fixer category. Because the sample size of each fixer category is highly imbalanced, we report both macro average (treating all categories equally) and micro average (dependent on sample size) edit accuracies over the 16 fixer categories. Note that this is different from the evaluation procedure of Yin et al. (2019). Yin et al. (2019) considered only 10 “seeds” per category (although each seed edit representation is applied to all samples in the category), and reported the best accuracy over the 10 seeds as the score for the category. We believe enlarging the number of “seeds” and reporting an average accuracy can better represent a model’s capability. As a result, the numbers in Table 5 of Yin et al. (2019) are not comparable with results reported in our Tab. 1.\nAs introduced in § 3.3, for every 〈C−, C+〉 pair in the training set, we run a dynamic programming algorithm to create the gold-standard edit action sequence. We elaborate the algorithm in Algo. 3 (for simplicity, we omit the edit backtrace recording part). The algorithm edits a source tree node Cs into a target tree node Ct of the same type (and thus having the same fields), given a memory of subtrees M that can be copied during edits. In practice, this subtree memory consists of all subtrees in the input tree C−. The algorithm compares the source and the target tree node field by field (Line 2). In each field f , we assume a left-to-right, top-down order when comparing the children of the source tree node Cs,f and the children of the target tree node Ct,f within this field. Cs,fm (resp. Ct,fn ) denotes the m-th (n-th) child of C\ns,f (Ct,f ). “CountTreeNode” counts the number of tree nodes within the subtree of a root node.\nD[m,n] defines the shortest distance of editing Cs,f (up to the m-th child) into Ct,f (up to the n-th child). In Line 4-19, the algorithm initializes the distance matrix with boundary cases; in Line\n7The ASDL grammar we used for C# can be found at: https://raw.githubusercontent.com/ dotnet/roslyn/master/src/Compilers/CSharp/Portable/Syntax/Syntax.xml.\nAlgorithm 3 TREESHORTESTDIST Require: Source tree node Cs and target tree node Ct (assuming non-terminal nodes and have the\nsame node type and fields), subtree memory M. Ensure:\n1: Initialize d← 0; // total edit distance for all fields 2: for every field f of Cs do // loop over aligned fields 3: M ← number of children in Cs,f , N ← number of children in Ct,f . 4: Initialize D as a matrix of (M + 1) by (N + 1). // distance matrix 5: Initialize D[0, 0]← 0. 6: for m = 1 to M do 7: D[m, 0]←D[m− 1, 0] if Cs,fm is dummy and D[m− 1, 0] + 1 otherwise. 
//deleting a\ntree node takes 1 step; no need to delete dummy nodes 8: end for 9: for n = 1 to N do\n10: if field f is single/optional and Cs,fm is a valid, non-dummy node then 11: D[0, n]← inf. //cannot add to non-empty single/optional fields of Cs,f 12: else if Ct,fn is dummy then 13: D[0, n]←D[0, n− 1]. //no need to add dummy nodes 14: else if Ct,fn ∈M then 15: D[0, n]←D[0, n− 1] + 1. // copying a subtree takes 1 step 16: else 17: D[0, n] ← D[0, n − 1] + CountTreeNode(Ct,fn ). // add nodes of Ct,fn one by one,\neach requiring 1 step 18: end if 19: end for 20: for m = 1 to M do 21: for n = 1 to N do 22: v1 ←D[m− 1, n] if Cs,fm is dummy and D[m− 1, n] + 1 otherwise; 23: v2 ←D[m,n− 1] if Ct,fn is dummy, D[m,n− 1] + 1 if Ct,fn ∈M, and D[m,n−\n1] + CountTreeNode(Ct,fn ) otherwise; 24: v3 ←D[m−1, n−1] + TREESHORTESTDIST(Cs,fm , Ct,fn , M) if Cs,fm and Ct,fn are\nboth non-terminal nodes and have the same node type, D[m − 1, n − 1] if Cs,fm and Ct,fn are both terminal tokens and have the same value, and inf otherwise;\n25: D[m,n]← min{v1, v2, v3}. 26: end for 27: end for 28: d← d+ D[M,N ]. // add the shortest distance of field f 29: end for 30: return d as the total shortest edit distance.\n20-27, it considers general cases. D[M,N ] is thus the shortest distance of editing Cs,f into Ct,f completely (Line 28). The final distance from Cs to Ct is the summation of shortest distances over all fields (Line 30).\nTo generate the shortest edit distance for 〈C−, C+〉, we run the TREESHORTESTDIST algorithm with Cs and Ct being the roots of C− and C+, respectively. Based on the backtrace records (omitted in Algo. 3), we produce the gold-standard edit action sequence, with one extra Stop edit action appended in the end to signal the end of the editing process. Note that this is a global stop signal. As the model learns about when to stop editing the whole tree globally, it actually also learns about when to stop editing any certain subtrees within it.\nC.2 MODEL CONFIGURATIONS\nSince surrounding contexts around the edited program are also provided in all datasets, we additionally allow the value predictor (§ 3.1) to copy a terminal token from either the input tree’s code tokens or the contexts. To this end, we introduce another bidirectional LSTM encoder to encode the input code tokens as well as the contexts. The last hidden state is used to represent each token. The same design is also adopted in the two baseline editors.\nFor the encoder of our neural editor, the dimension of word embedding and the tree node representation is set to 128. The dimension of the bidirectional LSTM encoder for encoding input code tokens and contexts is set to 64. The hidden state for tracking tree history is set to 256 dimensions. In the decoder side, the dimensions of the operator embedding, the field embedding, the production rule embedding, and the hidden vector in value prediction are set to 32, 32, 128 and 256, respectively.\nFor a fair comparison, we follow Yin et al. (2019) and Panthaplackel et al. (2020a) to encode a code edit into a real-valued vector of 512 dimensions. For our TreeDiff Edit Encoder, each edit action is encoded into a vector of 256 dimensions. The bidirectional LSTM also has a hidden state of 256 dimensions. When training Graph2Edit/Graph2Tree jointly with TreeDiff Edit Encoder, common parameters that are designed for both the neural editor and the edit encoder (e.g. 
the operator/field embedding) are shared.\nIn experiments, we reproduce and evaluate baselines by using implementations kindly provided by their authors. This includes testing the baseline editors under exactly the same setting as they were tested in their original paper (e.g. decoding using beam search of size 5 for Graph2Tree and 20 for CopySpan).\nFor the supervised learning, we train our Graph2Edit for 30 epochs on GitHubEdits training set, where the best model parameters are selected based on the editor’s cross entropy loss on dev set. To enable more stable and reproducible results, we repeat the experiments for 3 times and report average performance." }, { "heading": "D ADDITIONAL EXPERIMENTAL RESULTS", "text": "D.1 A MORE COMPREHENSIVE COMPARISON WITH EXISTING APPROACHES\nWe note that there are some other approaches by Yin et al. (2019) and Panthaplackel et al. (2020a) that are not included in our Tab. 1. However, these approaches have been reported to be weaker than those we mainly tested. For example, “Seq2Seq + Seq Edit” performs worse than “CopySpan + Seq Edit” on both GHE-gold and Fixers-one shot (micro) in Panthaplackel et al. (2020a); Yin et al. (2019) claimed that “Graph2Tree + Seq Edit” is better than both “Seq2Seq + Seq Edit” and “Graph2Tree + Graph Edit”. As we have slightly adjusted the evaluation settings (i.e. adding Fixersgold and changing the evaluation procedure of Fixers-one shot compared with Yin et al. (2019)), results are largely missing for existing approaches. Therefore, we only re-tested and compared with existing state-of-the-art approaches (CopySpan and Graph2Tree) in our main results (§ 5.2). In Tab. 4, however, we include a more comprehensive set of existing approaches for reference.\nD.2 MORE EDIT EXAMPLES\nIn Tab. 5, we include more examples from each editor (with Seq Edit Encoder) in the Fixers-one shot setting. Example 1-2 demonstrate that our proposed Graph2Edit editor can successfully identify the correct target position to edit, even when the context of C− is very different from that of C ′−. Consider Example 1 for instance. Graph2Edit correctly locates the await keyword and its associated expression “await VAR0.GracefulStop(TimeSpan.FromSeconds(LITERAL))”, and then appends “.ConfigureAwait(false)” to the expression, all within the argument field of the outer “Assert.True()” expression. Graph2Tree also tries to append “.ConfigureAwait(false)” to the await expression, but as it generates the edited tree from scratch, it mistakenly copies an additional argument to the “Assert.True()” expression. CopySpan’s generation misses a right parenthesis, leading to an ungrammatical program. Similarly in Example 2, only Graph2Edit performs the desired edits when the contexts are different for C ′− and C−.\nOn the other hand, Graph2Edit can fail when the correct edits require more than structural information. For example, to succeed in Example 3 of Tab. 5, an editor needs to generalize the removal of redundant parentheses from a literal variable name (“VAR4”) to a bracket-wrapped binary expression (“VAR2/LITERAL”). We found that all editors fail in this case. Graph2Edit in this example correctly removes the parenthesized binary expression but can only copy the literal part (“LITERAL”) back; Graph2Tree incorrectly revises the irrelevant VAR3 variable; CopySpan again produces an ungrammatical output. The target edits in Example 4 are even more complicated.8 It requires an editor to understand the use of the nameof operator to replace the explicit string literal. 
All editors fail to identify “EnumUnderTest.ALLCAPITALS.ToString()” as another kind of string literal.\n8A de-anonymized instance of 〈C′−, C′+〉 could be: 〈parameter.EnsureIsPositiveFinite(\"parameter\"), parameter.EnsureIsPositiveFinite(nameof(parameter))〉. The nameof operator returns the name of the variable as a string literal.\nD.3 EDIT REPRESENTATIONS OF TREEDIFF EDIT ENCODER\nTab. 6 shows the nearest neighbors of given edit pairs from GHE dev set, based on the cosine similarity of their edit representations f∆(C−, C+) calculated by different edit encoders. The edit in Example 1 means to swap function arguments (e.g. from “(VAR0,VAR1)” to “(VAR1, VAR0)”). Intuitively such structural changes can be easily captured by our tree-level edit encoder. This is consistent with our results, which show that, for both Graph2Tree and Graph2Edit, TreeDiff Edit Encoder learns more consistent edit representations for this edit, while Seq Edit Encoder may confuse it with edits that replace the original argument with a new one (e.g. modifying “(VAR0,VAR1)” to “(VAR2,VAR0)”). Our proposed edit encoder can also generalize from literals (e.g. swapping between “(VAR0,VAR1)”) to more complex expressions (e.g. swapping between “(VAR0.Value, LITERAL)”). On the other hand, when the intended edits can be easily expressed as token-level\nTable 6: The nearest neighbors of given edit pairs based on their edit representations. Example 1\nC−: BoundsCheck(VAR0, VAR1); C+: BoundsCheck(VAR1, VAR0);\nGraph2Tree – Seq Edit Encoder I C−: ReleasePooledConnectorInternal(VAR0, VAR1);\nC+: ReleasePooledConnectorInternal(VAR2, VAR0);\nI C−: UngetPooledConnector(VAR0, VAR1); C+: UngetPooledConnector(VAR2, VAR0);\nI C−: VAR0.Warn(LITERAL, VAR1); C+: VAR0.Warn(VAR1, LITERAL);\nGraph2Tree – TreeDiff Edit Encoder I C−: InternalLogger.Error(LITERAL, VAR0);\nC+: InternalLogger.Error(VAR0, LITERAL);\nI C−: VAR0.Warn(LITERAL, VAR1); C+: VAR0.Warn(VAR1, LITERAL);\nI C−: AssertEqual(VAR0.Value, LITERAL); C+: AssertEqual(LITERAL, VAR0.Value);\nGraph2Edit – Seq Edit Encoder I C−: ReleasePooledConnectorInternal(VAR0, VAR1);\nC+: ReleasePooledConnectorInternal(VAR2, VAR0);\nI C−: UngetPooledConnector(VAR0, VAR1); C+: UngetPooledConnector(VAR2, VAR0);\nI C−: ReportUnusedImports(VAR0, VAR1, VAR2); C+: ReportUnusedImports(VAR2, VAR0, VAR1);\nGraph2Edit – TreeDiff Edit Encoder I C−: VAR0.Warn(LITERAL, VAR1);\nC+: VAR0.Warn(VAR1, LITERAL);\nI C−: InternalLogger.Error(LITERAL, VAR0); C+: InternalLogger.Error(VAR0, LITERAL);\nI C−: AssertEqual(VAR0.Value, LITERAL); C+: AssertEqual(LITERAL, VAR0.Value);\nExample 2\nC−: var VAR0=GetEtagFromRequest(); C+: var VAR0=GetLongFromHeaders(LITERAL);\nGraph2Tree – Seq Edit Encoder I C−: var VAR0=new ProfileConfiguration();\nC+: var VAR0=new Profile(LITERAL);\nI C−: var VAR0=PrepareForSaveChanges(); C+: var VAR0=PrepareForSaveChanges(null);\nI C−: bool VAR0=true; C+: bool VAR0=CanBeNull(VAR1);\nGraph2Tree – TreeDiff Edit Encoder I C−: var VAR0=new ProfileConfiguration();\nC+: var VAR0=new Profile(LITERAL);\nI C−: CalcGridAreas(); C+: SetDataSource(VAR0, VAR1);\nI C−: VAR0=new Win32PageFileBackedMemoryMappedPager(); C+: VAR0=new Win32PageFileBackedMemoryMappedPager(\nLITERAL);\nGraph2Edit – Seq Edit Encoder I C−: var VAR0=new ProfileConfiguration();\nC+: var VAR0=new Profile(LITERAL);\nI C−: VAR0.Dispose(); C+: VAR0.Close(VAR1);\nI C−: VAR0=VAR1(VAR2); C+: VAR0=GetSpans(VAR2, VAR1);\nGraph2Edit – TreeDiff Edit Encoder I C−: var VAR0=new ProfileConfiguration();\nC+: var VAR0=new Profile(LITERAL);\nI C−: 
VAR0=Thread.GetDomain().DefineDynamicAssembly(VAR1, ↪→ AssemblyBuilderAccess.Run); C+: VAR0=Thread.GetDomain().DefineDynamicAssembly(VAR1, ↪→ AssemblyBuilderAccess.RunAndSave, LITERAL);\nI C−: new DocumentsCrud().EtagsArePersistedWithDeletes(); C+: new DocumentsCrud().PutAndGetDocumentById(LITERAL);\nediting (e.g. inserting an argument token), the two edit encoders perform comparably, as shown in Example 2. However, we still observe that TreeDiff Edit Encoder works better at interpreting the editing semantics of code snippets with complex structures (e.g. more complex edit pairs are retrieved).\nD.4 MORE DETAILS ABOUT IMITATION LEARNING EXPERIMENTS\nExperimental Setup We use “Graph2Edit w/ Seq Edit Encoder” as the base editor. We do not experiment with “Graph2Edit w/ TreeDiff Edit Encoder” as it performed very well on GHE-gold training set even without imitation learning. In fact, 80% of the remaining errors were due to issues such as unknown tokens, which cannot be fixed with better training algorithms, as they are outside the search space of our current model. We leave experimenting this model on harder datasets as future work. Like the main experiments (§ C.2), we ran the imitation learning experiments for three times and reported average performance in Tab. 3.\nAnalysis We provide a more detailed analysis about the imitation learning experiments, especially how the choice of β in DAGGERSAMPLING affects the model performance. Our analysis is based on Graph2Edit+Seq Edit Encoder’s results on the GHE dev set, when they are trained with 20% of the GHE training data. We observe that the DAGGERSAMPLING algorithm generally trains the editor to behave very unstably. The editor shows to “regret” its previous decisions (mostly about terminal tokens, the prediction of which is generally harder than that of non-terminal nodes on GHE). An example is shown in Tab. 3, where the DAGGERSAMPLING editor first adds a token “VAR1” to the tree and then deletes it in the next step. This happens to around 23% of the examples (count=2,373 in Tab. 7) on the dev set for DAGGERSAMPLING when β is set to 0 (i.e. when the editor samples all states from itself throughout the imitation learning process). Among them, 84% of the deletions (count=1,989) are correct; they remove the a wrong token added in the previous step. However, we\nalso observe that the editor may then regret their correct deletions and add back the wrong tokens after “swinging” for a few steps (42% of the cases), revealing an interesting “add - delete (- add - ... - delete) - add” hesitation phenomenon. Similarly, among the remaining 384 examples where the editor deletes a correct token that it previously added, we found out in 51% of the cases the editor will add back the correct token. This unstable behavior can easily lead to an endless loop of repetitively adding and then deleting the same token (447 cases), until the editor reaches the maximum edit length that we set as a hyper-parameter (70 in our experiments).\nWe hypothesize that setting β to 0 in the DAGGERSAMPLING algorithm has forced the editor to learn only single-step correction edits (i.e., the correction edit under the current state). In other words, even if the editor could imitate the expert demonstration and correct its mistake at this single step, it still does not know how to proceed correctly for the next step. This is possibly the reason why the editor swings between adding then deleting the same token. 
An interesting question is why this problem manifests more severely in our setting, compared with traditional imitation learning tasks (e.g. racing games). We hypothesize that editing tree-structured data for program revision requires more continuous and structured supervision, such as teaching the policy to complete the entire generation of a subtree, rather than teaching it to add only the next correct single tree node. We leave a deeper study of this problem to future work.\nIn experiments, to alleviate this problem, we propose to set β higher, because when the editor executes actions from both itself and the expert, it is more likely to collect a continuous sequence of correction edits. We show improved program editing accuracy of DAGGERSAMPLING with β = 0.5 in Tab. 3, and an analysis of its “add-then-delete” behavior in Tab. 7. Compared with when β = 0, this setting reduces the frequency of unstable editing. We also propose a new sampling algorithm, POSTREFINESAMPLING, which trains the editor to behave much more stably than DAGGERSAMPLING: we observe almost none of the aforementioned add-then-delete behavior (in less than 0.5% of dev examples)." } ]
2021
null
SP:f3fedc975625ffe149ab536fdf537871fcadcbbf
[ "This paper presents a method/tool, i.e., LowKey, to protect user privacy which leverages adversarial attacks to pre-process facial images against the black-box facial recognition system in social media, yet the processed facial images remain visually acceptable. The LowKey method proposes to attack an ensemble of facerec models by optimizing the Gaussian blur to the original face images, with the LIIPS metric on the L2 distance in the \\emph{feature space}. Thus, the processed face images remain visually legible to human. The ensemble of facerec models include ResNet-50, RestNet-152, IR-50 and IR-152 trained on MS-Celeb-1M dataset. The LowKey method has demonstrated very effective in combating black-box commercial facerec system at Amazon and Microsoft." ]
Facial recognition systems are increasingly deployed by private corporations, government agencies, and contractors for consumer services and mass surveillance programs alike. These systems are typically built by scraping social media profiles for user images. Adversarial perturbations have been proposed for bypassing facial recognition systems. However, existing methods fail on full-scale systems and commercial APIs. We develop our own adversarial filter that accounts for the entire image processing pipeline and is demonstrably effective against industrial-grade pipelines that include face detection and large scale databases. Additionally, we release an easy-to-use webtool that significantly degrades the accuracy of Amazon Rekognition and the Microsoft Azure Face Recognition API, reducing the accuracy of each to below 1%.
[ { "affiliations": [], "name": "Valeriia Cherepanova" }, { "affiliations": [], "name": "Micah Goldblum" }, { "affiliations": [], "name": "Shiyuan Duan" }, { "affiliations": [], "name": "Gavin Taylor" } ]
[ { "authors": [ "Ankan Bansal", "Anirudh Nanduri", "Carlos D Castillo", "Rajeev Ranjan", "Rama Chellappa" ], "title": "Umdfaces: An annotated face dataset for training deep networks", "venue": "IEEE International Joint Conference on Biometrics (IJCB),", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In 2017 ieee symposium on security and privacy (sp),", "year": 2017 }, { "authors": [ "Valeriia Cherepanova", "Vedant Nanda", "Micah Goldblum", "John P Dickerson", "Tom Goldstein" ], "title": "Technical challenges for training fair neural networks", "venue": "arXiv preprint arXiv:2102.06764,", "year": 2021 }, { "authors": [ "Ping-Yeh Chiang", "Jonas Geiping", "Micah Goldblum", "Tom Goldstein", "Renkun Ni", "Steven Reich", "Ali Shafahi" ], "title": "Witchcraft: Efficient pgd attacks with random step size", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Jiankang Deng", "Jia Guo", "Niannan Xue", "Stefanos Zafeiriou" ], "title": "Arcface: Additive angular margin loss for deep face recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "William Derringer" ], "title": "A surveillance net blankets china’s cities, giving police vast powers", "venue": "The New York Times, Dec", "year": 2019 }, { "authors": [ "Micah Goldblum", "Avi Schwarzschild", "Naftali Cohen", "Tucker Balch", "Ankit B Patel", "Tom Goldstein" ], "title": "Adversarial attacks on machine learning systems for high-frequency trading", "venue": "arXiv preprint arXiv:2002.09565,", "year": 2020 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Yandong Guo", "Lei Zhang", "Yuxiao Hu", "Xiaodong He", "Jianfeng Gao" ], "title": "Ms-celeb-1m: A dataset and benchmark for large-scale face recognition", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Woodrow Hartzog" ], "title": "The secretive company that might end privacy as we know it", "venue": "The New York Times, Jan", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Ira Kemelmacher-Shlizerman", "Steven M Seitz", "Daniel Miller", "Evan Brossard" ], "title": "The megaface benchmark: 1 million faces for recognition at scale", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Cassidy Laidlaw", "Sahil Singla", "Soheil Feizi" ], "title": "Perceptual adversarial robustness: Defense against unseen threat models", "venue": "arXiv preprint arXiv:2006.12655,", "year": 2020 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Weiyang Liu", "Yandong Wen", "Zhiding 
Yu", "Ming Li", "Bhiksha Raj", "Le Song" ], "title": "Sphereface: Deep hypersphere embedding for face recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Steve Lohr" ], "title": "Facial recognition is accurate, if you’re a white guy", "venue": "New York Times,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Parsa Saadatpanah", "Ali Shafahi", "Tom Goldstein" ], "title": "Adversarial attacks on copyright detection systems", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Shawn Shan", "Emily Wenger", "Jiayun Zhang", "Huiying Li", "Haitao Zheng", "Ben Y Zhao" ], "title": "Fawkes: Protecting privacy against unauthorized deep learning models", "venue": "In 29th {USENIX} Security Symposium ({USENIX} Security", "year": 2020 }, { "authors": [ "Natasha Singer" ], "title": "Microsoft urges congress to regulate use of facial recognition", "venue": null, "year": 2018 }, { "authors": [ "Hao Wang", "Yitong Wang", "Zheng Zhou", "Xing Ji", "Dihong Gong", "Jingchao Zhou", "Zhifeng Li", "Wei Liu" ], "title": "Cosface: Large margin cosine loss for deep face recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Karen Weise", "Natasha Singer" ], "title": "Amazon pauses police use of its facial recognition software", "venue": "The New York Times, Jul", "year": 2020 }, { "authors": [ "Emily Wenger", "Josephine Passananti", "Yuanshun Yao", "Haitao Zheng", "Ben Y Zhao" ], "title": "Backdoor attacks on facial recognition in the physical world", "venue": "arXiv preprint arXiv:2006.14580,", "year": 2020 }, { "authors": [ "Zuxuan Wu", "Ser-Nam Lim", "Larry Davis", "Tom Goldstein" ], "title": "Making an invisibility cloak: Real world adversarial attacks on object detectors", "venue": null, "year": 1910 }, { "authors": [ "Kaidi Xu", "Gaoyuan Zhang", "Sijia Liu", "Quanfu Fan", "Mengshu Sun", "Hongge Chen", "Pin-Yu Chen", "Yanzhi Wang", "Xue Lin" ], "title": "Adversarial t-shirt! evading person detectors in a physical world", "venue": null, "year": 1910 }, { "authors": [ "Xiao Yang", "Yinpeng Dong", "Tianyu Pang", "Jun Zhu", "Hang Su" ], "title": "Towards privacy protection by generating adversarial identity masks", "venue": "arXiv preprint arXiv:2003.06814,", "year": 2020 }, { "authors": [ "Kai Zhang", "Vı́tor Albiero", "Kevin W Bowyer" ], "title": "A method for curation of web-scraped face image datasets", "venue": "In 2020 8th International Workshop on Biometrics and Forensics (IWBF),", "year": 2020 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Jian Zhao" ], "title": "face.evolve: High-performance face recognition library based on pytorch", "venue": "https: //github.com/ZhaoJ9014/face.evoLVe.PyTorch,", "year": 2020 }, { "authors": [ "Yaoyao Zhong", "Weihong Deng" ], "title": "Towards transferable adversarial attack against deep face recognition", "venue": "arXiv preprint arXiv:2004.05790,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Facial recognition systems (FR) are widely deployed for mass surveillance by government agencies, government contractors, and private companies alike on massive databases of images belonging to private individuals (Hartzog, 2020; Derringer, 2019; Weise & Singer, 2020). Recently, these systems have been thrust into the limelight in the midst of outrage over invasion into personal life and concerns regarding fairness (Singer, 2018; Lohr, 2018; Cherepanova et al., 2021). Practitioners populate their databases by hoarding publicly available images from social media outlets, and so users are forced to choose between keeping their images outside of public view or taking their chances with mass surveillance.\nWe develop a tool, LowKey, for protecting users from unauthorized surveillance by leveraging methods from the adversarial attack literature, and make it available to the public as a webtool.\n∗Authors contributed equally.\nLowKey is the first such evasion tool that is effective against commercial facial recognition APIs. Our system pre-processes user images before they are made publicly available on social media outlets so they cannot be used by a third party for facial recognition purposes. We establish the effectiveness of LowKey throughout this work.\nOur contributions can be summarized as follows:\n• We design a black-box adversarial attack on facial recognition models. Our algorithm moves the feature space representations of gallery faces so that they do not match corresponding probe images while preserving image quality.\n• We interrogate the performance of our method on commercial black-box APIs, including Amazon Rekognition and Microsoft Azure Face, whose inner workings are not publicly known. We provide comprehensive comparisons with the existing data poisoning alternative, Fawkes (Shan et al., 2020), and we find that while Fawkes is ineffective in every experiment, our method consistently prevents facial recognition.\n• We release an easy-to-use webtool, LowKey, so that social media users are no longer confronted with a choice between withdrawing their social media presence from public view and risking the repercussions of being surveilled." }, { "heading": "2 RELATED WORK", "text": "Neural networks are known to be vulnerable to adversarial attacks, small perturbations to inputs that do not change semantic content, and yet cause the network to misbehave (Goodfellow et al., 2014). The adversarial attack literature has largely focused on developing new algorithms that, in simulations, are able to fool neural networks (Carlini & Wagner, 2017; Chiang et al., 2020). Most works to date focus on the idea of physical world attacks, in which the attacker places adversarial patterns on an object in hopes that the adversarial properties transfer to an image of the object. Such attacks do not succeed reliably because the adversarial perturbation must survive imaging under various lighting conditions, object orientations, and occlusions (Kurakin et al., 2016). While researchers have succeeded in crafting such attacks against realistic systems, these attacks do not work consistently across environments (Wu et al., 2019; Xu et al., 2019; Goldblum et al., 2020). In facial recognition, attacks have largely focused on physical backdoor threat models, evasion attacks on verification (Wenger et al., 2020; Zhong & Deng, 2020) and attacks on face detection (Pedraza et al., 2018). 
Unlike these physical threat models, the setting in which we operate is purely digital, meaning that we can manipulate the contents of digital media at the bit level, and then hand the manipulated data directly to a machine learning system. The ability to digitally manipulate media greatly simplifies the task of attacking a system, and has been shown to enhance transferability to black-box industrial systems for applications like copyright detection (Saadatpanah et al., 2020) and financial time series analysis (Goldblum et al., 2020).\nRecently, the Fawkes algorithm was developed for preventing social media images from being used by unauthorized facial recognition systems (Shan et al., 2020). However, Fawkes, along with the experimental setup on which it is evaluated in the original work, suffers from critical problems. First, Fawkes assumes that facial recognition practitioners train their models on each individual’s data. However, high-performance FR systems instead harness large pre-trained Siamese networks (Liu et al., 2017; Deng et al., 2019). Second, the authors primarily use image classifiers. In contrast, commercial systems are trained with FR-specific heads and loss functions, as opposed to the standard cross-entropy loss used by classifiers. Third, the authors perform evaluations on very small datasets. Specifically, they test Fawkes against commercial APIs with a gallery containing only 50 images. Fourth, the system was only evaluated using top-1 accuracy, but FR users such as police departments often compile a list of suspects rather than a single individual. As a result, other metrics like top-50 accuracy are often used in facial recognition, and are a more realistic metric for when a system has been successfully suppressed. Fifth, while the original work portrays Fawkes’ perturbations as undetectable by the human eye, experience with the codebase suggests the opposite (indeed, a New York Times journalist likewise noted that the Fawkes images she was shown during a demonstration were visibly heavily distorted). Finally, Fawkes has not yet released an app or a webtool, and regular social media users are unlikely to make use of git repositories. Our attack avoids the aforementioned limitations, and we perform thorough evaluations on a large collection of images and identities. When comparing with Fawkes, we use the authors’ own implementation in order to make sure that all evaluations are fair. Furthermore, we use Fawkes’ highest protection setting to make sure that LowKey performs better than Fawkes’ best attack. Another work uses targeted adversarial attacks on probe images for facial recognition systems so that they cannot be matched with images in a database (Yang et al., 2020)." }, { "heading": "3 THE LOWKEY ATTACK ON MASS SURVEILLANCE", "text": "" }, { "heading": "3.1 PROBLEM SETUP", "text": "To help make our work more widely accessible, we begin by introducing common facial recognition terms.\nGallery images are database images with known identities. These often originate from such sources as passport photos and social media profiles. The gallery is used as a reference for comparing new images.\nProbe images are new photos whose subject the FR system user wants to identify. For example, probe images may be extracted from video surveillance footage. 
The extracted images are then fed into the FR system and matched to gallery images with known identities.\nIdentification is the task of answering the question, “who is this person?” Identification entails comparing a probe image to gallery images in order to find potential matches. In contrast, verification answers the question, “is this person who they say they are?”, or equivalently “are these two photos of the same person?” Verification is used, for example, to unlock phones.\nIn our work, we focus on identification, which can be used for mass surveillance. State-of-the-art facial recognition systems first detect and align faces before extracting facial features from the probe image using a neural network. These systems then find gallery images with the closest feature vectors using a k-nearest neighbors search. The matched gallery images are then considered as likely identities corresponding to the person in the probe photo. LowKey applies a filter to user images which may end up in an organization’s database of gallery images. The result is to corrupt the gallery feature vectors so that they will not match feature vectors corresponding to the user’s probe images. A visual depiction of the LowKey pipeline can be found in Figure 2." }, { "heading": "3.2 THE LOWKEY ATTACK", "text": "LowKey manipulates potential gallery images so that they do not match probe images of the same person. LowKey does this by generating a perturbed image whose feature vector lies far away from the original image, while simultaneously minimizing a perceptual similarity loss between the original and perturbed image. Maximizing the distance in feature space prevents the image from matching other images of the individual, while the perceptual similarity loss prevents the image quality from degrading. In this section, we formulate the optimization problem, and describe a number of important details.\nLowKey is designed to evade proprietary FR systems that contain pre-processing steps and neural network backbones that are not publicly known. In order to improve the transferability of our attack to unknown facial recognition systems, LowKey simultaneously attacks an ensemble of models with various backbone architectures that are produced using different training algorithms. Additionally, for each model in the ensemble, the objective function considers the locations of feature vectors of the attacked image both with and without a Gaussian blur. We find that this technique improves both the appearance and transferability of attacked images. Experiments and ablations concerning ensembling and Gaussian smoothing can be found in Section 6. For perceptual similarity loss, we use LPIPS, a metric based on $\ell_2$ distance in the feature space of an ImageNet-trained feature extractor (Zhang et al., 2018). LPIPS has been used effectively in the image classification setting to improve the image quality of adversarial examples (Laidlaw et al., 2020).\nFormally, the optimization problem we solve is\n$$\max_{x'} \; \frac{1}{2n} \sum_{i=1}^{n} \frac{\overbrace{\| f_i(A(x)) - f_i(A(x')) \|_2^2}^{\text{non-smoothed}} + \overbrace{\| f_i(A(x)) - f_i(A(G(x'))) \|_2^2}^{\text{smoothed}}}{\| f_i(A(x)) \|_2} \; - \; \underbrace{\alpha \, \mathrm{LPIPS}(x, x')}_{\text{perceptual loss}}, \qquad (1)$$\nwhere $x$ is the original image, $x'$ is the perturbed image, $f_i$ denotes the $i$th model in our ensemble, $G$ is the Gaussian smoothing function with fixed parameters, and $A$ denotes face detection and extraction followed by 112×112 resizing and alignment. 
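To make Equation 1 concrete, the following is a minimal sketch of the objective and of the signed-gradient solver described in the next paragraph. This is an illustrative reconstruction rather than the released LowKey implementation: `extractors` (the ensemble $f_i$), `align` (a differentiable stand-in for $A$), and `perceptual` (a stand-in for LPIPS, e.g., the `lpips` package) are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=7, sigma=3.0):
    # Depthwise 2D Gaussian kernel implementing G in Equation 1
    # (size and sigma follow Appendix 8.1).
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    k = g[:, None] * g[None, :]
    k = k / k.sum()
    return k.view(1, 1, size, size).repeat(3, 1, 1, 1)

def blur(x, kernel):
    # Gaussian smoothing G applied channel-wise to an image batch (N, 3, H, W).
    return F.conv2d(x, kernel, padding=kernel.shape[-1] // 2, groups=3)

def lowkey_objective(x, x_adv, extractors, align, perceptual, alpha=0.05, kernel=None):
    # Equation 1: normalized feature displacement, summed over the ensemble for
    # the raw and blurred perturbed image, minus the perceptual-similarity penalty.
    total = 0.0
    for f in extractors:
        f_clean = f(align(x)).detach()
        non_smoothed = (f_clean - f(align(x_adv))).pow(2).sum()
        smoothed = (f_clean - f(align(blur(x_adv, kernel)))).pow(2).sum()
        total = total + (non_smoothed + smoothed) / f_clean.norm()
    return total / (2 * len(extractors)) - alpha * perceptual(x, x_adv)

def lowkey_attack(x, extractors, align, perceptual, steps=50, lr=0.0025):
    # Signed gradient ascent on the objective (step size from Appendix 8.1).
    kernel = gaussian_kernel()
    x_adv = x.clone().requires_grad_(True)
    for _ in range(steps):
        obj = lowkey_objective(x, x_adv, extractors, align, perceptual, kernel=kernel)
        (grad,) = torch.autograd.grad(obj, x_adv)
        x_adv = (x_adv + lr * grad.sign()).detach().requires_grad_(True)
    return x_adv.clamp(0, 1).detach()
```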
The face detection step is an important part of the LowKey objective function, as commercial systems rely on face detection and extraction because probe images often contain a scene much larger than a face, or else contain a face whose alignment is not compatible with the face recognition system.\nWe solve this maximization problem iteratively with signed gradient ascent, which is known to be highly effective for breaking common image classification systems (Madry et al., 2017). Namely, we iteratively update x′ by adding the sign of the gradient of the maximization objective (1) with respect to x′. By doing this, we move x′ and G(x′) far away from the original image x in the feature spaces of models fi used in the LowKey ensemble. The ensemble contains four feature extractors: IR-152 and ResNet-152 backbones, each trained with both ArcFace and CosFace heads. More details can be found in the next section.\nAdditional details concerning attack hyperparameters can be found in Appendix 8.1." }, { "heading": "4 EXPERIMENTAL DESIGN", "text": "Our ensemble of models contains ArcFace and CosFace facial recognition systems (Deng et al., 2019; Wang et al., 2018). For each of these systems, we train ResNet-50, ResNet-152, IR-50, and IR-152 backbones on the MS-Celeb-1M dataset, which contains over five million images from over 85,000 identities (He et al., 2016; Deng et al., 2019; Guo et al., 2016). We use these models both in our ensemble to generate attacks and to perform controlled experiments in Section 6. Additional details on our models and their training routines can be found in Appendix 8.1.\nWe primarily test our attacks on the FaceScrub dataset, a standard identification benchmark from the MegaFace challenge, which contains over 100,000 images from 530 known identities as well as one million distractor images (Kemelmacher-Shlizerman et al., 2016). We discard near-duplicate images from the dataset as is common practice in the facial recognition literature (Zhang et al., 2020). We also perform experiments on the UMDFaces dataset, which can be found in Appendix 8.3 (Bansal et al., 2017). We treat one-tenth of each identity’s images as probe images, and we insert the remaining images into the gallery. We randomly select 100 identities and apply LowKey to each of their gallery images. This setting simulates a small pool of LowKey users among a larger population of non-users. Then, in order to perform a single evaluation trial of identification, we randomly sample one probe image from a known identity and find its closest matches within the remainder of the FaceScrub dataset, according to the facial recognition model. Distance is measured in the feature space of the model. If the FR model selects a match from the same identity, then the trial is a success.\nNote 1 (Rank-k Accuracy). For each probe image, we consider the model successful in the rank-k setting if the correct identity appears among the k closest gallery images in the model’s feature space. To test the transferability of our attack, we compute rank-1 and rank-50 accuracy for attack and test feature extractors from our set of trained FR models." }, { "heading": "5 BREAKING COMMERCIAL BLACK-BOX APIS", "text": "The ultimate test for our protection tool is against commercial systems. These systems are proprietary, and their exact specifications are not publicly known. We test LowKey in the black-box setting using two commercial facial recognition APIs: Amazon Rekognition and Microsoft Azure Face. We also compare against Fawkes. 
We generate Fawkes images using the authors’ own code and hyperparameters to ensure a fair comparison, and we use the highest protection setting their code offers.\nAmazon Rekognition Amazon Rekognition is a commercial tool for detecting and recognizing faces in photos. Rekognition works by matching probe images with uploaded gallery images that have known labels. Amazon does not describe how their algorithm works, but their approach seemingly does not involve training a model on uploaded images (at least not in a supervised manner). We test the Rekognition API using the FaceScrub dataset (including distractors) where 100 randomly selected identities have their images attacked as described in Section 4. We observe that LowKey is highly effective, and even in the setting of rank-50 accuracy, Rekognition can only recognize 2.4% of probe images belonging to users protected with LowKey. In contrast, Fawkes fails, with 77.5% of probe images belonging to its users recognized correctly in the rank-1 setting and 94.9% of these images recognized correctly when the 50 closest matches are considered. This is close to the performance of Amazon Rekognition on clean images.\nMicrosoft Azure Face We repeat a similar experiment on the Microsoft Azure Facial Recognition API. In contrast to Amazon’s API, Microsoft updates their model on the uploaded gallery of images. Therefore, only known identities can be used, so we only include images corresponding to the 530 known identities from FaceScrub and no distractors. The Azure system recognizes only 0.1% of probe images whose gallery images are under the protection of LowKey. Even though Fawkes is designed to perform data poisoning, and its authors claim it is especially well suited to Microsoft Azure Face, in our experiments, Azure is still able to recognize more than 74% of probe images uploaded by users who employ Fawkes.\nWe conclude from these experiments that LowKey is both highly effective and transferable to even state-of-the-art industrial facial recognition systems. In the next section, we explore several components of our attack in order to uncover the sources of its success." }, { "heading": "6 ADDITIONAL EXPERIMENTS", "text": "The effectiveness of our protection tool hinges on several properties:\n1. The attack must transfer effectively to unseen models.\n2. Images must look acceptable to users.\n3. LowKey must run sufficiently fast so that run-time does not outweigh its protective benefits.\n4. Attacked images must remain effective after being saved in PNG and JPG formats.\n5. The algorithm must scale to images of any size.\nWe conduct extensive experiments in this section with a variety of facial recognition systems to interrogate these properties of LowKey." }, { "heading": "6.1 ENSEMBLES AND TRANSFERABILITY", "text": "In developing the ensemble of models used to compute our attack, we examine the extent to which attacks generated by one model are effective against another. By including an eclectic mix of models in our ensemble, we are able to ensure that LowKey produces images that fool a wide variety of facial recognition systems. To this end, we evaluate attacks on all pairs of source and victim models with ResNet-50, ResNet-152, IR-50, and IR-152 backbones, and both ArcFace and CosFace heads. For each victim model, we additionally measure performance on clean images, our ensembled attack, and Fawkes. See Table 2 for a comparison of the rank-50 performance of these combinations. 
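For reference, a minimal sketch of the rank-k identification check (Note 1) used to score each evaluation trial in these tables; the array names are hypothetical, and Euclidean distance in the victim model’s feature space is assumed.

```python
import numpy as np

def rank_k_success(probe_feat, gallery_feats, gallery_ids, true_id, k=50):
    # A probe counts as recognized if its true identity appears among the
    # k nearest gallery images in the victim model's feature space.
    dists = np.linalg.norm(gallery_feats - probe_feat, axis=1)
    nearest = np.argsort(dists)[:k]
    return true_id in {gallery_ids[i] for i in nearest}
```

Averaging this indicator over all trials gives the rank-k accuracy reported for each attacker/defender pair.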
Additional evaluations in the rank-1 setting and on the UMDFaces dataset can be found in Appendices 8.2 and 8.3, respectively. Note that entries for which the attacker and defender models are identical depict white-box performance, while entries for which these models differ depict black-box transferability.\nWe observe in these experiments that adversarial attacks generated by IR architectures transfer better to IR-based facial recognition systems, while attacks generated by ResNet architectures transfer better to other ResNet systems. In general, attacks computed on 152-layer backbones are more effective than attacks computed on 50-layer backbones, and deeper networks are also more difficult to fool. Moreover, attacks transfer better between models trained with the same head. An ensemble of models of all combinations of ResNet-152 and IR-152 backbones as well as ArcFace and CosFace heads generates attacks that transfer effectively to all models and fool models at only a slightly lower rate than white-box attacks." }, { "heading": "6.2 GAUSSIAN SMOOTHING", "text": "We incorporate Gaussian smoothing as a pre-processing step in our objective function (1) to make our perturbations smoother and more robust. Intuitively, this promotes the effectiveness of the attacked image even when a denoising filter is applied. The presence of blur forces the adversarial perturbation to rely on smoother/low-frequency image modifications rather than adversarial “noise.” Empirically, we find that attacks computed with this procedure produce slightly smoother and more aesthetically pleasing perturbations without sharp lines and high-frequency oscillations. See Figure 3 for a visual comparison of images produced with and without Gaussian smoothing in the LowKey pipeline.\nWe additionally produce images both with and without smoothing in the attack pipeline. Before feeding them into facial recognition systems, we defend the system against our attacks by applying a Gaussian smoothing pre-processing step just before inference.
In addition to providing far superior protection, LowKey runs significantly faster than the existing method, providing users a smoother and more convenient experience." }, { "heading": "6.4 ROBUSTNESS TO IMAGE COMPRESSION", "text": "Since users may save their images in various formats after passing them through LowKey, the images we produce must provide protection even after being saved in common formats. Our baseline tests are conducted with images saved in uncompressed PNG format. To test performance under compression, we convert protected images to JPEG format and repeat our experiments on commercial APIs. While compression very slightly decreases performance, the attack is still very effective: Microsoft Azure Face is now able to recognize 0.2% of images compared to 0.1% when saved in the PNG format. Likewise, Amazon Rekognition now recognizes 3.8% of probe images compared to 2.4% previously." }, { "heading": "6.5 SCALABILITY TO ALL IMAGE SIZES (DISCLAIMER)", "text": "Many tools in deep learning require that inputs be of particular dimensions, but user images on social media sites come in all shapes and sizes. Therefore, LowKey must be flexible. Since the detection and alignment pipeline in our attack resizes images in a differentiable fashion, we can attack images of any size and aspect ratio. Additionally, we apply the LPIPS penalty to the entire original image, which prevents box-shaped artifacts from developing on the boundaries of the rectangle containing the face. Since LowKey does not have a fixed attack budget, perturbations may have different magnitudes on different images. Figure 4 shows the variability of LowKey perturbations on very large images; the image of Tom Hanks (first column) is one of the best looking examples of LowKey on large images, while the image of Tina Fey (last column) is one of the worst looking examples. Protecting very large images is a more challenging task than protecting small images because of the black-box detection, alignment, and re-scaling used in APIs which affect large images more significantly. These experiments indicate that users will receive stronger protection if they use LowKey on smaller images.\nWe test the effectiveness of LowKey on large images by protecting gallery images of 10 identities from Facescrub (with 17 images in the gallery on average) and using 20 probe images per person. We also vary the magnitude of the perturbation to find the smallest perturbation that is sufficient to protect images (Table 4). In this way, we find that users may trade off some protection in exchange for better looking images at their own discretion. Additionally, we find that LowKey works much better with smaller gallery sizes; when only 5 gallery images are used, the performance of Amazon Rekognition drops from 32.5% to 11% in the rank-50 setting. This observation suggests that users can upload new profile pictures less frequently in order to decrease the number of gallery images corresponding to their identity and thus enhance their protection. Finally, the quality of probe images is also important; when small probe images are used, like those which would occur in low resolution security camera footage, the accuracy of Amazon Rekognition drops from 32.5% to 19%." }, { "heading": "7 DISCUSSION", "text": "In this work, we develop a tool for protecting users from unauthorized facial recognition. 
Our tool adversarially pre-processes user images before they are uploaded to social media. These pre-processed images are useless for third-party organizations that collect them for facial recognition. While we have shown that LowKey is highly effective against commercial black-box APIs, it does not protect users 100% of the time and may be circumvented by specially engineered robust systems. Thus, we hope that users will still remain cautious about publicly revealing personal information. One interesting future direction is to produce adversarial filters that are more aesthetically pleasing in order to promote wider use of this tool. However, it may be that there is no free lunch, and one cannot fool state-of-the-art facial recognition systems without visible perturbations. Facial recognition systems are not fragile, and other attacks that have attempted to break them have failed. Finally, we note that one of our goals in making this tool widely available is to promote broader awareness of facial recognition and the ethical issues it raises. Our webtool can be found at lowkey.umiacs.umd.edu." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by the DARPA GARD and DARPA QED programs. Further support was provided by the AFOSR MURI program, and the National Science Foundation’s DMS division. Computation resources were funded by the Sloan Foundation." }, { "heading": "8 APPENDIX", "text": "" }, { "heading": "8.1 IMPLEMENTATION DETAILS", "text": "We train all of our feature extractors using focal loss (Lin et al., 2017) with a batch size of 512 for 120 epochs. We use an initial learning rate of 0.1 and decrease it by a factor of 10 at epochs 35, 65 and 95. For the optimizer, we use SGD with a momentum of 0.9 and weight decay of 5e-4.\nFor our adversarial attacks, we use 0.05 for the perceptual similarity penalty, and σ = 3 with window size 7 for the Gaussian smoothing term. Attacks are computed using signed SGD for 50 epochs with a learning rate of 0.0025.\nFor face detection and alignment models, as well as for training routines, we use the face.evoLVe.PyTorch github repository (Zhao, 2020)." }, { "heading": "8.2 RANK-1 ACCURACY ON FACESCRUB DATA", "text": "See Table 5." }, { "heading": "8.3 RESULTS ON UMDFACES DATASET", "text": "We repeat controlled experiments on the UMDFaces dataset which contains over 367,000 photos of 8,277 identities. For UMDFaces, we also choose 100 identities at random and attack their gallery images while keeping one-tenth of each identity’s photos as probe images. Experimental results are reported in Tables 6 and 7. It can be seen that the effectiveness of LowKey attacks on the UMDFaces dataset is slightly lower, which is likely a result of the much smaller gallery." }, { "heading": "8.4 CAN WE REDUCE THE SIZE OF OUR ATTACK?", "text": "In order to make our attacks more aesthetically pleasing, we try to reduce the size of the perturbation by increasing the perceptual similarity penalty from 0.05 to 0.08. This attack is depicted in Figure 5 as a “LowKey small attack”. Unfortunately, even a small decrease in the perturbation size results in a huge decrease in the effectiveness of the attack. In the rank-50 setting, Amazon Rekognition is able to recognize 17.2% of probe images belonging to users protected with a LowKey small attack. 
Similarly, Microsoft Azure Face recognizes 5.5% of probe images. Results of controlled experiments are reported in Tables 8 and 9." }, { "heading": "8.5 COMPARISON WITH FAWKES", "text": "By comparing a set of images protected with the LowKey and Fawkes tools, we can see that both attacks are noticeable, but distort images in different ways. While Fawkes adds conspicuous artifacts on the face (such as mustaches or lines on the nose), the LowKey attack mostly changes textures and adds spots on a person’s skin. See Figure 5 for a visual comparison." }, { "heading": "8.6 GAUSSIAN SMOOTHING IN LOWKEY", "text": "For the parameters of the Gaussian smoothing term in the optimization problem (1), we use 3 for σ and 7 for window size. For the defensive Gaussian blur, we use σ = 2 and no window size." } ]
2021
LOWKEY: LEVERAGING ADVERSARIAL ATTACKS TO PROTECT SOCIAL MEDIA USERS FROM FACIAL RECOGNITION
SP:174fe6b43a8516e5ea5323ce96a66d64b8745130
[ "The authors propose a new method for imitation learning from observation that attempts to estimate and leverage a notion of goal proximity in order to help the learning process. The authors provide a framework for computing this estimate, and a technique for using that estimate -- along with a measure of uncertainty -- to perform imitation learning from observation. Experimental results for several domains are presented in which the proposed technique achieves better performance than the comparison methods. " ]
Humans can effectively learn to estimate how close they are to completing a desired task simply by watching others fulfill the task. To solve the task, they can then take actions towards states with higher estimated proximity to the goal. From this intuition, we propose a simple yet effective method for imitation learning that learns a goal proximity function from expert demonstrations and online agent experience, and then uses the learned proximity to provide a dense reward signal for training a policy to solve the task. By predicting task progress as the temporal distance to the goal, the goal proximity function improves generalization to unseen states over methods that aim to directly imitate expert behaviors. We demonstrate that our proposed method efficiently learns a set of goal-driven tasks from state-only demonstrations in navigation, robotic arm manipulation, and locomotion tasks.
[]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2004 }, { "authors": [ "Daniel Angelov", "Yordan Hristov", "Michael Burke", "Subramanian Ramamoorthy" ], "title": "Composing diverse policies for temporally extended tasks", "venue": "IEEE Robotics and Automation Letters,", "year": 2020 }, { "authors": [ "Daniel S. Brown", "Wonjoon Goo", "Prabhat Nagarajan", "Scott Niekum" ], "title": "Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Michael Burke", "Katie Lu", "Daniel Angelov", "Artūras Straižys", "Craig Innes", "Kartic Subr", "Subramanian Ramamoorthy" ], "title": "Learning robotic ultrasound scanning using probabilistic temporal ranking", "venue": "arXiv preprint arXiv:2002.01240,", "year": 2020 }, { "authors": [ "Maxime Chevalier-Boisvert", "Lucas Willems", "Suman Pal" ], "title": "Minimalistic gridworld environment for openai gym", "venue": "https://github.com/maximecb/gym-minigrid,", "year": 2018 }, { "authors": [ "Ashley D. Edwards", "Charles L. Isbell" ], "title": "Perceptual values from observation", "venue": "arXiv preprint arXiv:1905.07861,", "year": 2019 }, { "authors": [ "Ashley D. Edwards", "Charles L. Isbell", "Atsuo Takanishi" ], "title": "Perceptual reward functions", "venue": "arXiv preprint arXiv:1608.03824,", "year": 2016 }, { "authors": [ "Chelsea Finn", "Xin Yu Tan", "Yan Duan", "Trevor Darrell", "Sergey Levine", "Pieter Abbeel" ], "title": "Deep spatial autoencoders for visuomotor learning", "venue": "In IEEE International Conference on Robotics and Automation,", "year": 2016 }, { "authors": [ "Justin Fu", "Katie Luo", "Sergey Levine" ], "title": "Learning robust rewards with adverserial inverse reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Dibya Ghosh", "Avi Singh", "Aravind Rajeswaran", "Vikash Kumar", "Sergey Levine" ], "title": "Divide and conquer reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Kristian Hartikainen", "George Tucker", "Sehoon Ha", "Jie Tan", "Vikash Kumar", "Henry Zhu", "Abhishek Gupta", "Pieter Abbeel" ], "title": "Soft actor-critic algorithms and applications", "venue": "arXiv preprint arXiv:1812.05905,", "year": 2018 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Ilya Kostrikov", "Kumar Krishna Agrawal", "Debidatta Dwibedi", "Sergey Levine", "Jonathan Tompson" ], "title": "Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ilya Kostrikov", "Ofir Nachum", "Jonathan Tompson" ], "title": "Imitation learning via off-policy distribution matching", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 
2017 }, { "authors": [ "Youngwoon Lee", "Edward S. Hu", "Zhengyu Yang", "Joseph J. Lim" ], "title": "To follow or not to follow: Selective imitation learning from observations", "venue": "In Conference on Robot Learning,", "year": 2019 }, { "authors": [ "Youngwoon Lee", "Shao-Hua Sun", "Sriram Somasundaram", "Edward S Hu", "Joseph J Lim" ], "title": "Composing complex skills by learning transition policies", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "YuXuan Liu", "Abhishek Gupta", "Pieter Abbeel", "Sergey Levine" ], "title": "Imitation from observation: Learning to imitate behaviors from raw video via context translation", "venue": "In IEEE International Conference on Robotics and Automation,", "year": 2018 }, { "authors": [ "Maja J Mataric" ], "title": "Reward functions for accelerated learning", "venue": "In Machine learning proceedings", "year": 1994 }, { "authors": [ "Andrew Y. Ng", "Stuart J. Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2000 }, { "authors": [ "Scott Niekum", "Sarah Osentoski", "George Konidaris", "Sachin Chitta", "Bhaskara Marthi", "Andrew G Barto" ], "title": "Learning grounded finite-state representations from unstructured demonstrations", "venue": "The International Journal of Robotics Research,", "year": 2015 }, { "authors": [ "Ian Osband", "Charles Blundell", "Alexander Pritzel", "Benjamin Van Roy" ], "title": "Deep exploration via bootstrapped dqn", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Deepak Pathak", "Parsa Mahmoudieh", "Michael Luo", "Pulkit Agrawal", "Dian Chen", "Fred Shentu", "Evan Shelhamer", "Jitendra Malik", "Alexei A. Efros", "Trevor Darrell" ], "title": "Zero-shot visual imitation", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Matthias Plappert", "Marcin Andrychowicz", "Alex Ray", "Bob McGrew", "Bowen Baker", "Glenn Powell", "Jonas Schneider", "Josh Tobin", "Maciek Chociej", "Peter Welinder" ], "title": "Multi-goal reinforcement learning: Challenging robotics environments and request for research", "venue": "arXiv preprint arXiv:1802.09464,", "year": 2018 }, { "authors": [ "Dean A Pomerleau" ], "title": "Alvinn: An autonomous land vehicle in a neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 1989 }, { "authors": [ "Siddharth Reddy", "Anca D. Dragan", "Sergey Levine" ], "title": "SQIL: Imitation learning via reinforcement learning with sparse rewards", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Stéphane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2011 }, { "authors": [ "Stefan Schaal" ], "title": "Learning from demonstration", "venue": "In Advances in Neural Information Processing Systems, pp. 
1040–1046,", "year": 1997 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Pierre Sermanet", "Kelvin Xu", "Sergey Levine" ], "title": "Unsupervised perceptual rewards for imitation learning", "venue": "Robotics: Science and Systems,", "year": 2017 }, { "authors": [ "Richard Stuart Sutton" ], "title": "Temporal credit assignment in reinforcement learning", "venue": "PhD thesis, University of Massachusetts Amherst,", "year": 1984 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Faraz Torabi", "Garrett Warnell", "Peter Stone" ], "title": "Behavioral cloning from observation", "venue": "In International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Faraz Torabi", "Garrett Warnell", "Peter Stone" ], "title": "Generative adversarial imitation from observation", "venue": "arXiv preprint arXiv:1807.06158,", "year": 2018 }, { "authors": [ "Chao Yang" ], "title": "Imitation learning from observations by minimizing inverse dynamics disagreement", "venue": "In Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Brian D Ziebart", "Andrew L Maas", "J Andrew Bagnell", "Anind K Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In Association for the Advancement of Artificial Intelligence,", "year": 2008 }, { "authors": [ "Konrad Zolna" ], "title": "Task-relevant adversarial imitation learning", "venue": "arXiv preprint arXiv:1910.01077,", "year": 2019 }, { "authors": [ "Reddy" ], "title": "SQIL (Reddy et al., 2020), an imitation learning approach which demonstrates higher sample efficiency with off-policy RL. SQIL modifies the replay buffer with the expert demonstrations which are assigned reward +1 while all agent experience is assigned reward 0. We use Soft Actor-Critic", "venue": "(Haarnoja et al.,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Humans are capable of effectively leveraging demonstrations from experts to solve a variety of tasks. Specifically, by watching others performing a task, we can learn to infer how close we are to completing the task, and then take actions towards states closer to the goal of the task. For example, after watching a few tutorial videos for chair assembly, we learn to infer how close an intermediate configuration of a chair is to completion. With the guidance of such a task progress estimate, we can efficiently learn to assemble the chair to progressively get closer to and eventually reach, the fully assembled chair.\nCan machines likewise first learn an estimate of progress towards a goal from demonstrations and then use this estimate as guidance to move closer to and eventually reach the goal? Typical learning from demonstration (LfD) approaches (Pomerleau, 1989; Pathak et al., 2018; Finn et al., 2016) greedily imitate the expert policy and therefore suffer from accumulated errors causing a drift away from states seen in the demonstrations. On the other hand, adversarial imitation learning approaches (Ho & Ermon, 2016; Fu et al., 2018) encourage the agent to imitate expert trajectories with a learned reward that distinguishes agent and expert behaviors. However, such adversarially learned reward functions often overfit to the expert demonstrations and do not generalize to states not covered in the demonstrations (Zolna et al., 2019), leading to unsuccessful policy learning.\nInspired by how humans leverage demonstrations to measure progress and complete tasks, we devise an imitation learning from observation (LfO) method which learns a task progress estimator and uses the task progress estimate as a dense reward signal for training a policy as illustrated in Figure 1. To measure the progress of a goal-driven task, we define goal proximity as an estimate of temporal distance to the goal (i.e., the number of actions required to reach the goal). In contrast to prior adversarial imitation learning algorithms, by having additional supervision of task progress and learning to predict it, the goal proximity function can acquire more structured task-relevant information, and hence generalize better to unseen states and provide better reward signals.\nHowever, the goal proximity function can still output inaccurate predictions on states not in demonstrations, which results in unstable policy training. To improve the accuracy of the goal proximity function, we continually update the proximity function with trajectories both from expert and agent. In addition, we penalize trajectories with the uncertainty of the goal proximity prediction, which prevents the policy from exploiting high proximity estimates with high uncertainty. As a result, by leveraging the agent experience and predicting the proximity function uncertainty, our method can achieve more efficient and stable policy learning.\nThe main contributions of this paper include (1) an algorithm for imitation from observation that uses estimated goal proximity to inform an agent of the task progress; (2) modeling uncertainty of goal proximity estimation to prevent exploiting uncertain predictions; and (3) a joint training algorithm of the goal proximity function and policy. We show that the policy learned with our proposed goal proximity function is more effective and generalizes better than the state-of-the-art LfO algorithms on various domains, such as navigation, robot manipulation, and locomotion. 
Moreover, our method demonstrates results comparable to GAIL (Ho & Ermon, 2016), which learns from expert actions." }, { "heading": "2 RELATED WORK", "text": "Imitation learning (Schaal, 1997) aims to leverage expert demonstrations to acquire skills. While behavioral cloning (Pomerleau, 1989) is simple but effective with a large number of demonstrations, it suffers from compounding errors caused by the distributional drift (Ross et al., 2011). On the other hand, inverse reinforcement learning (Ng & Russell, 2000; Abbeel & Ng, 2004; Ziebart et al., 2008) estimates the underlying reward from demonstrations and learns a policy through reinforcement learning with this reward, which can better handle the compounding errors. Specifically, generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016) and its variants (Fu et al., 2018; Kostrikov et al., 2020) show improved demonstration efficiency by training a discriminator to distinguish expert and agent transitions and using the discriminator output as a reward for policy training.\nWhile most imitation learning algorithms require expert actions, imitation learning from observation (LfO) approaches learn from state-only demonstrations. This enables the LfO methods to learn from diverse sources of demonstrations, such as human videos, demonstrations with different controllers, and other robots. To imitate demonstrations without expert actions, inverse dynamics models (Niekum et al., 2015; Torabi et al., 2018a; Pathak et al., 2018) or learned reward functions (Edwards et al., 2016; Sermanet et al., 2017; 2018; Liu et al., 2018; Lee et al., 2019a) can be used to train the policy. However, these methods require large amounts of data to train inverse dynamics models or representations. On the other hand, state-only adversarial imitation learning (Torabi et al., 2018b; Yang et al., 2019) can imitate an expert with few demonstrations, similar to GAIL. In addition to discriminating expert and agent trajectories, our method proposes to also estimate the proximity to the goal, which can provide more informed reward signals and generalize better.\nClosely related works to our approach are reinforcement learning algorithms that learn a value function or proximity estimator from successful trajectories and use them as an auxiliary reward (Mataric, 1994; Edwards & Isbell, 2019; Lee et al., 2019b). While these value functions and proximity estimators are similar to our proposed goal proximity function, these works require environment reward signals, and do not incorporate adversarial online training and uncertainty estimates.\nMoreover, demonstrating the value of learning a proximity estimate for imitation learning, Angelov et al. (2020) utilizes the learned proximity to choose a proper sub-policy but does not train a policy from the learned proximity. Similar to our method, Burke et al. (2020) proposes to learn a reward function using a ranking model and use it for policy optimization, demonstrating the advantage of using goal proximity as a reward for training a policy. However, they learn the proximity function from demonstrations alone and solely provide proximity as a reward. This hinders agent learning when the proximity function fails to generalize to agent experience, allowing the agent to exploit inaccurate proximity predictions for reward. 
By incorporating the online update, uncertainty estimates, and difference-based proximity reward, our method can robustly imitate state-only demonstrations to solve goal-driven tasks without access to the true environment reward." }, { "heading": "3 METHOD", "text": "In this paper, we address the problem of learning from observations for goal-driven tasks. Adversarial imitation learning methods (Torabi et al., 2018b; Yang et al., 2019) suggest learning a reward function that penalizes an agent state transition off the expert trajectories. However, these learned reward functions often overfit to expert demonstrations and do not generalize to states which are not covered in the demonstrations, leading to unsuccessful policy learning.\nTo acquire a more structured and generalizable reward function from demonstrations, we propose to learn a goal proximity function that estimates proximity to the goal distribution in terms of temporal distance (i.e., number of actions required to reach the goal). Then, a policy learns to reach states with higher proximity (i.e., that are closer to the goal) predicted by the goal proximity function. Moreover, during policy training, we propose to measure the uncertainty of the goal proximity function, which prevents the policy from exploiting over-optimistic proximity predictions and yielding undesired behaviors. In Section 3.2, we describe the goal proximity function in detail. Then, in Section 3.3, we describe how the policy is jointly trained with the goal proximity function." }, { "heading": "3.1 PRELIMINARIES", "text": "We formulate our learning problem as a Markov decision process (Sutton, 1984) defined through a tuple $(S, A, R, P, \rho_0, \gamma)$ for the state space $S$, action space $A$, reward function $R(s, a)$, transition distribution $P(s'|s, a)$, initial state distribution $\rho_0$, and discounting factor $\gamma$. We define a policy $\pi(a|s)$ that maps from states to actions and correspondingly moves an agent to a new state according to the transition probabilities. The policy is trained to maximize the expected sum of discounted rewards, $\mathbb{E}_{(s,a) \sim \pi} \left[ \sum_{t=0}^{T_i} \gamma^t R(s_t, a_t) \right]$, where $T_i$ represents the variable length of episode $i$.\nIn imitation learning, the learner receives a fixed set of expert demonstrations, $D^e = \{ \tau^e_1, \ldots, \tau^e_N \}$. In this paper, we specifically consider the learning from observation (LfO) setup where each demonstration $\tau^e_i$ is a sequence of states. Moreover, we assume that all expert demonstrations are successful; therefore, the final state of an expert trajectory reaches the task goal." }, { "heading": "3.2 LEARNING GOAL PROXIMITY FUNCTION", "text": "In goal-driven tasks, an estimate of how close an agent is to the goal can be utilized as a direct learning signal. Therefore, instead of learning to discriminate agent and expert trajectories (Ho & Ermon, 2016; Torabi et al., 2018b), we propose a goal proximity function, $f : S \rightarrow \mathbb{R}$, that learns how close states are to the goal distribution. Specifically, we define goal proximity as a proximity that is discounted based on its temporal distance to the goal (i.e., inversely proportional to the number of actions required to reach the goal). Note that the goal proximity function measures the temporal distance, not the spatial distance, between the current and goal states. 
Therefore, a single proximity value can entail all information about the task, goal, and any roadblocks.\nIn this paper, we define goal proximity of a state $s_t$ as the linearly discounted proximity $f(s_t) = 1 - \delta \cdot (T_i - t)$, where $\delta \in (0, 1)$ is a discounting factor and $T_i$ is the episode horizon. In this paper, we set $\delta$ to $1/H$ to evenly distribute the proximity between 0 and 1, where $H$ is the maximum task horizon. Note that we use the maximum episode length $H$, instead of the variable episode length $T_i$, to define a fixed $\delta$ for the temporal discounting to be consistent between episodes. We use mean squared error as the objective for training the goal proximity function $f_\phi$ parameterized by $\phi$:\n$$\mathcal{L}_\phi = \mathbb{E}_{\tau^e_i \sim D^e, \, s_t \sim \tau^e_i} \left[ f_\phi(s_t) - (1 - \delta \cdot (T_i - t)) \right]^2. \qquad (1)$$\nAlgorithm 1 Imitation learning with goal proximity function\nRequire: Expert demonstrations $D^e = \{ \tau^e_1, \ldots, \tau^e_N \}$\n1: Initialize weights of goal proximity function $f_\phi$ and policy $\pi_\theta$\n2: for $i = 0, 1, \ldots, M$ do\n3: Sample expert demonstration $\tau^e \sim D^e$ ▷ Offline proximity function training\n4: Update goal proximity function $f_\phi$ with $\tau^e$ to minimize Equation 1\n5: end for\n6: for $i = 0, 1, \ldots, L$ do\n7: Rollout trajectories $\tau_i = (s_0, \ldots, s_{T_i})$ with $\pi_\theta$ ▷ Policy training\n8: Compute proximity reward $R_\phi(s_t, s_{t+1})$ for $(s_t, s_{t+1}) \sim \tau_i$ using Equation 5\n9: Update $\pi_\theta$ using any RL algorithm\n10: Update $f_\phi$ with $\tau_i$ and $\tau^e \sim D^e$ to minimize Equation 4\n11: end for\nThere are alternative ways to represent and learn goal proximity, such as exponentially discounted proximity and ranking-based proximity (Brown et al., 2019). But, in our experiments, linearly discounted proximity consistently performed better than alternatives; therefore, the linearly discounted proximity is used throughout this paper (see Figure 5b and Figure 11).\nBy learning to predict the goal proximity, the goal proximity function not only learns to discriminate agent and expert trajectories (i.e., predict 0 proximity for an agent trajectory and positive proximity for an expert trajectory with Equation 4), but also acquires the task information about temporal progress entailed in the trajectories. From this additional supervision, the proximity function provides more informative learning signals to the policy and generalizes better to unseen states as empirically shown in Section 4." }, { "heading": "3.3 TRAINING POLICY WITH PROXIMITY REWARD", "text": "In a goal-driven task, a policy $\pi_\theta$ aims to get close to and eventually reach the goal. We can formalize this objective as maximizing the goal proximity at the final state $f_\phi(s_{T_i})$, which can be used as a sparse proximity reward. In addition, to encourage the agent to make consistent progress towards the goal, we devise a dense proximity reward based on the increase in proximity, $f_\phi(s_{t+1}) - f_\phi(s_t)$, at every timestep. By combining the sparse and dense proximity rewards, our total proximity reward can be defined as\n$$R_\phi(s_t, s_{t+1}) = \begin{cases} f_\phi(s_{t+1}) - f_\phi(s_t) & t \neq T_i - 1 \\ 2 \cdot f_\phi(s_{T_i}) - f_\phi(s_t) & t = T_i - 1 \end{cases}. \qquad (2)$$\nGiven the proximity reward, the policy is trained to maximize the expected discounted return:\n$$\mathbb{E}_{(s_0, \ldots, s_{T_i}) \sim \pi_\theta} \left[ \gamma^{T_i} f_\phi(s_{T_i}) + \sum_{t=0}^{T_i - 1} \gamma^t \left( f_\phi(s_{t+1}) - f_\phi(s_t) \right) \right]. \qquad (3)$$\nHowever, a policy trained with the proximity reward can sometimes perform undesired behaviors by exploiting over-optimistic proximity predictions on states not seen in the expert demonstrations. This becomes critical when the expert demonstrations are limited and cannot cover the state space sufficiently. 
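Before turning to these fixes, a minimal sketch of the regression target in Equation 1 and the proximity reward in Equation 2 may be helpful; `f_phi`, a network mapping states to a scalar proximity, is a hypothetical placeholder rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def proximity_targets(T_i, H):
    # Linearly discounted goal proximity for an expert episode of length T_i:
    # f(s_t) = 1 - delta * (T_i - t) with delta = 1/H (Equation 1's target).
    delta = 1.0 / H
    t = torch.arange(T_i + 1, dtype=torch.float32)
    return 1.0 - delta * (T_i - t)

def expert_proximity_loss(f_phi, expert_states, T_i, H):
    # Mean squared error between predicted and discounted proximity (Equation 1).
    targets = proximity_targets(T_i, H)
    return F.mse_loss(f_phi(expert_states).squeeze(-1), targets)

def proximity_reward(f_phi, s_t, s_next, done):
    # Equation 2: dense reward from the increase in predicted proximity, plus a
    # terminal bonus so the last step yields 2 * f(s_T) - f(s_{T-1}).
    with torch.no_grad():
        p_t = f_phi(s_t).squeeze(-1)
        p_next = f_phi(s_next).squeeze(-1)
    return (p_next - p_t) + done * p_next
```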
To avoid inaccurate predictions leading an agent to undesired states, we propose to (1) fine-tune the goal proximity function with online agent experience to reduce optimistic proximity evaluations; and (2) penalize agent trajectories with high uncertainty in goal proximity predictions.\nFirst, we set the target proximity of states in agent trajectories to 0, similar to adversarial imitation learning methods (Ho & Ermon, 2016), and train the proximity function with both expert demonstrations and agent experience by minimizing the following loss:\n$$\mathcal{L}_\phi = \mathbb{E}_{\tau^e_i \sim D^e, \, s_t \sim \tau^e_i} \left[ f_\phi(s_t) - (1 - \delta \cdot (T_i - t)) \right]^2 + \mathbb{E}_{\tau \sim \pi_\theta, \, s_t \sim \tau} \left[ f_\phi(s_t) \right]^2. \qquad (4)$$\nAlthough successful agent experience is also used as negative examples for training the proximity function, in practice, this is not problematic since the proximity function ideally converges to the average of expert and agent labels (e.g., $1/2 - \delta \cdot (T_i - t)/2$ for ours and 0.5 for GAIL). Early stopping and learning rate decay can be used to further ease this problem (Zolna et al., 2019). Also, the online training of the goal proximity function can lower the goal proximity estimate in a local optimum, which helps the policy escape from such local optima.\nTo alleviate the effect of inaccurate proximity estimation in policy training, we discourage the policy from visiting states with uncertain proximity estimates. Specifically, we model the uncertainty $U(s_t)$ as the disagreement in terms of standard deviation of an ensemble of proximity functions (Osband et al., 2016; Lakshminarayanan et al., 2017). Then we use the estimated uncertainty to penalize exploration of states with uncertain proximity estimates. The proximity estimate $f(s_t)$ is the average prediction of the ensemble. With the uncertainty penalty, the modified reward can be written as:\n$$R_\phi(s_t, s_{t+1}) = \begin{cases} f_\phi(s_{t+1}) - f_\phi(s_t) - \lambda \cdot U(s_{t+1}) & t \neq T_i - 1 \\ 2 \cdot f_\phi(s_{T_i}) - f_\phi(s_t) - \lambda \cdot U(s_{T_i}) & t = T_i - 1 \end{cases}, \qquad (5)$$\nwhere $\lambda$ is a tunable hyperparameter to balance the proximity reward and uncertainty penalty. A larger $\lambda$ results in more conservative exploration outside the states covered by the expert demonstrations.\nIn summary, we propose to learn a goal proximity function that estimates how close the current state is to the goal distribution, and train a policy to maximize the goal proximity while avoiding states with inaccurate proximity estimates using the uncertainty measure. We jointly train the proximity function and policy as described in Figure 1 and Algorithm 1." }, { "heading": "4 EXPERIMENTS", "text": "In our experiments, we aim to answer the following questions: (1) How does our method’s efficiency and final performance compare against prior work in imitation from observation and imitation learning with expert actions? (2) Does our method lead to policies that generalize better to states unseen in the demonstrations? (3) What factors contribute to the performance of our method? To answer these questions, we consider diverse goal-driven tasks: navigation, robot manipulation, and locomotion.\nTo demonstrate the improved generalization capabilities of policies trained with the proximity reward, we benchmark our method under two different setups: expert demonstrations are collected from (1) only a fraction of the possible initial states (e.g., 25%, 50%, 75% coverage) and (2) initial states with smaller amounts of noise. (1) measures the ability of an agent to interpolate between states covered by the demonstrations while (2) evaluates extrapolating beyond the demonstrations to added noise during agent learning. 
In both setups, our method shows superior generalization capability and thus achieves higher final rewards than LfO baselines. Moreover, our method achieves results comparable to LfD methods that use expert actions.\nThese generalization experiments serve to mimic the reality that expert demonstrations may be collected in a different setting from agent learning. For instance, due to the cost of demonstration collection, the demonstrations may poorly cover the state space. An agent would then have to learn in an area of the state space not covered by the demonstrations. We measure this in the experimental setup (1), where the demonstrations cover a fraction of the possible learner starting and goal states. Likewise, demonstrations may be collected in controlled circumstances with little environment noise. Then, an agent learning in an actual environment would encounter more noise than presented in the demonstrations. We quantify this in the experimental setup (2), where less noise is applied to demonstration starting states." }, { "heading": "4.1 BASELINES", "text": "We compare our method to the state-of-the-art prior works in both imitation learning from observations and standard imitation learning with actions, which are listed below:\n• BCO (Torabi et al., 2018a) learns an inverse model from environment interaction to provide action labels in demonstrations for behavioral cloning.\n• GAIfO (Torabi et al., 2018b) is a variant of GAIL (Ho & Ermon, 2016) which trains a discriminator to discriminate state transitions (s, s′) instead of state-action pairs (s, a).\n• GAIfO-s, a variant compared against in Yang et al. (2019), learns a discriminator based on only a single state, not a state transition as with GAIfO.\n• BC (Pomerleau, 1989) fits a policy to the demonstration actions with supervised learning. This method requires expert action labels while our method does not.\n• GAIL (Ho & Ermon, 2016) uses adversarial imitation learning with a discriminator trained on state-action pairs (s, a). This method also uses actions whereas ours does not.\nAlso, we study several variations of our method to evaluate the importance of different design choices:\n• Ours (No Uncert): Removes the uncertainty penalty from the reward function.\n• Ours (No Online): Learns the proximity function offline from the demonstrations and does not refine it using agent experience during policy learning. This approach may fail as the proximity function will not learn outside of the demonstrations and thus provide a poor reward signal.\n• Ours (No Offline): Does not pre-train the proximity function. This should be less efficient than our method, which pre-trains the proximity function using the demonstrations.\n• Ours (Exp): Uses the exponentially discounted goal proximity $f(s_t) = \delta^{(T - t)}$." }, { "heading": "4.2 EXPERIMENTAL SETUP", "text": "By default, our primary method uses the linearly discounted version of the proximity function as this empirically led to the best results (see details in Figure 11), and sets the discounting factor to δ = 1/H, where H is the task horizon length. For modeling uncertainty, we use an ensemble of size 5 across all tasks. For all tasks, we pre-train the proximity function for 5 epochs on the demonstrations. During online training (i.e., policy learning), we sample a batch of 128 elements from the expert and agent experience buffers. 
The mean and standard deviation of the outputs from the ensemble networks are used as the proximity prediction and uncertainty, respectively.\nThe same network architecture is used for the proximity function, the discriminator (for the baselines), and the policy. Details of the network architecture can be found in Section G.2. Any reinforcement learning algorithm can be used for policy optimization, but we choose PPO (Schulman et al., 2017), and the hyperparameters of PPO are tuned appropriately for each method and task (see Table 2). Each baseline implementation is verified against the results reported in its original paper. We train each task with 5 different random seeds and report the mean and the standard deviation divided by 2." }, { "heading": "4.3 NAVIGATION", "text": "In the first set of experiments, we examine the NAVIGATION task between four rooms shown in Figure 2a. The purpose of this environment is to show the benefits of our method in a simple setting where we can easily visualize and verify the learned goal proximity function. The agent start and goal positions are randomly sampled, and the agent has 100 steps to navigate to the goal. We provide 250 expert demonstrations obtained using a shortest path algorithm. During demonstration collection, we hold out 50% of the possible agent start and goal positions, determined by uniform random sampling. In contrast, during agent learning and evaluation, start and goal positions are sampled from all possible positions.\nAs can be seen in Figure 3a, our method achieves a near 100% success rate in 3M environment steps, while all GAIL variants fail to learn the task. Although BC and BCO reach the goal in about 60% and 30% of cases, respectively, they show limited generalization to unseen configurations. This result shows that learning with the goal proximity reward is effective and that the learned goal proximity function generalizes well to unseen configurations.\nTo verify whether the proximity function learns meaningful information about proximity to the goal, we visualize the proximity from all agent positions in Figure 4d. Our proximity function predicts a qualitatively meaningful goal proximity: high estimated proximity near the goal and lower estimated proximity when the agent is farther away from the goal. The corners of the rooms show low goal proximity since fewer expert trajectories pass over those regions compared to the center of each room.\nFinally, we investigate our hypothesis that the goal proximity function allows for greater generalization, which results in better task performance with smaller demonstration coverage. To test this hypothesis, we compare the cases where extreme (25% coverage), moderate, and no generalization (100% coverage) are required. Figure 4 demonstrates that our method consistently achieves 100% success rates in 3M steps with 50%-100% demonstration coverage and is not as affected by increasingly difficult generalization as the baselines. In contrast, GAIL and all LfO baselines fail to learn the NAVIGATION task when expert demonstrations do not cover all configurations. This supports our hypothesis that the goal proximity function is able to capture the task structure and, therefore, generalize better to unseen configurations." }, { "heading": "4.4 ROBOT MANIPULATION", "text": "We further evaluate our method in two continuous control tasks: FETCH PICK and FETCH PUSH from Plappert et al. (2018). In the FETCH PICK task shown in Figure 2b, a Fetch robotic arm must grasp and move a block to a target position.
In FETCH PUSH, the Fetch arm pushes a block to a target position, as shown in Figure 2c. Both the initial position of the block and the target are randomly initialized. For each task, we provide 1k demonstrations, consisting of 33k and 28k transitions for FETCH PICK and FETCH PUSH respectively, generated using a hand-engineered policy. We create a 50% holdout set of starting states for agent learning by splitting the continuous state space into a 4 by 4 grid and holding out two cells per row, from which the block and target starting positions are sampled.\nIn the FETCH PICK task, our method achieves more than an 80% success rate, while the success rates of GAIfO and GAIfO-s are upper-bounded by 50% due to the limited coverage of the expert demonstrations (see Figure 3b). Our method learns slower than GAIL but achieves comparable final performance, even though GAIL learns from expert actions. The FETCH PUSH task is more challenging than FETCH PICK due to the more complicated contact dynamics of pushing interactions. In Figure 3c, the demonstrations are collected with full coverage, but the policy is trained in a version of the environment with 2 times larger noise applied to the starting state. All methods fail to learn diagonal pushing movements, but our method still learns horizontal pushing faster and achieves higher performance than all other baselines. We evaluate both FETCH tasks under two different generalization setups, different demonstration coverages (Figure 8) and different amounts of noise (Figure 9), and the results consistently show that our proximity function is able to accelerate policy learning in continuous control environments with superior generalization capability." }, { "heading": "4.5 ANT LOCOMOTION", "text": "We use the ANT REACH environment proposed in Ghosh et al. (2018), simulated in the MuJoCo physics engine (Todorov et al., 2012). In this task, the quadruped ant is tasked to reach a randomly generated goal along the perimeter of a half circle of radius 5 m centered around the ant (see Figure 2d). We provide 1k demonstrations, which contain 25k transitions in total. When demonstrations are collected, no noise is added to the initial pose of the ant, whereas random noise is added during policy learning, which requires the reward functions to generalize to unseen states.\nIn Figure 3d, with 0.03 added noise, our method achieves a 35% success rate while BCO, GAIfO, and GAIfO-s achieve 1%, 2%, and 7%, respectively. This result illustrates the importance of learning proximity, rather than learning to discriminate expert and agent states, for generalization to unseen states. The performance of GAIfO and GAIfO-s drops drastically with larger joint angle randomness, as shown in Figure 9. As the ANT REACH task is not as sensitive to noise in actions as the other tasks, BC and GAIL show superior results, but our method still achieves comparable performance.\nWe also ablate various aspects of our method in Figure 5. First, we verify the effect of the uncertainty penalty used in the proximity reward. The learning curves with different λ are plotted in Figure 5a and demonstrate that our method works best with λ = 0.1. Both too low and too high uncertainty penalties degrade performance. Figure 5b shows that the linearly discounted proximity function learns marginally faster than the exponentially discounted proximity function. In Figure 5c, we test the importance of online and offline training of the proximity function.
The results show that the agent fails to learn the task without online updates using agent trajectories. Meanwhile, skipping the proximity function pre-training lowers performance." }, { "heading": "4.6 ABLATION STUDY", "text": "Finally, we analyze the contribution of the proximity function, reward formulation, and uncertainty to our method’s performance in Figure 6. Adding uncertainty to GAIfO-s (GAIfO-s+Uncert) produced a 5.8% boost in average success rate compared to regular GAIfO-s, which is not a significant improvement. Proximity supervision, without the uncertainty penalty, resulted in a 28.1% increase in average performance over GAIfO-s with the difference-based reward R(st, st+1) = f(st+1) − f(st) (Prox+Diff) and a 15.9% increase with the absolute reward R(st) = f(st) (Prox+Abs). This higher performance means that modeling proximity is more important for our method than using the uncertainty penalty.\nWe also found that the uncertainty penalty and proximity function have a synergistic interaction. Combining both the proximity and uncertainty gives a 43.3% increase with the difference-based reward (Prox+Diff+Uncert) and a 33.0% increase with the absolute reward (Prox+Abs+Uncert). We can observe that the difference-based reward consistently outperforms the absolute reward except on ANT REACH, where the bias of the absolute reward (Kostrikov et al., 2019) helps the agent survive longer and reach the goal. Firstly, this shows that the uncertainty penalty is more important for the proximity function, as it models fine-grained temporal information where inaccuracies can be exploited, as opposed to the binary classification given by other adversarial imitation learning discriminators. Secondly, both with and without the uncertainty penalty, the difference-based proximity reward performs better than the absolute proximity reward. In conclusion, all three components of proximity, uncertainty, and the difference-based reward are crucial for our method.\nIn Figure 6, we also evaluate the advantage of the additional sparse proximity reward given at the final time step. Compared to our method without this final reward, it results in a minor 0.9% average performance improvement, meaning this component is not critical to our method." }, { "heading": "5 CONCLUSION", "text": "In this work, we propose a learning from observation (LfO) method inspired by how humans acquire skills by watching others perform tasks. Specifically, we propose to learn a goal proximity function from demonstrations, which provides an estimate of the temporal distance to the goal. Then, we utilize this learned proximity to encourage the policy to progressively move to states with higher proximity and eventually reach the goal. The experimental results on navigation, robotic manipulation, and locomotion tasks demonstrate that our goal proximity function improves generalization to unseen states, which results in better learning efficiency and superior performance of our method compared to state-of-the-art LfO approaches. Moreover, our method achieves performance comparable to LfD approaches." }, { "heading": "A OVERVIEW", "text": "As supplementary material for enhancing the main paper, we provide the following information:\n• Further analysis: We first provide analyses of how our model and baselines generalize to unseen states. We also include qualitative results demonstrating what the proximity function learns.
Finally, we show ablation experiments on additional environments.\n• Implementation details: We describe the environments and provide training hyperparameters and architectures for our models.\n• Code: Complete code for our model, environments, and experiments can be found in the code directory. The code/README.md file documents the installation process, how to download the expert datasets, and commands for running experiments.\n• Video: We provide the results.mp4 file to present qualitative results of our method and baseline methods." }, { "heading": "B COMPARISON WITH GAIL", "text": "Our method shares a similar adversarial training process with GAIL. The following steps describe how to arrive at our method starting from GAIL. Firstly, like the discriminator in GAIfO-s, our proximity function takes only states as input. Next, rather than training the discriminator to classify expert from agent, we train the proximity function to regress to the proximity labels, which are 0 for agent states and the time-discounted value between 0 and 1 for expert states. Our reward formulation also differs from GAIL approaches, which give a log-probability reward based on the discriminator output. We instead incorporate a proximity estimation uncertainty penalty, a difference-based reward, and a sparse reward given for the proximity of the final state, as shown in Equation 2." }, { "heading": "C ANALYSIS ON GENERALIZATION OF OUR METHOD AND BASELINES", "text": "By learning to predict goal proximity, the proximity function not only learns to discriminate expert and agent states but also models task progress, which provides more information about the task. With additional supervision on learning goal proximity, we expect the proximity function to provide a more informative learning signal to the policy and to generalize better to unseen states than the baselines, which overfit the reward function to expert demonstrations. To analyze how well our method and the baselines generalize to unseen states, we vary the difference between the states encountered in expert demonstrations and agent learning.\nOne way we vary the difference between expert demonstrations and agent learning is by restricting the expert demonstrations to cover only parts of the state space. For NAVIGATION, FETCH PICK, and FETCH PUSH, we show results for demonstrations that cover 100%, 75%, 50%, and 25% of the state space. For the discrete state space in NAVIGATION, we restrict expert demonstrations to a fraction of the possible agent start and goal configurations. For the two continuous-state FETCH tasks, we break the 2D continuous sampling region into a 4 by 4 grid and hold out one cell per row for 75% coverage and three cells per row for 25% coverage, from which the block and target starting positions are sampled.\nLikewise, we also measure generalization by increasing the initial state noise during agent learning relative to the expert demonstrations. On FETCH PICK, FETCH PUSH, and ANT REACH, we show results for four different noise settings. For the two FETCH tasks, the 2D sampling region is scaled by the noise factor. For ANT REACH, uniform noise, scaled by the noise factor, is added to the initial joint angles, whereas the demonstrations have no noise.
If our method allows for greater generalization from the expert demonstrations, it should perform well even in states different from those in the expert demonstrations.\nThe results of our method and the baselines across varying degrees of generalization are shown in Figure 8 and Figure 9. Note that the results in the main paper are for 50% coverage in FETCH PICK, 2x noise in FETCH PUSH, and 0.05 noise in ANT REACH. Across both harder and easier generalization, our method demonstrates more consistent performance. While GAIfO-s performs well with high coverage, which requires little generalization in agent learning, its performance deteriorates as the expert demonstration coverage decreases." }, { "heading": "D QUALITATIVE RESULTS", "text": "In this section, we aim to qualitatively verify whether the learned goal proximity function gives a meaningful measure of progress towards the goal. For agent learning, it is important that this proximity function gives higher values for states which are temporally closer to the goal. To verify these intuitions, we visualize the proximity values predicted by the proximity function in a successful episode from agent learning in Figure 10.\nFrom the qualitative results of FETCH PICK and FETCH PUSH in Figure 10, we can observe that as the agent moves the block closer to the goal, the predicted proximity increases. This provides an example of the proximity function generalizing to agent experience and providing a meaningful reward signal for agent learning. We notice that while the predictions increase as the agent nears the goal, the proximity prediction values are often low (<0.1), as in Figure 10a. We hypothesize this is due to the adversarial training, which labels agent experience with 0 proximity and lowers the average proximity predicted across states. For videos of more qualitative evaluations of our method and baselines, refer to results.mp4, also included in the submission." }, { "heading": "E FURTHER ABLATIONS", "text": "We include additional ablations to further highlight the advantages of our main proposed method over its variants. We evaluate against the same ablations proposed in the main paper, but across all environments. We present all these experiments in Figure 11.\nWe also attempted to compare to an ablation which learns the proximity function through a ranking-based loss from Brown et al. (2019). However, we empirically found it to be ineffective and therefore did not include it. This ranking-based loss uses the criterion that for two states s_{t1}, s_{t2} from an expert trajectory, the proximities should obey f(s_{t1}) < f(s_{t2}) if t1 < t2. We therefore train the proximity function with the cross-entropy loss −Σ_{ti<tj} log [exp fφ(s_{tj}) / (exp fφ(s_{ti}) + exp fφ(s_{tj}))]. We incorporate agent experience by adding an additional loss which ranks expert states above agent states for randomly sampled pairs of expert and agent states (se, sa), through the cross-entropy loss −Σ_{se∼De, sa∼πθ} log [exp fφ(se) / (exp fφ(sa) + exp fφ(se))] (a code sketch of both terms follows below). Unlike the discounting factor in the discounting-based proximity function, the ranking-based training requires no hyperparameters. However, the lack of supervision on ground-truth proximity scores could result in less meaningful predicted proximities and a worse learning signal for the agent, which could explain the observed poor performance.\nAs seen from the additional ablation analysis in Figure 11, our method has the best performance in the majority of environments.
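For reference, the two ranking terms above reduce to pairwise logistic losses over a two-way softmax. The sketch below uses our own notation (`f` is any proximity network mapping a batch of states to shape (n, 1)); it is an illustration of the ablation, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def expert_ranking_loss(f, traj, n_pairs=128):
    """For t_i < t_j in one expert trajectory, push f(s_{t_j}) above f(s_{t_i})
    via -log [exp f(s_{t_j}) / (exp f(s_{t_i}) + exp f(s_{t_j}))]."""
    T = traj.shape[0]
    i = torch.randint(0, T - 1, (n_pairs,))                       # earlier indices
    j = torch.stack([torch.randint(int(a) + 1, T, (1,)).squeeze(0) for a in i])
    logits = torch.stack([f(traj[i]).squeeze(-1),                 # column 0: earlier
                          f(traj[j]).squeeze(-1)], dim=1)         # column 1: later
    labels = torch.ones(n_pairs, dtype=torch.long)                # later state should win
    return F.cross_entropy(logits, labels)

def expert_vs_agent_loss(f, expert_s, agent_s):
    """Rank randomly paired expert states above agent states."""
    n = min(expert_s.shape[0], agent_s.shape[0])
    logits = torch.stack([f(agent_s[:n]).squeeze(-1),
                          f(expert_s[:n]).squeeze(-1)], dim=1)
    labels = torch.ones(n, dtype=torch.long)                      # expert should win
    return F.cross_entropy(logits, labels)
```

`F.cross_entropy` over a two-column logit matrix is exactly the −log softmax form written in the text.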
In each task, incorporating uncertainty and online updates is crucial. Our main method using the linear proximity function always performs slightly better than the exponential proximity function. Using offline updates is helpful in all environments except NAVIGATION." }, { "heading": "F FURTHER BASELINES", "text": "In Figure 12, we compare to SQIL (Reddy et al., 2020), an imitation learning approach which demonstrates higher sample efficiency with off-policy RL. SQIL populates the replay buffer with the expert demonstrations, which are assigned reward +1, while all agent experience is assigned reward 0. We use Soft Actor-Critic (Haarnoja et al., 2018) with the same hyperparameters as in Reddy et al. (2020).\nOur method consistently outperforms SQIL in the Fetch tasks, despite SQIL using actions whereas our method does not. This can be because the Fetch tasks require precise actions (i.e., it is difficult to reverse moving the block in the wrong direction), which results in covariate shift that negatively impacts BC and SQIL (a form of regularized BC). On the other hand, SQIL learns the Ant Reach task well and performs similarly to the behavioral cloning (BC) baseline, since Ant Reach is not sensitive to small errors (i.e., the agent can recover from bad actions and change heading). We observed unstable SQIL results in Figure 12c, which can also be seen in Figures 2 and 3 of Reddy et al. (2020); further training or hyperparameter tuning could stabilize training.\nG IMPLEMENTATION DETAILS\nG.1 ENVIRONMENT DETAILS\nThe implementations of the NAVIGATION, FETCH (which includes FETCH PICK and FETCH PUSH), and ANT REACH environments are based on Chevalier-Boisvert et al. (2018), Plappert et al. (2018), and Ghosh et al. (2018), respectively. The actions in the FETCH experiments use end-effector position control and continuous control for the gripper. A 15-dimensional state in the FETCH tasks consists of the relative position of the goal from the object, the relative position of the end effector to the object, and the robot state. We found that not including the velocity information was beneficial for all learning from observation approaches. Meanwhile, in NAVIGATION the state consists of a one-hot vector for each grid cell encoding wall, empty space, agent, or goal. Finally, in ANT REACH the state consists of velocity, force, and the relative goal position, with the action space consisting of joint control. Each environment also randomly initializes the starting state and goal. The details of the observation spaces, action spaces, and episode lengths are described in Table 1. All units in this section are in meters unless otherwise specified.\nFor NAVIGATION we collect 250 expert demonstrations, and for all other environments we collect 1000 expert demonstrations. In NAVIGATION we use breadth-first search to collect expert demonstrations. In FETCH PICK we generate demonstrations by hard-coding the Fetch robot to first reach above the object, then reach down and grasp, and finally move to the target position. Similarly, in FETCH PUSH, we reach behind the object and then execute a planar push forward towards the goal. For ANT REACH, we collect demonstrations using an expert policy trained with PPO (Schulman et al., 2017) on the reward function R(s, a) = 1 − 0.2 · ||p_ant − p_goal||_2 − 0.005 · ||a||_2^2, where p_ant and p_goal are the (x, y)-positions of the ant and the goal, respectively, and a is an action.
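This expert reward translates directly to code; a small sketch, assuming the positions and action are given as NumPy arrays (the function name is ours):

```python
import numpy as np

def ant_reach_expert_reward(p_ant, p_goal, action):
    """R(s, a) = 1 - 0.2 * ||p_ant - p_goal||_2 - 0.005 * ||a||_2^2,
    used only to train the PPO expert that generates demonstrations."""
    dist = np.linalg.norm(p_ant - p_goal)        # (x, y) distance to the goal
    ctrl_cost = np.sum(np.square(action))        # squared l2-norm of the action
    return 1.0 - 0.2 * dist - 0.005 * ctrl_cost
```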
Please refer to the code for more details.\nTo evaluate the generalization capability of our method and the baselines, we constrain the coverage of expert demonstrations or add additional starting-state noise during agent learning, as discussed in Section C.\nG.2 NETWORK ARCHITECTURES\nActor and critic networks: We use the same architecture for the actor and critic networks except for the output layer, where the actor network outputs an action distribution while the critic network outputs a critic value. For NAVIGATION, the actor and critic networks consist of CONV(3, 2, 16)-ReLU-MaxPool(2, 2)-CONV(3, 2, 32)-ReLU-CONV(3, 2, 64) followed by two fully-connected layers with hidden layer size 64, where CONV(k, s, c) represents a c-channel convolutional layer with kernel size k and stride s. For the other tasks, we model the actor and critic networks as two separate 3-layer MLPs with hidden layer size 256. For the continuous control tasks, the output layer of the actor MLP is two-headed, outputting the mean and standard deviation of the action distribution.\nGoal proximity function and discriminator: The goal proximity function and discriminator use a CNN encoder followed by a hidden layer of size 64 for NAVIGATION, and a 3-layer MLP with hidden layer size 64 for the other tasks.\nG.3 TRAINING DETAILS\nFor our method and all baselines except BC (Pomerleau, 1989) and BCO (Torabi et al., 2018a), we train policies using PPO. The hyperparameters for policy training are shown in Table 2, while the hyperparameters for the proximity and discriminator functions are shown in Table 3. For our method, in some tasks, we also found it slightly helpful to scale the reward by a constant factor; these values are also included in Table 2.\nIn BC, the demonstrations were split into 80% training data and 20% validation data. The policy was trained on the training data until the validation loss stopped decreasing. The policy was then evaluated for 1,000 episodes to get an average success rate. In GAIfO-s and GAIL, the AIRL reward from Fu et al. (2018) is used." } ]
2,020
null
SP:c0bbcd2b046db616816cf717a30e2547b501378a
[ "This paper shows that adaptive learning rates are beneficial for finding critical points of finite-sum optimization problems. In particular, with appropriate learning rates, a variant of adagrad can find a epsilon critical point in \\tilde O(1/epsilon^2) iterations. This improves upon previous results of either O(1/\\epsilon^3) or O(1/\\epsion^4) in various situations. The key new assumption is a “consistency condition” that bounds how big individual example gradients can be when the overall gradient is small." ]
Adaptive gradient methods have been shown to outperform SGD in many tasks of training neural networks. However, the acceleration effect is yet to be explained in the non-convex setting, since the best convergence rate of adaptive gradient methods is worse than that of SGD in the literature. In this paper, we prove that adaptive gradient methods exhibit an Õ(T^{-1/2}) convergence rate for finding first-order stationary points under the strong growth condition, which improves the previous best convergence results of adaptive gradient methods and random shuffling SGD by factors of O(T^{1/4}) and O(T^{1/6}), respectively. In particular, we study two variants of AdaGrad with random shuffling for finite sum minimization. Our analysis suggests that the combination of random shuffling and adaptive learning rates gives rise to better convergence.
[ { "affiliations": [], "name": "RANDOM SHUFFLING" } ]
[ { "authors": [ "Naman Agarwal", "Brian Bullins", "Xinyi Chen", "Elad Hazan", "Karan Singh", "Cyril Zhang", "Yi Zhang" ], "title": "Efficient full-matrix adaptive regularization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Zeyuan Allen-Zhu. Natasha" ], "title": "Faster non-convex optimization than sgd", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Xiangyi Chen", "Sijia Liu", "Ruoyu Sun", "Mingyi Hong" ], "title": "On the convergence of a class of adam-type algorithms for non-convex optimization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of machine learning research,", "year": 2011 }, { "authors": [ "Cong Fang", "Chris Junchi Li", "Zhouchen Lin", "Tong Zhang" ], "title": "Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Robert Mansel Gower", "Nicolas Loizou", "Xun Qian", "Alibek Sailanbayev", "Egor Shulgin", "Peter Richtarik" ], "title": "Sgd: General analysis and improved rates, 2019", "venue": null, "year": 2019 }, { "authors": [ "Nathan Halko", "Per-Gunnar Martinsson", "Joel A Tropp" ], "title": "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions", "venue": "SIAM review,", "year": 2011 }, { "authors": [ "Jeff Haochen", "Suvrit Sra" ], "title": "Random shuffling beats SGD after finite epochs", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2015 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Tosio Kato" ], "title": "Perturbation theory for linear operators; 2nd ed", "venue": "Grundlehren der mathematischen Wissenschaften A Series of Comprehensive Studies in Mathematics. Springer, Berlin,", "year": 1976 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Siyuan Ma", "Raef Bassily", "Mikhail Belkin" ], "title": "The power of interpolation: Understanding the effectiveness of SGD in modern over-parametrized learning", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Tristan Milne" ], "title": "Piecewise strong convexity of neural networks, 2019", "venue": null, "year": 2019 }, { "authors": [ "Lam M Nguyen", "Quoc Tran-Dinh", "Dzung T Phan", "Phuong Ha Nguyen", "Marten van Dijk" ], "title": "A unified convergence analysis for shuffling-type gradient methods", "venue": "arXiv preprint arXiv:2002.08246,", "year": 2020 }, { "authors": [ "B. 
Polyak" ], "title": "Gradient methods for the minimisation of functionals", "venue": "Ussr Computational Mathematics and Mathematical Physics,", "year": 1963 }, { "authors": [ "Sashank J. Reddi", "Ahmed Hefny", "Suvrit Sra", "Barnabas Poczos", "Alex Smola" ], "title": "Stochastic variance reduction for nonconvex optimization, 2016", "venue": null, "year": 2016 }, { "authors": [ "Mark Schmidt", "Nicolas Le Roux" ], "title": "Fast convergence of stochastic gradient descent under a strong growth", "venue": null, "year": 2013 }, { "authors": [ "Bernhard A. Schmitt" ], "title": "Perturbation bounds for matrix square roots and pythagorean sums", "venue": "Linear Algebra and its Applications,", "year": 1992 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": null, "year": 2015 }, { "authors": [ "Matthew Staib", "Sashank J Reddi", "Satyen Kale", "Sanjiv Kumar", "Suvrit Sra" ], "title": "Escaping saddle points with adaptive gradient methods", "venue": null, "year": 1901 }, { "authors": [ "T. Tieleman", "G. Hinton" ], "title": "Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need, 2017", "venue": null, "year": 2017 }, { "authors": [ "Sharan Vaswani", "Francis Bach", "Mark Schmidt" ], "title": "Fast and faster convergence of sgd for overparameterized models and an accelerated perceptron, 2019", "venue": null, "year": 2019 }, { "authors": [ "Rachel Ward", "Xiaoxia Wu", "Leon Bottou" ], "title": "Adagrad stepsizes: Sharp convergence over nonconvex landscapes, from any initialization", "venue": "arXiv preprint arXiv:1806.01811,", "year": 2018 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Large batch training of convolutional networks, 2017", "venue": null, "year": 2017 }, { "authors": [ "Yang You", "Jing Li", "Sashank Reddi", "Jonathan Hseu", "Sanjiv Kumar", "Srinadh Bhojanapalli", "Xiaodan Song", "James Demmel", "Kurt Keutzer", "Cho-Jui Hsieh" ], "title": "Large batch optimization for deep learning: Training bert in 76 minutes", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Manzil Zaheer", "Sashank Reddi", "Devendra Sachan", "Satyen Kale", "Sanjiv Kumar" ], "title": "Adaptive methods for nonconvex optimization", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization, 2017", "venue": null, "year": 2017 }, { "authors": [ "Dongruo Zhou", "Yiqi Tang", "Ziyan Yang", "Yuan Cao", "Quanquan Gu" ], "title": "On the convergence of adaptive gradient methods for nonconvex optimization", "venue": "arXiv preprint arXiv:1808.05671,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "We consider the finite sum minimization problem in stochastic optimization:\nmin x2Rd\nf(x) = 1\nn\nnX\ni=1\nfi(x), (1)\nwhere f is the objective function and its component functions fi : Rd ! R are smooth and possibly non-convex. This formulation has been used extensively in training neural networks today. Stochastic gradient descend (SGD) and its variants have shown to be quite effective for solving this problem, whereas recent works demonstrate another prominent line of gradient-based algorithms by introducing adaptive step sizes to automatically adjust the learning rate (Duchi et al., 2011; Tieleman & Hinton, 2012; Kingma & Ba, 2014).\nDespite the superior performance of adaptive gradient methods in many tasks (Devlin et al., 2019; Vaswani et al., 2017), their theoretical convergence remains the same or even worse for non-convex objectives, compared to SGD. In general non-convex settings, it is often impractical to discuss optimal solutions. Therefore, the attention of analysis turns to stationary points instead. Many works have been proposed to study first-order (Chen et al., 2019; Zhou et al., 2018; Zaheer et al., 2018; Ward et al., 2018; Zhou et al., 2018) and second-order (Allen-Zhu, 2018; Staib et al., 2019) stationary points. Table 1 summarized some previous best-known results for finding first-order stationary points. One might notice that the best dependence on the total iteration number T of adaptive gradient methods matches that of vanilla SGD. In addition, with the introduction of incremental sampling techniques, an even better convergence of SGD can be obtained (Haochen & Sra, 2019; Nguyen et al., 2020).\nThis gap between theory and practice of adaptive gradient methods has been an open problem that we aim to solve in this paper. Motivated by the analysis of sampling techniques, we rigorously prove that adaptive gradient methods exhibit a faster non-asymptotic convergence rate that matches the best result on SGD. In particular, we make the following contributions:\n• Our main contribution (Theorem 1,2,3) is to prove that two variants of AdaGrad can find Õ(T 1/2)-approximate first-order stationary points under the strong growth condition assumption (Schmidt & Roux, 2013). This improves previous best convergence results of adaptive gradient methods and shuffling SGD by factors of O(T 1/4) and O(T 1/6) ,\nOur analysis points out two key components that lead to better convergence results of adaptive gradient methods: the epoch-wise analysis of random shuffling can incorporate the benefit of full gradients; the adaptive learning rates along with the strong growth condition provide better improvement of objective value in consecutive epochs.\nFinite Sum Minimization vs. Expectation Minimization The comparison in Table 1 shows the convergence rates in the non-convex setting with respect to first-order stationary points. The results in the first two categories apply to the general expectation minimization problem with f(x) = Ezf(x, z). Whereas the convergences for expectation minimization naturally transform to finite sum minimization, the statements remain asymptotic, meaning Ekrf(x)k ⇠ O(T↵), where the expectation is taken to compensate the stochastic gradients. Many efforts have been made to reduce variance in finite sum minimization (Johnson & Zhang, 2013; Reddi et al., 2016; Haochen & Sra, 2019). 
In particular, non-asymptotic results can be gained using random shuffling, under which the dependency on sample size n seems to be unavoidable (Haochen & Sra, 2019)." }, { "heading": "2 PRELIMINARIES", "text": "A typical setting of machine learning using gradient methods is the finite sum minimization in equation (1). In this problem, the number of samples n is usually very large, rendering the evaluation of full gradients expensive. Therefore, a mini-batch gradient is introduced to approximate the full gradient. Mini-batch gradient descent is often carried out in epochs, where each epoch includes several iterations of parameter updates. This epoch-wise implementation can easily incorporate shuffling techniques, which have proven to be effective for SGD both in theory and practice.\nWe aim to analyze the convergence rate of adaptive gradient methods under this framework, where the objective can be non-convex. Throughout this paper, we restrict the discussions of convergence to achieving an ε-approximate first-order stationary point, defined as x satisfying ‖∇f(x)‖ ≤ ε. We leave for future work analysis related to saddle points and second-order stationary points. We want to show that adaptive gradient methods can find x such that ‖∇f(x)‖ = Õ(T^{-1/2}) in T epochs.\nNotations. v^2 denotes the matrix vv^T and ‖v‖ is the l2-norm of vector v; diag(V), ‖V‖, λ_max(V) and λ_min(V) are the diagonal matrix, the spectral norm, and the largest and smallest non-zero eigenvalues of the matrix V, respectively. For alphabets with subscripts, v_{i:j} denotes the collection {v_i, v_{i+1}, ..., v_j} and v_: denotes the entire set of v_·; similar notations are used for alphabets with double subscripts. Let [n] = {1, ..., n}, and let O(·), Õ(·) be the standard asymptotic notations. Denote e_i as the unit vector with its i-th component being 1 and e the all-one vector, whose dimensions depend on the context. As a clarification, we use T to denote the number of epochs (instead of the number of iterations as in Table 1) starting from this section.\nAdaGrad-type methods. As opposed to SGD, adaptive gradient methods assign a coordinate-wise adaptive learning rate to the stochastic gradient. We formulate the generic AdaGrad-type optimizers, including their full and diagonal versions, as follows. At the i-th iteration of epoch t, the parameter is updated by:\nx_{t,i+1} = x_{t,i} − η_t V_{t,i}^{-1/2} g_{t,i}, (full version)\nx_{t,i+1} = x_{t,i} − η_t diag(V_{t,i})^{-1/2} g_{t,i}, (diagonal version)\nwhere g_{t,i} is the mini-batch gradient of the objective at x_{t,i}, the matrix V_{t,i} contains second moments calculated using all the past stochastic gradients, and η_t is the step size of epoch t. The initial parameter of epoch t+1 is taken to be the parameter updated by epoch t, i.e. x_{t+1,1} = x_{t,m+1}, where we have m iterations in each epoch. The full version is impractical for high-dimensional x. Thus the diagonal version is often preferred in the literature. As an example, the second-moment matrix in AdaGrad is taken to be V_{t,i} = (Σ_{s=1}^{t-1} Σ_{j=1}^m g_{s,j}^2 + Σ_{j=1}^i g_{t,j}^2)/t. SGD can also be written in this general form by setting V_{t,i} to be the identity matrix.\nSampling Strategy. Random shuffling, also known as sampling without replacement, is an often-used technique to accelerate the convergence of SGD. The idea is to sample a random permutation of the function indices [n] for each epoch and slide through this permutation to get the mini-batch gradients for the iterations in this epoch.
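As an illustration of the generic diagonal update above combined with sampling without replacement, here is a NumPy sketch of one epoch. This is our simplification (names such as `shuffled_adagrad_epoch` and `grad_fns` are ours): it uses the standard cumulative AdaGrad accumulator, whereas the paper's AdaGrad additionally divides the accumulated sum by the epoch index t, and AdaGrad-window (defined below) keeps only the latest m squared mini-batch gradients; the eps term is our addition for numerical stability.

```python
import numpy as np

def shuffled_adagrad_epoch(x, grad_fns, m, eta_t, V, rng, eps=1e-8):
    """One epoch of a diagonal AdaGrad-type update with random shuffling:
    sample a permutation of [n], split it into m mini-batches B_{t,1..m},
    and take x <- x - eta_t * diag(V)^{-1/2} g for each mini-batch gradient g."""
    n = len(grad_fns)
    perm = rng.permutation(n)                    # sampling without replacement
    for batch in np.array_split(perm, m):
        # g_{t,i} = (m/n) * sum_{j in B_{t,i}} grad f_j(x), i.e. the batch average
        g = np.mean([grad_fns[j](x) for j in batch], axis=0)
        V = V + g ** 2                           # accumulate diagonal second moment
        x = x - eta_t * g / (np.sqrt(V) + eps)   # adaptive, coordinate-wise step
    return x, V

# usage sketch: rng = np.random.default_rng(0); eta_t = eta / np.sqrt(t)
```

With η_t = η/√t, calling this once per epoch t reproduces the epoch-wise scheme analyzed in the paper.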
Some implementations shuffle the set [n] uniformly and independently for each epoch, while others shuffle the set once during initialization and use the same permutation for all epochs. Generally speaking, suppose we have a permutation σ = (σ_1, ..., σ_n) at epoch t; we define the sets B_{t,i} = { σ_j : (i−1)·n/m < j ≤ i·n/m }, where m is the number of iterations in one epoch. Then the mini-batch gradient is taken to be g_{t,i} = (m/n) · Σ_{j∈B_{t,i}} ∇f_j(x_{t,i}).\nThis sampling method of mini-batches benefits the theoretical analysis of SGD by providing a bounded error between the full gradient and the aggregation of mini-batch gradients in one epoch (Haochen & Sra, 2019; Nguyen et al., 2020). A naive observation that backs up this point can be made by assuming x_{t,1} = ... = x_{t,m}: since ∪_{i=1}^m B_{t,i} = [n], we would have (1/m) Σ_{i=1}^m g_{t,i} = ∇f(x_{t,1}). Then the full gradient can be used to obtain convergence better than plain SGD.\nRandom shuffling for AdaGrad. Unlike in SGD, in adaptive methods it is hard to approximate the full gradient with the aggregation of mini-batch gradient updates in one epoch due to the presence of the second moments. As we will show in experiments, the simple shuffling variant that only changes the sampling method of mini-batches in AdaGrad does not lead to better convergence. The major difficulty hampering the analysis of this variant is that the second-moment matrix uses all the gradient information in history without distinction. Thus, to be able to leverage the benefit of the full gradient, we propose to study a slight modification of AdaGrad. Formally, we shuffle the set [n] once at initialization and obtain the mini-batch gradients in a random shuffling manner. We update the parameters by the same rules of AdaGrad-type methods described above, where the second-moment matrix is taken to be:\nV_{t,i} = Σ_{j=i+1}^m g_{t-1,j}^2 + Σ_{j=1}^i g_{t,j}^2. (AdaGrad-window)\nThe difference between AdaGrad-window and AdaGrad is that the former only uses the latest m mini-batch gradients instead of an epoch-wise average of all the mini-batch gradients in history. The step size is η_t = η/√t, where η is a constant, for both methods. The update of AdaGrad-window is also very similar to the GGT method (Agarwal et al., 2019) without momentum. However, GGT uses the full matrix inversion, whereas our analysis applies to both full and diagonal versions." }, { "heading": "3 MAIN RESULTS", "text": "We will show that AdaGrad-window has a convergence rate of Õ(T^{-1/2}) for non-convex problems under some mild assumptions. This is a significant improvement compared with previous best convergence results of adaptive gradient methods and random shuffling SGD, which are of order O(T^{-1/4}) and Õ(T^{-1/3}), respectively. The key towards our convergence rate improvement is twofold: the epoch-wise analysis of random shuffling enables us to leverage the benefit of full gradients; the adaptive learning rates and the strong growth condition endow a better improvement of the objective value in consecutive epochs.\nIn order to achieve this better convergence, we first state the assumptions and important concepts used in the proof. Apart from the general assumptions (A1) and (A2) used in previous analyses (Fang et al., 2018; Zaheer et al., 2018; Ward et al., 2018), we pose another assumption, described below in (A3), to characterize the consistency between individual gradients and the full gradient.\nAssumptions. We assume the following for AdaGrad-window:\n(A1) The objective function is lower bounded and component-wise L-smooth, i.e. ∃ f* ∈ R s.t.
f(x) ≥ f* > −∞, ∀x, and ‖∇fi(x) − ∇fi(y)‖ ≤ L‖x − y‖, ∀x, y, i.\n(A2) The mini-batch gradients in the algorithm are uniformly upper bounded, i.e. ∃ G ∈ R s.t. ‖g_{t,i}‖ ≤ G, ∀t, i.\n(A3) The objective function satisfies the strong growth condition with constant r^2, i.e. ∀x, (1/n) Σ_{i=1}^n ‖∇fi(x)‖^2 ≤ r^2 ‖∇f(x)‖^2.\nThe strong growth condition assumption essentially enforces the norms of individual gradients to be at the same scale as the norm of the full gradient. This condition was originally used to derive a faster convergence for SGD in the context of convex finite sum minimization (Schmidt & Roux, 2013). It was further explored to analyze SGD-type methods after its close relationship with interpolation was shown (Ma et al., 2018; Vaswani et al., 2019; Gower et al., 2019). Building upon these previous analyses, we will give a more in-depth discussion of this assumption in Section 6.\nUnder these assumptions, the following theorems show our convergence results for the full and diagonal versions of AdaGrad-window.\nTheorem 1 (The convergence rate of full AdaGrad-window). For any T > 4, set η = m^{-5/4}, and denote C1 = m^{5/4} √(11/3 + 8r^2) (f(x_{1,1}) − f* + G)/√2 and C2 = 5 m^{5/4} √(11/3 + 8r^2) L/√2 as constants independent of T. We have:\nmin_{1≤t≤T} ‖∇f(x_{t,1})‖ ≤ (1/√T) (C1 + C2 ln T). (2)\nTheorem 2 (The convergence rate of diagonal AdaGrad-window). For any T > 4, set η = m^{-5/4}, and denote C′1 = m^{5/4} √(11/3 + 8r^2) (f(x_{1,1}) − f* + G√d)/√2 and C′2 = 5 m^{5/4} √(11/3 + 8r^2) d^{3/2} L/√2 as constants independent of T. We have:\nmin_{1≤t≤T} ‖∇f(x_{t,1})‖ ≤ (1/√T) (C′1 + C′2 ln T). (3)
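As a practical aside, the key assumption (A3) can be probed numerically on a concrete finite-sum problem. The sketch below is our own construction (not part of the paper's analysis); `grad_fns` and `xs` are hypothetical names for the per-example gradient oracles and a set of test points.

```python
import numpy as np

def strong_growth_ratio(grad_fns, xs):
    """Estimate the smallest r^2 compatible with (A3) over the test points:
    max over x of [(1/n) sum_i ||grad f_i(x)||^2] / ||grad f(x)||^2."""
    worst = 0.0
    for x in xs:
        per_example = np.array([g(x) for g in grad_fns])   # (n, d) gradients
        full = per_example.mean(axis=0)                    # grad f(x)
        num = np.mean(np.sum(per_example ** 2, axis=1))    # (1/n) sum ||grad f_i||^2
        den = np.sum(full ** 2) + 1e-12                    # ||grad f(x)||^2
        worst = max(worst, num / den)
    return worst
```

An estimate close to 1 indicates near-interpolation, while a very large estimate suggests (A3) is a poor fit for the problem at hand.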
We have either: 1p t · c1krf(xt,1)k f(xt,1) f(xt,m+1) + 1 t · c2, (5)\nor krf(xt,1)k 1p t · (⌘mL). In addition, there is always 0 f(xt,1) f(xt,m+1) + 1t · c2.\nThe deduction of convergence rate is straightforward based on this lemma. Either there is krf(xt,1)k (⌘mL/ p 2)/ p T for some T/2 t T , leading directly to Theorem 1, or we can sum up equation (5) for T/2 t T : the coefficients are approximately p T on the left and lnT on the right thus leading to Theorem 1. Therefore, we turn to the proof of this lemma instead. Under the L-smooth assumption, we have the standard descent result for one epoch rf(xt,1)>(xt,1 xt,m+1) f(xt,1) f(xt,m+1) + L/2 · kxt,m+1 xt,1k2 (we refer the proof of to (Nesterov, 2018)). Rewrite the equation by replacing xt,m+1 xt,1 on the left with the AdaGrad-window updates:\n⌘p t ·rf>(xt,1)V 1/2t,m (\nmX\ni=1\ngt,i)\n| {z } S1\nf(xt,1) f(xt,m+1) + L\n2 kxt,m+1 xt,1k2\n+ ⌘p t ·rf(xt,1)>\n\" V\n1/2 t,m\nmX\ni=1\ngt,i mX\ni=1\nV 1/2 t,i gt,i\n#\n| {z } S2\n.\nThe idea behind this decomposition is to split out the term S1 that behaves similarly to the full gradient and control the remaining terms. Similar to other analysis of adaptive methods, the term kxt,m+1 xt,1k2 on the right can be upper bound by a constant times 1/t (we refer scalars that do not depend on t to constants). This can be done by simply plugging in the update rules, the details of which are showed in the appendix. Next, we show how to bound term S1 and S2 in order to prove Lemma 1." }, { "heading": "4.1 LOWER BOUND OF S1", "text": "To obtain the lower bound of S1, we need two steps. The first step stems from the idea in random shuffling SGD (Haochen & Sra, 2019), which is to bound the difference between the full gradient and the aggregation of mini-batch gradients. Formally, we have the following lemma. Lemma 2. For any t > 0, in the full version of AdaGrad-window, denote constant c3 = ⌘(m 1)L/2, we have:\nkrf(xt,1) 1\nm\nmX\ni=1\ngt,ik 1p t · c3. (6)\nBuilding on top of this lemma, term S1 can be written into a constant combination of 1/ p t and ( P m\ni=1 gt,i) > V 1/2 t,m\n( P m\ni=1 gt,i), which leads to our second step. In the second step, we utilize the strong growth assumption to obtain a lower bound formulated as below. Lemma 3. For any t > 0, in the full version of AdaGrad-window, we have either:\n( mX\ni=1\ngt,i) > V 1/2 t,m\n( mX\ni=1\ngt,i) 1p\n11/3 + 8 · r2 k\nmX\ni=1\ngt,ik, (7)\nor krf(xt,1)k 1p t · (⌘mL). In addition, there is always (\nP m\ni=1 gt,i) > V 1/2 t,m\n( P m\ni=1 gt,i) 0.\nThis lemma shows that ( P m\ni=1 gt,i) > V 1/2 t,m\n( P m i=1 gt,i) can be lower bound by k P m\ni=1 gt,ik times a constant. Therefore, we are able to derive a constant combination of k P m i=1 gt,ik and 1/ p t as the lower bound for S1, which is desired for Lemma 1.\nWe emphasize that the essential element of the convergence rate improvement lies in Lemma 3. For SGD, the matrix Vt,m is the identity matrix leading to a lower bound of k P m\ni=1 gt,ik2 instead of k P m\ni=1 gt,ik. This lower order leads to a greater decrease of objective value between consecutive epochs as shown in Lemma 1. The reason that we are able to lower the order on k P m\ni=1 gt,ik is due to the presence of the second moments canceling out the order." }, { "heading": "4.2 UPPER BOUND OF S2", "text": "To obtain the upper bound of S2, we can write V 1/2 t,m\nP m i=1 gt,i into P m i=1 V 1/2 t,m\ngt,i. Therefore, we only need to take care of the second-moment matrices Vt,m and Vt,i. As a matter of fact, we have the following lemma. 
Lemma 4. For any t > 1 and 1 ≤ i ≤ m, in the full version of AdaGrad-window, denote c4 = 6η(m−i−1)(m−i)L/π + 4η(m−i)(m+i+1)L/π as constants independent of t; we have:\n‖V_{t,m}^{-1/2} − V_{t,i}^{-1/2}‖ ≤ (1/√t) · c4. (8)\nBased on this lemma, we can obtain an upper bound of S2 using the result below.\nLemma 5. For any t > 1, in the full version of AdaGrad-window, denote c5 = 5η m^{5/2} L/π + η m^2 L as a constant independent of t; we have:\n∇f(x_{t,1})^T [ m V_{t,m}^{-1/2} ∇f(x_{t,1}) − Σ_{i=1}^m V_{t,i}^{-1/2} g_{t,i} ] ≤ (1/√t) · c5. (9)\nWith this upper bound derived, we are able to prove Lemma 1, which is the key intermediate result towards the convergence result in Theorem 1." }, { "heading": "5 COMPLEXITY ANALYSIS", "text": "Based on Theorems 1 and 2, we discuss the computational complexity of the two versions of AdaGrad-window. We compare the total complexity between AdaGrad-window and random shuffling SGD to demonstrate that this adaptive gradient method can be faster than SGD after finite epochs in theory.\nCorollary 1 (The computational complexity of full-version AdaGrad-window). Let {x_{t,1}}_{t=1}^T be the sequence generated by AdaGrad-window. For a given tolerance ε, to guarantee that min_{1≤t≤T} ‖∇f(x_{t,1})‖ ≤ ε, the total number of epochs is nearly (ignoring logarithms) O(m^{5/2} ε^{-2}). Therefore, the total number of gradient evaluations is nearly (ignoring logarithms) O(m^{5/2} n ε^{-2}).\nCompared with the full version, the diagonal version is more practical in modern neural network training. Fortunately, a similar result can be derived for the diagonal version.\nCorollary 2 (The computational complexity of diagonal-version AdaGrad-window). Let {x_{t,1}}_{t=1}^T be the sequence generated by AdaGrad-window. For a given tolerance ε, to guarantee min_{1≤t≤T} ‖∇f(x_{t,1})‖ ≤ ε, the total number of epochs is nearly (ignoring logarithms) O(m^{5/2} d^3 ε^{-2}). Therefore, the total number of gradient evaluations is nearly (ignoring logarithms) O(m^{5/2} n d^3 ε^{-2}).\nFor achieving an ε-approximate first-order stationary point, the total number of gradient evaluations required by random shuffling SGD is O(n d ε^{-3}) (Haochen & Sra, 2019; Nguyen et al., 2020). In a rough comparison, AdaGrad-window has advantages over random shuffling SGD when ε ≲ O(m^{-5/2}). Therefore, in theory, AdaGrad-window is more efficient when the number of iterations in one epoch, m, is small. Recent works in deep neural net training (You et al., 2017; 2020) have shown that in large batch scenarios, adaptive methods tend to converge faster in training. Since m is the number of iterations in one epoch, meaning that a small m gives a large batch size, our theory supports these previous findings. However, it seems that choosing m = 1, which amounts to a full gradient calculation at every step, attains the fastest result theoretically. We point out here that our bound on m might not be tight and encourage future work to improve upon it in order to get a sense of how to choose mini-batch sizes in practice." }, { "heading": "6 THE STRONG GROWTH CONDITION ASSUMPTION", "text": "This section aims to provide answers regarding the strong growth condition assumption (A3). In particular, when is this assumption satisfied? Can it be used to improve the convergence rate of other methods? As answers to the first question, we first summarize previous results based on its close relation with interpolation and the weak growth condition (Vaswani et al., 2019); then we extend these to show that two general types of objective functions satisfy the strong growth condition in some cases.
We discuss the convergence rates of other gradient-based methods under the strong growth condition for the second question." }, { "heading": "6.1 FUNCTIONS SATISFYING THE STRONG GROWTH CONDITION ASSUMPTION", "text": "The strong growth condition in (A3) requires ‖∇fi(x)‖ = 0 for all the individual functions at a local stationary point x with ‖∇f(x)‖ = 0. In the sense of fitting a model to data, where each fi represents a data point, this translates to x, as the parameter, interpolating all the data points. The interpolation property has been utilized to study the convergence and mini-batch size of SGD for convex loss functions (Ma et al., 2018). Formally, function f is said to satisfy the interpolation property if, for any x:\n∇f(x) = 0 ⇒ ∇fi(x) = 0, ∀i. (10)\nThis property has been observed to hold for expressive models such as over-parameterized neural networks (Zhang et al., 2017).\nAnother concept closely related to the strong growth condition is the weak growth condition. As defined by Vaswani et al. (2019), function f is said to satisfy the weak growth condition with constant ρ if, for any x:\n(1/n) Σ_{i=1}^n ‖∇fi(x)‖^2 ≤ 2ρL [f(x) − f(x*)], (11)\nwhere L is the smoothness constant and x* is a minimizer of f, assuming existence. The weak growth condition is a relaxed version of the strong growth condition, in that the latter implies the former. Vaswani et al. (2019) also showed that functions satisfying both the weak growth condition and the Polyak-Lojasiewicz (PL) inequality (Polyak, 1963) must satisfy the strong growth condition.\nFurthermore, they proved that convex functions with the interpolation property must satisfy the weak growth condition. The following diagram summarizes the above results.\nSince the PL inequality holds for strongly convex functions, strongly convex functions with the interpolation property must satisfy the strong growth condition. Results in Milne (2019) demonstrate that the quadratic loss function with l2-penalty for a feed-forward neural network with ReLU activation is piecewise strongly convex, with domains of convexity determined by the network architecture, penalty factor, and data. Based on this fact, we may have the following lemma for the strong growth condition.\nLemma 6. Let fi(x) be the l2-penalized quadratic loss function at data point (ai, oi) for a feed-forward neural network with ReLU activation and scalar outputs, i.e.\nfi(x) = l_x(ai, oi) + (λ/2) ‖x‖^2,\nwhere x is the parameter of the neural network and λ is the penalty factor. Define a nonempty open set containing 0 to be:\nU = { x : f(x)^{1/2} ‖x‖^{H−1} ≤ λ/(2√2 · H(H+1) · c^H) },\nwhere H is the number of hidden layers and c ≥ ‖ai‖, ∀i. If the fi have a uniform minimizer x* for i = 1, ..., n and f is L-smooth with constant L, then almost all points in U satisfy the strong growth condition with constant 2L/λ, i.e. ∃ U′ ⊂ U such that U\\U′ has no inner point and:\n(1/n) Σ_{i=1}^n ‖∇fi(x)‖^2 ≤ (2L/λ) ‖∇f(x)‖^2, ∀x ∈ U′.\nTherefore, for feed-forward neural networks with ReLU activation, e.g., VGG networks (Simonyan & Zisserman, 2015), we are able to show that the strong growth condition holds for x in a neighborhood of 0, if there exists a uniform minimizer for all fi. As a result, we might have the strong growth condition for all x_{t,i} generated by the algorithm if the initialization and step sizes are appropriately chosen.\nFor linear models, we are able to derive much stronger results, such that the strong growth condition holds for all x without any constraint on the global minimizer. In particular, Vaswani et al.
(2019) showed that the squared-hinge loss, for linearly separable data with a margin τ and finite support c, satisfies the strong growth condition with constant c^2/τ^2. We extend this result to the cross-entropy loss and obtain the same constant.\nLemma 7. For linearly separable data with margin τ and finite support of size c, the cross-entropy loss satisfies the strong growth condition with constant c^2/τ^2." }, { "heading": "6.2 CONVERGENCE RATE UNDER THE STRONG GROWTH CONDITION ASSUMPTION", "text": "Under the strong growth condition assumption, previous results have shown that SGD can achieve a better convergence rate in many settings. In the context of expectation minimization and global optimality, SGD has linear convergence for strongly convex problems and sublinear O(T^{-1}) convergence for convex problems (Schmidt & Roux, 2013). For non-convex problems, Vaswani et al. (2019) proved that constant step-size SGD can obtain first-order stationary points at an O(T^{-1/2}) rate asymptotically. However, to the best of our knowledge, non-asymptotic results under the strong growth condition have yet to be explored. Our main theorem shows that adaptive gradient methods can achieve a non-asymptotic convergence rate of Õ(T^{-1/2})." }, { "heading": "7 EXPERIMENTS", "text": "To investigate the effects of adaptive step sizes and random shuffling, we compare the empirical performance of four different methods on MNIST and CIFAR-10 to show the acceleration effect. We include SGD and AdaGrad to confirm the existing phenomenon that adaptive step sizes accelerate the convergence of training. We also show results for the modified counterparts, SGD-shuffle and AdaGrad-window, to demonstrate the additional benefits of shuffling in training. Both adaptive methods are taken to be the more practical diagonal version.\nFor our first experiment, we compare the results of the four methods for logistic regression on MNIST. To further examine the performance on non-convex problems, we train ResNet-18 (He et al., 2015) on the classification problem on CIFAR-10. To back up our theoretical results in the last section, where convergence of the minimum of gradient norms across epochs is established, we report the best train loss and best test accuracy up to the current epoch in Figure 2. We can see that the adaptive methods perform better than the SGD methods at the end in both training and testing. For the first few epochs of ResNet-18 training, the comparison seems contrary to the theory because of the constant factors in the convergence rate, whose effect dies out as the number of epochs increases. We can also see that SGD-shuffle and AdaGrad-window exhibit better convergence than their counterparts in training. The details of the experiments are in the appendix." }, { "heading": "8 CONCLUSION", "text": "In this paper, we provide a novel analysis to demonstrate that adaptive gradient methods can be faster than SGD after finite epochs in non-convex and random shuffling settings. We prove that AdaGrad-window and AdaGrad-truncation obtain a convergence rate of Õ(T^{-1/2}) for first-order stationary points, a significant improvement compared with existing works. One key element is the strong growth condition, which is common in over-parameterized models. We also investigate the computational complexity and show that our theory supports recent findings on training with large batch sizes. We believe that this paper is a good start that could lead to analysis and practice in more general settings." } ]
2,020
null
SP:58854dfcef81cc8a791a3b79939046ccfbf9053e
[ "The paper extends soft actor-critic (SAC) to the batch RL setting, replacing the policy entropy in the objective function with the KL divergence from the behavioral policy. The temperature parameter tau weighting the reward agains the KL term is annealed towards zero during the optimization process, which corresponds to starting with behavioral cloning for high values of tau and ending up with the standard reward maximization RL objective for tau=0. Theoretical analysis and experiments confirm the advantages of the proposed method." ]
Many real-world applications of reinforcement learning (RL) require the agent to learn from a fixed set of trajectories, without collecting new interactions. Policy optimization under this setting is extremely challenging as: 1) the geometry of the objective function is hard to optimize efficiently; 2) the shift of data distributions causes high noise in the value estimation. In this work, we propose a simple yet effective policy iteration approach to batch RL using global optimization techniques known as continuation. By constraining the difference between the learned policy and the behavior policy that generates the fixed trajectories, and continuously relaxing the constraint, our method 1) helps the agent escape local optima; 2) reduces the error in policy evaluation in the optimization procedure. We present results on a variety of control tasks, game environments and a recommendation task to empirically demonstrate the efficacy of our proposed method.
[ { "affiliations": [], "name": "Yijie Guo" }, { "affiliations": [], "name": "Shengyu Feng" }, { "affiliations": [], "name": "Nicolas Le Roux" }, { "affiliations": [], "name": "Minmin Chen" } ]
[ { "authors": [ "Joshua Achiam", "David Held", "Aviv Tamar", "Pieter Abbeel" ], "title": "Constrained policy optimization", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Rishabh Agarwal", "Dale Schuurmans", "Mohammad Norouzi" ], "title": "Striving for simplicity in offpolicy deep reinforcement learning", "venue": "arXiv preprint arXiv:1907.04543,", "year": 2019 }, { "authors": [ "Zafarali Ahmed", "Nicolas Le Roux", "Mohammad Norouzi", "Dale Schuurmans" ], "title": "Understanding the impact of entropy on policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Olivier Chapelle", "Mingmin Chi", "Alexander Zien" ], "title": "A continuation method for semisupervised svms", "venue": "In Proceedings of the 23rd international conference on Machine learning,", "year": 2006 }, { "authors": [ "Minmin Chen", "Alex Beutel", "Paul Covington", "Sagar Jain", "Francois Belletti", "Ed H Chi" ], "title": "Topk off-policy correction for a reinforce recommender system", "venue": "In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining,", "year": 2019 }, { "authors": [ "Minmin Chen", "Ramki Gummadi", "Chris Harris", "Dale Schuurmans" ], "title": "Surrogate objectives for batch policy optimization in one-step decision making", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Richard Cheng", "Abhinav Verma", "Gabor Orosz", "Swarat Chaudhuri", "Yisong Yue", "Joel W Burdick" ], "title": "Control regularization for reduced variance reinforcement learning", "venue": null, "year": 1905 }, { "authors": [ "Paul Covington", "Jay Adams", "Emre Sargin" ], "title": "Deep neural networks for youtube recommendations", "venue": "In Proceedings of the 10th ACM conference on recommender systems,", "year": 2016 }, { "authors": [ "Damien Ernst", "Pierre Geurts", "Louis Wehenkel" ], "title": "Tree-based batch mode reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "arXiv preprint arXiv:1812.02900,", "year": 2018 }, { "authors": [ "Scott Fujimoto", "Edoardo Conti", "Mohammad Ghavamzadeh", "Joelle Pineau" ], "title": "Benchmarking batch deep reinforcement learning algorithms", "venue": "arXiv preprint arXiv:1910.01708,", "year": 2019 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jordi Grau-Moya", "Felix Leibfried", "Peter Vrancx" ], "title": "Soft q-learning with mutual-information regularization. 
2018", "venue": null, "year": 2021 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Offpolicy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Elaine T Hale", "Wotao Yin", "Yin Zhang" ], "title": "A fixed-point continuation method for l1regularized minimization with applications to compressed sensing", "venue": "CAAM TR07-07, Rice University,", "year": 2007 }, { "authors": [ "Natasha Jaques", "Asma Ghandeharioun", "Judy Hanwen Shen", "Craig Ferguson", "Agata Lapedriza", "Noah Jones", "Shixiang Gu", "Rosalind Picard" ], "title": "Way off-policy batch deep reinforcement learning of implicit human preferences in dialog", "venue": "arXiv preprint arXiv:1907.00456,", "year": 2019 }, { "authors": [ "Sham Kakade", "John Langford" ], "title": "Approximately optimal approximate reinforcement learning", "venue": "In ICML,", "year": 2002 }, { "authors": [ "Shivaram Kalyanakrishnan", "Peter Stone" ], "title": "Batch reinforcement learning in a complex domain", "venue": "In Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems,", "year": 2007 }, { "authors": [ "Aviral Kumar", "Justin Fu", "Matthew Soh", "George Tucker", "Sergey Levine" ], "title": "Stabilizing offpolicy q-learning via bootstrapping error reduction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aviral Kumar", "Aurick Zhou", "George Tucker", "Sergey Levine" ], "title": "Conservative q-learning for offline reinforcement learning", "venue": "arXiv preprint arXiv:2006.04779,", "year": 2020 }, { "authors": [ "Sascha Lange", "Thomas Gabel", "Martin Riedmiller" ], "title": "Batch reinforcement learning", "venue": "In Reinforcement learning,", "year": 2012 }, { "authors": [ "Sergey Levine", "Aviral Kumar", "George Tucker", "Justin Fu" ], "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems", "venue": "arXiv preprint arXiv:2005.01643,", "year": 2020 }, { "authors": [ "Jiaqi Ma", "Zhe Zhao", "Xinyang Yi", "Ji Yang", "Minmin Chen", "Jiaxi Tang", "Lichan Hong", "Ed H Chi" ], "title": "Off-policy learning in two-stage recommender systems", "venue": "In Proceedings of The Web Conference", "year": 2020 }, { "authors": [ "Jincheng Mei", "Chenjun Xiao", "Csaba Szepesvari", "Dale Schuurmans" ], "title": "On the global convergence rates of softmax policy gradient methods", "venue": "arXiv preprint arXiv:2005.06392,", "year": 2020 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Noah Y Siegel", "Jost Tobias Springenberg", "Felix Berkenkamp", "Abbas Abdolmaleki", "Michael Neunert", "Thomas Lampe", "Roland Hafner", "Martin Riedmiller" ], "title": "Keep doing what worked: Behavioral modelling priors for offline reinforcement learning", "venue": "arXiv preprint arXiv:2002.08396,", "year": 2020 }, { "authors": [ "Alex Strehl", "John Langford", "Lihong Li", "Sham M Kakade" ], "title": "Learning from logged implicit exploration data", "venue": "In Advances in 
Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Adith Swaminathan", "Thorsten Joachims" ], "title": "Batch learning from logged bandit feedback through counterfactual risk minimization", "venue": "Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "Philip Thomas", "Emma Brunskill" ], "title": "Data-efficient off-policy policy evaluation for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Ziyu Wang", "Alexander Novikov", "Konrad Żołna", "Jost Tobias Springenberg", "Scott Reed", "Bobak Shahriari", "Noah Siegel", "Josh Merel", "Caglar Gulcehre", "Nicolas Heess" ], "title": "Critic regularized regression", "venue": "arXiv preprint arXiv:2006.15134,", "year": 2020 }, { "authors": [ "Yifan Wu", "George Tucker", "Ofir Nachum" ], "title": "Behavior regularized offline reinforcement learning", "venue": "arXiv preprint arXiv:1911.11361,", "year": 2019 }, { "authors": [ "Zhijun Wu" ], "title": "The effective energy transformation scheme as a special continuation approach to global optimization with application to molecular conformation", "venue": "SIAM Journal on Optimization,", "year": 1996 }, { "authors": [ "Fisher Yu", "Wenqi Xian", "Yingying Chen", "Fangchen Liu", "Mike Liao", "Vashisht Madhavan", "Trevor Darrell" ], "title": "Bdd100k: A diverse driving video database with scalable annotation tooling", "venue": "arXiv preprint arXiv:1805.04687,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "While RL is fundamentally an online learning paradigm, many practical applications of RL algorithms, e.g., recommender systems [5, 7] or autonomous driving [36], fall under the batch RL setup. Under this setting, the agent is asked to learn its policy from a fixed set of interactions collected by a different (and possibly unknown) policy commonly referred to as the behavior policy, without the flexibility to gather new interactions. Realizing the interactive nature of online RL has been hindering its wider adoptions, researchers strive to bring these techniques offline [24, 11, 20, 23, 31, 12, 21, 2, 32, 8]. We focus on policy optimization under batch RL setup. As pointed out in [3, 26], even with access to the exact gradient, the loss surface of the objective function maximizing the expected return is difficult to optimize, leading to slow convergence. Chen et al. [8] show that the objective function of expected return exhibits sub-optimal plateaus and exponentially many local optima in the worst case. Batch setup makes the learning even harder as it adds large variance to the gradient estimate, especially when the learned policy differs from the behavior policy used to generate the fixed trajectories. Recent works propose to constrain the size of the policy update [27, 28] or the distance between the learned policy and the behavior policy [14, 21]. The strength of that constraint is a critical hyperparameter that can be hard to tune [28], as a loose constraint does not alleviate the distribution shift while a strict one results in conservative updates.\nHere we propose to address the challenges using continuation methods [35, 6, 17]. Continuation methods attempt to solve the global optimization problem by progressively solving a sequence of new objectives that can be optimized more efficiently and then trace back the solutions to the original one. We change the objective function of policy optimization by including an additional term penalizing the KL divergence between the parameterized policy ⇡✓ and the behavior policy. We then gradually decrease the weight of that penalty, eventually converging to optimizing the expected return. With this additional constraint, we benefit from more accurate policy evaluation in the early stage of training as the target policy is constrained to be close to the behavior policy. As training continues, we relax the constraint and allow for more aggressive improvement over the behavior policy as long as the policy evaluation is still stable and relatively reliable, i.e. with a small enough variance. By doing so, the proposed method exhaustively exploits the information in the collected trajectories while avoiding the overestimation of state-action pairs that lack support.\nThe contributions of this paper are as follows: (1) We propose a soft policy iteration approach to batch RL through the continuation method. (2) We theoretically verify that in the tabular setting with exact gradients, maximizing KL regularized expected return leads to faster convergence than optimizing the expected return alone. Also, our method converges to the globally optimal policy if there are sufficient data samples for accurate value estimation. (3) We demonstrate the effectiveness of our method in reducing errors in value estimation using visualization; (4) We empirically verify the advantages of our method over existing batch RL methods on various complex tasks." }, { "heading": "2 RELATED WORK", "text": "Batch Reinforcement Learning. 
Off-policy reinforcement learning has been extensively studied [11, 20, 30, 23, 31], with many works [12, 21, 2] focusing on variants of Q-learning. Fujimoto et al. [12] and Kumar et al. [21] investigated the extrapolation error in batch RL resulting from the mismatch of state-action visitation distributions between the fixed dataset and the current policy, and proposed to address it by constraining the action distribution of the current policy so that it does not deviate much from the training dataset distribution. Recent works [29, 33] studied policy iteration under batch RL. The Q function is estimated in the policy evaluation step without special treatment, while the policy updates are regularized to remain close to the prior policy with a fixed constraint. To further reduce uncertainty in Q-learning, an ensemble of Q networks [21, 29] and distributional Q-functions [2, 33] are introduced for the value estimation. [34, 18] use the KL divergence between the target policy and the behavior policy as a regularization term in the policy update and/or value estimation. The constraint is controlled by a fixed weight on the KL regularization or a fixed threshold for the KL divergence. While all of these works apply a fixed constraint determined by a sensitive hyperparameter to control the distance between the behavior/prior policy and the target policy, we focus on gradually relaxed constraints.

Constrained Policy Updates. Several works [27, 1, 15] studied constrained policy updates in online settings. Kakade & Langford [19] show that large policy updates can be destructive, and propose a conservative policy iteration algorithm to find an approximately optimal policy. Schulman et al. [27] constrain the KL divergence between the old policy and the new policy to guarantee policy improvement in each update. Grau-Moya et al. [15] force the policy to stay close to a learned prior distribution over actions, deriving a mutual-information regularization between state and action. Cheng et al. [9] propose to regularize in the function space. Again, these methods focus on a fixed constraint, while we are interested in continually relaxing the constraint to eventually maximize the expected return. Also, none of these methods have been extensively tested for batch RL with fixed training data.

Continuation Method. The continuation method [35] is a global optimization technique. The main idea is to transform a nonlinear and highly non-convex objective function into a series of smoother and easier to optimize objective functions. The optimization procedure is successively applied to new functions that are progressively more complex and closer to the original non-convex problem, to trace their solutions back to the original objective function. Chapelle et al. [6] use the continuation method to optimize the objective function of semi-supervised SVMs and reach lower test error compared with algorithms directly minimizing the original objective. Hale et al. [17] apply the continuation method to l1-regularized problems and demonstrate better performance for compressed sensing problems. Inspired by prior works, we employ the continuation method to transform the objective of batch RL problems by adding regularization. We gradually decrease the regularization weight to trace the solution back to the original problem." }, { "heading": "3 METHOD", "text": "In classical RL, an agent interacts with the environment while updating its policy. 
At each step t, the agent observes a state s_t ∈ S, selects an action a_t ∈ A according to its policy, receives a reward r_t = r(s_t, a_t), where r : S × A → ℝ, and transitions to the next state s_{t+1} ~ P(·|s_t, a_t). The state value of a policy π at a state s is V^π(s) = E_{s_0=s, a_t~π(·|s_t), s_{t+1}~P(·|s_t,a_t)} [ Σ_{t=0}^∞ γ^t r(s_t, a_t) ], where γ ∈ [0, 1] is the discounting factor. At each step, the agent updates the policy π so that the expected return V^π(ρ) = E_{s~ρ}[V^π(s)] (where ρ is the initial state distribution) is maximized.

In batch RL, the agent is not allowed to interact with the environment during policy learning. Instead it has access to a fixed set of trajectories sampled from the environment according to a behavior policy β¹. A trajectory {(s_0, a_0, r_0), (s_1, a_1, r_1), · · · , (s_T, a_T, r_T)} is generated by sampling s_0 from the initial state distribution ρ, sampling the action a_t ~ β(·|s_t) at the state s_t and moving to s_{t+1} ~ P(·|s_t, a_t) for each step t ∈ [0, 1, · · · , T]. The length T can vary among trajectories. We then convert the generated trajectories to a dataset D = {(s_i, a_i, r_i, s'_i)}_{i=1}^N, where s'_i is the next state after s_i in a trajectory.

¹If the behavior policy is not known in advance, it can be fitted from the data [30, 7].

The goal of batch RL is to learn a parameterized policy π_θ with the provided dataset to maximize the expected return V^π(ρ). In Sec. 3.1, we will first introduce a new objective function Ṽ^{π,τ}(ρ), i.e. the expected return of policy π with a KL regularization term and the regularization weight τ. With exact gradients, Ṽ^{π,τ}(ρ) can be optimized more efficiently than the original objective V^π(ρ). With the continuation method, solving a sequence of optimization problems for Ṽ^{π,τ}(ρ) with decaying values of τ converges toward optimizing V^π(ρ) and makes the optimization easier. In Sec. 3.2, we derive soft policy iteration with KL regularization to optimize Ṽ^{π,τ}(ρ), without the assumption of exact gradients. Finally, in Sec. 3.3, we propose a practical batch RL algorithm with value estimation for the target policy based on this theory." }, { "heading": "3.1 OPTIMIZING EXPECTED RETURN WITH KL REGULARIZATION", "text": "In batch RL, the distribution of the trajectories generated by the behavior policy can be very different from that of the learned policy. We thus restrict the learned policy to stay close to the behavior policy via the regularization of KL divergence. Define the soft state value of a policy π at a state s as

Ṽ^{π,τ}(s) = E_{s_0=s, a_t~π(·|s_t), s_{t+1}~P(·|s_t,a_t)} [ Σ_{t=0}^∞ γ^t ( r(s_t, a_t) − τ log (π(a_t|s_t) / β(a_t|s_t)) ) ], (1)

where the temperature parameter τ controls the deviation from β. The new objective function becomes Ṽ^{π,τ}(ρ) = E_{s~ρ}[Ṽ^{π,τ}(s)]. This KL-regularized objective differs from the original objective V^π(ρ), which however can be recovered as τ → 0. As pointed out in [3], even with exact gradients, the objective function V^π(ρ) is still difficult to optimize due to its highly non-smooth landscape. Mei et al. [26] further prove that, in a tabular setting with a softmax parameterized policy and exact gradients, the vanilla policy gradient method (i.e. directly updating the parameters of policy π to maximize V^π(ρ) with gradient descent) converges to the global optimal policy at a convergence rate O(1/t), while the entropy-regularized policy gradient enjoys a significantly faster linear convergence rate O(e^{−t}).
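For concreteness, a minimal sketch of a Monte Carlo estimate of the KL-regularized return in Eq. (1) from a single trajectory; the function names and the assumption of known action probabilities under π and β are illustrative, not from the paper's code.

```python
import numpy as np

# Monte Carlo estimate of the soft return from one trajectory:
# V~^{pi,tau}(s0) ~= sum_t gamma^t * (r_t - tau * log(pi(a_t|s_t)/beta(a_t|s_t)))

def soft_return(traj, pi_prob, beta_prob, tau, gamma=0.99):
    g, discount = 0.0, 1.0
    for s, a, r in traj:                 # traj: list of (state, action, reward)
        kl_term = np.log(pi_prob(s, a) / beta_prob(s, a))
        g += discount * (r - tau * kl_term)
        discount *= gamma
    return g
```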
Motivated by this line of work, we investigate the convergence rate of optimizing Ṽ^{π,τ}(ρ) with exact gradient descent and compare it with the vanilla policy gradient method. We study the smoothness and the Łojasiewicz inequality for the function Ṽ^{π,τ}(ρ) to prove the convergence rate, similar to [26]. The detailed proofs of all following theorems are provided in the appendix. Theorem 1. In the tabular setting with a softmax parameterized policy π_θ, maximizing Ṽ^{π,τ}(ρ) using policy gradient with the learning rate η = (1−γ)³ / (8M + τ(4 + 8 log A)), for all t > 1, we have

Ṽ^{π*_τ,τ}(ρ) − Ṽ^{π_θt,τ}(ρ) ≤ C · e^{−C_τ (t−1)} · (M + τ log A) / (1−γ)²

where π*_τ is the optimal policy maximizing Ṽ^{π,τ}(ρ), M is the bound on the absolute value of r(s, a) + τ log β(a|s), A is the size of the action space, S is the size of the state space, C_τ ∝ (1−γ)⁴ / ((8M/τ + 4 + 8 log A) · S), and C is a constant independent of t and τ.

Theorem 1 states that the KL-regularized expected return Ṽ^{π,τ}(ρ) can be optimized with a convergence rate O(e^{−t}) rather than O(1/t), the convergence rate of vanilla policy gradient for the expected return alone. The faster convergence inspires us to optimize Ṽ^{π,τ}(ρ) to reach the policy π*_τ, then use π*_τ as initialization, gradually decrease the temperature τ towards 0, and eventually move from π*_τ to π* = argmax_π V^π(ρ). With a reasonable value of τ, we enjoy a linear convergence rate toward π*_τ from the randomly initialized policy π_θ. As τ decreases, π*_τ gets closer to π*. The final optimization of V^{π_θ}(ρ) from π*_τ can be much faster than from a randomly initialized π_θ.

Figure 1: (a) A grid world with sparse rewards. (b) Learning curve of the value V^{π_θi}(ρ) of the learned policy π_θi over 10,000 iterations (legend: vanilla, continuation). We conduct a hyper-parameter search over the learning rates {5, 1, 0.5, 0.1, 0.05, 0.01, 0.005, 0.001} and report the best performance for each method.

We construct a toy example to illustrate this motivation. In the grid world (Fig. 1a), the start state, annotated with ‘S’, is in the center and the terminal states are marked in yellow. There are only two states with positive rewards (0.9 and 1). There are four actions {up, down, left, right}. A badly initialized policy π_θ0 is shown as arrows in Fig. 1a. The initialization results in a poor policy, having a high tendency to go right toward a terminal state with zero reward. The vanilla policy gradient method (i.e. maximizing V^π(ρ) with the true gradient) starting from this initial point takes more than 7000 iterations to escape a sub-optimal solution (Fig. 1b).
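A minimal sketch of this continuation idea with exact gradients, specialized to a one-step (bandit) problem so the gradient has a simple closed form; the rewards, initialization, and schedule below are illustrative, not the grid-world setup itself.

```python
import numpy as np

# Exact-gradient ascent on V~^{pi,tau} for a softmax policy in a one-step
# problem, with the continuation schedule tau -> 0. For pi = softmax(theta),
# dV~/dtheta_k = pi_k * (u_k - sum_a pi_a u_a), with
# u_a = r_a - tau * (log pi_a - log beta_a).

rng = np.random.default_rng(0)
r = np.array([0.0, 0.9, 1.0, 0.0])             # illustrative rewards
beta = rng.uniform(size=4); beta /= beta.sum() # random behavior policy
theta = np.array([3.0, 0.0, 0.0, 0.0])         # badly initialized policy
tau, lr = 1.0, 0.5

for step in range(2000):
    pi = np.exp(theta - theta.max()); pi /= pi.sum()
    u = r - tau * (np.log(pi) - np.log(beta))
    theta += lr * pi * (u - pi @ u)            # exact policy gradient step
    if step % 200 == 199:
        tau *= 0.5                             # continuation: relax the constraint

pi = np.exp(theta - theta.max()); pi /= pi.sum()
print("final policy:", pi.round(3), "value:", float(pi @ r))
```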
In contrast, we escape the sub-optimal solution much faster when applying the continuation method to update the policy with the gradients of Ṽ^{π,τ}(ρ), where the behavior policy β(·|s) = [u₁, u₂, u₃, u₄], with u_i, i = 1, 2, 3, 4, randomly sampled from U[0, 1] and normalized for each state s. In Fig. 1b, as we decrease τ, the value of the learned policy π_θi at each iteration i quickly converges to the optimal value. In other words, optimizing a sequence of objective functions Ṽ^{π,τ}(ρ) can reach the optimal solution for V^π(ρ) significantly faster.

Algorithm 1 Soft Policy Iteration through Continuation Method
1: Initialize: actor network π_θ, ensemble critic networks {Q_φ^{(1)}, Q_φ^{(2)}, · · · , Q_φ^{(K)}}, behavior policy network β, penalty coefficient τ, decay rate λ, number of iterations I for each τ
2: Input: training dataset D = {(s_i, a_i, r_i, s'_i)}_{i=0}^N
3: for update j = 0, 1, · · · do
4:   Sample a batch of data {(s_i, a_i, r_i)}_{i=1}^B from D
5:   # Learn the behavior policy with the behavior cloning objective
6:   Update β to maximize (1/B) Σ_{i=1}^B log β(a_i|s_i)
7:   # Train the critic networks
8:   Update φ^{(k)} to minimize the temporal difference (1/B) Σ_{i=1}^B (r_i + γ V(s'_i) − Q_φ^{(k)}(s_i, a_i))²
9:   where V(s) = (1/K) Σ_{k=1}^K E_{a~π_θ(·|s)}[Q_φ^{(k)}(s, a)] − τ KL(π_θ(·|s) ‖ β(·|s))
10:  # Train the actor network
11:  Update θ to maximize (1/B) Σ_{i=1}^B [ (1/K) Σ_{k=1}^K E_{a~π_θ(·|s_i)}[Q_φ^{(k)}(s_i, a)] − τ KL(π_θ(·|s_i) ‖ β(·|s_i)) ]
12:  # Decay the weight of the KL regularization τ every I updates
13:  if j mod I = 0 then
14:    τ ← τ · λ
15:  end if
16: end for" }, { "heading": "3.2 SOFT POLICY ITERATION WITH KL REGULARIZATION", "text": "As explained in the previous section, we focus on the new objective function Ṽ^{π,τ}(ρ), which can be optimized more efficiently, and use the continuation method to relax toward optimizing V^π(ρ). Batch RL adds the complexity of estimating the gradient of Ṽ^{π,τ}(ρ) with respect to π from a fixed set of trajectories. We propose to adapt soft actor-critic [16], a general algorithm to learn optimal maximum entropy policies, to batch RL for our use case. We change the entropy regularization to KL regularization and derive the soft policy iteration to learn the KL-regularized optimal policy. For a policy π and temperature τ, the soft state value is defined in Eq. 1 and the soft Q function is defined as:

Q̃^{π,τ}(s, a) = r(s, a) + γ E_{s'~P(·|s,a)}[Ṽ^{π,τ}(s')] (2)

In the step of soft policy evaluation, we aim to compute the value of policy π according to the minimum KL divergence objective Ṽ^{π,τ}(ρ) = E_{s~ρ}[Ṽ^{π,τ}(s)]. According to Lemma 1 in the Appendix, the soft Q value can be computed by repeatedly applying the soft Bellman backup operator T^{π,τ}Q(s, a) = r(s, a) + γ E_{s'~P(·|s,a)}[V(s')], where V(s) = E_{a~π(·|s)}[Q(s, a) − τ log (π(a|s)/β(a|s))].

In the step of policy improvement, we maximize the expected return based on the Q-value evaluation with the KL divergence regularization. The following policy update is guaranteed to result in an improved policy in terms of its soft value (Lemma 2 in the Appendix).

π_new(·|s) = argmax_{π∈Π} [ E_{a~π(·|s)}[Q̃^{π_old,τ}(s, a)] − τ KL(π(·|s) ‖ β(·|s)) ] (3)

where KL(π(·|s) ‖ β(·|s)) = E_{a~π(·|s)}[log (π(a|s)/β(a|s))].

The soft policy iteration algorithm alternates between the soft policy evaluation and soft policy improvement, and it will provably converge to the optimal policy maximizing the objective Ṽ^{π,τ}(ρ).
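Before stating the convergence guarantee, a minimal PyTorch sketch of the updates in Algorithm 1 for discrete actions, so the KL term is available in closed form; the networks, sizes, and synthetic batch are illustrative placeholders, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

S_DIM, N_ACT, K = 8, 4, 4
GAMMA, LAM, I_DECAY = 0.99, 0.9, 1000

def mlp(out_dim):
    return nn.Sequential(nn.Linear(S_DIM, 64), nn.ReLU(), nn.Linear(64, out_dim))

actor, behavior = mlp(N_ACT), mlp(N_ACT)                  # logits over actions
critics = nn.ModuleList([mlp(N_ACT) for _ in range(K)])   # Q(s, .) per head
opt_a = torch.optim.Adam(actor.parameters(), lr=3e-4)
opt_b = torch.optim.Adam(behavior.parameters(), lr=3e-4)
opt_c = torch.optim.Adam(critics.parameters(), lr=3e-4)
tau = 1.0

def soft_value(s):
    # V(s) = (1/K) sum_k E_{a~pi}[Q_k(s,a)] - tau * KL(pi(.|s) || beta(.|s))
    pi = F.softmax(actor(s), dim=-1)
    q_mean = torch.stack([c(s) for c in critics]).mean(0)
    kl = (pi * (torch.log(pi + 1e-8) - F.log_softmax(behavior(s), dim=-1))).sum(-1)
    return (pi * q_mean).sum(-1) - tau * kl

for j in range(10000):
    # synthetic stand-in for a batch sampled from the fixed dataset D
    s = torch.randn(32, S_DIM); a = torch.randint(N_ACT, (32,))
    r = torch.randn(32); s2 = torch.randn(32, S_DIM)

    # behavior cloning
    bc_loss = F.cross_entropy(behavior(s), a)
    opt_b.zero_grad(); bc_loss.backward(); opt_b.step()

    # critic TD regression toward r + gamma * V(s')
    with torch.no_grad():
        target = r + GAMMA * soft_value(s2)
    td = sum(F.mse_loss(c(s).gather(1, a[:, None]).squeeze(1), target) for c in critics)
    opt_c.zero_grad(); td.backward(); opt_c.step()

    # actor ascent on the KL-regularized soft value
    actor_loss = -soft_value(s).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    if j % I_DECAY == 0 and j > 0:
        tau *= LAM                     # continuation: relax the constraint
```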
Theorem 2. Repeated application of soft policy evaluation and soft policy improvement converges to a policy π*_τ such that Q̃^{π*_τ,τ}(s, a) ≥ Q̃^{π,τ}(s, a) for any π ∈ Π and (s, a) ∈ S × A.

The soft policy iteration finds a policy π*_τ with the optimal soft Q value for each state-action pair and hence attains the optimal value of Ṽ^{π,τ}(ρ). Here we propose to use the soft policy iteration to solve the objectives Ṽ^{π,τ}(ρ) with decreasing values of τ and move back to the objective V^π(ρ) as τ = 0. The method is guaranteed to asymptotically converge to the optimal policy π* for the objective V^π(ρ).

Theorem 3. Let π*_τ(a|s) be the optimal policy from soft policy iteration with fixed temperature τ. We have π*_τ(a|s) ∝ exp(Q̃^{π*_τ,τ}(s, a)/τ) β(a|s). As τ → 0, π*_τ(a|s) will take the optimal action a* with the optimal Q value for state s." }, { "heading": "3.3 ERROR IN VALUE ESTIMATE", "text": "In the previous section, we show that the soft policy iteration with the continuation method provably converges to the global optimal policy maximizing the expected return. However, in batch RL with a fixed dataset and limited samples, we cannot perform the soft policy iteration with KL regularization in its exact form. Specifically, in the policy evaluation step, when the learned policy π deviates from the behavior policy β, and chooses state-action pairs (s, a) rarely visited by β, the estimation of the target r(s, a) + γ E_{s'~P(·|s,a)}[V(s')] can be very noisy. The error in the value estimate Q(s, a) will be further propagated to other state-action pairs through the Bellman update. Finally, inaccurate value estimation will cause errors in the policy improvement step, resulting in a worse policy. On the other hand, if we constrain the learned policy π to be very close to the behavior policy β, we can expect the policy evaluation to be reliable and safely update the learned policy. The tight constraint however prevents π from being much better than β due to the conservative update.

On the grid world, we study this problem of value estimation with different values of τ. Figure 2 visualizes the propagation of Q value estimation errors and the learned policies. We assume a mediocre behavior policy tending to move left and down. For the rarely visited states in the upper right part of the grid, there are errors in the value estimation of Q̃^{π,τ}(s, a), i.e. |Q(s, a) − Q̃^{π,τ}(s, a)| > 0, where Q(s, a) is the Q value we learn during training and Q̃^{π,τ}(s, a) is the ground-truth soft Q value. Because the bad initial policy (Fig. 1a) tends to move towards the right part, without a strong KL regularization, the policy evaluation can be problematic due to the errors of value estimation in the right part of the grid world. In Fig. 2, with a small KL regularization weight τ = 0.001, the first row shows that errors even propagate to the states frequently visited by the behavior policy. On the other hand, when we set a large value of τ = 1 (second row), the error |Q(s, a) − Q̃^{π,τ}(s, a)| is smaller. Yet the performance of the learned policy is not much better than the behavior policy. Our continuation method gradually moves the policy update between these two spectra. The value estimation benefits from the gradually relaxed KL regularization and the errors remain small. The last column of Fig. 2 visualizes the learned policies of these methods. With constant τ = 0.001, the wrong value estimates in some states mislead the agent. It fails to visit any terminal state and gets stuck at the state in dark orange.
With constant τ = 1, the tight constraint of KL divergence keeps the learned policy close to the behavior policy, mostly visiting the bottom-left part of the environment. With the continuation method, the agent learns to always take the optimal path moving left directly and obtains the highest expected return. More details of this example are provided in the appendix.

In the toy example, gradually relaxing the KL regularization towards zero alleviates the propagation of errors in the soft Q estimate and helps the agent converge to the optimal policy. In more complicated domains, we find that as τ decays close to 0, the policy evaluation is still erroneous. To mitigate this issue, we introduce an ensemble of critic networks {Q_φ^{(1)}, Q_φ^{(2)}, · · · , Q_φ^{(K)}} to approximate the soft Q value, and monitor the variance of the value estimates across the critic networks to measure the uncertainty. Given a batch of data samples {s_i}_{i=1}^B ⊂ D, var(Q^π) = (1/B) Σ_{i=1}^B E_{a~π(·|s_i)}[var(Q_φ^{(1)}(s_i, a), Q_φ^{(2)}(s_i, a), · · · , Q_φ^{(K)}(s_i, a))] indicates whether the current policy π tends to take actions with highly noisy value estimation.

Our method is summarized in Algorithm 1. Instead of running the soft policy evaluation and policy improvement until convergence, we alternate between optimizing the critic network and the actor network with stochastic gradient descent. We set τ to a large value initially and let the KL divergence term dominate the objective, thus performing behavior cloning. We record a moving average of the Q value estimation variance var(Q^{π,τ₀}) over 1000 updates at the end of this phase. After that, we decay the temperature gradually with λ = 0.9 every I steps. When the moving average of the Q value estimation variance var(Q^{π,τ}) is large compared with the initial value var(Q^{π,τ₀}) (i.e. at the end of behavior cloning), we no longer trust the value estimate under the current temperature τ and take the policy checkpointed before the temperature decayed to this τ as our solution." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 MUJOCO", "text": "We evaluate our method against several baselines on continuous control tasks. We train a Proximal Policy Optimization agent [28] with entropy regularization for 1000 million steps in the environments. We parameterize the policy using Gaussian policies, where the mean is a linear function of the agent's state, θᵀs, and the variance is an identity matrix, to keep the policy simple as introduced in [3]. To generate training datasets D of varying quality, we construct the behavior policy by mixing a well-trained policy N(θ_optᵀ s, 0.5I), i.e. the checkpoint with the highest score during training, and a poor policy N(θ₀ᵀ s, 0.5I), i.e. the checkpoint at the beginning of training, with the weight α. The behavior policy β(·|s) is then N(((1−α)θ_opt + αθ₀)ᵀ s, 0.5I). We generate trajectories and store a total of one million data samples from the mixed behavior for different values of the coefficient α.

The architecture of the target policy is the same as that of the behavior policy. We consider six baseline approaches: BCQ [14], BEAR [21], ABM+SVG [29], CRR [33], CQL [22], BRAC [34]. For a fair comparison, the architectures of the ensemble critic network and the policy network are the same in the baselines and our method, except BCQ which has no policy network. To evaluate and compare the methods, we run the learned policy in the environments for 100 episodes and report the average episode reward in Fig. 3.
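A minimal sketch of the ensemble-variance monitor just described, which also underlies the checkpoint-selection rule used for our reported scores; the tensor shapes and the threshold ratio are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# var(Q^pi): average over a batch of states of the across-critic variance
# of Q(s, a) under the current policy (discrete-action case).

def q_variance(states, actor, critics):
    pi = F.softmax(actor(states), dim=-1)                 # [B, A]
    q_all = torch.stack([c(states) for c in critics])     # [K, B, A]
    var_k = q_all.var(dim=0, unbiased=False)              # variance over critics
    return (pi * var_k).sum(-1).mean()                    # E_{a~pi}[var_k], batch mean

# stopping rule: distrust the value estimate once the moving average of
# q_variance grows well beyond its level at the end of behavior cloning;
# the 10x ratio is an illustrative placeholder, not the paper's value
def keep_decaying(var_now, var_bc, ratio=10.0):
    return var_now <= ratio * var_bc
```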
As for the continuation method, we report the score of the policy checkpointed last with reasonable value estimation variance, as explained in Section 3.3. For the baselines, we report the score of the final policy when we terminate the training at 1.5M updates.

Tab. 1 shows that our method outperforms all the baselines in 5 settings. On the datasets with relatively reasonable quality (i.e. α = 0.2, 0.4, 0.6), ours performs comparably to or better than the baselines. With α = 0.2, i.e., a close-to-optimal behavior policy, all the methods perform similarly and one can achieve a good return by simply cloning the behavior policy. With α = 0.8, i.e., a low-quality behavior policy, there are few good trajectories in the dataset for any method to learn from. The advantage of our method is most obvious when α = 0.6 (Fig. 3), as the dataset contains trajectories of both high and low cumulative rewards. Our method can learn from the relatively large number of good trajectories and at the same time deviate from the behavior policy to avoid those bad trajectories and achieve higher rewards. In Fig. 3, ‘Constant’ is the method of optimizing the KL-regularized expected reward with a constant value of τ. We search over several values of τ and report the best result. We can see that gradually relaxing the constraint performs better than a fixed constraint. In Fig. 3 (left), as τ decays close to 0, the learned policy can degrade due to errors in the Q estimation; the stopping condition explained in Section 3.3, however, is able to identify a good policy before the degenerating point. More experimental details are in the Appendix." }, { "heading": "4.2 ATARI", "text": "We further study our method on several Atari games from the Arcade Learning Environment (ALE) [4]. The rich observation space requires more complicated policies and makes policy optimization even more challenging. We focus on eight games and generate the datasets as discussed in Fujimoto et al. [13]. We use a mediocre DQN agent, trained online for 10 million timesteps (40 million frames). The performance of the DQN agent is shown as ‘Online DQN’ in Fig. 4. We add exploratory noise to the DQN agent (at 10 million timesteps) to gather a new set of 10 million transitions, similar to [13]. The line ‘Behavior’ in Fig. 4 shows the average trajectory reward in the dataset D. The dataset D is used to train each offline RL agent. We compare with BCQ [13], REM [2] and CQL [22] because they are recently proposed offline RL algorithms that work well on the Atari domain. For evaluation, we run 10 episodes on the Atari games with the learned policies and record the average episode reward (Fig. 4). Tab. 2 summarizes the performance of BCQ, REM and CQL after 6M updates. For our method, we report the score before the variance of the Q estimate becomes too high.

Our approach achieves higher scores than the baselines on 7 out of 8 games, and performs comparably on the other one. Agarwal et al. [2] report that REM performs well on a dataset consisting of the entire replay experience collected during the online training of a DQN agent for 50M timesteps (200M frames). We hypothesize that learning on the entire replay experience makes the setup easier, as the training dataset contains more exploratory and higher quality trajectories. With a dataset of much smaller size and worse quality, REM performs poorly in this single behavior policy setting. We use the same architecture of the critic network for both our method and BCQ, with an ensemble of 4 Q networks.
As mentioned in [13], BCQ only matches the performance of the online DQN on most games. In contrast, ours is able to outperform the online DQN significantly on several games. As presented in [22], on the Atari datasets, CQL performs better than REM, while our method outperforms CQL on 7 out of 8 datasets." }, { "heading": "4.3 RECOMMENDER", "text": "We also showcase our proposed method for building a softmax recommender agent. We use a publicly available dataset, MovieLens-1M, a popular benchmark for recommender systems. There are 1 million ratings of 3,900 movies (with title and genre features) from 6,040 users (with demographic features). The problem of recommending movies for each user can be converted to a contextual bandit problem, where we aim to learn a target policy π_θ(a|s) selecting the proper action (movie) a for each state (user) s to get a high reward (rating) r in a single step. The 5-point ratings are converted to binary rewards using a cutoff of 4. To evaluate whether a learned target policy works well, ideally we should run the learned policy in real recommendation environments. However, such environments for online testing are rarely publicly available. Thus, we use an online simulation method. We train a simulator to predict the immediate binary feedback from user and movie features, and the well-trained simulator can serve as a proxy for the real online environments, because it outputs the feedback for any user-movie pair. Similar to [25], we train the simulator with all records of logged feedback in the MovieLens-1M dataset. The behavior policy β is trained with partial data in MovieLens-1M. We then construct bandit datasets D = {(s_i, a_i, r_i, β(a_i|s_i))}_{i=1}^N of different size and quality, by using different behavior policies to select movies a_i for users s_i and getting the binary feedback r_i from the well-trained simulator. We train offline RL agents on the generated dataset D and use the simulator to evaluate the learned policies on a held-out test set of users.

We compare our method with two baselines, as they are commonly used in current industrial recommender systems [10, 7]. (1) Cross-Entropy: a supervised learning method for the softmax recommender where the learning objective is the cross-entropy loss J_CE(θ) = −(1/N) Σ_{i=1}^N r_i log π_θ(a_i|s_i). (2) IPS: the off-policy policy gradient method introduced in [7] with the learning objective J_IPS(θ) = −(1/N) Σ_{i=1}^N [sg(π_θ(a_i|s_i)) / β(a_i|s_i)] r_i log π_θ(a_i|s_i), where sg indicates a stop-gradient operation. J_IPS(θ) produces the same gradient as that of the function −(1/N) Σ_{i=1}^N r_i π_θ(a_i|s_i) / β(a_i|s_i). Thus, minimizing the loss J_IPS(θ) amounts to maximizing the expected return with importance sampling. (3) Ours: in the bandit setting, we simply perform IPS with gradually decaying KL regularization, since estimating the soft Q value from the Bellman update is not needed. Tab. 3 clearly demonstrates the advantage of our proposed method over the baselines. IPS can be viewed as vanilla policy gradient with importance sampling to correct the distribution shift. Our method clearly outperforms it across datasets collected using different behavior policies.
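A minimal sketch of the IPS objective with the stop-gradient importance weight, plus the decaying KL-to-behavior penalty used by our bandit variant; the names and tensor shapes are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

# IPS loss with sg(pi)/beta importance weight and an optional KL penalty,
# for a softmax recommender over n_items; tau is annealed toward 0.

def ips_kl_loss(logits, beh_logits, actions, rewards, tau):
    logp = F.log_softmax(logits, dim=-1)                     # target policy log-probs
    logp_a = logp.gather(1, actions[:, None]).squeeze(1)     # log pi(a_i | s_i)
    beh_logp = F.log_softmax(beh_logits, dim=-1)
    beh_p_a = beh_logp.gather(1, actions[:, None]).squeeze(1).exp()  # beta(a_i | s_i)
    w = (logp_a.exp() / beh_p_a).detach()                    # sg(pi)/beta weight
    ips = -(w * rewards * logp_a).mean()                     # J_IPS
    kl = (logp.exp() * (logp - beh_logp)).sum(-1).mean()     # KL(pi || beta)
    return ips + tau * kl
```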
" }, { "heading": "5 CONCLUSION", "text": "We propose a simple yet effective approach, a soft policy iteration algorithm through the continuation method, to alleviate two challenges in policy optimization under batch reinforcement learning: (1) a highly non-smooth objective function that is difficult to optimize; (2) high variance in value estimates. We provide theoretical grounding and visualization tools to help understand this technique. We demonstrate its efficacy on multiple complex tasks." } ]
2,021
BATCH REINFORCEMENT LEARNING THROUGH CONTINUATION METHOD
SP:d5a996a81845a53ae405b4aac0a9f5342129d43c
[ "The authors propose a method to mitigate the bias towards either texture or shape, in convolutional network training. The method follows the idea from Geirhos et al (2019), but use images randomly sampled from the same dataset, instead of style transfer from paintings. Then, depending on a manually selected hyperparameter, the weights of conflicting labels are blended by weighted average of the one-hot encoding." ]
Shape and texture are two prominent and complementary cues for recognizing objects. Nonetheless, Convolutional Neural Networks are often biased towards either texture or shape, depending on the training dataset. Our ablation shows that such bias degrades model performance. Motivated by this observation, we develop a simple algorithm for shape-texture debiased learning. To prevent models from exclusively attending to a single cue in representation learning, we augment training data with images with conflicting shape and texture information (e.g., an image of chimpanzee shape but with lemon texture) and, most importantly, provide the corresponding supervisions from shape and texture simultaneously. Experiments show that our method successfully improves model performance on several image recognition benchmarks and adversarial robustness. For example, by training on ImageNet, it helps ResNet-152 achieve substantial improvements on ImageNet (+1.2%), ImageNet-A (+5.2%), ImageNet-C (+8.3%) and Stylized-ImageNet (+11.1%), and in defending against the FGSM adversarial attacker on ImageNet (+14.4%). Our method is also compatible with other advanced data augmentation strategies, e.g., Mixup and CutMix. The code is available here: https://github.com/LiYingwei/ShapeTextureDebiasedTraining.
[ { "affiliations": [], "name": "Yingwei Li" }, { "affiliations": [], "name": "Qihang Yu" }, { "affiliations": [], "name": "Mingxing Tan" }, { "affiliations": [], "name": "Jieru Mei" }, { "affiliations": [], "name": "Peng Tang" }, { "affiliations": [], "name": "Wei Shen" }, { "affiliations": [], "name": "Alan Yuille" }, { "affiliations": [], "name": "Cihang Xie" } ]
[ { "authors": [ "Serge Belongie", "Jitendra Malik", "Jan Puzicha" ], "title": "Shape matching and object recognition using shape contexts", "venue": null, "year": 2002 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L Yuille" ], "title": "Semantic image segmentation with deep convolutional nets and fully connected CRFs", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Florian Schroff", "Hartwig Adam" ], "title": "Rethinking atrous convolution for semantic image segmentation", "venue": "arXiv preprint arXiv:1706.05587,", "year": 2017 }, { "authors": [ "Tian Qi Chen", "Mark Schmidt" ], "title": "Fast patch-based style transfer of arbitrary style", "venue": "NeurIPS Workshop,", "year": 2016 }, { "authors": [ "Xiangning Chen", "Cihang Xie", "Mingxing Tan", "Li Zhang", "Cho-Jui Hsieh", "Boqing Gong" ], "title": "Robust and accurate object detection via adversarial learning", "venue": null, "year": 2021 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V Le" ], "title": "Autoaugment: Learning augmentation strategies from data", "venue": null, "year": 2019 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V Le" ], "title": "Randaugment: Practical automated data augmentation with a reduced search space", "venue": "In CVPR Workshops,", "year": 2020 }, { "authors": [ "Alexei A Efros", "William T Freeman" ], "title": "Image quilting for texture synthesis and transfer", "venue": "In Proceedings of the 28th annual conference on Computer graphics and interactive techniques,", "year": 2001 }, { "authors": [ "Alexei A Efros", "Thomas K Leung" ], "title": "Texture synthesis by non-parametric sampling", "venue": "In ICCV,", "year": 1999 }, { "authors": [ "Michael Elad", "Peyman Milanfar" ], "title": "Style transfer via texture synthesis", "venue": null, "year": 2017 }, { "authors": [ "M. Everingham", "L. Van Gool", "C.K.I. Williams", "J. Winn", "A. Zisserman" ], "title": "The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results", "venue": null, "year": 2012 }, { "authors": [ "Leon A Gatys", "Alexander S Ecker", "Matthias Bethge" ], "title": "Image style transfer using convolutional neural networks", "venue": null, "year": 2016 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A. 
Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Robert Geirhos", "Jörn-Henrik Jacobsen", "Claudio Michaelis", "Richard Zemel", "Wieland Brendel", "Matthias Bethge", "Felix A Wichmann" ], "title": "Shortcut learning in deep neural networks", "venue": "arXiv preprint arXiv:2004.07780,", "year": 2020 }, { "authors": [ "Golnaz Ghiasi", "Honglak Lee", "Manjunath Kudlur", "Vincent Dumoulin", "Jonathon Shlens" ], "title": "Exploring the structure of a real-time, arbitrary neural artistic stylization", "venue": null, "year": 2017 }, { "authors": [ "Ross Girshick" ], "title": "Fast R-CNN", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Ross Girshick", "Jeff Donahue", "Trevor Darrell", "Jitendra Malik" ], "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Bharath Hariharan", "Pablo Arbelaez", "Lubomir Bourdev", "Subhransu Maji", "Jitendra Malik" ], "title": "Semantic contours from inverse detectors", "venue": "In ICCV,", "year": 2011 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kevin Zhao", "Steven Basart", "Jacob Steinhardt", "Dawn Song" ], "title": "Natural adversarial examples", "venue": "arXiv preprint arXiv:1907.07174,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Steven Basart", "Norman Mu", "Saurav Kadavath", "Frank Wang", "Evan Dorundo", "Rahul Desai", "Tyler Zhu", "Samyak Parajuli", "Mike Guo" ], "title": "The many faces of robustness: A critical analysis of out-of-distribution generalization", "venue": "arXiv preprint arXiv:2006.16241,", "year": 2020 }, { "authors": [ "Xun Huang", "Serge Belongie" ], "title": "Arbitrary style transfer in real-time with adaptive instance normalization", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoff Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NeurIPS,", "year": 2012 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Boyi Li", "Felix Wu", "Ser-Nam Lim", "Serge Belongie", "Kilian Q Weinberger" ], "title": "On feature normalization and data augmentation", "venue": "In CVPR,", "year": 2021 }, { "authors": [ "Yijun Li", "Chen Fang", "Jimei Yang", "Zhaowen Wang", "Xin Lu", "Ming-Hsuan Yang" ], "title": "Universal style transfer via feature transforms", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Haibin Ling", "David W Jacobs" ], "title": "Shape classification using the inner-distance", "venue": null, "year": 2007 }, { "authors": [ "Jonathan Long", "Evan Shelhamer", "Trevor Darrell" ], "title": "Fully convolutional networks for semantic segmentation", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Jitendra Malik", "Serge J. 
Belongie", "Thomas K. Leung", "Jianbo Shi" ], "title": "Contour and texture analysis for image segmentation", "venue": null, "year": 2001 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster R-CNN: Towards real-time object detection with region proposal networks", "venue": "NeurIPS,", "year": 2015 }, { "authors": [ "Lavanya Sharan", "Ruth Rosenholtz", "Edward H. Adelson" ], "title": "Accuracy and speed of material categorization in real-world images", "venue": "Journal of Vision,", "year": 2014 }, { "authors": [ "Baifeng Shi", "Dinghuai Zhang", "Qi Dai", "Zhanxing Zhu", "Yadong Mu", "Jingdong Wang" ], "title": "Informative dropout for robust representation learning: A shape-bias perspective", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Jamie Shotton", "John Winn", "Carsten Rother", "Antonio Criminisi" ], "title": "Textonboost for image understanding: Multi-class object recognition and segmentation by jointly modeling texture", "venue": "layout, and context. IJCV,", "year": 2009 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Christopher Beckham", "Amir Najafi", "Ioannis Mitliagkas", "David LopezPaz", "Yoshua Bengio" ], "title": "Manifold mixup: Better representations by interpolating hidden states", "venue": null, "year": 2019 }, { "authors": [ "Haohan Wang", "Songwei Ge", "Zachary Lipton", "Eric P Xing" ], "title": "Learning robust global representations by penalizing local predictive power", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Cihang Xie", "Alan Yuille" ], "title": "Intriguing properties of adversarial training at scale", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Cihang Xie", "Mingxing Tan", "Boqing Gong", "Jiang Wang", "Alan Yuille", "Quoc V Le" ], "title": "Adversarial examples improve image recognition", "venue": null, "year": 2020 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": null, "year": 2017 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": null, "year": 2019 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N. Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Songfeng Zheng", "Zhuowen Tu", "Alan L. Yuille" ], "title": "Detecting object boundaries using low-, mid-, and high-level information", "venue": "In CVPR,", "year": 2007 }, { "authors": [ "Zhun Zhong", "Liang Zheng", "Guoliang Kang", "Shaozi Li", "Yi Yang" ], "title": "Random erasing data augmentation", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Learning deep features for discriminative localization", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "It is known that both shape and texture serve as essential cues for object recognition. A decade ago, computer vision researchers had explicitly designed a variety of hand-crafted features, either based on shape (e.g., shape context (Belongie et al., 2002) and inner distance shape context (Ling & Jacobs, 2007)) or texture (e.g., textons (Malik et al., 2001)), for object recognition. Moreover, researchers found that properly combining shape and texture can further recognition performance (Shotton et al., 2009; Zheng et al., 2007), demonstrating the superiority of possessing both features.\nNowadays, as popularized by Convolutional Neural Networks (CNNs) (Krizhevsky et al., 2012), the features used for object recognition are automatically learned, rather than manually designed. This change not only eases human efforts on feature engineering, but also yields much better performance on a wide range of visual benchmarks (Simonyan & Zisserman, 2015; He et al., 2016; Girshick et al., 2014; Girshick, 2015; Ren et al., 2015; Long et al., 2015; Chen et al., 2015). But interestingly, as pointed by Geirhos et al. (2019), the features learned by CNNs tend to bias toward either shape or texture, depending on the training dataset.\nWe verify that such biased representation learning (towards either shape or texture) weakens CNNs’ performance.1 Nonetheless, surprisingly, we also find (1) the model with shape-biased representations and the model with texture-biased representations are highly complementary to each other, e.g., they focus on completely different cues for predictions (an example is provided in Figure 1); and (2) being biased towards either cue may inevitably limit model performance, e.g., models may not be able to tell the difference between a lemon and an orange without texture information. These observations altogether deliver a promising message—biased models (e.g., ImageNet trained (texturebiased) CNNs (Geirhos et al., 2019) or (shape-biased) CNNs (Shi et al., 2020)) are improvable.\n1Biased models are acquired similar to Geirhos et al. (2019), see Section 2 for details.\nTo this end, we hereby develop a shape-texture debiased neural network training framework to guide CNNs for learning better representations. Our method is a data-driven approach, which let CNNs automatically figure out how to avoid being biased towards either shape or texture from their training samples. Specifically, we apply style transfer to generate cue conflict images, which breaks the correlation between shape and texture, for augmenting the original training data. The most important recipe of training a successful shape-texture debiased model is that we need to provide supervision from both shape and texture on these generated cue conflict images, otherwise models will remain being biased.\nExperiments show that our proposed shape-texture debiased neural network training significantly improves recognition models. For example, on the challenging ImageNet dataset (Russakovsky et al., 2015), our method helps ResNet-152 gain an absolute improvement of 1.2%, achieving 79.8% top-1 accuracy. Additionally, compared to its vanilla counterpart, this debiased ResNet-152 shows better generalization on ImageNet-A (Hendrycks et al., 2019) (+5.2%), ImageNet-C (Hendrycks & Dietterich, 2019) (+8.3%) and Stylized ImageNet (Geirhos et al., 2019) (+11.1%), and stronger robustness on defending against FGSM adversarial attacker on ImageNet (+14.4%). 
Our shape-texture debiased neural network training is orthogonal to other advanced data augmentation strategies, e.g., it further boosts CutMix-ResNeXt-101 (Yun et al., 2019) by 0.7% on ImageNet, achieving 81.2% top-1 accuracy." }, { "heading": "2 SHAPE/TEXTURE BIASED NEURAL NETWORKS", "text": "The biased feature representation of CNNs mainly stems from the training dataset, e.g., Geirhos et al. (2019) point out that models will be biased towards shape if trained on the Stylized-ImageNet dataset. Following Geirhos et al. (2019), we hereby present a similar training pipeline to acquire shape-biased models or texture-biased models. By evaluating these two kinds of models, we observe the necessity of possessing both shape and texture representations for CNNs to better recognize objects." }, { "heading": "2.1 MODEL ACQUISITION", "text": "Data generation. Similar to Geirhos et al. (2019), we apply images with conflicting shape and texture information as training samples to obtain shape-biased or texture-biased models. But different from Geirhos et al. (2019), an important change in our cue conflict image generation procedure is that we override the original texture information with the informative texture patterns from another randomly selected image, rather than with the uninformative style of randomly selected artistic paintings. That being said, to create a new training sample, we need to first select a pair of images from the training set uniformly at random, and then apply style transfer to blend their shape and texture information. Such a generated example is shown in Figure 2, i.e., the image of chimpanzee shape but with lemon texture.

Figure 2: Illustration of our training pipeline for acquiring (a) a shape-biased model, (b) a texture-biased model, and (c) a shape-texture debiased model. Specifically, these models share the same training samples, i.e. images with conflicting texture and shape information, generated by style transfer between two randomly selected images; but apply distinct labelling strategies: in (a) & (b), labels are determined by the image that provides the shape (or texture) information in style transfer, for guiding models to learn more shape (or texture) representations; in (c), labels are jointly determined by the pair of images in style transfer, for avoiding bias in representation learning.

Label assignment. The way of assigning labels to cue conflict images controls the bias of learned models. Without loss of generality, we show the case of learning a texture-biased model. To guide the model to attend more to texture, the labels assigned to the cue conflict images here will be exclusively based on the texture information, e.g., the image of chimpanzee shape but with lemon texture will be labelled as lemon, as shown in Figure 2(b). In this way, the texture information is highly related to the “ground-truth” while the shape information only serves as a nuisance factor during learning. 
Similarly, to learn a shape-biased model, the label assignment of cue conflict images will be based on shape only, e.g., the image of chimpanzee shape but with lemon texture now will be labelled as chimpanzee, as shown in Figure 2(a)." }, { "heading": "2.2 EVALUATION AND OBSERVATION", "text": "To reduce the computational overhead in this ablation, all models are trained and evaluated on ImageNet-200, which is a 200-class subset of the original ImageNet, including 100,000 images (500 images per class) for training and 10,000 images (50 images per class) for validation. Akin to Geirhos et al. (2019), we observe that the models with biased feature representations tend to have lower accuracy than their vanilla counterparts. For example, our shape-biased ResNet-18 only achieves 73.9% top-5 ImageNet-200 accuracy, which is much lower than the vanilla ResNet-18 with 88.2% top-5 ImageNet-200 accuracy.

Though biased representations weaken the overall classification accuracy, surprisingly, we find they are highly complementary to each other. We first visualize the attended image regions of biased models, via Class Activation Mapping (Zhou et al., 2016), in Figure 3. As we can see there, the shape-biased model and the texture-biased model concentrate on different cues for predictions. For instance, on the leftmost tabby cat image, the shape-biased model mainly focuses on the cat head, while the texture-biased model mainly focuses on the lower body and the front legs of the cat. Such attention mechanisms are correlated with their learned representations: the shape-biased model extracts the shape of the cat head as an important signal for predictions, while the texture-biased model relies on the texture information of cat fur for predictions.

As distinct cues are picked by shape-biased/texture-biased models, a more concrete observation is that they are good or bad at classifying quite different object categories. As shown in Figure 4, the shape-biased model is good at recognizing objects with a representative shape structure like obelisk, but is bad at recognizing objects whose shape is uninformative or almost indistinguishable from others like fur coat. Similarly, the texture-biased model can effectively recognize objects with unique texture patterns like brain coral but may fail to recognize objects with unpredictable texture like trolleybus (as its side body can be painted with different advertisements). Besides, biased models may inevitably perform poorly on certain categories as insufficient cues are applied. For example, it is challenging to distinguish between a lemon and an orange if texture information cannot be utilized, or to distinguish between a lion and a tabby cat without shape information.

Given the analysis above, we can conclude that biased representations limit models' recognition ability. But meanwhile, our ablation delivers a promising message: the features learned by biased models are highly complementary to each other.
This observation indicates that the current training framework is improvable (as the resulting models are biased towards texture (Geirhos et al., 2019) or shape (Shi et al., 2020)), and offers a potential direction for building a stronger one: we should train models to properly acquire both shape and texture feature representations. We will introduce a simple method for doing so next." }, { "heading": "3 SHAPE-TEXTURE DEBIASED NEURAL NETWORK TRAINING", "text": "Recall that when obtaining a biased model, the strategy of label assignment is pivotal: when the labels are exclusively determined by the images that provide shape (or texture) information in style transfer, we will obtain a shape-biased (or texture-biased) model. Therefore, to guide models toward leveraging both shape and texture for predictions, we propose a simple way, inspired by Mixup (Zhang et al., 2018), to softly construct labels during training. In other words, given the one-hot label of the shape-source image ys and the one-hot label of the texture-source image yt, the new label that we assign to the cue conflict image is

ỹ = γ · ys + (1 − γ) · yt,    (1)

where γ ∈ [0, 1] is a manually selected hyperparameter to control the relative importance between shape and texture. By ranging the shape-texture coefficient γ from 0 to 1, we obtain a path to evolve the model from being a texture-biased one (i.e., γ = 0) to being a shape-biased one (i.e., γ = 1). Although the two extreme ends lead to biased models with inferior performance, we empirically show that there exists a sweet spot along this interpolation path, i.e., the learned models can properly acquire both shape and texture feature representations and achieve superior performance on a wide range of image recognition benchmarks.

We name this simple method shape-texture debiased neural network training, and illustrate the training pipeline in Figure 2(c). It is worth mentioning that, although Figure 2 only shows the procedure of applying our method to the image classification task, this training framework is general and has the potential to be extended to other computer vision tasks, e.g., a simple showcase on semantic segmentation is presented in Section 4.4.
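To make the label construction in Eq. (1) concrete, the following is a minimal sketch (not the authors' released code) of how the soft label and the corresponding training loss on a cue conflict image could be formed; the default γ = 0.8 is taken from the paper, and everything else is illustrative.

```python
import torch
import torch.nn.functional as F

def debiased_labels(y_shape, y_texture, gamma=0.8):
    """Eq. (1): blend the one-hot labels of the shape source and the texture source."""
    return gamma * y_shape + (1.0 - gamma) * y_texture

def debiased_loss(logits, y_shape, y_texture, gamma=0.8):
    # Cross-entropy against the soft label decomposes into a convex combination
    # of the two hard-label losses, mirroring Mixup-style training.
    log_p = F.log_softmax(logits, dim=1)
    loss_shape = -(y_shape * log_p).sum(dim=1)
    loss_texture = -(y_texture * log_p).sum(dim=1)
    return (gamma * loss_shape + (1.0 - gamma) * loss_texture).mean()
```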
" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EXPERIMENTS SETUP", "text": "Datasets. We evaluate models on ImageNet classification and PASCAL VOC semantic segmentation. The ImageNet dataset (Russakovsky et al., 2015) consists of 1.2 million images for training, and 50,000 for validation, from 1,000 classes. The PASCAL VOC 2012 segmentation dataset (Everingham et al., 2012) with extra annotated images from (Hariharan et al., 2011) involves 20 foreground object classes and one background class, including 10,582 training images and 1,449 validation images.

Going beyond the standard benchmarks, we further evaluate models' generalization on ImageNet-A, ImageNet-C and Stylized-ImageNet, and their robustness by defending against the FGSM adversarial attacker on ImageNet. ImageNet-C (Hendrycks & Dietterich, 2019) is a benchmark dataset that measures models' corruption robustness. It is constructed by applying 75 common visual corruptions to the ImageNet validation set. ImageNet-A (Hendrycks et al., 2019) includes 7,500 natural adversarial examples that successfully attack unseen classifiers. These examples are much harder than original ImageNet validation images due to scene complications encountered in the long tail of scene configurations and by exploiting classifier blind spots (Hendrycks et al., 2019). Stylized-ImageNet (Geirhos et al., 2019) is a stylized version of ImageNet constructed by re-rendering the original images with the AdaIN stylizer (Huang & Belongie, 2017). The generated images keep the original global shape information but remove the local texture information. FGSM (Goodfellow et al., 2015) is a widely used adversarial attacker to evaluate model robustness. We set the maximum per-pixel perturbation to 16/255 for FGSM.

Implementation details. We choose ResNet (He et al., 2016) as the default architecture. For image classification tasks, our implementation is based on the publicly available framework in PyTorch (https://github.com/bearpaw/pytorch-classification). To generate cue conflict images, we follow Geirhos et al. (2019) to use Adaptive Instance Normalization (Huang & Belongie, 2017) in style transfer, and set the stylization coefficient α = 0.5. Importantly, to increase the diversity of training samples, we generate these cue conflict images on-the-fly during training. We choose the shape-texture coefficient γ = 0.8 when assigning labels.

When training shape-biased, texture-biased and our shape-texture debiased models, we always apply the auxiliary batch normalization (BN) design (Xie et al., 2020; Xie & Yuille, 2020; Chen et al., 2021) to bridge the domain gap between the original data and the augmented data, i.e., the main BN exclusively runs on original ImageNet images and the auxiliary BN exclusively runs on cue conflict images. We follow Xie et al. (2020) to always apply the main BN for performance evaluation. Besides, since our biased models and debiased models are all trained with both the original data and the augmented data (i.e., 2× data are used in training), we also consider a stronger baseline (i.e., 2× epochs training) which doubles the schedule of the vanilla training baseline, for the purpose of matching the total training cost.
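To illustrate how the on-the-fly cue conflict generation and the auxiliary BN design could fit into a single training step, here is an illustrative sketch; the stylize call (an AdaIN-based style transfer) and the bn_mode switch on the model are assumed interfaces rather than the authors' actual code, and debiased_loss refers to the earlier label-mixing sketch.

```python
import torch

def train_step(model, optimizer, images, labels_onehot, stylize,
               gamma=0.8, alpha=0.5):
    # Pair each image with another random image in the batch, then break the
    # shape/texture correlation via style transfer to get cue conflict data.
    perm = torch.randperm(images.size(0))
    cue_conflict = stylize(content=images, style=images[perm], alpha=alpha)

    optimizer.zero_grad()
    logits_clean = model(images, bn_mode="main")      # main BN: original data
    logits_aug = model(cue_conflict, bn_mode="aux")   # auxiliary BN: augmented data

    # Standard loss on clean images plus the soft-label loss of Eq. (1).
    loss = debiased_loss(logits_clean, labels_onehot, labels_onehot, gamma=1.0) \
         + debiased_loss(logits_aug, labels_onehot, labels_onehot[perm], gamma=gamma)
    loss.backward()
    optimizer.step()
    return loss.item()
```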
" }, { "heading": "4.2 RESULTS", "text": "Model accuracy. Table 1 shows the results on ImageNet. For all ResNet models, the proposed shape-texture debiased neural network training consistently outperforms the vanilla training baseline. For example, it helps ResNet-50 achieve 76.9% top-1 accuracy, beating its vanilla counterpart by 0.5%. Our method works better for larger models, e.g., it further improves the vanilla ResNet-152 by 1.2%, achieving 79.8% top-1 accuracy.

We then compare our shape-texture debiased training to the 2× epochs training baseline. We find that simply doubling the schedule of the vanilla training baseline cannot effectively lead to improvements like ours. For example, compared to the vanilla ResNet-101, this 2× epochs training fails to provide additional improvements, while ours improves the top-1 accuracy by 1.0%. This result suggests that it is non-trivial to improve performance even when a larger computational budget is given.

Lastly, we compare ours to the biased training methods. Though the only difference between our method and the biased training methods is the strategy of label assignment (as shown in Figure 2), it critically affects model performance. For example, compared to the vanilla baseline, both the shape-biased training and the texture-biased training fail to improve (and sometimes even slightly hurt) the model accuracy, while our shape-texture debiased neural network training successfully leads to consistent and substantial accuracy improvements.

Model robustness. Next, we evaluate models' generalization on ImageNet-A, ImageNet-C and Stylized-ImageNet, and their robustness in defending against FGSM on ImageNet. We note that these tasks are much more challenging than the original ImageNet classification, e.g., the ImageNet-trained ResNet-50 only achieves 2.0% accuracy on ImageNet-A, 75.0% mCE on ImageNet-C, 7.4% accuracy on Stylized-ImageNet, and 17.1% accuracy in defending against the FGSM adversarial attacker. As shown in Table 2, our shape-texture debiased neural network training beats the vanilla training baseline by a large margin on all tasks for all ResNet models. For example, it substantially boosts ResNet-152's performance on ImageNet-A (+5.2%, from 7.4% to 12.6%), ImageNet-C (−8.3%, from 67.2% to 58.9%, the lower the better) and Stylized-ImageNet (+11.1%, from 11.3% to 22.4%), and in defending against FGSM on ImageNet (+14.4%, from 25.2% to 39.6%). These results altogether suggest that our shape-texture debiased neural network training is an effective way to mitigate the issue of shortcut learning (Geirhos et al., 2020).
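For reference, the FGSM robustness evaluation above follows the standard single-step attack of Goodfellow et al. (2015); a minimal sketch with the 16/255 budget from Section 4.1 is given below (it assumes inputs live in unnormalized [0, 1] pixel space, which is an illustrative simplification).

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=16/255):
    """Single-step FGSM: perturb inputs along the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```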
Comparing to SoTAs. We further compare our shape-texture debiased model with the SoTA on ImageNet and ImageNet-A (CutMix + MoEx (Li et al., 2021)), the SoTA on ImageNet-C (DeepAugment + AugMix (Hendrycks et al., 2020)), and the SoTA on Stylized-ImageNet (SIN (Geirhos et al., 2019)). Interestingly, we note that the improvements of all these SoTAs are not consistent across different benchmarks. For example, as shown in Table 3, SIN significantly improves the results on Stylized-ImageNet, but at the cost of a huge performance drop on ImageNet (-16.2%) and ImageNet-C (-2.3%). Our shape-texture debiased training stands as the only method that can improve the vanilla training baseline holistically." }, { "heading": "4.3 ABLATIONS", "text": "Comparing to model ensembles. An alternative but naïve way of obtaining a model with both shape and texture information is to ensemble a shape-biased model and a texture-biased model. We note this ensemble strategy yields a model whose ImageNet performance is on par with our shape-texture debiased model (77.2% vs. 76.9%). Nonetheless, interestingly, when measuring model robustness, such a model ensemble strategy is inferior to ours. For example, compared to our proposed debiased training, this ensemble strategy is 1.5% worse on ImageNet-A (2.0% vs. 3.5%), 1.1% worse on ImageNet-C (68.6 mCE vs. 67.5 mCE), 1.1% worse on Stylized-ImageNet (16.3% vs. 17.4%), and 7.0% worse in defending against FGSM (20.4% vs. 27.4%). Moreover, due to model ensembling, this strategy is twice as expensive at the inference stage. These results clearly demonstrate the effectiveness and efficiency of the proposed shape-texture debiased training.

Does our method help models to learn debiased shape-texture representations? Here we take a close look at whether our method indeed prevents models from being biased toward shape or texture during learning. We evaluate the models in Section 4.2 on two kinds of datasets: (1) the ImageNet-Sketch dataset (Wang et al., 2019) and ImageNet-R (Hendrycks et al., 2020) for examining how well models can capture shape; and (2) the Kylberg Texture dataset (Kylberg, 2011) and the Flicker Material dataset (Sharan et al., 2014) for examining how well models can capture texture. Specifically, since object categories from the two texture datasets are not compatible with those from the ImageNet dataset, we retrain the last fc-layer (while keeping all other layers untouched) of all models on the Kylberg Texture dataset or the Flicker Material dataset for 5 epochs. The results are shown in Table 4.

We first analyze results on the ImageNet-Sketch dataset. We observe that our shape-texture debiased models are as good as the shape-biased models, and significantly outperform the texture-biased models and the vanilla training models. For instance, using ResNet-50, our shape-texture debiased training and shape-biased training achieve 28.4% top-1 accuracy and 27.9% top-1 accuracy, while texture-biased training and vanilla training only get 24.3% top-1 accuracy and 23.8% top-1 accuracy. A similar observation can be made on ImageNet-R. These results support that our method helps models acquire stronger shape representations than vanilla training does.

We next analyze results on the Kylberg Texture dataset. Similarly, we observe that our debiased models are comparable to the texture-biased model and the vanilla training model, and perform better than the shape-biased model. On the Flicker Material dataset, we observe that our debiased models are better than the vanilla training model and the shape-biased model. This phenomenon suggests texture information is effectively captured by our shape-texture debiased training. As a side note, it is expected that vanilla training is better than shape-biased training on these texture datasets, as Geirhos et al. (2019) point out that ImageNet-trained models (i.e., vanilla training) also tend to be biased towards texture.

With the analysis above, we conclude that, compared to vanilla training, our shape-texture debiased training successfully helps networks effectively acquire both shape and texture representations.

Combining with other data augmentation methods. Our shape-texture debiased neural network training can be viewed as a data augmentation method, which trains models on cue conflict images. Nonetheless, our method specifically guides the model to learn debiased shape and texture representations, which could potentially serve as a complement to other data augmentation methods. To validate this argument, we train models using a combination of our method and an existing data augmentation method (i.e., Mixup (Zhang et al., 2018) or CutMix (Yun et al., 2019)).

We choose ResNeXt-101 (Xie et al., 2017) as the backbone network, which reports the best top-1 ImageNet accuracy in both the Mixup paper, i.e., 79.9%, and the CutMix paper, i.e., 80.5%. Though building upon very strong baselines, our shape-texture debiased neural network training still leads to substantial improvements, e.g., it raises ResNeXt-101-Mixup's accuracy to 80.5% (+0.6%), and ResNeXt-101-CutMix's accuracy to 81.2% (+0.7%). Meanwhile, models' generalization also gets greatly improved. For example, by combining CutMix and our method, ResNeXt-101 gets additional improvements on ImageNet-A (+1.4%), ImageNet-C (-5.9%, the lower the better) and Stylized-ImageNet (+7.5%). These results support that our shape-texture debiased neural network training is compatible with existing data augmentation methods.

Shape-texture coefficient γ. We set γ = 0.8 in our shape-texture debiased training. This value is found via a grid search over ImageNet-200 using ResNet-18. We now ablate its sensitivity on ImageNet using ResNet-50, where γ is linearly interpolated between 0.0 and 1.0.
By increasing the value of γ, we observe that the corresponding accuracy on ImageNet first monotonically goes up, and then monotonically goes down. The sweet spot is reached by setting γ = 0.7, where ResNet-50 achieves 77.0% top-1 ImageNet accuracy. Besides, we note that setting γ ∈ [0.5, 0.9] always leads to performance improvements over the vanilla baseline. These results demonstrate the robustness of our shape-texture debiased neural network training w.r.t. the coefficient γ." }, { "heading": "4.4 SEMANTIC SEGMENTATION RESULTS", "text": "We extend our shape-texture debiased neural network training to the segmentation task. We select DeepLabv3-ResNet-101 (Chen et al., 2017) as our backbone. To better incorporate our method with the segmentation task, the following changes are made when generating cue conflict images: (1) unlike in the classification task where the whole image is used as the texture source, we use a specific object (which can be cropped from the background using the segmentation ground-truth) to provide texture information in style transfer; (2) when composing the soft label for the cue conflict image, we set the label mask from the texture source as the full image (since the pattern from the texture source will fill the whole image after style transfer); and (3) we set the stylization coefficient α = 0.2 and the shape-texture coefficient γ = 0.95 to prevent object boundaries from being overly blurred in style transfer. Figure 5 shows an illustration of our data preparation pipeline.

Results. Our shape-texture debiased training can also effectively improve segmentation models. For example, our method helps DeepLabv3-ResNet-101 achieve 77.6% mIOU, significantly beating its vanilla counterpart by 1.1%. Our method still shows advantages when compared to the 2× epochs training baseline. Doubling the learning schedule of the vanilla training can only lead to an improvement of 0.2%, which is still 0.9% worse than our shape-texture debiased training. These results demonstrate the potential of our method in helping recognition tasks in general." }, { "heading": "5 RELATED WORK", "text": "Data augmentation. Data augmentation is essential for the success of deep learning (LeCun et al., 1998; Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Zhong et al., 2020; Cubuk et al., 2019; Lim et al., 2019; Cubuk et al., 2020). Our shape-texture debiased neural network training is related to a specific family of data augmentation, called Mixup (Zhang et al., 2018), which blends pairs of images and their labels in a convex manner, either at the pixel level (Zhang et al., 2018; Yun et al., 2019) or the feature level (Verma et al., 2019; Li et al., 2021). Our method can be interpreted as a special instantiation of Mixup which blends pairs of images at the abstraction level: images' texture information and shape information are mixed. Our method successfully guides CNNs to learn better shape and texture representations, which is an important but missing piece in existing data augmentation methods.

Style transfer. Style transfer, closely related to texture synthesis and transfer, means generating a stylized image by combining a shape-source image and a texture-source image (Efros & Leung, 1999; Efros & Freeman, 2001; Elad & Milanfar, 2017). The seminal work of Gatys et al. (2016) demonstrates impressive style transfer results by matching feature statistics in convolutional layers of a CNN. Later follow-ups further improve the generation quality and speed (Huang & Belongie, 2017; Chen & Schmidt, 2016; Ghiasi et al., 2017; Li et al., 2017). In this work, we follow Geirhos et al. (2019) to use AdaIN (Huang & Belongie, 2017) to generate stylized images. Nonetheless, instead of applying style transfer between an image and an artistic painting as in Geirhos et al. (2019), we directly apply style transfer on a pair of images to generate cue conflict images. This change is vital as it enables us to provide supervision from both shape and texture during training.
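For completeness, the core AdaIN operation referenced above can be sketched as follows; this is a schematic restatement of Huang & Belongie (2017), and the content-style trade-off via the coefficient α is shown in one common formulation rather than as the exact code used in this paper.

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Align channel-wise mean/std of content features to those of style features."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

def stylized_features(content_feat, style_feat, alpha=0.5):
    # alpha interpolates between the original content features (alpha = 0)
    # and fully stylized features (alpha = 1) before decoding to an image.
    t = adain(content_feat, style_feat)
    return alpha * t + (1.0 - alpha) * content_feat
```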
" }, { "heading": "6 CONCLUSION", "text": "There is a long-standing debate about which cue dominates object recognition. By carefully ablating the shape-biased model and the texture-biased model, we found that although biased feature representations lead to performance degradation, they are complementary to each other and both necessary for image recognition. To this end, we propose shape-texture debiased neural network training for guiding CNNs to learn better feature representations. The key to our method is that we should not only augment the training set with cue conflict images, but also provide supervision from both shape and texture. We empirically demonstrate the advantages of our shape-texture debiased neural network training in boosting both accuracy and robustness. Our method is conceptually simple and is generalizable to different image recognition tasks. We hope our work will shed light on understanding and improving convolutional neural networks." }, { "heading": "ACKNOWLEDGEMENT", "text": "This project is partially supported by ONR N00014-18-1-2119 and ONR N00014-20-1-2206. Cihang Xie is supported by the Facebook PhD Fellowship and a gift grant from Open Philanthropy. Yingwei Li thanks Zhiwen Wang for suggestions on figures." } ]
2021
SHAPE-TEXTURE DEBIASED NEURAL NETWORK TRAINING
SP:ef8a9ec9f2c482ffacdf56b7a36e1fa567b6ba29
[ "This paper presents an approach to accelerating NAS with 'petri-dish' networks, which hope to mimic the response of original networks at a fraction of training time cost. The key idea is to evaluate an architectural setting on a miniaturized network as opposed to the original network. With this approach computational effort is saved by eschewing expensive 'ground truth' original network evaluations." ]
Neural Architecture Search (NAS) explores a large space of architectural motifs – a compute-intensive process that often involves ground-truth evaluation of each motif by instantiating it within a large network, and training and evaluating the network with thousands or more data samples. Inspired by how biological motifs such as cells are sometimes extracted from their natural environment and studied in an artificial Petri dish setting, this paper proposes the Synthetic Petri Dish model for evaluating architectural motifs. In the Synthetic Petri Dish, architectural motifs are instantiated in very small networks and evaluated using very few learned synthetic data samples (to effectively approximate performance in the full problem). The relative performance of motifs in the Synthetic Petri Dish can substitute for their ground-truth performance, thus accelerating the most expensive step of NAS. Unlike other neural network-based prediction models that parse the structure of the motif to estimate its performance, the Synthetic Petri Dish predicts motif performance by training the actual motif in an artificial setting, thus deriving predictions from its true intrinsic properties. Experiments in this paper demonstrate that the Synthetic Petri Dish can therefore predict the performance of new motifs with significantly higher accuracy, especially when insufficient ground truth data is available. Our hope is that this work can inspire a new research direction in studying the performance of extracted components of models in a synthetic diagnostic setting optimized to provide informative evaluations.
[]
[ { "authors": [ "D. Adhya", "E. Annuario", "M.A. Lancaster", "J. Price", "S. Baron-Cohen", "D.P. Srivastava" ], "title": "Understanding the role of steroids in typical and atypical brain development: Advantages of using a “brain in a dish", "venue": "approach. Journal of Neuroendocrinology,", "year": 2018 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Renqian Luo", "Fei Tian", "Tao Qin", "Enhong Chen", "Tie-Yan Liu" ], "title": "Neural architecture optimization", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning. 2017", "venue": "URL https://arxiv.org/abs/1611.01578", "year": 2017 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": null, "year": 2018 }, { "authors": [ "Esteban Real", "Sherry Moore", "Andrew Selle", "Saurabh Saxena", "Yutaka Leon Suematsu", "Jie Tan", "Quoc V. Le", "Alexey Kurakin" ], "title": "Large-scale evolution of image classifiers", "venue": null, "year": 2017 }, { "authors": [ "Masanori Suganuma", "Shinichi Shirakawa", "Tomoharu Nagao" ], "title": "A genetic programming approach to designing convolutional neural network architectures", "venue": "In Proceedings of the Genetic and Evolutionary Computation Conference,", "year": 2017 }, { "authors": [ "Risto Miikkulainen", "Jason Liang", "Elliot Meyerson", "Aditya Rawal", "Dan Fink", "Olivier Francon", "Bala Raju", "Hormoz Shahrzad", "Arshak Navruzyan", "Nigel Duffy", "Babak Hodjat" ], "title": "Evolving deep neural networks", "venue": "Artificial Intelligence in the Age of Neural Networks and Brain Computing. Amsterdam: Elsevier,", "year": 2018 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Efficient multi-objective neural architecture search via lamarckian evolution", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "B. Baker", "O. Gupta", "N. Naik", "R. 
Raskar" ], "title": "Designing neural network architectures using reinforcement learning", "venue": null, "year": 2016 }, { "authors": [ "Bowen Baker", "Otkrist Gupta", "Ramesh Raskar", "Nikhil Naik" ], "title": "Accelerating neural architecture search using performance prediction", "venue": "arXiv preprint arXiv:1705.10823,", "year": 2017 }, { "authors": [ "Aaron Klein", "Stefan Falkner", "Jost Tobias Springenberg", "Frank Hutter" ], "title": "Learning curve prediction with bayesian neural networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Hieu Pham", "Melody Guan", "Barret Zoph", "Quoc Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameters sharing", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Kirthevasan Kandasamy", "Willie Neiswanger", "Jeff Schneider", "Barnabas Poczos", "Eric P Xing" ], "title": "Neural architecture search with bayesian optimisation and optimal transport", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Shengcao Cao", "Xiaofang Wang", "Kris M. Kitani" ], "title": "Learnable embedding space for efficient neural architecture compression", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hanzhang Hu", "John Langford", "Rich Caruana", "Saurajit Mukherjee", "Eric Horvitz", "Debadeepta Dey" ], "title": "Efficient forward architecture search", "venue": "Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Felipe Petroski Such", "Aditya Rawal", "Joel Lehman", "Kenneth Stanley", "Jeff Clune" ], "title": "Generative teaching networks: Accelerating neural architecture search by learning to generate synthetic training data, 2020", "venue": "URL https://openreview.net/forum?id=HJg_ECEKDr", "year": 2020 }, { "authors": [ "M.P. Marcus", "M.A. Marcinkiewicz", "B. Santorini" ], "title": "Building a large annotated corpus of english: The penn treebank", "venue": "Computational linguistics,", "year": 1993 }, { "authors": [ "P. Langley" ], "title": "Crafting papers on machine learning", "venue": "In Pat Langley, editor, Proceedings of the 17th International Conference on Machine Learning (ICML", "year": 2000 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc Le" ], "title": "Searching for activation", "venue": "URL https://arxiv.org/pdf/1710.05941.pdf", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The architecture of deep neural networks (NNs) is critical to their performance. This fact motivates neural architecture search (NAS), wherein the choice of architecture is often framed as an automated search for effective motifs, i.e. the design of a repeating recurrent cell or activation function that is repeated often in a larger NN blueprint. However, evaluating a candidate architecture’s ground-truth performance in a task of interest depends upon training the architecture to convergence. Complicating efficient search, the performance of an architectural motif nearly always benefits from increased computation (i.e. larger NNs trained with more data). The implication is that the best architectures often require training near the bounds of what computational resources are available, rendering naive NAS (i.e. where each candidate architecture is trained to convergence) exorbitantly expensive.\nTo reduce the cost of NAS, methods often exploit heuristic surrogates of true performance. For example, motif performance can be evaluated after a few epochs of training or with scaled-down architectural blueprints, which is often still expensive (because maintaining reasonable fidelity between ground-truth and surrogate performance precludes aggressive scaling-down of training). Another approach learns models of the search space (e.g. Gaussian processes models used within Bayesian optimization), which improve as more ground-truth models are trained, but cannot generalize well beyond the examples seen. This paper explores whether the computational efficiency of NAS can be improved by creating a new kind of surrogate, one that can benefit from miniaturized training and still generalize beyond the observed distribution of ground-truth evaluations. To do so, we take inspiration from an idea in biology, bringing to machine learning the application of a Synthetic Petri Dish microcosm that aims to identify high-performing architectural motifs.\nThe overall motivation behind “in vitro” (test-tube) experiments in biology is to investigate in a simpler and controlled environment the key factors that explain a phenomenon of interest in a messier\nand more complex system. For example, to understand causes of atypical mental development, scientists extract individual neuronal cells taken from brains of those demonstrating typical and atypical behavior and study them in a Petri dish (Adhya et al., 2018). The approach proposed in this paper attempts to algorithmically recreate this kind of scientific process for the purpose of finding better neural network motifs. The main insight is that biological Petri dish experiments often leverage both (1) key aspects of a system’s dynamics (e.g. the behavior of a single cell taken from a larger organism) and (2) a human-designed intervention (e.g. a measure of a test imposed on the test-tube). In an analogy to NAS, (1) the dynamics of learning through backpropagation are likely important to understanding the potential of a new architectural motif, and (2) compact synthetic datasets can illuminate an architecture’s response to learning. That is, we can use machine learning to learn data such that training an architectural motif on the learned data results in performance indicative of the motif’s ground-truth performance.\nIn the proposed approach, motifs are extracted from their ground-truth evaluation setting (i.e. from large-scale NNs trained on the full dataset of the underlying domain of interest, e.g. 
MNIST), instantiated into very small networks (called motif-networks), and evaluated using very few learned synthetic data samples. These synthetic data samples are trained such that the performance ordering of motifs in this Petri dish setting (i.e. a miniaturized network trained on a few synthetic data samples) matches their ground-truth performance ordering. Because the relative performance of motifs is sufficient to distinguish good motifs from bad ones, the Petri dish evaluations of motifs can be a surrogate for ground-truth evaluations in NAS. Training the Synthetic Petri Dish is also computationally inexpensive, requiring only a few ground-truth evaluations, and once trained it enables extremely rapid evaluations of new motifs.

A key motivating hypothesis is that because the Synthetic Petri Dish evaluates the motif by actually using it in a simple experiment (e.g. training it with SGD and then evaluating it), its predictions can generalize better than other neural network (NN) based models that predict motif performance based on only observing the motif's structure and resulting performance (Liu et al., 2018a; Luo et al., 2018). For example, consider the demonstration problem of predicting the ground-truth performance of a two-layer feedforward MNIST network with sigmoidal non-linearity. The blue points in Figure 1 show how the ground-truth performance of the MNIST network varies when the slope of its sigmoid activations (the term c in the sigmoid formula 1/(1 + e^{-cx})) is varied in the range of 0.01 to 2.01. The MNIST network performance peaks near a slope value of 0.23. Similarly to the NN-based model previously developed in Liu et al. (2018a); Luo et al. (2018), one can try to train a neural network that predicts the performance of the corresponding MNIST network given the sigmoid slope value as input (Section 4.1 provides full details). When training points (tuples of a sigmoid slope value and its corresponding MNIST network performance) are restricted to an area to the right of the peak (Figure 1, blue-shaded region), the NN-based prediction model (Figure 1, red diamonds) generalizes poorly to the test points on the left side of the peak (c < 0.23). However, unlike such a conventional prediction model, the prediction of the Synthetic Petri Dish generalizes to test points left of the peak (despite their behavior being drastically different from what would be expected solely based on the points in the blue-shaded region). That occurs because the Synthetic Petri Dish trains and evaluates the actual candidate motifs, rather than just making predictions about their performance based on data from past trials.

Beyond this explanatory experiment, the promise of the Synthetic Petri Dish is further demonstrated on a challenging and compute-intensive language modelling task that serves as a popular NAS benchmark. The main result is that the Petri dish obtains highly competitive results even in a limited-compute setting. Interestingly, these results suggest that it is indeed possible to extract a motif from a larger setting and create a controlled setting (through learning synthetic data) where the instrumental factor in the performance of the motif can be isolated and tested quickly, just as scientists use Petri dishes to test specific hypotheses to isolate and understand causal factors in biological systems.
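To make the motif in this running example concrete, the slope-parameterized sigmoid can be written as a small module; this is an illustrative sketch, not code from the paper, and the layer sizes in the usage line are placeholders.

```python
import torch
import torch.nn as nn

class ScaledSigmoid(nn.Module):
    """Sigmoid with a tunable slope c: f(x) = 1 / (1 + exp(-c * x))."""
    def __init__(self, c=0.23):
        super().__init__()
        self.c = c

    def forward(self, x):
        return torch.sigmoid(self.c * x)

# A tiny 2-layer motif-network built around the activation, as in the Petri dish.
motif_net = nn.Sequential(nn.Linear(10, 1), ScaledSigmoid(c=0.23),
                          nn.Linear(1, 1), ScaledSigmoid(c=0.23))
```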
}, { "heading": "2 RELATED WORK", "text": "NAS methods have discovered novel architectures that significantly outperform hand-designed solutions (Zoph and Le, 2017; Elsken et al., 2018; Real et al., 2017). These methods commonly explore the architecture search space with either evolutionary algorithms (Suganuma et al., 2017; Miikkulainen et al., 2018; Real et al., 2019; Elsken et al., 2019) or reinforcement learning (Baker et al., 2016; Zoph and Le, 2017). Because running NAS with full ground-truth evaluations can be extremely expensive (i.e. requiring many thousands of GPU hours), more efficient methods have been proposed. For example, instead of evaluating new architectures with full-scale training, heuristic evaluation can leverage training with reduced data (e.g. sub-sampled from the domain of interest) or for fewer epochs (Baker et al., 2017; Klein et al., 2017).\nMore recent NAS methods such as DARTS (Liu et al., 2018b) and ENAS (Pham et al., 2018) exploit sharing weights across architectures during training to circumvent full ground-truth evaluations. However, a significant drawback of such weight sharing approaches is that they constrain the architecture search space and therefore limit the discovery of novel architectures.\nAnother approach to accelerate NAS is to train a NN-based performance prediction model that estimates architecture performance based on its structure (Liu et al., 2018a). Building on this idea, Neural Architecture Optimization (NAO) trains a LSTM model to simultaneously predict architecture performance as well as to learn an embedding of architectures. Search is then performed by taking gradient ascent steps in the embedding space to generate better architectures. NAO is used as a baseline for comparison in Experiment 4.2.\nBayesian optimization (BO) based NAS methods have also shown promising results (Kandasamy et al., 2018; Cao et al., 2019). BO models the architecture space using a Gaussian process (GP), although its behavior is sensitive to the choice of a kernel function that models the similarity between any two architectures. Another recent NAS method presents a technique to progressively grow an architecture by adding skip connections, and is named similarly (“Petridish”) to the method proposed here (Hu et al., 2019). However unlike the Synthetic Petri Dish introduced here, which is a learned surrogate for NAS, Petridish (Hu et al., 2019) is instead an incremental growth method.\nGenerative teaching networks (GTNs) also learn synthetic data to accelerate NAS (Such et al., 2020). However, learned data in GTNs helps to more quickly train full-scale networks to evaluate their potential on real validation data. In the Petri dish, synthetic training and validation instead enables a surrogate microcosm training environment for much smaller extracted motif-networks. Additionally, GTNs are not explicitly trained to differentiate between different networks (or network motifs). In contrast, the Synthetic Petri Dish is optimized to find synthetic input data on which the performance of various architectural motifs is different." }, { "heading": "3 METHODS", "text": "Recall that the aim of the Synthetic Petri Dish is to create a microcosm training environment such that the performance of a small-scale motif trained within it well-predicts performance of the fullyexpanded motif in the ground-truth evaluation. First, a few initial ground-truth evaluations of motifs are needed to create training data for the Petri dish. 
In particular, consider N motifs for which ground-truth validation loss values (L^i_true, where i ∈ {1, 2, ..., N}) have already been pre-computed by training each motif in the ground-truth setting. The next section details how these initial evaluations are leveraged to train the Synthetic Petri Dish." }, { "heading": "3.1 TRAINING THE SYNTHETIC PETRI DISH", "text": "To train the Synthetic Petri Dish first requires extracting the N motifs from their ground-truth setting and instantiating each of them in miniature as separate motif-networks. For the experiments performed in this paper, the ground-truth network and the motif-network have the same overall blueprint and differ only in the width of their layers. For example, Figure 2a shows a ground-truth network's size reduced from a 2-layer, 100-neuron-wide MLP to a motif-network that is a 2-layer MLP with a single neuron per layer.

Given such a collection of extracted motif-networks, a small number of synthetic training and validation data samples are then learned that can respectively be used to train and evaluate the motif-networks. The learning objective is that the validation loss of motifs trained in the Petri dish resemble the validation loss of the motif's ground-truth evaluation (L^i_true). Note that this training process requires two nested optimization loops: an inner loop that trains and evaluates the motif-networks on the synthetic data and an outer loop that trains the synthetic data itself.

Initializing the Synthetic Petri Dish: Before training the Petri dish, the motif-networks and synthetic data must be initialized. Once the motifs have been extracted into separate motif-networks, each motif-network is assigned the same initial random weights (θ_init). This constraint reduces confounding factors by ensuring that the motif-networks differ from each other only in their instantiated motifs. At the start of Synthetic Petri Dish training, synthetic training data (S_train = (x_train, y_train)) and validation data samples (S_valid = (x_valid, y_valid)) are randomly initialized. Note that these learned training and validation data can play distinct and complementary roles, e.g. the validation data can learn to test out-of-distribution generalization from a learned training set. Empirically, setting the training and validation data to be the same initially (i.e. S_train = S_valid) benefited optimization at the beginning of outer-loop training; over iterations of outer-loop training, the synthetic training and validation data then diverge. The size of the motif-network and the number of synthetic data samples are chosen through the hyperparameter selection procedure described in Appendix A.2.

Inner-loop training: The inner optimization loop is where the performance of motif-networks is evaluated by training each such network independently with synthetic data. This training reveals a sense of the quality of the motifs themselves.

In each inner loop, the motif-networks are independently trained with SGD using the synthetic training data (S_train). The motif-networks take synthetic training inputs (x_train) and produce their respective output predictions (ŷ_train). For each motif-network, a binary cross-entropy (BCE) loss is computed between the output predictions (ŷ_train) and the synthetic training labels (y_train). Because the Petri dish is an artificial setting, the choice of BCE as the inner-loop loss (L_inner) is independent of the actual domain loss (used for ground-truth training), and other losses such as a regression loss could instead be used. The gradients of the BCE loss w.r.t. the motif-network weights inform weight updates (as in regular SGD):

θ^i_{t+1} = θ^i_t − α ∇L^i_{inner_train}(S_train, θ^i_t),   i ∈ {1, 2, ..., N}   (1)

where α is the inner-loop learning rate and θ^i_0 = θ_init. Inner-loop training proceeds until the individual BCE losses converge. Once trained, each motif-network is independently evaluated using the synthetic validation data (S_valid) to obtain individual validation loss values (L^i_{inner_valid}). These inner-loop validation losses then enable calculating an outer-loop loss to optimize the synthetic data, which is described next.
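The inner loop above amounts to independently fitting each tiny motif-network on the shared synthetic set and recording its synthetic validation loss. A minimal sketch follows; the motif-networks are assumed to be copies initialized from the same θ_init, the 200 SGD steps follow the setting in Section 4.1, and the logits-based BCE variant is an implementation convenience rather than the paper's exact code.

```python
import torch
import torch.nn.functional as F

def inner_loop(motif_nets, x_train, y_train, x_valid, y_valid,
               lr=0.01, steps=200):
    """Train each motif-network on synthetic data; return L^i_inner_valid."""
    valid_losses = []
    for net in motif_nets:  # each net starts from the same initial weights
        opt = torch.optim.SGD(net.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = F.binary_cross_entropy_with_logits(net(x_train), y_train)
            loss.backward()
            opt.step()
        valid_losses.append(
            F.binary_cross_entropy_with_logits(net(x_valid), y_valid))
    return torch.stack(valid_losses)
```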
Outer-loop training: Recall that an initial sampling of candidate motifs evaluated in the ground-truth setting serves as a training signal for crafting the Petri dish's synthetic data. That is, in the outer loop, synthetic training data is optimized to encourage motif-networks trained upon it to become accurate surrogates for the performance of full networks built with that motif evaluated in the ground-truth setting. The idea is that training motif-networks on the right (small) set of synthetic training data can potentially isolate the key properties of candidate motifs that make them effective.

To frame the outer-loop loss function, what is desired is for the validation loss of the motif-network to induce the same relative ordering as the validation loss of the ground-truth networks; such a relative ordering is all that is needed to decide which new motif is likely to be best. One way to design such an outer-loop loss with this property is to penalize differences between normalized loss values in the Petri dish and the ground-truth setting (we also tried an explicit rank loss, but the normalized regression loss performed slightly better empirically). To this end, the motif-network (inner-loop) loss values and their respective ground-truth loss values are first independently normalized to have zero mean and unit variance. Then, for each motif, a mean squared error (MSE) loss is computed between the normalized inner-loop validation loss (L̂^i_{inner_valid}) and the normalized ground-truth validation loss (L̂^i_true). The MSE loss is averaged over all the motifs and used to compute a gradient step to improve the synthetic training and validation data:

L_outer = (1/N) Σ_{i=1}^{N} (L̂^i_{inner_valid} − L̂^i_true)^2   (2)

S_train_{t+1} = S_train_t − β ∇L_outer   and   S_valid_{t+1} = S_valid_t − β ∇L_outer   (3)

where β is the outer-loop learning rate. For simplicity, only the synthetic training (x_train) and validation (x_valid) inputs are learned and the corresponding labels (y_train, y_valid) are kept fixed to their initial random values throughout training. Minimizing the outer-loop MSE loss (L_outer) modifies the synthetic training and validation inputs to maximize the similarity between the motif-networks' performance ordering and the motifs' ground-truth ordering.

After each outer-loop training step, the motif-networks are reset to their original initial weights (θ_init) and the inner-loop training and evaluation procedure (Equation 1) is carried out again. The outer-loop training proceeds until the MSE loss converges, resulting in optimized synthetic data.
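A sketch of the outer-loop objective in Eqs. (2)-(3) follows: the inner-loop validation losses and the ground-truth losses are each standardized, and their mean squared difference is minimized with respect to the synthetic inputs. Note that, in practice, computing ∇L_outer w.r.t. the synthetic data requires differentiating through the inner-loop training (e.g. by unrolling the SGD steps); the snippet assumes inner_valid_losses was produced by such a differentiable inner loop.

```python
import torch

def outer_loss(inner_valid_losses, true_losses, eps=1e-8):
    """Eq. (2): MSE between standardized Petri-dish and ground-truth losses."""
    def standardize(v):
        return (v - v.mean()) / (v.std() + eps)
    return ((standardize(inner_valid_losses) - standardize(true_losses)) ** 2).mean()

# Eq. (3): gradient step on the learnable synthetic inputs
# (x_train_syn and x_valid_syn are tensors created with requires_grad=True).
# loss = outer_loss(inner_valid_losses, true_losses)
# loss.backward()
# with torch.no_grad():
#     x_train_syn -= beta * x_train_syn.grad
#     x_valid_syn -= beta * x_valid_syn.grad
```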
" }, { "heading": "3.2 PREDICTING PERFORMANCE WITH THE TRAINED PETRI DISH", "text": "The Synthetic Petri Dish training procedure described so far results in synthetic training and validation data optimized to sort motif-networks similarly to the ground-truth setting. This section describes how the trained Petri dish can predict the relative performance of unseen motifs, which we call the Synthetic Petri Dish inference procedure. In this procedure, new motifs are instantiated in their individual motif-networks, and the motif-networks are trained and evaluated using the optimized synthetic data (with the same hyperparameter settings as in the inner-loop training and evaluation). The relative inner-loop validation loss of the motif-networks then serves as a surrogate for the motifs' relative ground-truth validation loss; as stated earlier, such relative loss values are sufficient to compare the potential of new candidate motifs. Such Petri dish inference is computationally inexpensive because it involves the training and evaluation of very small motif-networks with very few synthetic examples. Accurately predicting the performance ordering of unseen motifs is contingent on the generalization capabilities of the Synthetic Petri Dish (this aspect is further investigated in Section 4.1)." }, { "heading": "3.3 COMBINING ARCHITECTURE SEARCH WITH THE SYNTHETIC PETRI DISH", "text": "Interestingly, the cheap-to-evaluate surrogate performance prediction given by the trained Petri dish is complementary to most NAS methods that search for motifs, meaning that they can easily be combined. Algorithm 1 in Appendix A.1 shows one possible hybridization of the Petri dish and NAS, which is the one we experimentally investigate in this work; a high-level sketch of this loop is given at the end of this subsection.

First, the Petri dish model is warm-started by training (inner-loop and outer-loop) using the ground-truth evaluation data (P_eval) of a small set of randomly generated motifs (X_eval). Then, in each iteration of NAS, the NAS method generates M new motifs and the Petri dish inference procedure inexpensively predicts their relative performance. The top K motifs (where K ≪ M) with the highest predicted performance are then selected for ground-truth evaluation. The ground-truth performance of motifs both guides the NAS method and provides further data to re-train the Petri dish model. The steps outlined above are repeated until convergence, and then the motif with the best ground-truth performance is selected for the final test evaluation.

Synthetic Petri Dish training and inference are orders of magnitude faster than ground-truth evaluations, thus making NAS computationally more efficient and faster to run, which can enable finding higher-performing architectures given a limited compute budget.
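As a concrete illustration of Algorithm 1's structure, here is an illustrative Python sketch of the hybrid loop; petri_dish, nas_method and ground_truth_eval are assumed interfaces (with M = 100 and K = 20 as in Section 4.2), not the authors' actual implementation.

```python
def petri_dish_nas(nas_method, ground_truth_eval, petri_dish,
                   init_motifs, iterations, M=100, K=20):
    # Warm-start both the surrogate and the NAS method with ground-truth data.
    evaluated = {m: ground_truth_eval(m) for m in init_motifs}  # motif -> loss
    petri_dish.train(evaluated)  # runs the inner- and outer-loop training

    for _ in range(iterations):
        candidates = nas_method.generate(evaluated, M)   # M new motifs
        scores = petri_dish.predict(candidates)          # cheap surrogate losses
        top_k = sorted(candidates, key=lambda m: scores[m])[:K]
        for m in top_k:
            evaluated[m] = ground_truth_eval(m)          # the expensive step
        petri_dish.train(evaluated)                      # re-train the surrogate

    return min(evaluated, key=evaluated.get)             # best motif found
```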
" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 SEARCHING FOR THE OPTIMAL SLOPE FOR SIGMOIDAL ACTIVATION FUNCTIONS", "text": "Preliminary experiments demonstrated that when a 2-layer, 100-wide feed-forward network with sigmoidal activation functions is trained on MNIST data, its validation accuracy (holding all else constant) depends on the slope of the sigmoid. The points on the blue curve in Figure 1 demonstrate this fact, where the empirical peak performance is at a slope of 0.23. This simple dependence provides a way to clearly illustrate the benefits of the Synthetic Petri Dish model.

Both the Synthetic Petri Dish and the NN-based surrogate model are trained using 30 ground-truth points that are randomly selected from a restricted interval of sigmoid slope values (the blue-shaded region in Figure 1). The remaining ground-truth points (outside the blue-shaded region) are used only for testing. The NN-based surrogate control is a 2-layer, 10-neuron-wide feedforward network that takes the sigmoid slope value as input and predicts the corresponding MNIST network validation accuracy as its output. A mean squared error loss is computed between the predicted accuracy and the ground-truth validation accuracy, and the network is trained with the Adam optimizer.

For training the Synthetic Petri Dish model, each training motif (i.e. a sigmoid with a unique slope value) is extracted from the MNIST network and instantiated in a 2-layer, single-neuron-wide motif-network (θ_init). The setup is similar to the one shown in Figure 2a. The motif-networks are trained in the inner loop for 200 SGD steps and subsequently their performance on the synthetic validation data (L^i_{inner_valid}) is used to compute the outer-loop MSE loss w.r.t. the ground-truth performance (as described in Section 3.1). A total of 20 outer-loop training steps are performed. Hyperparameter selection details for the two models are described in Appendix A.3.

Results demonstrate that the NN-based model overfits the training points (red curve in Figure 1). In contrast, the Synthetic Petri Dish predictions accurately infer that there is a peak (including the falloff to its left) and also its approximate location (green curve in Figure 1)." }, { "heading": "4.2 ARCHITECTURE SEARCH FOR RECURRENT CELLS", "text": "The previous experiment demonstrated that a Synthetic Petri Dish model trained with limited ground-truth data can successfully generalize to unseen out-of-distribution motifs. This next experiment tests whether the Synthetic Petri Dish can be applied to a more realistic and challenging setting, that of NAS for an NN language model applied to the Penn Tree Bank (PTB) dataset, a popular language modeling and NAS benchmark (Marcus et al., 1993). In this experiment, the architectural motif is the design of a recurrent cell. The recurrent cell search space and its ground-truth evaluation setup are the same as in NAO (Luo et al., 2018). This NAS problem is challenging because the search space is expansive and few solutions perform well (Pham et al., 2018). Each cell in the search space is composed of 12 layers (each with the same layer width) that are connected in a unique pattern. An input embedding layer and an output soft-max layer are added to the cell (each of layer width 850) to obtain a full network (27 million parameters) for ground-truth evaluation. Each ground-truth evaluation requires training on the PTB dataset for 600 epochs and takes 10 hours on an Nvidia 1080Ti.

NAO is one of the state-of-the-art NAS methods for this problem and is therefore used as a baseline in this experiment (called here original NAO). In the published results (Luo et al., 2018), original NAO requires 1,000 ground-truth evaluations (300 GPU days) over three successive NAS iterations to discover a cell with test perplexity of 56.0. These are good results, but the compute cost even to reproduce them is prohibitively large for many researchers. Because the Synthetic Petri Dish offers a potential low-compute option, in this experiment different NAS methods are compared instead in a setting where only limited ground-truth evaluation data is available (≤ 100 samples), giving a sense of how far different methods can get with more reasonable compute resources.

Each NAS iteration can be accelerated if the number of costly ground-truth evaluations is reduced by instead cheaply evaluating the majority of candidate motifs (i.e. new cells) in the Petri dish.
For the purpose of training the Synthetic Petri Dish, each cell is extracted from its ground-truth setting (850 neurons per layer) and is instantiated in a motif-network with three neurons per layer (its internal cell connectivity, including its depth, remains unchanged). Thus, the ground-truth network that has 27 million parameters is reduced to a motif-network with only 140 parameters. To train the motif-networks, synthetic training and validation data, each of size 20×10×10 (batch size × time steps × motif-network input size), are learned (thus replacing the 923k training and 73k validation words of PTB). The Petri dish training and inference procedure is very similar to the one described in Experiment 4.1, and it adds negligible compute cost (2 extra hours for training, and a few minutes for inference on a CPU).

Following the steps outlined in Algorithm 1 and Figure 2b, the Petri dish surrogate can be combined with two existing NAS methods: (1) Random Search (RS) or (2) NAO itself, resulting in two new methods called Synthetic Petri Dish-RS and Synthetic Petri Dish-NAO. Also, the Random Search NAS method can be combined with partial evaluations, resulting in another baseline (Appendix A.4).

For the Synthetic Petri Dish variants, at the beginning of search, both the Petri dish surrogate and the NAS method (RS/NAO) used within the Petri dish variant are warm-started with the ground-truth data of an initial motif set (size 40). In each NAS iteration, 100 newly generated motifs (variable M in Algorithm 1) are evaluated using the Petri dish inference procedure and only the top 20 predicted motifs (variable K in Algorithm 1) are evaluated for their ground-truth performance. The test perplexity of the best found motif at the end of each NAS iteration is plotted in Figure 3: the blue curve depicts the result for Synthetic Petri Dish-RS and the green curve depicts the result for Synthetic Petri Dish-NAO. For a fair comparison, original NAO is re-run in this limited ground-truth setting and the resulting performance is depicted by the red curve in Figure 3. The results show that Synthetic Petri Dish-NAO outperforms both Synthetic Petri Dish-RS and NAO when keeping the number of ground-truth data points the same, suggesting that the Synthetic Petri Dish and NAO complement each other well. The hybridization of the Synthetic Petri Dish and NAO finds a cell that is competitive in its performance (test perplexity 57.1) with original NAO (56.0), using only 1/10th of original NAO's compute (and exceeds the performance of original NAO when both are given equivalent compute)." }, { "heading": "5 DISCUSSION AND CONCLUSIONS", "text": "In the general practice of science, the question often arises of what factor accounts for an observed phenomenon. In the real world, with all its intricacy and complexity, it can be difficult to test or even formulate a clear hypothesis on the relevant factor involved. For that reason, a hypothesis is often formulated and tested in a simplified environment where the relevant factor can be isolated from the confounding complexity of the world around it. Then, in that simplified setting, it becomes possible to run rapid and exhaustive tests, as long as there is an expectation that their outcome might correlate with the real world.
In this way, the Synthetic Petri Dish is a kind of microcosm of a facet of the scientific method, and its synthetic data is the treatment whose optimization tethers the dynamics within the simplified setting to their relevant counterparts in the real world.\nBy approaching architecture search in this way, as a kind of question-answering problem on how certain motifs or factors impact final results, we gain the intriguing advantage that the prediction model is no longer a black box. Instead, it actually contains within it a critical piece of the larger world that it seeks to predict. This piece, a motif cut from the ground-truth network (and its corresponding learning dynamics), carries with it from the start a set of priors that no black-box learned model could carry on its own. These priors pop out dramatically in the simple sigmoid slope experiment – the notion that there is an optimal slope for training, and roughly where it lies, emerges automatically from the fact that the sigmoid slope itself is part of the Petri Dish prediction model. In the later NAS experiment for recurrent cells, the benefit in a more complex domain also becomes apparent: the intrinsic prior enables the Petri Dish to outperform a leading NAS method when the number of ground-truth evaluations is held constant, and to achieve roughly the same performance with 1/10th the compute when differing numbers of ground-truth evaluations are allowed.\nIt is also possible that other methods can be built in the future on the idea of extracting a component of a candidate architecture and testing it in another setting. The opportunity to tease out the underlying causal factors of performance is a novel research direction that may ultimately teach us new lessons on architecture by exposing the most important dimensions of variation through a principled empirical process that could capture the spirit and power of the scientific process itself." } ]
2020
SYNTHETIC PETRI DISH: A NOVEL SURROGATE MODEL FOR RAPID ARCHITECTURE SEARCH
SP:c4bb47d4a04a539331e2ab2ef62b2854804f6a3c
[ "This paper proposes a new augmentation method based on CutMix. The authors find out that randomly selecting may mix background textures and this will mislead the model. So, they propose to use saliency maps to control the selection of mixed patches, which is called SaliencyMix. This idea seems easy and reasonable, many experiments are conducted to prove the effectiveness of the proposed method. However, the experiments’ results fail to show the ability of the method, and some explanation is missed." ]
Advanced data augmentation strategies have widely been studied to improve the generalization ability of deep learning models. Regional dropout is one of the popular solutions that guides the model to focus on less discriminative parts by randomly removing image regions, resulting in improved regularization. However, such information removal is undesirable. On the other hand, recent strategies suggest randomly cutting and mixing patches and their labels among training images, to enjoy the advantages of regional dropout without having any pointless pixel in the augmented images. We argue that such randomly selected patches may not carry sufficient information about the corresponding object, so mixing the labels according to an uninformative patch misleads the model into learning unexpected feature representations. Therefore, we propose SaliencyMix, which carefully selects a representative image patch with the help of a saliency map and mixes this indicative patch with the target image, thus leading the model to learn more appropriate feature representations. SaliencyMix achieves the best known top-1 errors of 21.26% and 20.09% for the ResNet-50 and ResNet-101 architectures on ImageNet classification, respectively, and also improves the model robustness against adversarial perturbations. Furthermore, models that are trained with SaliencyMix help to improve the object detection performance. Source code is available at https://github.com/SaliencyMix/SaliencyMix.
[ { "affiliations": [], "name": "BETTER REGULARIZA" }, { "affiliations": [], "name": "A. F. M. Shahab Uddin" }, { "affiliations": [], "name": "Sirazam Monira" }, { "affiliations": [], "name": "Wheemyung Shin" }, { "affiliations": [], "name": "TaeChoong Chung" }, { "affiliations": [], "name": "Sung-Ho Bae" } ]
[ { "authors": [ "R. Achanta", "S. Hemami", "F. Estrada", "S. Susstrunk" ], "title": "Frequency-tuned salient region detection", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Ming-Ming Cheng", "Niloy J Mitra", "Xiaolei Huang", "Philip HS Torr", "Shi-Min Hu" ], "title": "Global contrast based salient region detection", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2014 }, { "authors": [ "R. Cong", "J. Lei", "H. Fu", "M. Cheng", "W. Lin", "Q. Huang" ], "title": "Review of visual saliency detection with comprehensive information", "venue": "IEEE Transactions on Circuits and Systems for Video Technology,", "year": 2019 }, { "authors": [ "E.D. Cubuk", "B. Zoph", "D. Mané", "V. Vasudevan", "Q.V. Le" ], "title": "Autoaugment: Learning augmentation strategies from data", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Zijun Deng", "Xiaowei Hu", "Lei Zhu", "Xuemiao Xu", "Jing Qin", "Guoqiang Han", "Pheng-Ann Heng" ], "title": "R3net: Recurrent residual refinement network for saliency detection", "venue": "In Proceedings of the 27th International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Terrance Devries", "Graham W. Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "ArXiv,", "year": 2017 }, { "authors": [ "Mark Everingham", "Luc Van Gool", "Christopher Williams", "John Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes (voc) challenge", "venue": "International Journal of Computer Vision,", "year": 2010 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A. Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ghiasi Golnaz", "Lin Tsung-Yi", "V. Le Quoc" ], "title": "Dropblock: A regularization method for convolutional networks", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint,", "year": 2015 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "X. Hou", "L. 
Zhang" ], "title": "Saliency detection: A spectral residual approach", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2007 }, { "authors": [ "Choe Junsuk", "Shim Hyunjung" ], "title": "Attention-based dropout layer for weakly supervised object localization", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Jang-Hyun Kim", "Wonho Choo", "Hyun Oh Song" ], "title": "Puzzle mix: Exploiting saliency and local statistics for optimal mixup", "venue": null, "year": 2020 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "University of Toronto,", "year": 2012 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2012 }, { "authors": [ "Y. Lecun", "L. Bottou", "Y. Bengio", "P. Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Jianjun Lei", "Bingren Wang", "Yuming Fang", "Weisi Lin", "Patrick Le Callet", "Nam Ling", "Chunping Hou" ], "title": "A universal framework for salient object detection", "venue": "IEEE Transactions on Multimedia,", "year": 2016 }, { "authors": [ "Joseph Lemley", "Shabab Bazrafkan", "Peter Corcoran" ], "title": "Smart augmentation learning an optimal data augmentation strategy", "venue": "IEEE Access,", "year": 2017 }, { "authors": [ "Changyang Li", "Yuchen Yuan", "Weidong Cai", "Yong Xia", "David Dagan Feng" ], "title": "Robust saliency detection via regularized random walks ranking", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Xiaohui Li", "Huchuan Lu", "Lihe Zhang", "Xiang Ruan", "Ming-Hsuan Yang" ], "title": "Saliency detection via dense and sparse reconstruction", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2013 }, { "authors": [ "T. Lin", "P. Goyal", "R. Girshick", "K. He", "P. Dollár" ], "title": "Focal loss for dense object detection", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Nian Liu", "Junwei Han", "Ming-Hsuan Yang" ], "title": "Picanet: Learning pixel-wise contextual attention for saliency detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial", "venue": "attacks. ArXiv,", "year": 2017 }, { "authors": [ "Sebastian Montabone", "Alvaro Soto" ], "title": "Human detection using a mobile platform and novel features derived from a visual saliency mechanism", "venue": "Image and Vision Computing,", "year": 2010 }, { "authors": [ "Srivastava Nitish", "Hinton Geoffrey", "Krizhevsky Alex", "Sutskever Ilya", "Salakhutdinov Ruslan" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Russakovsky Olga", "Deng Jia", "Su Hao", "Krause Jonathan", "Satheesh Sanjeev", "Ma Sean", "Huang Zhiheng", "Karpathy Andrej", "Khosla Aditya", "Bernstein Michael", "C. 
Berg Alexander", "Fei-Fei Li" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Houwen Peng", "Bing Li", "Haibin Ling", "Weiming Hu", "Weihua Xiong", "Stephen J Maybank" ], "title": "Salient object detection via structured matrix decomposition", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2016 }, { "authors": [ "Xuebin Qin", "Zichen Zhang", "Chenyang Huang", "Chao Gao", "Masood Dehghan", "Martin" ], "title": "Jagersand. Basnet: Boundary-aware salient object detection", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Yao Qin", "Huchuan Lu", "Yiqun Xu", "He Wang" ], "title": "Saliency detection via cellular automata", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Ren Shaoqing", "He Kaiming", "Girshick Ross", "Sun Jian" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2015 }, { "authors": [ "K.K. Singh", "Y.J. Lee" ], "title": "Hide-and-seek: Forcing a network to be meticulous for weakly-supervised object and action localization", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "C. Szegedy", "V. Vanhoucke", "S. Ioffe", "J. Shlens", "Z. Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "DeVries Terrance", "W Taylor Graham" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "arXiv preprint,", "year": 2017 }, { "authors": [ "Jonathan Tompson", "Ross Goroshin", "Arjun Jain", "Yann Lecun", "Christoph Bregler" ], "title": "Efficient object localization using convolutional networks", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 648–656,", "year": 2015 }, { "authors": [ "Ren Wu", "Shengen Yan", "Yi Shan", "Qingqing Dang", "Gang Sun" ], "title": "Deep image: Scaling up image recognition", "venue": "arXiv preprint,", "year": 2015 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "Procedings of the British Machine Vision Conference 2016,", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Moustapha Cissé", "Yann N. 
Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "arXiv preprint,", "year": 2017 }, { "authors": [ "Lu Zhang", "Ju Dai", "Huchuan Lu", "You He", "Gang Wang" ], "title": "A bi-directional message passing model for salient object detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Xiaoning Zhang", "Tiantian Wang", "Jinqing Qi", "Huchuan Lu", "Gang Wang" ], "title": "Progressive attention guided recurrent network for salient object detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "B. Zhou", "A. Khosla", "A. Lapedriza", "A. Oliva", "A. Torralba" ], "title": "Learning deep features for discriminative localization", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Li Zhou", "Zhaohui Yang", "Qing Yuan", "Zongtan Zhou", "Dewen Hu" ], "title": "Salient region detection via integrating diffusion-based compactness and local contrast", "venue": "IEEE Transactions on Image Processing,", "year": 2015 }, { "authors": [ "Wangjiang Zhu", "Shuang Liang", "Yichen Wei", "Jian Sun" ], "title": "Saliency optimization from robust background detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Zhong Zhun", "Zheng Liang", "Kang Guoliang", "Li Shaozi", "Yang Yi" ], "title": "Random erasing data augmentation", "venue": "arXiv preprint,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning has achieved state-of-the-art (SOTA) performance in many fields, especially in computer vision tasks. This success can mainly be attributed to the deep architecture of convolutional neural networks (CNN) that typically have 10 to 100 millions of learnable parameters. Such a huge number of parameters enable the deep CNNs to solve complex problems. However, besides the powerful representation ability, a huge number of parameters increase the probability of overfitting when the number of training examples is insufficient, which results in a poor generalization of the model.\nIn order to improve the generalization ability of deep learning models, several data augmentation strategies have been studied. Random feature removal is one of the popular techniques that guides the CNNs not to focus on some small regions of input images or on a small set of internal activations, thereby improving the model robustness. Dropout (Nitish et al., 2014; Tompson et al., 2015) and regional dropout (Junsuk & Hyunjung, 2019; Terrance & Graham, 2017; Golnaz et al., 2018; Singh & Lee, 2017; Zhun et al., 2017) are two established training strategies where the former randomly turns off some internal activations and later removes and/or alters random regions of the input images. Both of them force a model to learn the entire object region rather than focusing on the most ∗Department of Computer Science & Engineering, Kyung Hee University, South Korea. †Corresponding author.\nPublished as a conference paper at ICLR 2021\nT ar ge t Im ag e So u rc e Im ag e\nDog - 80% ? Cat - 20% ? Augmented Image Augmented Label Dog - 80% ? Cat - 20% ? LabelOriginal Image Dog Cat T a rg e t Im ag e So u rc e Im ag e Dog - 80% ? Cat - 20% ? Augmented Image Augmented Label Dog - 80% ? Cat - 20% ? LabelOriginal Image Dog Cat T ar g et I m ag e S o u rc e I m ag e Dog - 80% ? Cat - 20% ? Augmented Image Augmented Label Dog - 80% ? Cat - 20% ? LabelOriginal Image Dog Cat\nimportant features and thereby improving the generalization of the model. Although dropout and regional dropout improve the classification performance, this kind of feature removal is undesired since they discard a notable portion of informative pixels from the training images.\nRecently, Yun et al. (2019) proposed CutMix, that randomly replaces an image region with a patch from another training image and mixes their labels according to the ratio of mixed pixels. Unlike Cutout (Devries & Taylor, 2017), this method can enjoy the properties of regional dropout without having any blank image region. However, we argue that the random selection process may have some possibility to select a patch from the background region that is irrelevant to the target objects of the source image, by which an augmented image may not contain any information about the corresponding object as shown in Figure 1. The selected source patch (background) is highlighted with a black rectangle on the source image. Two possible augmented images are shown wherein both of the cases, there is no information about the source object (cat) in the augmented images despite their mixing location on the target image. However, their interpolated labels encourage the model to learn both objects’ features (dog and cat) from that training image. But we recognize that it is undesirable and misleads the CNN to learn unexpected feature representation. 
This is because CNNs are highly sensitive to textures (Geirhos et al., 2019), and since the interpolated label presents the selected background patch as the source object, it may encourage the classifier to learn the background as the representative feature for the source object class.\nWe address the aforementioned problem by carefully selecting the source image patch with the help of some prior information. Specifically, we first extract a saliency map of the source image that highlights the important objects, then select a patch surrounding the peak salient region of the source image to ensure that we select from the object region, and then mix it with the target image. Now the selected patch contains relevant information about the source object, which leads the model to learn more appropriate feature representations. This more effective data augmentation strategy is what we call “SaliencyMix”. We present extensive experiments on various standard CNN architectures, benchmark datasets, and multiple tasks to evaluate the proposed method. In summary, SaliencyMix obtains the new best known top-1 errors of 2.76% and 16.56% for WideResNet (Zagoruyko & Komodakis, 2016) on CIFAR-10 and CIFAR-100 (Krizhevsky, 2012), respectively. Also, on the ImageNet (Olga et al., 2015) classification problem, SaliencyMix achieves the best known top-1 and top-5 errors of 21.26% and 5.76% for ResNet-50, and 20.09% and 5.15% for ResNet-101 (He et al., 2016). In the object detection task, initializing Faster RCNN (Shaoqing et al., 2015) with a SaliencyMix-trained model and then fine-tuning the detector improves the detection performance on the Pascal VOC (Everingham et al., 2010) dataset by +1.77 mean average precision (mAP). Moreover, the SaliencyMix-trained model proves to be more robust against adversarial attacks and improves the top-1 accuracy by 1.96% on the adversarially perturbed ImageNet validation set. All of these results clearly indicate the effectiveness of the proposed SaliencyMix data augmentation strategy in enhancing the model performance and robustness." }, { "heading": "2 RELATED WORKS", "text": "" }, { "heading": "2.1 DATA AUGMENTATION", "text": "The success of deep learning models can be credited to the volume and diversity of data. But collecting labeled data is a cumbersome and time-consuming task. As a result, data augmentation has been introduced, which aims to increase the diversity of existing data by applying various transformations, e.g., rotation, flip, etc. Since this simple and inexpensive technique significantly improves the model performance and robustness, data augmentation has widely been used to train deep learning models. Lecun et al. (1998) applied data augmentation to train LeNet for handwritten character recognition. They performed several affine transformations such as translation, scaling, and shearing. For the same task, Bengio et al. (2011) applied more diverse transformations such as Gaussian noise, salt and pepper noise, Gaussian smoothing, motion blur, local elastic deformation, and various occlusions to the images. Krizhevsky et al. (2012) applied random image patch cropping, horizontal flipping, and random color intensity changes based on principal component analysis (PCA). In Deep Image (Wu et al., 2015), color casting, vignetting, and lens distortion are applied besides flipping and cropping to improve the robustness of a very deep network.\nBesides these manually designed data augmentations, Lemley et al. 
(2017) proposed an end-to-end learnable augmentation process, called Smart Augmentation. They used two different networks, where one is used to learn a suitable augmentation type and the other is used to train the actual task. Devries & Taylor (2017) proposed Cutout, which randomly removes square regions of the input training images to improve the robustness of the model. Zhang et al. (2017) proposed MixUp, which blends two training images to some degree, where the labels of the augmented image are assigned by the linear interpolation of those two images. But the augmented images look unnatural and locally ambiguous. Recently, Cubuk et al. (2019) proposed an effective data augmentation method called AutoAugment that defines a search space of various augmentation techniques and selects the most suitable one for each mini-batch. Kim et al. (2020) proposed PuzzleMix, which jointly optimizes two objectives, i.e., selecting an optimal mask and an optimal mixing plan. The mask tries to reveal the most salient data of the two images, and the optimal transport plan aims to maximize the saliency of the revealed portion of the data. Yun et al. (2019) proposed CutMix, which randomly cuts and mixes image patches among training samples and mixes their labels proportionally to the size of those patches. However, due to the randomness in the source patch selection process, it may select a region that does not contain any informative pixel about the source object, and the label mixing according to those uninformative patches misleads the classifier into learning unexpected feature representations.\nIn this work, the careful selection of the source patch ensures that it always contains some information about the source object, thereby solving the class probability assignment problem and helping to improve the model performance and robustness." }, { "heading": "2.2 LABEL SMOOTHING", "text": "In object classification, the class labels are usually represented by a one-hot code, i.e., the true label is expected to have a probability of exactly 1 while the others have exactly 0. In other words, it encourages the model to be overconfident, which causes overfitting to the training dataset. As a result, such models perform poorly on unseen test data. To alleviate this problem, label smoothing relaxes the model’s confidence on the true label by setting the class probability to a slightly lower value, e.g., lower than 1. As a result, it guides the model to be more adaptive instead of being over-confident, and ultimately improves the model robustness and performance (Szegedy et al., 2016). Our method also mixes the class labels and enjoys the benefit of label smoothing." }, { "heading": "2.3 SALIENCY DETECTION", "text": "Saliency detection aims to simulate the natural attention mechanism of the human visual system (HVS) and can be classified into two main categories. The first one is the bottom-up approach (Cheng et al., 2014; Zhu et al., 2014; Li et al., 2015; Zhou et al., 2015; Achanta et al., 2009; Li et al., 2013; Hou & Zhang, 2007; Qin et al., 2015; Peng et al., 2016; Lei et al., 2016; Montabone & Soto, 2010), which focuses on exploring low-level vision features. Some visual priors inspired by the HVS properties are utilized to describe a salient object. Cheng et al. (2014) utilized a contrast prior and proposed a regional contrast based salient object detection algorithm. Zhu et al. 
(2014) introduced a robust background measure in an optimization framework to integrate multiple low-level cues and obtain clean and uniform saliency maps. Li et al. (2015) optimized the image boundary selection by a boundary removal mechanism and then used random-walk ranking to formulate pixel-wise saliency maps. Zhou et al. (2015) proposed a saliency detection model where the saliency information is propagated using a manifold ranking diffusion process on a graph.\n(Figure 2: overview of SaliencyMix — saliency detection on the source image, selection of the peak salient region, selection of the source patch based on the peak salient region, and mixing of the source patch with the target image.)\nIn addition, some traditional techniques are also introduced to achieve image saliency detection, such as frequency domain analysis (Achanta et al., 2009), sparse representation (Li et al., 2013), log-spectrum (Hou & Zhang, 2007), cellular automata (Qin et al., 2015), low-rank recovery (Peng et al., 2016), and Bayesian theory (Lei et al., 2016). Hou & Zhang (2007) proposed a spectral residual method that focuses on the properties of the background. Achanta et al. (2009) proposed a frequency-tuned approach that preserves the boundary information by retaining a sufficient amount of high-frequency content. Montabone & Soto (2010) introduced a method originally designed for fast human detection in a scene by proposing novel features derived from a visual saliency mechanism. Later on, this feature extraction mechanism was generalized for other forms of saliency detection.\nThe second one is the top-down approach, which is task-driven and utilizes supervised learning with labels. Several deep learning based methods have been proposed for saliency detection (Deng et al., 2018; Liu et al., 2018; Zhang et al., 2018a;b; Qin et al., 2019). Deng et al. (2018) proposed a recurrent residual refinement network (R3Net) equipped with residual refinement blocks (RRBs) to more accurately detect salient regions. Contexts play an important role in the saliency detection task, and based on that, Liu et al. (2018) proposed a pixel-wise contextual attention network, called PiCANet, to learn to selectively attend to informative context locations for each pixel. Zhang et al. (2018a) introduced a multi-scale context-aware feature extraction module and proposed a bi-directional message passing model for salient object detection. Zhang et al. (2018b) focused on powerful feature extraction and proposed an attention guided network which selectively integrates multi-level contextual information in a progressive manner. Recently, Qin et al. (2019) proposed a predict-and-refine architecture for salient object detection called the boundary-aware saliency detection network (BASNet). The authors introduced a hybrid loss to train a densely supervised encoder-decoder network.\nHowever, despite the high performance of top-down approaches, they lack generalization across various applications since they are biased towards the training data and limited to specific objects. In this study, we require a saliency model that focuses on the important object/region in a given scene without knowing its label. As a result, we rely on bottom-up approaches, which are unsupervised, scale-invariant, and more robust to unseen data. 
It is worth noting that training-based saliency methods can also be applied, in which case the quality and quantity of the data used to train the saliency method may be correlated with the effectiveness of the data augmentation. Section 3.3 further explains the effect of different saliency detection algorithms on the proposed data augmentation method." }, { "heading": "3 PROPOSED METHOD", "text": "Similar to Yun et al. (2019), we cut a source patch and mix it into the target image, and also mix their labels proportionally to the size of the mixed patches. But in order to prevent the model from learning any irrelevant feature representation, the proposed method enforces the selection of a source patch that contains information about the source object. It first extracts a saliency map of the source image to highlight the objects of interest and then selects a patch surrounding the peak salient region to mix with the target image. Here we explain the process in detail." }, { "heading": "3.1 SELECTION OF THE SOURCE PATCH", "text": "The goal of saliency detection is to find the pixels or regions that are attractive to the HVS and to assign them higher intensity values (Cong et al., 2019). A saliency detection method produces the visual saliency map, a gray-scale image, that highlights the objects of interest and thereby mostly focuses on the foreground. Let I_s ∈ R^{W×H×C} be a randomly selected training (source) image with label y_s from which a patch will be cut. Then its saliency map detection can be represented as\nI_vs = f(I_s), (1)\nwhere I_vs ∈ R^{W×H} represents the visual saliency map of the given source image I_s, as shown in Figure 2, where the objects of interest have higher intensity values, and f(·) represents a saliency detection model. Then we search for the pixel I_vs^{i,j} in the saliency map that has the maximum intensity value. Here, i and j represent the x and y coordinates of the most salient pixel, which can be found as\n(i, j) = argmax(I_vs). (2)\nThen we select a patch, either by centering it on the (i, j)-th pixel when possible, or otherwise by keeping the (i, j)-th pixel within the selected patch. This ensures that the patch is selected from the object region, not from the background. The size of the patch is determined by a combination ratio λ, sampled from the uniform distribution U(0, 1), which decides the percentage of the image to be cropped." }, { "heading": "3.2 MIXING THE PATCHES AND LABELS", "text": "Let I_t ∈ R^{W×H×C} be another randomly selected training (target) image with label y_t, into which the source patch will be mixed. SaliencyMix partially mixes I_t and I_s to produce a new training sample I_a, the augmented image, with label y_a. The mixing of the two images can be defined as\nI_a = M ⊙ I_s + M' ⊙ I_t, (3)\nwhere I_a denotes the augmented image, M ∈ {0, 1}^{W×H} represents a binary mask, M' is the complement of M, and ⊙ represents element-wise multiplication. First, the source patch location is defined using the peak saliency information and the value of λ, and then the corresponding locations of the mask M are set to 1 and the others to 0. The element-wise multiplication of M with the source image results in an image that removes everything except the region selected to keep. In contrast, M' performs in the opposite way, i.e., the element-wise multiplication of M' with the target image keeps all regions except the selected patch. 
Finally, the addition of those two creates a new training sample that contains the target image with the selected source patch in it (see Figure 2). Besides mixing the images, we also mix their labels based on the size of the mixed patches as\ny_a = λ y_t + (1 − λ) y_s, (4)\nwhere y_a denotes the label for the augmented sample and λ is the combination ratio. Other ways of mixing are investigated in Section 3.4." }, { "heading": "3.3 IMPACT OF DIFFERENT SALIENCY DETECTION METHODS", "text": "Here we investigate the effect of incorporating various saliency detection methods into our SaliencyMix data augmentation technique. We use four well-recognized saliency detection algorithms (Montabone & Soto, 2010; Hou & Zhang, 2007; Achanta et al., 2009; Qin et al., 2019), and perform experiments using ResNet-18 as a baseline model on the CIFAR-10 dataset and ResNet-50 as a baseline model on the Tiny-ImageNet dataset, for 200 epochs and 100 epochs, respectively. Note that the statistical saliency models (Montabone & Soto, 2010; Hou & Zhang, 2007; Achanta et al., 2009) work on images of any size, but the learning-based model, i.e., Qin et al. (2019), is not scale-invariant; it resizes the input image to 224 × 224, and the resulting saliency map is scaled back to the original size. Figure 3(a-b) shows that Montabone & Soto (2010) performs better on both datasets and that the effects are consistent on the CIFAR-10 and Tiny-ImageNet datasets. As a result, Montabone & Soto (2010) is used to extract the saliency map in the proposed data augmentation technique." }, { "heading": "3.4 DIFFERENT WAYS OF SELECTING AND MIXING THE SOURCE PATCH", "text": "There are several ways to select the source patch and mix it with the target image. In this section, we explore those possible schemes and examine their effect on the proposed method. We use the ResNet-18 and ResNet-50 architectures with SaliencyMix data augmentation and perform experiments on the CIFAR-10 and Tiny-ImageNet datasets, respectively. We consider five possible schemes: (i) Salient to Corresponding, which selects the source patch from the most salient region and mixes it into the corresponding location of the target image; (ii) Salient to Salient, which selects the source patch from the most salient region and mixes it into the salient region of the target image; (iii) Salient to Non-Salient, which selects the source patch from the most salient region but mixes it into the non-salient region of the target image; (iv) Non-Salient to Salient, which selects the source patch from the non-salient region of the source image but mixes it into the salient region of the target image; and (v) Non-Salient to Non-Salient, which selects the source patch from the non-salient region of the source image and also mixes it into the non-salient region of the target image. To find the non-salient region, we use the least important pixel of an image.
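The overall procedure of Sections 3.1 and 3.2, under the default Salient to Corresponding scheme discussed in this section, can be sketched as follows. This is a minimal illustration rather than the authors' released implementation: it assumes the OpenCV contrib saliency module as a stand-in for the Montabone & Soto (2010) detector, and a CutMix-style convention in which the patch side lengths are scaled by sqrt(1 − λ) so that the patch covers roughly a (1 − λ) fraction of the image:

```python
import cv2  # requires opencv-contrib-python for the saliency module
import numpy as np

def saliency_mix(img_s, img_t, y_s, y_t, lam):
    """Salient-to-Corresponding mixing of a source/target image pair (sketch)."""
    # 1) Saliency map of the source image (Eq. 1) and its peak pixel (Eq. 2).
    detector = cv2.saliency.StaticSaliencyFineGrained_create()
    _, sal_map = detector.computeSaliency(img_s)
    i, j = np.unravel_index(np.argmax(sal_map), sal_map.shape)
    # 2) Patch size from the combination ratio lambda (assumed convention).
    H, W = img_s.shape[:2]
    cut_h, cut_w = int(H * np.sqrt(1 - lam)), int(W * np.sqrt(1 - lam))
    # 3) Center the patch on the peak salient pixel, clipped to image bounds.
    y1 = int(np.clip(i - cut_h // 2, 0, H - cut_h))
    x1 = int(np.clip(j - cut_w // 2, 0, W - cut_w))
    # 4) Mix the patch into the corresponding location of the target (Eq. 3).
    img_a = img_t.copy()
    img_a[y1:y1 + cut_h, x1:x1 + cut_w] = img_s[y1:y1 + cut_h, x1:x1 + cut_w]
    # 5) Mix the labels by the exact kept-area ratio after clipping (Eq. 4).
    lam_adj = 1 - (cut_h * cut_w) / (H * W)
    y_a = lam_adj * y_t + (1 - lam_adj) * y_s
    return img_a, y_a
```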
It justifies our SaliencyMix i.e., the source patch should be selected in such a way so that it must contain information about the source object. On the other hand, Salient to Salient covers the most significant part of the target image that restricts the model from learning its most important feature and Salient to Non-Salient may not occlude the target object which is necessary to improve the regularization. But Salient to Corresponding keeps balance by changeably occluding the most important part and other based on the orientation of the source and target object. Consequently, it produces more variety of augmented data and thereby achieves the lowest classification error. Also, it introduces less computational burden since only the source image saliency detection is required. Therefore, the proposed method uses Salient to Corresponding as the default selection and mixing scheme." }, { "heading": "4 EXPERIMENTS", "text": "We verify the effectiveness of the proposed SaliencyMix data augmentation strategy on multiple tasks. We evaluate our method on image classification by applying it on several benchmark image recognition datasets using popular SOTA architectures. We also use the SaliencyMix trained model and fine-tune it for object detection task to verify its usefulness in enhancing the detection performance. Furthermore, we validate the robustness of the proposed method against adversarial attacks. All experiments were performed on PyTorch platform with four NVIDIA GeForce RTX 2080 Ti GPUs." }, { "heading": "4.1 IMAGE CLASSIFICATION", "text": "" }, { "heading": "4.1.1 CIFAR-10 AND CIFAR-100", "text": "There are 60, 000 color images of size 32×32 pixels in both the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2012) where CIFAR-10 has 10 distinct classes and CIFAR-100 has 100 classes. The number of training and test images in each dataset is 50, 000 and 10, 000, respectively. We apply several standard architectures: a deep residual network (He et al., 2016) with a depth of 18 (ResNet18) and 50 (ResNet-50), and a wide residual network (Zagoruyko & Komodakis, 2016) with a depth of 28, a widening factor of 10, and dropout with a drop probability of p = 0.3 in the convolutional layers (WideResNet-28-10). We train the networks for 200 epochs with a batch size of 256 using stochastic gradient descent (SGD), Nesterov momentum of 0.9, and weight decay of 5e − 4. The initial learning rate was 0.1 and decreased by a factor of 0.2 after each of the 60, 120, and 160 epochs. The images are normalized using per-channel mean and standard deviation. We perform experiments with and without a traditional data augmentation scheme where the traditional data augmentation includes zero-padding, random cropping, and horizontal flipping.\nTable 1 presents the experimental results on CIFAR datasets where the results are reported on five runs average. It can be seen that for each of the architectures, the proposed SaliencyMix data augmentation strategy outperforms all other methods except PuzzleMix (Kim et al., 2020). It is worth noting that PuzzleMix (Kim et al., 2020) and AutoAugment (Cubuk et al., 2019) require additional optimization process to find out the best augmentation criterion, thereby introduces computational burden. On the other hand, rest of the methods do not require such process. The proposed method achieves the best known top-1 error of 2.76% and 16.56% for WideResNet-28-10 on CIFAR-10 and CIFAR-100 datasets, respectively. 
Moreover, SaliencyMix shows a significant performance improvement over CutMix (Yun et al., 2019) when applied without any traditional augmentation technique. It reduces the error rate by 1.85%, 2.35%, and 1.14% on the CIFAR-10 dataset when applied with the ResNet-18, ResNet-50, and WideResNet-28-10 architectures, respectively. Using the same architectures, it reduces the error rate by 5.69%, 6.76%, and 3.76% on the CIFAR-100 dataset, respectively." }, { "heading": "4.1.2 IMAGENET", "text": "ImageNet (Olga et al., 2015) contains 1.2 million training images and 50,000 validation images of 1000 classes. To perform experiments on the ImageNet dataset, we apply the same settings as used in Yun et al. (2019) for a fair comparison. We trained SaliencyMix for 300 epochs with an initial learning rate of 0.1, decayed by a factor of 0.1 at epochs 75, 150, and 225, and a batch size of 256. Also, traditional data augmentations such as resizing, cropping, flipping, and jitters were applied during the training process. Table 2 presents the ImageNet experimental results, where the best performance of each method is reported. SaliencyMix outperforms all other methods in the comparison and shows competitive results with PuzzleMix (Kim et al., 2020). It drops the top-1 error for ResNet-50 by 1.66%, 1.31%, and 0.14% over Cutout, Mixup, and CutMix data augmentation, respectively. For the ResNet-101 architecture, SaliencyMix achieves the new best result of 20.09% top-1 error and 5.15% top-5 error." }, { "heading": "4.2 OBJECT DETECTION USING PRE-TRAINED SALIENCYMIX", "text": "In this section, we use the SaliencyMix-trained model to initialize Faster RCNN (Shaoqing et al., 2015), which uses ResNet-50 as a backbone network, and examine its effect on the object detection task. The model is fine-tuned on the Pascal VOC 2007 and 2012 datasets and evaluated on the VOC 2007 test data using the mAP metric. We follow the fine-tuning strategy of the original method (Shaoqing et al., 2015). The batch size, learning rate, and training iterations are set to 8, 4e-3, and 41K, respectively, and the learning rate is decayed by a factor of 0.1 at 33K iterations. The results are shown in Table 3. Pre-training with CutMix and SaliencyMix significantly improves the performance of Faster RCNN. This is because in object detection, foreground information (positive data) is much more important than the background (Lin et al., 2017). Since SaliencyMix helps the augmented image to have more foreground or object parts than background, it leads to better detection performance. It can be seen that the SaliencyMix-trained model outperforms the other methods and achieves a performance gain of +1.77 mAP." }, { "heading": "4.3 CLASS ACTIVATION MAP (CAM) ANALYSIS", "text": "The Class Activation Map (CAM) (Zhou et al., 2016) identifies the regions of an input image on which the model focuses to recognize an object. To investigate this, we extract CAMs from models trained with various data augmentation techniques. Here we use a vanilla ResNet-50 model equipped with various data augmentation techniques and trained on ImageNet (Olga et al., 2015). Then we extract CAMs for un-augmented images as well as for augmented images. Figure 4 presents the experimental results. Figure 4a shows that the proposed data augmentation technique guides the model to focus precisely on the target object compared to the others. Also, Figure 4b shows a similar effect when we search for a specific object in a scene with multiple objects. 
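For reference, the CAM of Zhou et al. (2016) can be computed as a weighted sum of the last convolutional feature maps, using the classifier weights of the class of interest. Below is a minimal sketch assuming a torchvision ResNet-50; it is our illustration, not the authors' evaluation code:

```python
import torch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
feats = {}
model.layer4.register_forward_hook(lambda m, inp, out: feats.update(out=out))

x = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed input image
with torch.no_grad():
    logits = model(x)
cls = logits.argmax(dim=1).item()        # predicted (or queried) class index
w = model.fc.weight[cls]                 # (2048,) classifier weights for that class
cam = torch.einsum("c,chw->hw", w, feats["out"][0])  # weighted sum of feature maps
cam = torch.relu(cam)
cam = cam / (cam.max() + 1e-8)           # normalize; upsample to image size to overlay
```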
It can be seen that Mixup (Zhang et al., 2017) is severely confused when trying to recognize an object, because the pixels are blended and it is not possible to extract class-specific features. Cutout (Devries & Taylor, 2017) also suffers due to the uninformative image regions. On the other hand, both CutMix (Yun et al., 2019) and SaliencyMix effectively focus on the corresponding features and precisely localize the two objects in the scene.\nTable 4: Performance comparison on adversarial robustness. Top-1 accuracy (%) of various data augmentation techniques on the adversarially perturbed ImageNet validation set.\nMETHOD   | BASELINE | CUTOUT | MIXUP | CUTMIX | SALIENCYMIX\nACC. (%) | 8.2      | 11.5   | 24.4  | 31.0   | 32.96\nTable 5: Training time comparison of various data augmentation techniques using the ResNet-18 architecture on the CIFAR-10 dataset.\nMETHOD      | BASELINE | CUTOUT | MIXUP | CUTMIX | SALIENCYMIX\nTIME (HOUR) | 0.83     | 0.84   | 0.87  | 0.89   | 0.91" }, { "heading": "4.4 ROBUSTNESS AGAINST ADVERSARIAL ATTACK", "text": "Deep learning based models are vulnerable to adversarial examples, i.e., they can be fooled by slightly modified examples even when the added perturbations are small and unrecognizable (Szegedy et al., 2014; Goodfellow et al., 2015; Madry et al., 2017). Data augmentation helps to increase the robustness against adversarial perturbations since it introduces many unseen image samples during training (Madry et al., 2017). Here we verify the adversarial robustness of models trained using various data augmentation techniques and compare their effectiveness. The Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) is used to generate the adversarial examples, and ImageNet pre-trained models for each data augmentation technique, with the ResNet-50 architecture, are used in this experiment. Table 4 reports the top-1 accuracy of various augmentation techniques on the adversarially attacked ImageNet validation set. Due to learning appropriate feature representations and focusing on the overall object rather than a small part, SaliencyMix significantly improves the robustness against the adversarial attack and achieves a 1.96% performance improvement over the closest comparable method, CutMix (Yun et al., 2019)." }, { "heading": "4.5 COMPUTATIONAL COMPLEXITY", "text": "We investigate the computational complexity of the proposed method and compare it with other data augmentation techniques in terms of training time. All the models are trained on the CIFAR-10 dataset using the ResNet-18 architecture for 200 epochs. Table 5 presents the training time comparison. It can be seen that SaliencyMix requires a slightly longer training time compared to the others, due to the saliency map generation. But considering the performance improvement, this overhead is negligible." }, { "heading": "5 CONCLUSION", "text": "We have introduced an effective data augmentation strategy, called SaliencyMix, that is carefully designed for training CNNs to improve their classification performance and generalization ability. The proposed SaliencyMix guides the models to focus on the overall object regions rather than a small region of the input images, and also prevents the model from learning inappropriate feature representations by carefully selecting a representative source patch. It introduces a small computational burden due to saliency detection, while significantly boosting the model performance and strengthening the model robustness on various computer vision tasks. 
Applying SaliencyMix with WideResNet achieves the new best known top-1 errors of 2.76% and 16.56% on CIFAR-10 and CIFAR-100, respectively. On ImageNet classification, applying SaliencyMix with ResNet-50 and ResNet-101 obtains the new best known top-1 errors of 21.26% and 20.09%, respectively. On object detection, using the SaliencyMix-trained model to initialize Faster RCNN (with ResNet-50 as the backbone network) and fine-tuning it leads to a performance improvement of +1.77 mAP. Furthermore, the SaliencyMix-trained model is found to be more robust against adversarial attacks and achieves a 1.96% accuracy improvement on the adversarially perturbed ImageNet validation set compared to the closest comparable augmentation method. Considering more detailed and/or high-level semantic information for data augmentation will be our future work." }, { "heading": "ACKNOWLEDGMENTS", "text": "This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (2018R1C1B3008159) and the Basic Science Research Program through the National Research Foundation of Korea (NRF) under Grant [NRF-020R1F1A1050014]." } ]
2021
SALIENCYMIX: A SALIENCY GUIDED DATA AUGMENTATION STRATEGY FOR BETTER REGULARIZATION
SP:62a3d12370f3248b2283ea33d6767b1c914bcbe2
[ "This paper uses the fact that the Wasserstein distance is decreasing under 1-Lipschitz mappings from the ambient space to a feature space in order to propose more robust (to dimensionality ?) estimation of the Wasserstein distance between two probability distributions. A neural network with some sort of weight renormalization is used to produce 1-Lipschitz embeddings. The authors then maximise over the parametrized maps." ]
Optimal transport distances (OT) have been widely used in recent work in Machine Learning as ways to compare probability distributions. These are costly to compute when the data lives in high dimension. Recent work aims specifically at reducing this cost by computing OT using low-rank projections of the data (seen as discrete measures) (Paty & Cuturi, 2019). We extend this approach and show that one can approximate OT distances by using more general families of maps provided they are 1-Lipschitz. The best estimate is obtained by maximising OT over the given family. As OT calculations are done after mapping data to a lower dimensional space, our method scales well with the original data dimension. We demonstrate the idea with neural networks. We use Sinkhorn Divergences (SD) to approximate OT distances as they are differentiable and allow for gradient-based optimisation. We illustrate on synthetic data how our technique preserves accuracy and displays a low sensitivity of computational costs to the data dimension.
[]
[ { "authors": [ "Federico Bassetti", "Antonella Bodini", "Eugenio Regazzini" ], "title": "On minimum kantorovich distance estimators", "venue": "Statistics & probability letters,", "year": 2006 }, { "authors": [ "Sebastian Claici", "Edward Chien", "Justin Solomon" ], "title": "Stochastic wasserstein barycenters", "venue": "arXiv preprint arXiv:1802.05757,", "year": 2018 }, { "authors": [ "Nicolas Courty", "Rémi Flamary", "Amaury Habrard", "Alain Rakotomamonjy" ], "title": "Joint distribution optimal transportation for domain adaptation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Marco Cuturi" ], "title": "Sinkhorn distances: Lightspeed computation of optimal transport", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Marco Cuturi", "Arnaud Doucet" ], "title": "Fast computation of wasserstein barycenters", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Richard Mansfield Dudley" ], "title": "The speed of mean glivenko-cantelli convergence", "venue": "The Annals of Mathematical Statistics,", "year": 1969 }, { "authors": [ "Jean Feydy", "Thibault Séjourné", "François-Xavier Vialard", "Shun-Ichi Amari", "Alain Trouvé", "Gabriel Peyré" ], "title": "Interpolating between optimal transport and mmd using sinkhorn divergences", "venue": "arXiv preprint arXiv:1810.08278,", "year": 2018 }, { "authors": [ "Aden Forrow", "Jan-Christian Hütter", "Mor Nitzan", "Philippe Rigollet", "Geoffrey Schiebinger", "Jonathan Weed" ], "title": "Statistical optimal transport via factored couplings", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Aude Genevay", "Gabriel Peyré", "Marco Cuturi" ], "title": "Sinkhorn-autodiff: Tractable wasserstein learning of generative models", "venue": "arXiv preprint arXiv:1706.00292,", "year": 2017 }, { "authors": [ "Aude Genevay", "Lénaic Chizat", "Francis Bach", "Marco Cuturi", "Gabriel Peyré" ], "title": "Sample complexity of sinkhorn divergences", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Leonid Vital?evich Kantorovich" ], "title": "Mathematical methods of organizing and planning production", "venue": "Management Science,", "year": 1960 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Soheil Kolouri", "Phillip E Pope", "Charles E Martin", "Gustavo K Rohde" ], "title": "Sliced-wasserstein autoencoder: An embarrassingly simple generative model", "venue": null, "year": 1804 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "Grégoire Montavon", "Klaus-Robert Müller", "Marco Cuturi" ], "title": "Wasserstein training of Boltzmann machines", "venue": "arXiv preprint arXiv:1507.01972,", "year": 2015 }, { "authors": [ "Boris Muzellec", "Marco Cuturi" ], "title": "Subspace detours: Building transport plans that are optimal on subspace projections", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nati Srebro" ], "title": "Exploring generalization in deep 
learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "François-Pierre Paty", "Marco Cuturi" ], "title": "Subspace robust wasserstein distances", "venue": "arXiv preprint arXiv:1901.08949,", "year": 2019 }, { "authors": [ "Gabriel Peyré", "Marco Cuturi" ], "title": "Computational optimal transport", "venue": "Technical report,", "year": 2017 }, { "authors": [ "Filippo Santambrogio" ], "title": "Optimal transport for applied mathematicians", "venue": "Birkäuser, NY,", "year": 2015 }, { "authors": [ "Bernhard Schmitzer" ], "title": "Stabilized sparse scaling algorithms for entropy regularized transport problems", "venue": "SIAM Journal on Scientific Computing,", "year": 2019 }, { "authors": [ "Leslie N Smith" ], "title": "Cyclical learning rates for training neural networks", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2017 }, { "authors": [ "Cédric Villani" ], "title": "Optimal transport: old and new, volume 338", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Jonathan Weed", "Francis Bach" ], "title": "Sharp asymptotic and finite-sample rates of convergence of empirical measures in wasserstein", "venue": "distance. Bernoulli,", "year": 2019 }, { "authors": [ "Yuichi Yoshida", "Takeru Miyato" ], "title": "Spectral norm regularization for improving the generalizability of deep learning", "venue": "arXiv preprint arXiv:1705.10941,", "year": 2017 } ]
[ { "heading": "1 Introduction", "text": "Optimal Transport metrics (Kantorovich, 1960) or Wasserstein distances, have emerged successfully in the field of machine learning, as outlined in the review by Peyré et al. (2017). They provide machinery to lift distances on X to distances over probability distributions in P(X ). They have found multiple applications in machine learning: domain adaptation Courty et al. (2017), density estimation (Bassetti et al., 2006) and generative networks (Genevay et al., 2017; Patrini et al., 2018). However, it is prohibitively expensive to compute OT between distributions with support in a high-dimensional space and might not even be practically possible as the sample complexity can grow exponentially as shown by Dudley (1969). Similarly, work by Weed et al. (2019) showed a theoretical improvement when the support of distributions is found in a low-dimensional space. Furthermore, picking the ground metric that one should use is not obvious when using high-dimensional data. One of the earlier ideas from Santambrogio (2015) showed that OT projections in a 1-D space may be sufficient enough to extract geometric information from high dimensional data. This further prompted Kolouri et al. (2018) to use this method to build generative models, namely the Sliced Wasserstein Autoencoder. Following a similar approach Paty & Cuturi (2019) and Muzellec & Cuturi (2019) project the measures into a linear subspace E of low-dimension k that maximizes the transport cost and show how this can be used in applications of color transfer and domain adaptation. This can be seen as an extension to earlier work by Cuturi & Doucet (2014) whereby the cost function is parameterized.\nOne of the fundamental innovations that made OT appealing to the machine learning com-\nmunity was the seminal paper by Cuturi (2013) that introduced the idea of entropic regularization of OT distances and the Sinkhorn algorithm. Since then, regularized OT has been successfully used as a loss function to construct generative models such as GANs (Genevay et al., 2017) or RBMs (Montavon et al., 2015) and computing Barycenters (Cuturi & Doucet, 2014; Claici et al., 2018). More recently, the new class of Sinkhorn Divergences was shown by Feydy et al. (2018) to have good geometric properties, and interpolate between Maximum Mean Discrepancies (MMD) and OT.\nBuilding on this previous work, we introduce a general framework for approximating highdimensional OT using low-dimensional projections f by finding the subspace with the worst OT cost, i.e. the one maximizing the ground cost on the low-dimensional space. By taking a general family of parameterizable fφs that are 1-Lipschitz, we show that our method generates a pseudo-metric and is computationally efficient and robust. We start the paper in §2 with background on optimal transport and pseudo-metrics. In §3 we define the theoretical framework for approximating OT distances and show how both linear (Paty & Cuturi, 2019) and non-linear projections can be seen as a special instance of our framework. In §4 we present an efficient algorithm for computing OT distances using Sinkhorn Divergences and fφs that are 1-Lipschitz under the L2 norm. We conclude in §5 with experiments illustrating the efficiency and robustness of our method." }, { "heading": "2 Preliminaries", "text": "We start with a brief reminder of the basic notions needed for the rest of the paper. Let X be a set equipped with a map dX : X × X → R≥0 with non-negative real values. 
The pair (X , dX ) is said to be a metric space, and dX is said to be a metric on X , if it satisfies the usual properties:
• dX (x, y) = 0 if and only if x = y
• dX (x, y) = dX (y, x)
• dX (x, z) ≤ dX (x, y) + dX (y, z)
If dX verifies the above except for the "only if" condition, it is called a pseudo-metric, and (X , dX ) is said to be a pseudo-metric space. For a pseudo-metric, it may be that dX (x, y) = 0 while x ≠ y. We write dX ≤ d′X if dX (x, y) ≤ d′X (x, y) for all x, y. It is easy to see that: 1) "≤" is a partial order on pseudo-metrics over X ; 2) "≤" induces a complete lattice structure on the set of pseudo-metrics over X , where 3) suprema are computed pointwise (but not infima).
Consider X , Y, two metric spaces equipped with respective metrics dX , dY . A map f from X to Y is said to be α-Lipschitz continuous if dY(f(x), f(x′)) ≤ α dX (x, x′). A 1-Lipschitz map is also called non-expansive. Given a map f from X to Y, one defines the pullback of dY along f as:
f̂(dY)(x, x′) = dY(f(x), f(x′)) (1)
It is easily seen that: 1) f̂(dY) is a pseudo-metric on X ; 2) f̂(dY) is a metric iff f is injective; 3) f̂(dY) ≤ dX iff f is non-expansive; 4) f̂(dY) is the least pseudo-metric on X such that f is non-expansive from (X , f̂(dY)) to (Y, dY). Thereafter, we assume that all metric spaces considered are complete and separable, i.e. have a dense countable subset.
Let (X , dX ) be a (complete separable) metric space. Let ΣX be the σ-algebra generated by the open sets of X (aka the Borelian subsets). We write P(X ) for the set of probability distributions on (X , ΣX ). Given a measurable map f : X → Y and µ ∈ P(X ), one defines the push-forward of µ along f as:
f#(µ)(B) = µ(f−1(B)) (2)
for B ∈ ΣY . It is easily seen that f#(µ) is a probability measure on (Y, ΣY ). Given µ in P(X ) and ν in P(Y), a coupling of µ and ν is a probability measure γ over X × Y such that for all A in ΣX and B in ΣY , γ(A × Y) = µ(A) and γ(X × B) = ν(B). Equivalently, µ = π0#(γ) and ν = π1#(γ), for π0, π1 the respective projections. We write Γ(µ, ν) for the set of couplings of µ and ν. There are several ways to lift a given metric structure dX to one on P(X ). We will be specifically interested in metrics on P(X ) derived from optimal transport problems. The p-Wasserstein metric with p ∈ [1,∞) is defined by:
Wp(dX )(µ, ν)^p = inf_{γ∈Γ(µ,ν)} ∫_{X×X} d^p_X dγ (3)
Villani (2008) establishes that if dX is a (pseudo-)metric, so is Wp(dX ). The natural 'Dirac' embedding of X into P(X ) is isometric (there is only one coupling). The idea behind the definition is that d^p_X is used as a measure of the cost of transporting units of mass in X , while a coupling γ specifies how to transport the µ distribution to the ν one. One can therefore compute the mean transportation cost under γ and pick the optimal γ; hence the name optimal transport. In most of the paper, we are concerned with the case X = R^d for some large d, with the metric structure dX given by the Euclidean norm, and we wish to compute the W2 metric between distributions with finite support. Since OT metrics are costly to compute in high dimension, to estimate these efficiently, and to mitigate the impact of dimension, we will use a well-chosen family of maps f to push the data into a low-dimensional co-domain Y, also equipped with the Euclidean metric. The reduction maps may be linear or not. They have to be non-expansive to guarantee that the associated pullback metrics are always below the Euclidean one, and therefore we provide a lower estimate of W2(d2)."
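As a concrete, small-scale illustration of these definitions (our own sketch, not code from the paper): for uniform empirical measures of equal size, W2 reduces to an optimal assignment problem, and pushing both point clouds through a non-expansive map can only shrink the distance. All names and the choice of projection below are illustrative.

```python
# Minimal numerical check of the lower-bound property for pushforwards.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def w2_empirical(x, y):
    """Exact W_2 between uniform empirical measures supported on x and y
    (both of shape (n, d)), via the Hungarian algorithm: the optimal
    coupling between equal-size uniform measures is a permutation."""
    cost = cdist(x, y, metric="sqeuclidean")   # pairwise d(x_i, y_j)^2
    rows, cols = linear_sum_assignment(cost)   # optimal assignment
    return np.sqrt(cost[rows, cols].mean())

rng = np.random.default_rng(0)
n, d, k = 200, 10, 3
x = rng.normal(size=(n, d))
y = rng.normal(loc=0.5, size=(n, d))

# Projection onto the first k coordinates is non-expansive for the
# Euclidean metric, so the projected W_2 lower-bounds the original one.
proj = lambda z: z[:, :k]
print(w2_empirical(proj(x), proj(y)) <= w2_empirical(x, y))  # True
```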
}, { "heading": "3 Approximate OT with General Projections - GPW", "text": "With the ingredients from the above section in place, we can now construct a general framework for approximating Wasserstein-like metrics by low-dimensional mappings of X . We write simply W instead of Wp as the value of p plays no role in the development. Pick two metric spaces (X , dX ), (Y, dY), and a family S = (fφ : X → Y; φ ∈ S) of mappings from X to Y. Define a map from P(X )×P(X ) to non-negative reals as follows:\ndS(µ, ν) = sup S W (dY)(fφ#(µ), fφ#(ν)) (4)\nEquivalently and more concisely dS can be defined as: dS(µ, ν) = sup\nφ W (f̂φ(dY))(µ, ν) (5)\nIt is easily seen that:\n1. the two definitions are equivalent 2. dS is a pseudo-metric on P(X ) 3. dS is a metric (not just a pseudo one) if the family fφ jointly separates points in X , and 4. if the fφs are non-expansive from (X , dX ) to (Y, dY), then dS ≤W (dX )\nThe second point follows readily from the second definition. Each f̂φ(dY) is a pseudo-metric on X obtained by pulling back dY (see preceding section), hence, so is W (f̂φ(dY)) on P(X ), and therefore dS being the supremum of this family (in the lattice of pseudo-metrics over X ) is itself a pseudo-metric. The first definition is important because it allows one to perform the OT computation in the target space where it will be cheaper. Thus we have derived from S a pseudo-metric dS on the space of probability measures P(X ). We assume from now on that mappings in S are non-expansive. By point 4. above, we know that dS is bounded above by W (dX ). We call dS the generalized projected Wasserstein metric (GPW) associated to S. In good cases, it is both cheaper to compute and a good estimate." }, { "heading": "3.1 SRW as an instance of GPW", "text": "In Paty & Cuturi (2019), the authors propose to estimate W2 metrics by projecting the ambient Euclidean X into k-dimensional linear Euclidean subspaces. Specifically, their derived metric on P(X), written Sk, can be defined as (Paty & Cuturi, 2019, Th. 1, Eq. 4):\nS2k(µ, ν) = sup Ω W 22 (dY)(Ω 1/2 # (µ),Ω 1/2 # (ν)) (6)\nwhere: 1) dY is the Euclidean metric on Y, 2) Ω contains all positive semi-definite matrices of trace k (and therefore admitting a well-defined square root) with associated semi-metric smaller than dX . We recognise a particular case of our framework where the family of mappings is given by the linear mappings √ Ω : Rd = X → Y = Rk under the constraints above. In particular, all mappings used are linear. The authors can complement the general properties of the approach with a specific explicit bound on the error and show that S2k ≤ W 22 (dX ) ≤ (d/k)S2k. In the general case, there is no upper bound available, and one has only the lower one." }, { "heading": "3.2 Non-linear embeddings for approximating Wasserstein distances", "text": "Using the same Euclidean metric spaces, X = Rd, Y = Rk, we observe that our framework does not restrict us to use linear functions as mappings. One could use a family of mapping given by a neural network (fφ : X → Y;φ ∈ S) where φ ranges over network weights. However, not any φ is correct. Indeed, by point 4) in the list of properties of dS , we need fφs to be non-expansive. Ideally, we could pick S to be the set of all weights such that fφ is non-expansive. There are two problems one needs to solve in order to reduce the idea to actual tractable computations. First, one needs an efficient gradient-based search to look for the weights φ which maximise supSW (dY)(fφ#(µ), fφ#(ν)) (see 4). 
Second, as a gradient update may take the current fφ out of the set of non-expansive maps, one needs to project back efficiently into that set. Both problems already have solutions which we are going to re-use. For the first, we will use the Sinkhorn Divergence (SD) (Genevay et al., 2017). Recent work by Feydy et al. (2018) shows that SD, which one can think of as a regularised version of W, is a sound choice as a loss function in machine learning. It can approximate W closely and without bias (Genevay et al., 2017), has better sample complexity (Genevay et al., 2019), as well as quadratic computation time. Most importantly, it is fully differentiable. For the second problem, one can 'Lipschify' the linear layers of the network by dividing them by their (operator) norm after each update. We will use linear layers with Euclidean metrics, and this will require estimating the spectral radius of each layer. The same could be done with linear layers using a mixture of L1, L2 and L∞ metrics. In fact, computing the L1 → L1 operator norm of a linear layer is an exact operation, as opposed to the spectral norm in the L2 → L2 case, which we approximate using the power method. Note that the power method only approximates the L2 norm, and the gradient ascent methods used in the maximization phase are stochastic, which adds further sources of approximation error. However, the method is extremely efficient, since it requires computing optimal transport distances only in the low-dimensional space. We can see this as a trade-off between exactness and efficiency." }, { "heading": "4 Computational details", "text": "In this section we propose Algorithm 1 for stochastically estimating dS between two measures with finite support, where the class of mappings S is as defined above. Note that this algorithm can further be used while training a discriminator as part of a generative network with an optimal transport objective, similar to Genevay et al. (2017). The Sinkhorn Divergence alternative for dS now uses Sinkhorn divergences as a proxy for OT (compare with Eq. (4)):
SD_{φ,ε}(µ, ν) = W_ε(dY)(fφ#(µ), fφ#(ν)) − (1/2) W_ε(dY)(fφ#(µ), fφ#(µ)) − (1/2) W_ε(dY)(fφ#(ν), fφ#(ν)) (7)
where W_ε is the well-known Sinkhorn-regularized OT problem (Cuturi, 2013). The non-parameterized version of the divergence has been shown by Feydy et al. (2018) to be an unbiased estimator of W(µ, ν) that converges to the true OT distance when ε = 0. Their paper also constructs an effective numerical scheme for computing the gradients of the Sinkhorn divergence on GPU, without having to back-propagate through the Sinkhorn iterations, by using auto-differentiation and the detach methods available in PyTorch (Paszke et al., 2019). Moreover, work by Schmitzer (2019) devised an ε-scaling scheme to trade off guaranteed convergence against speed, which gives us further control over how fast the algorithm is. It is important to note that the OT computation happens in the low-dimensional space, unlike the approach in Paty & Cuturi (2019), which makes our algorithm scale better with dimension, as seen in §5.
Feydy et al. (2018) established that the gradient of Eq. (7) w.r.t. the input measures µ, ν is given by the dual optimal potentials. Since we push the measures through a differentiable function fφ, we can perform the maximization step via a stochastic gradient ascent method such as SGD or ADAM (Kingma & Ba, 2014). Finally, after each iteration, we project back into the space of 1-Lipschitz functions fφ.
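Algorithm 1 below gives the full procedure; the following is a minimal PyTorch sketch of one maximization-and-projection step (a sketch assuming the geomloss library of Feydy et al. for the debiased Sinkhorn divergence; the layer sizes, learning rate, and projection helper are our illustrative choices, not the paper's exact configuration).

```python
# Sketch of one GPW step: maximize the Sinkhorn divergence of the
# pushforwards, then project the network back onto 1-Lipschitz maps.
import torch
from geomloss import SamplesLoss  # assumed dependency (Feydy et al.)

d, k = 10, 5
f_phi = torch.nn.Sequential(
    torch.nn.Linear(d, 20), torch.nn.ReLU(), torch.nn.Linear(20, k)
)
sinkhorn = SamplesLoss("sinkhorn", p=2, blur=0.05, debias=True)
opt = torch.optim.Adam(f_phi.parameters(), lr=0.1)

def project_1_lipschitz(net):
    # ReLU is 1-Lipschitz, so bounding each linear layer's spectral norm
    # by 1 makes the whole network non-expansive for L2. We divide only
    # when the norm exceeds one (a projection onto the constraint set);
    # always dividing, as in the paper, is equally valid.
    with torch.no_grad():
        for m in net.modules():
            if isinstance(m, torch.nn.Linear):
                s = torch.linalg.matrix_norm(m.weight, ord=2)
                if s > 1.0:
                    m.weight /= s

def gpw_step(x, y):
    # x, y: (n, d) samples of mu and nu.
    opt.zero_grad()
    loss = -sinkhorn(f_phi(x), f_phi(y))   # ascend the divergence
    loss.backward()
    opt.step()
    project_1_lipschitz(f_phi)
    return -loss.item()                    # current lower estimate of dS
```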
For maps between L2 spaces (domain and codomain both Euclidean), the Lipschitz constant of a fully connected layer is given by the spectral norm of its weights, which can be approximated in a few iterations of the power method. Since non-linear activation functions such as ReLU are 1-Lipschitz, in order to project back into the constraint set we suggest normalizing each layer's weights by their spectral norm, i.e. for layer i we set φ_i := φ_i/||φ_i||, where ||·|| denotes the spectral norm. Previous work by Neyshabur et al. (2017), as well as Yoshida & Miyato (2017) and Miyato et al. (2018), showed that with smaller-magnitude weights the model generalizes better and improves the quality of generated samples when used as a discriminator in a GAN. We note that if we let fφ be a 1-layer fully connected network with no activation, the optimization we perform is very similar to that of Paty & Cuturi (2019). The space of 1-Lipschitz functions we are optimizing over is larger and our method is stochastic, but we recover very similar results at convergence. Moreover, our method applies to situations where the data lives on a non-linear manifold that an fφ such as a neural network is able to model. Comparing the numerical properties of the Subspace Robust Wasserstein distances of Eq. (6) with our Generalized Projected Wasserstein distances is the focus of the next section.
Algorithm 1 Ground metric parameterization through φ
Input: Measures µ = Σ_i a_i δ_{x_i} and ν = Σ_j b_j δ_{y_j}; fφ : R^d → R^k, a 1-Lipschitz 2-layer network with dimensions (d, 20, k); optimizer ADAM; power method iterations λ; SD_{φ,ε}, unbiased Sinkhorn Divergence.
Output: fφ, SD_{φ,ε}
Initialize: lr, ε, λ, fφ ∼ N(0, 10), Objective ← SD_ε(blur = 2, p = 2, debias = True)
for t = 1, . . . , maxiter do
  L ← −SD_{φ,ε}(fφ#µ, fφ#ν) (pushforward through fφ and evaluate SD in the lower-dimensional space)
  grad_φ ← Autodiff(L) (maximization step with autodiff)
  φ ← φ + ADAM(grad_φ) (gradient step with scheduler)
  φ ← Proj^λ_{1-Lip}(φ) (projection into the space of 1-Lipschitz functions)
end for" }, { "heading": "5 Experiments", "text": "We consider experiments similar to those presented in Forrow et al. (2019) and Paty & Cuturi (2019), and show the mean estimate of SD^2_{φ,k}(µ, ν) for different values of k, as well as robustness to noise. We also show how close the distance generated by the linear projector of Paty & Cuturi (2019) is to ours, and highlight the trade-off in computation time with an increasing number of dimensions. To illustrate our method, we construct two empirical distributions µ̂, ν̂ by taking samples from two independent measures µ = N(0, Σ1) and ν = N(0, Σ2) living in a 10-dimensional space. Similarly to Paty & Cuturi (2019), we construct the covariance matrices Σ1, Σ2 such that they are of rank 5, i.e. the support of the distributions is a 5-dimensional linear subspace. Throughout our experiments we fix fφ to be a 2-layer neural network with a hidden layer of 16 units, ReLU activations, and output of dimension k. We initialize the weights from N(0, 10) and use a standard ADAM optimizer with a decaying cyclic learning rate (Smith, 2017) bounded by [0.1, 1.0]. Decreasing and increasing the learning rate via a scheduler allows us to avoid falling into local optima. The batch size for the algorithm is set to n = 500, which is the same number of samples that make up the two measures.
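For concreteness, the rank-deficient Gaussian samples described above can be generated as follows (a sketch; the paper does not spell out its exact construction, so the factor-based recipe and seeds here are our assumptions).

```python
# Sample n points from N(0, Sigma) with Sigma = A A^T of rank r in R^d.
import torch

def rank_r_gaussian(n, d, r, seed):
    g = torch.Generator().manual_seed(seed)
    A = torch.randn(d, r, generator=g)   # Sigma = A A^T has rank <= r
    z = torch.randn(n, r, generator=g)   # standard normal factors
    return z @ A.T                       # samples supported on an r-dim subspace

mu_hat = rank_r_gaussian(n=500, d=10, r=5, seed=0)
nu_hat = rank_r_gaussian(n=500, d=10, r=5, seed=1)
```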
Besides the neural network variables, we set the regularization strength small enough, to ε = 0.001, and the ε-scaling parameter to 0.95, so that we can accurately estimate the true optimal transport distance without spending too much computational time in the Sinkhorn iterates." }, { "heading": "5.1 10-D Gaussian Data OT estimation using SD_{φ,k}", "text": "This leaves us with three variables of interest during the computation of SD_{φ,k}, namely k, d, λ (latent dimension, input dimension, and power method iterations). The number of power method iterations plays an important role during the projection step: for a small number of iterations, there is a chance of breaking the constraint, while running the method for too long is computationally expensive. In Figure 1 we used λ = 5 power iterations and show the values of SD^2_{k,φ} after running Algorithm 1 for 500 iterations. We compare them to the true OT distance for various levels of k and observe that even with a small number of power iterations, the estimate approaches the true value as k increases. Furthermore, we see that for k = 5 and k = 7 the algorithm converges after 200 steps.
Using 20 power iterations, we show how the approximation behaves in the presence of noise as a function of the latent dimension k. We add Gaussian noise of the form N(0, I) to µ̂, ν̂ and show in Figure 2 the comparison between the noisy and noiseless settings for both the SRW distances defined in Eq. (6) and GPW in Eq. (4). We observe that SD^2_{φ,k} behaves similarly to S^2_k in the presence of noise.
[Figure 2: Mean normalized distances with and without noise for SD^2_φ(µ, ν) and S^2_k(µ, ν) as a function of latent dimension k. The shaded area shows the standard deviation over 20 runs.]" }, { "heading": "5.2 Computation time", "text": "In Fig. 8 of Paty & Cuturi (2019), the authors note that their method, when using Sinkhorn iterates, is quadratic in the dimension because of the eigen-decomposition of the displacement matrix. Fundamentally differently, we always optimize in the embedded space, making the computation of the Sinkhorn iterates linear in the dimension. Note that there is extra computation involved in pushing the measures through the neural network and backpropagating, as well as in the projection step, which depends on the power iteration method. To run this experiment we set λ = 5 and generate µ̂, ν̂ by changing the dimension d while leaving the rank of Σ1, Σ2 equal to 5. The latent space is fixed to k = 5. In Figure 3 we plot the normalized distances of the two approaches as a function of dimension and see that the gap grows with increasing dimension, but remains stable. In Figure 4 we plot the log of the relative computation time, taking d = 10 as the benchmark in both cases. We see that the time to compute SD^2_φ is linear in the dimension and significantly lower than its counterpart S^2_k as the number of dimensions increases. This can be traced back to Algorithms 1 and 2 of Paty & Cuturi (2019), where at each iteration step the computation of OT distances in the data space is prohibitively expensive.
[Figure 4: Mean relative computation time (log scale) comparison between the two distances. The shaded area shows the standard deviation over 20 runs.]"
}, { "heading": "6 Conclusion", "text": "In this paper we presented a new framework for approximating optimal transport distances using a wide family of embedding functions that are 1-Lipschitz. We showed how linear projectors can be considered as a special case of such functions and proceeded to define neural networks as another class of embeddings. We showed how we can use existing tools to build an efficient algorithm that is robust and constant in the dimension of the data. Future work includes showing the approximation is valid for datasets where the support of distributions lies in a low-dimensional non-linear manifold, where we hypothesize that linear projects would fail. Other work includes experimenting with different operator norms such as L1 or Linf for the linear layers and the approximation of W1. An extension of the projection step in 1 to convolutional layers would allow us to experiment with real datasets such as CIFAR-10 and learn a discriminator in an adversarial way with SDk,φ as a loss function. This can be used to show that the data naturally clusters in the embedding space." } ]
2020
Efficient estimates of optimal transport via low-dimensional embeddings
SP:d06908461594cd2fb28e636fc85b53589a5e1207
[ "This work (InfoBERT) proposes additional objectives for transformer finetuning to obtain models more robust to adversarial inputs. The authors first propose a mutual information based information bottleneck objective, next the authors propose an adversarial loss inspired method for identifying robust features and a subsequent objective to emphasize the mutual information between global representations and these robust features. The experiments demonstrate that InfoBERT consistently outperforms other adversarial training approaches on a variety of adversarial evaluations." ]
Large-scale pre-trained language models such as BERT and RoBERTa have achieved state-of-the-art performance across a wide range of NLP tasks. Recent studies, however, show that such BERT-based models are vulnerable to textual adversarial attacks. We aim to address this problem from an information-theoretic perspective, and propose InfoBERT, a novel learning framework for robust fine-tuning of pre-trained language models. InfoBERT contains two mutual-information-based regularizers for model training: (i) an Information Bottleneck regularizer, which suppresses noisy mutual information between the input and the feature representation; and (ii) an Anchored Feature regularizer, which increases the mutual information between local stable features and global features. We provide a principled way to theoretically analyze and improve the robustness of language models in both standard and adversarial training. Extensive experiments demonstrate that InfoBERT achieves state-of-the-art robust accuracy over several adversarial datasets on Natural Language Inference (NLI) and Question Answering (QA) tasks. Our code is available at https://github.com/AI-secure/InfoBERT.
[ { "affiliations": [], "name": "∗Boxin Wang" }, { "affiliations": [], "name": "Shuohang Wang" }, { "affiliations": [], "name": "Yu Cheng" }, { "affiliations": [], "name": "Zhe Gan" }, { "affiliations": [], "name": "Ruoxi Jia" }, { "affiliations": [], "name": "Bo Li" }, { "affiliations": [], "name": "Jingjing Liu" } ]
[ { "authors": [ "Moustafa Alzantot", "Yash Sharma", "Ahmed Elgohary", "Bo-Jhang Ho", "Mani B. Srivastava", "Kai-Wei Chang" ], "title": "Generating natural language adversarial examples", "venue": "Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "David Barber", "Felix V. Agakov" ], "title": "The im algorithm: A variational approach to information maximization", "venue": "In NeurIPS,", "year": 2003 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeshwar", "Sherjil Ozair", "Yoshua Bengio", "Aaron Courville", "Devon Hjelm" ], "title": "Mutual information neural estimation", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Samuel R. Bowman", "Gabor Angeli", "Christopher Potts", "Christopher D. Manning" ], "title": "A large annotated corpus for learning natural language inference", "venue": "The Association for Computational Linguistics,", "year": 2015 }, { "authors": [ "Ziegler", "Jeffrey Wu", "Clemens Winter", "Christopher Hesse", "Mark Chen", "Eric Sigler", "Mateusz Litwin", "Scott Gray", "Benjamin Chess", "Jack Clark", "Christopher Berner", "Sam McCandlish", "Alec Radford", "Ilya Sutskever", "Dario Amodei" ], "title": "Language models are few-shot learners", "venue": null, "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey E. Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": null, "year": 2002 }, { "authors": [ "Pengyu Cheng", "Weituo Hao", "Shuyang Dai", "Jiachang Liu", "Zhe Gan", "L. Carin" ], "title": "Club: A contrastive log-ratio upper bound of mutual information", "venue": null, "year": 2006 }, { "authors": [ "Jeremy M. Cohen", "Elan Rosenfeld", "J. Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), ICML,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "venue": "Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Krishnamurthy Dvijotham", "Sven Gowal", "Robert Stanforth", "Relja Arandjelovic", "Brendan O’Donoghue", "Jonathan Uesato", "Pushmeet Kohli" ], "title": "Training verified learners with learned", "venue": "verifiers. CoRR,", "year": 2018 }, { "authors": [ "Javid Ebrahimi", "Anyi Rao", "Daniel Lowd", "Dejing Dou" ], "title": "Hotflip: White-box adversarial examples for text classification", "venue": "In Iryna Gurevych and Yusuke Miyao (eds.),", "year": 2018 }, { "authors": [ "Kevin Eykholt", "Ivan Evtimov", "Earlence Fernandes", "Bo Li", "Amir Rahmati", "Chaowei Xiao", "Atul Prakash", "Tadayoshi Kohno", "Dawn Xiaodong Song" ], "title": "Robust physical-world attacks on deep learning", "venue": null, "year": 2017 }, { "authors": [ "Zhe Gan", "Yen-Chun Chen", "Linjie Li", "Chen Zhu", "Yu Cheng", "Jingjing Liu" ], "title": "Large-scale adversarial training for vision-and-language representation learning", "venue": "arXiv preprint arXiv:2006.06195,", "year": 2020 }, { "authors": [ "Ian J. 
Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "CoRR, abs/1412.6572,", "year": 2015 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Po-Sen Huang", "Robert Stanforth", "Johannes Welbl", "Chris Dyer", "Dani Yogatama", "Sven Gowal", "Krishnamurthy Dvijotham", "Pushmeet Kohli" ], "title": "Achieving verified robustness to symbol substitutions via interval bound propagation", "venue": "Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Mohit Iyyer", "John Wieting", "Kevin Gimpel", "Luke Zettlemoyer" ], "title": "Adversarial example generation with syntactically controlled paraphrase networks", "venue": "Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Joern-Henrik Jacobsen", "Jens Behrmann", "Richard Zemel", "Matthias Bethge" ], "title": "Excessive invariance causes adversarial vulnerability", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Robin Jia", "Percy Liang" ], "title": "Adversarial examples for evaluating reading comprehension systems", "venue": "Association for Computational Linguistics,", "year": 2021 }, { "authors": [ "Robin Jia", "Aditi Raghunathan", "Kerem Göksel", "Percy Liang" ], "title": "Certified robustness to adversarial word substitutions", "venue": "Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Haoming Jiang", "Pengcheng He", "Weizhu Chen", "Xiaodong Liu", "Jianfeng Gao", "Tuo Zhao" ], "title": "SMART: robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization", "venue": "Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Di Jin", "Zhijing Jin", "Joey Tianyi Zhou", "Peter Szolovits" ], "title": "Is BERT really robust? A strong baseline for natural language attack on text classification and entailment", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Lingpeng Kong", "Cyprien de Masson d’Autume", "Lei Yu", "Wang Ling", "Zihang Dai", "Dani Yogatama" ], "title": "A mutual information maximization perspective of language representation learning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Jinfeng Li", "Shouling Ji", "Tianyu Du", "Bo Li", "Ting Wang" ], "title": "Textbugger: Generating adversarial text against real-world applications", "venue": "In NDSS. 
The Internet Society,", "year": 2019 }, { "authors": [ "Ralph Linsker" ], "title": "Self-organization in a perceptual network", "venue": null, "year": 1988 }, { "authors": [ "Xiaodong Liu", "Hao Cheng", "Pengcheng He", "Weizhu Chen", "Yu Wang", "Hoifung Poon", "Jianfeng Gao" ], "title": "Adversarial training for large neural language models", "venue": null, "year": 2004 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized BERT pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: A simple and accurate method to fool deep neural networks", "venue": null, "year": 2016 }, { "authors": [ "Yixin Nie", "Adina Williams", "Emily Dinan", "Mohit Bansal", "Jason Weston", "Douwe Kiela" ], "title": "Adversarial NLI: A new benchmark for natural language understanding", "venue": "Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Nicolas Papernot", "Patrick D. McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100, 000+ questions for machine comprehension of text", "venue": null, "year": 2016 }, { "authors": [ "Shuhuai Ren", "Yihe Deng", "Kun He", "Wanxiang Che" ], "title": "Generating natural language adversarial examples through probability weighted word saliency", "venue": "Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Nikunj Saunshi", "Orestis Plevrakis", "Sanjeev Arora", "Mikhail Khodak", "Hrishikesh Khandeparkar" ], "title": "A theoretical analysis of contrastive unsupervised representation learning", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), ICML,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Chen Sun", "Ben Poole", "Dilip Krishnan", "Cordelia Schmid", "Phillip Isola" ], "title": "What makes for good views for contrastive learning", "venue": null, "year": 2005 }, { "authors": [ "N. Tishby", "N. Zaslavsky" ], "title": "Deep learning and the information bottleneck principle", "venue": "In 2015 IEEE Information Theory Workshop (ITW),", "year": 2015 }, { "authors": [ "Aäron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": null, "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": null, "year": 2017 }, { "authors": [ "Boxin Wang", "Hengzhi Pei", "Boyuan Pan", "Qian Chen", "Shuohang Wang", "Bo Li" ], "title": "T3: Treeautoencoder constrained adversarial text generation for targeted attack", "venue": "In EMNLP,", "year": 2020 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel R. 
Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Mao Ye", "Chengyue Gong", "Qiang Liu" ], "title": "SAFER: A structure-free approach for certified robustness to adversarial word substitutions", "venue": "Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Yue Yu", "Simiao Zuo", "Haoming Jiang", "Wendi Ren", "Tuo Zhao", "Chao Zhang" ], "title": "Fine-tuning pretrained language model with weak supervision: A contrastive-regularized self-training approach", "venue": null, "year": 2010 }, { "authors": [ "Yuan Zang", "Fanchao Qi", "Chenghao Yang", "Zhiyuan Liu", "Meng Zhang", "Qun Liu", "Maosong Sun" ], "title": "Word-level textual adversarial attacking as combinatorial optimization", "venue": "Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Yuan Zhang", "Jason Baldridge", "Luheng He" ], "title": "PAWS: paraphrase adversaries from word scrambling", "venue": "Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Chen Zhu", "Yu Cheng", "Zhe Gan", "Siqi Sun", "Tom Goldstein", "Jingjing Liu" ], "title": "Freelb: Enhanced adversarial training for natural language understanding", "venue": "In ICLR. OpenReview.net,", "year": 2020 }, { "authors": [ "Sicheng Zhu", "Xiao Zhang", "David Evans" ], "title": "Learning adversarially robust representations via worst-case mutual information maximization", "venue": "CoRR, abs/2002.11798,", "year": 2020 }, { "authors": [ "Jin" ], "title": "2020), we observe that some adversarial examples look invalid to human For example, in the last example of Table 9, TextFooler replaces “stand” with “position”, losing the critical information that “girls are standing instead of kneeling” and fooling both human and NLP models", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Self-supervised representation learning pre-trains good feature extractors from massive unlabeled data, which show promising transferability to various downstream tasks. Recent success includes large-scale pre-trained language models (e.g., BERT, RoBERTa, and GPT-3 (Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020)), which have advanced state of the art over a wide range of NLP tasks such as NLI and QA, even surpassing human performance. Specifically, in the computer vision domain, many studies have shown that self-supervised representation learning is essentially solving the problem of maximizing the mutual information (MI) I(X;T ) between the input X and the representation T (van den Oord et al., 2018; Belghazi et al., 2018; Hjelm et al., 2019; Chen et al., 2020). Since MI is computationally intractable in high-dimensional feature space, many MI estimators (Belghazi et al., 2018) have been proposed to serve as lower bounds (Barber & Agakov, 2003; van den Oord et al., 2018) or upper bounds (Cheng et al., 2020) of MI. Recently, Kong et al. point out that the MI maximization principle of representation learning can be applied to not only computer vision but also NLP domain, and propose a unified view that recent pre-trained language models are maximizing a lower bound of MI among different segments of a word sequence.\nOn the other hand, deep neural networks are known to be prone to adversarial examples (Goodfellow et al., 2015; Papernot et al., 2016; Eykholt et al., 2017; Moosavi-Dezfooli et al., 2016), i.e., the outputs of neural networks can be arbitrarily wrong when human-imperceptible adversarial perturbations are added to the inputs. Textual adversarial attacks typically perform word-level substitution (Ebrahimi et al., 2018; Alzantot et al., 2018; Ren et al., 2019) or sentence-level paraphrasing (Iyyer et al., 2018; Zhang et al., 2019) to achieve semantic/utility preservation that seems innocuous to human, while fools NLP models. Recent studies (Jin et al., 2020; Zang et al., 2020; Nie et al., 2020; Wang et al., 2020) further show that even large-scale pre-trained language models (LM) such as\n∗Work was done during Boxin Wang’s Summer internship in Microsoft Dynamics 365 AI Research.\nBERT are vulnerable to adversarial attacks, which raises the challenge of building robust real-world LM applications against unknown adversarial attacks.\nWe investigate the robustness of language models from an information theoretic perspective, and propose a novel learning framework InfoBERT, which focuses on improving the robustness of language representations by fine-tuning both local features (word-level representation) and global features (sentence-level representation) for robustness purpose. InfoBERT considers two MI-based regularizers: (i) the Information Bottleneck regularizer manages to extract approximate minimal sufficient statistics for downstream tasks, while removing excessive and noisy information that may incur adversarial attacks; (ii) the Anchored Feature regularizer carefully selects useful local stable features that are invulnerable to adversarial attacks, and maximizes the mutual information between local stable features and global features to improve the robustness of the global representation. 
In this paper, we provide a detailed theoretical analysis to explicate the effect of InfoBERT for robustness improvement, along with extensive empirical adversarial evaluation to validate the theory.\nOur contributions are summarized as follows. (i) We propose a novel learning framework InfoBERT from the information theory perspective, aiming to effectively improve the robustness of language models. (ii) We provide a principled theoretical analysis on model robustness, and propose two MIbased regularizers to refine the local and global features, which can be applied to both standard and adversarial training for different NLP tasks. (iii) Comprehensive experimental results demonstrate that InfoBERT can substantially improve robust accuracy by a large margin without sacrificing the benign accuracy, yielding the state-of-the-art performance across multiple adversarial datasets on NLI and QA tasks." }, { "heading": "2 RELATED WORK", "text": "Textual Adversarial Attacks/Defenses Most existing textual adversarial attacks focus on wordlevel adversarial manipulation. Ebrahimi et al. (2018) is the first to propose a whitebox gradientbased attack to search for adversarial word/character substitution. Following work (Alzantot et al., 2018; Ren et al., 2019; Zang et al., 2020; Jin et al., 2020) further constrains the perturbation search space and adopts Part-of-Speech checking to make NLP adversarial examples look natural to human.\nTo defend against textual adversarial attacks, existing work can be classified into three categories: (i) Adversarial Training is a practical method to defend against adversarial examples. Existing work either uses PGD-based attacks to generate adversarial examples in the embedding space of NLP as data augmentation (Zhu et al., 2020a), or regularizes the standard objective using virtual adversarial training (Jiang et al., 2020; Liu et al., 2020; Gan et al., 2020). However, one drawback is that the threat model is often unknown, which renders adversarial training less effective when facing unseen attacks. (ii) Interval Bound Propagation (IBP) (Dvijotham et al., 2018) is proposed as a new technique to consider the worst-case perturbation theoretically. Recent work (Huang et al., 2019; Jia et al., 2019) has applied IBP in the NLP domain to certify the robustness of models. However, IBPbased methods rely on strong assumptions of model architecture and are difficult to adapt to recent transformer-based language models. (iii) Randomized Smoothing (Cohen et al., 2019) provides a tight robustness guarantee in `2 norm by smoothing the classifier with Gaussian noise. Ye et al. (2020) adapts the idea to the NLP domain, and replace the Gaussian noise with synonym words to certify the robustness as long as adversarial word substitution falls into predefined synonym sets. However, to guarantee the completeness of the synonym set is challenging.\nRepresentation Learning MI maximization principle has been adopted by many studies on selfsupervised representation learning (van den Oord et al., 2018; Belghazi et al., 2018; Hjelm et al., 2019; Chen et al., 2020). Specifically, InfoNCE (van den Oord et al., 2018) is used as the lower bound of MI, forming the problem as contrastive learning (Saunshi et al., 2019; Yu et al., 2020). However, Tian et al. (2020) suggests that the InfoMax (Linsker, 1988) principle may introduce excessive and noisy information, which could be adversarial. To generate robust representation, Zhu et al. 
(2020b) formalizes the problem from a mutual-information perspective, which essentially performs adversarial training for worst-case perturbation, while mainly considers the continuous space in computer vision. In contrast, InfoBERT originates from an information-theoretic perspective and is compatible with both standard and adversarial training for discrete input space of language models." }, { "heading": "3 INFOBERT", "text": "Before diving into details, we first discuss the textual adversarial examples we consider in this paper. We mainly focus on the dominant word-level attack as the main threat model, since it achieves higher attack success and is less noticeable to human readers than other attacks. Due to the discrete nature of text input space, it is difficult to measure adversarial distortion on token level. Instead, because most word-level adversarial attacks (Li et al., 2019; Jin et al., 2020) constrain word perturbations via the bounded magnitude in the semantic embedding space, by adapting from Jacobsen et al. (2019), we define the adversarial text examples with distortions constrained in the embedding space. Definition 3.1. ( -bounded Textual Adversarial Examples). Given a sentence x = [x1;x2; ...;xn], where xi is the word at the i-th position, the -bounded adversarial sentence x′ = [x′1;x ′ 2; ...;x ′ n] for a classifier F satisfies: (1) F(x) = o(x) = o(x′) but F(x′) 6= o(x′), where o(·) is the oracle (e.g., human decision-maker); (2) ||ti − t′i||2 ≤ for i = 1, 2, ..., n, where ≥ 0 and ti is the word embedding of xi." }, { "heading": "3.1 INFORMATION BOTTLENECK AS A REGULARIZER", "text": "In this section, we first discuss the general IB implementation, and then explain how IB formulation is adapted to InfoBERT as a regularizer along with theoretical analysis to support why IB regularizer can help improve the robustness of language models. The IB principle formulates the goal of deep learning as an information-theoretic trade-off between representation compression and predictive power (Tishby & Zaslavsky, 2015). Given the input source X , a deep neural net learns the internal representation T of some intermediate layer and maximizes the MI between T and label Y , so that T subject to a constraint on its complexity contains sufficient information to infer the target label Y . Finding an optimal representation T can be formulated as the maximization of the Lagrangian\nLIB = I(Y ;T )− βI(X;T ), (1)\nwhere β > 0 is a hyper-parameter to control the tradeoff, and I(Y ;T ) is defined as:\nI(Y ;T ) = ∫ p(y, t) log p(y, t)\np(y)p(t) dy dt . (2)\nSince Eq. (2) is intractable, we instead use the lower bound from Barber & Agakov (2003): I(Y ;T ) ≥ ∫ p(y, t) log qψ(y | t) dy dt , (3)\nwhere qψ(y|t) is the variational approximation learned by a neural network parameterized by ψ for the true distribution p(y|t). This indicates that maximizing the lower bound of the first term of IB I(Y ;T ) is equivalent to minimizing the task cross-entropy loss `task = H(Y | T ). To derive a tractable lower bound of IB, we here use an upper bound (Cheng et al., 2020) of I(X;T )\nI(X;T ) ≤ ∫ p(x, t) log(p(t | x)) dx dt − ∫ p(x)p(t) log(p(t | x)) dx dt . (4)\nBy combining Eq. 
(3) and (4), we can maximize a tractable lower bound L̂_IB of IB in practice:
L̂_IB = (1/N) Σ_{i=1}^{N} [ log q_ψ(y^{(i)} | t^{(i)}) ] − (β/N) Σ_{i=1}^{N} [ log p(t^{(i)} | x^{(i)}) − (1/N) Σ_{j=1}^{N} log p(t^{(j)} | x^{(i)}) ] (5)
with data samples {x^{(i)}, y^{(i)}}_{i=1}^{N}, where q_ψ can represent any classification model (e.g., BERT), and p(t | x) can be viewed as the feature extractor f_θ : X → T , where X and T are the support of the input source X and of the extracted feature T , respectively.
The above is a general implementation of the IB objective. In InfoBERT, we take T to be the local word-level features after the BERT embedding layer f_θ. The subsequent BERT self-attentive layers, together with the linear classification head, serve as q_ψ(y | t), which predicts the target Y given the representation T .
Formally, given random variables X = [X_1; X_2; ...; X_n] representing input sentences, with X_i the word token at the i-th position, let T = [T_1; ...; T_n] = f_θ([X_1; X_2; ...; X_n]) = [f_θ(X_1); f_θ(X_2); ...; f_θ(X_n)] denote the random variables representing the features generated from input X via the BERT embedding layer f_θ, where T_i ∈ R^d is the high-dimensional word-level local feature for word X_i. Due to the high dimensionality d of each word feature (e.g., 1024 for BERT-large), when the sentence length n increases, the dimensionality of T becomes too large to compute I(X;T) in practice. Thus, we propose to maximize a localized formulation of IB, L_LIB, defined as:
L_LIB := I(Y;T) − nβ Σ_{i=1}^{n} I(X_i; T_i). (6)
Theorem 3.1. (Lower Bound of L_IB) Given a sequence of random variables X = [X_1; X_2; ...; X_n] and a deterministic feature extractor f_θ, let T = [T_1; ...; T_n] = [f_θ(X_1); f_θ(X_2); ...; f_θ(X_n)]. Then the localized formulation of IB, L_LIB, is a lower bound of L_IB (Eq. (1)), i.e.,
I(Y;T) − βI(X;T) ≥ I(Y;T) − nβ Σ_{i=1}^{n} I(X_i; T_i). (7)
Theorem 3.1 indicates that we can maximize the localized formulation L_LIB as a lower bound of IB L_IB when I(X;T) is difficult to compute. In Eq. (6), if we regard the first term (I(Y;T)) as a task-related objective, the second term (−nβ Σ_{i=1}^{n} I(X_i;T_i)) can be considered a regularization term constraining the complexity of the representation T , hence named the Information Bottleneck regularizer. Next, we give a theoretical analysis of the adversarial robustness of IB and demonstrate why the localized IB objective can help improve robustness to adversarial attacks.
Following Definition 3.1, let T = [T_1; T_2; ...; T_n] and T′ = [T′_1; T′_2; ...; T′_n] denote the features for the benign sentence X and the adversarial sentence X′. The distributions of X and X′ are denoted by probabilities p(x) and q(x) with supports X and X ′, respectively. We assume that the feature representation T has finite support, denoted by T , considering the finite vocabulary size in NLP.
Theorem 3.2. (Adversarial Robustness Bound) For random variables X = [X_1; X_2; ...; X_n] and X′ = [X′_1; X′_2; ...; X′_n], let T = [T_1; T_2; ...; T_n] = [f_θ(X_1); f_θ(X_2); ...; f_θ(X_n)] and T′ = [T′_1; T′_2; ...; T′_n] = [f_θ(X′_1); f_θ(X′_2); ...; f_θ(X′_n)] with finite support T , where f_θ is a deterministic feature extractor.
The performance gap between benign and adversarial data, |I(Y;T) − I(Y;T′)|, is bounded above by
|I(Y;T) − I(Y;T′)| ≤ B_0 + B_1 Σ_{i=1}^{n} √|T| (I(X_i;T_i))^{1/2} + B_2 Σ_{i=1}^{n} |T|^{3/4} (I(X_i;T_i))^{1/4} + B_3 Σ_{i=1}^{n} √|T| (I(X′_i;T′_i))^{1/2} + B_4 Σ_{i=1}^{n} |T|^{3/4} (I(X′_i;T′_i))^{1/4}, (8)
where B_0, B_1, B_2, B_3 and B_4 are constants depending on the sequence length n, ε, and p(x).
The sketch of the proof is to express the difference |I(Y;T) − I(Y;T′)| in terms of I(X_i;T_i). Specifically, Eq. (25) factorizes the difference into two summands. The first summand, the conditional entropy difference |H(T | Y) − H(T′ | Y)|, can be bounded by Eq. (42) in terms of the MI between benign/adversarial inputs and representations, I(X_i;T_i) and I(X′_i;T′_i). The second summand, |H(T) − H(T′)|, has a constant upper bound (Eq. (85)), since language models have bounded vocabulary size and embedding space, and thus bounded entropy.
The intuition of Theorem 3.2 is to bound the adversarial performance drop |I(Y;T) − I(Y;T′)| by the I(X_i;T_i). As explained around Eq. (3), I(Y;T) and I(Y;T′) can be regarded as the model performance on benign and adversarial data, so the LHS of the bound represents this performance gap. The adversarial robustness bound of Theorem 3.2 indicates that the gap shrinks as I(X_i;T_i) and I(X′_i;T′_i) decrease. Note that our IB regularizer in the objective function Eq. (6) achieves exactly the goal of minimizing I(X_i;T_i) while learning the most efficient features, or approximate minimal sufficient statistics, for downstream tasks. Theorem 3.2 also suggests that combining adversarial training with our IB regularizer can further minimize I(X′_i;T′_i), leading to better robustness, which is verified in §4." }, { "heading": "3.2 ANCHORED FEATURE REGULARIZER", "text": "In addition to the IB regularizer, which suppresses noisy information that may incur adversarial attacks, we propose a novel regularizer termed the "Anchored Feature Regularizer", which extracts local stable features and aligns them with sentence global representations, thus improving the stability and robustness of language representations.
Algorithm 1 - Local Anchored Feature Extraction. This algorithm takes in the word local features and returns the indices of the local anchored features.
1: Input: Word local features t, upper and lower thresholds c_h and c_l
2: δ ← 0 // Initialize the perturbation vector δ
3: g(δ) = ∇_δ ℓ_task(q_ψ(t + δ), y) // Perform an adversarial attack in the embedding space
4: Sort the magnitudes of the gradients of the perturbation vector from ||g(δ)_1||_2, ||g(δ)_2||_2, ..., ||g(δ)_n||_2 into ||g(δ)_{k_1}||_2, ||g(δ)_{k_2}||_2, ..., ||g(δ)_{k_n}||_2 in ascending order, where k_i corresponds to the original index.
5: Return: k_i, k_{i+1}, ..., k_j, where c_l ≤ i/n ≤ j/n ≤ c_h.
The goal of the local anchored feature extraction is to find features that carry useful and stable information for downstream tasks. Instead of directly searching for local anchored features, we start by searching for nonrobust and unuseful features. To identify local nonrobust features, we perform adversarial attacks to detect which words are prone to changes under adversarial word substitution. We consider these vulnerable words as features nonrobust to adversarial threats; therefore, robust global sentence representations should rely less on these vulnerable statistical clues. On the other hand, by examining the adversarial perturbation on each local word feature, we can also identify words that are less useful for downstream tasks. For example, stopwords and punctuation usually carry limited information and tend to have smaller adversarial perturbations than words containing more effective information. Although these unuseful features are barely changed under adversarial attacks, they contain insufficient information and should be discarded. After identifying the nonrobust and unuseful features, we treat the remaining local features in the sentence as useful stable features and align the global feature representation with them.
During the local anchored feature extraction, we perform "virtual" adversarial attacks that generate adversarial perturbations in the embedding space, as this abstracts the general idea of existing word-level adversarial attacks. Formally, given an input sentence x = [x_1; x_2; ...; x_n] with its corresponding local embedding representation t = [t_1; ...; t_n], where x and t are realizations of the random variables X and T , we generate an adversarial perturbation δ in the embedding space so that the task loss ℓ_task increases. The adversarial perturbation δ is initialized to zero, and the gradient of the loss with respect to δ is calculated by g(δ) = ∇_δ ℓ_task(q_ψ(t + δ), y) to update δ ← Π_{||δ||_F ≤ ε}(η g(δ)/||g(δ)||_F). The above process is similar to one-step PGD with a zero-initialized perturbation δ. Since we only care about the ranking of the perturbations to decide on robust features, in practice we skip the update of δ to save computational cost, and simply examine the ℓ_2 norm of the gradient g(δ)_i of the perturbation on each word feature t_i. A feasible plan is to choose the words whose perturbation is neither too large (nonrobust features) nor too small (unuseful features), e.g., the words whose perturbation rankings fall between the 50% and 80% quantiles of all words. The detailed procedure is provided in Algorithm 1, and a code sketch follows below.
After the local anchored features are extracted, we propose to align the sentence global representation Z with our local anchored features T_i. In practice, we can use the final-layer [CLS] embedding to represent the global sentence-level feature Z. Specifically, we use information-theoretic tools to increase the mutual information I(T_i;Z) between the local anchored features T_i and the sentence global representation Z, so that the global representation shares more robust and useful information with the local anchored features and focuses less on the nonrobust and unuseful ones. By incorporating the term I(T_i;Z) into the previous objective function Eq. (6), our final objective function becomes:
max I(Y;T) − nβ Σ_{i=1}^{n} I(X_i;T_i) + α Σ_{j=1}^{M} I(T_{k_j};Z), (9)
where the T_{k_j} are the local anchored features selected by Algorithm 1 and M is the number of local anchored features. An illustrative figure can be found in Appendix Figure 2.
We evaluate InfoBERT against both strong adversarial datasets and state-of-the-art adversarial attacks." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Adversarial Datasets The following adversarial datasets and adversarial attacks are used to evaluate the robustness of InfoBERT and baselines. (I) Adversarial NLI (ANLI) (Nie et al., 2020) is a large-scale NLI benchmark, collected via an iterative, adversarial, human-and-model-in-theloop procedure to attack BERT and RoBERTa. ANLI dataset is a strong adversarial dataset which can easily reduce the accuracy of BERTLarge to 0%. (II) Adversarial SQuAD (Jia & Liang, 2017) dataset is an adversarial QA benchmark dataset generated by a set of handcrafted rules and refined by crowdsourcing. Since adversarial training data is not provided, we fine-tune RoBERTaLarge on benign SQuAD training data (Rajpurkar et al., 2016) only, and test the models on both benign and adversarial test sets. (III) TextFooler (Jin et al., 2020) is the state-of-the-art word-level adversarial attack method to generate adversarial examples. To create an adversarial evaluation dataset, we sampled 1, 000 examples from the test sets of SNLI and MNLI respectively, and run TextFooler against BERTLarge and RoBERTaLarge to obtain the adversarial text examples.\nBaselines Since IBP-based methods (Huang et al., 2019; Jia et al., 2019) cannot be applied to largescale language models yet, and the randomized-smoothing-based method (Ye et al., 2020) achieves limited certified robustness, we compare InfoBERT against three competitive baselines based on adversarial training: (I) FreeLB (Zhu et al., 2020a) applies adversarial training to language models during fine-tuning stage to improve generalization. In §4.2, we observe that FreeLB can boost the robustness of language models by a large margin. (II) SMART (Jiang et al., 2020) uses adversarial training as smoothness-inducing regularization and Bregman proximal point optimization during fine-tuning, to improve the generalization and robustness of language models. (III) ALUM (Liu et al., 2020) performs adversarial training in both pre-training and fine-tuning stages, which achieves substantial performance gain on a wide range of NLP tasks. Due to the high computational cost of adversarial training, we compare InfoBERT to ALUM and SMART with the best results reported in the original papers.\nEvaluation Metrics We use robust accuracy or robust F1 score to measure how robust the baseline models and InfoBERT are when facing adversarial data. Specifically, robust accuracy is calculated by: Acc = 1|Dadv| ∑ x′∈Dadv 1[arg max qψ(fθ(x\n′)) ≡ y], where Dadv is the adversarial dataset, y is the ground-truth label, arg max selects the class with the highest logits and 1(·) is the indicator function. Similarly, robust F1 score is calculated by: F1 = 1|Dadv| ∑ x′∈Dadv v(arg max qψ(fθ(x\n′)), a), where v(·, ·) is the F1 score between the true answer a and the predicted answer arg max qψ(fθ(x′)), and arg max selects the answer with the highest probability (see Rajpurkar et al. (2016) for details).\nImplementation Details To demonstrate InfoBERT is effective for different language models, we apply InfoBERT to both pretrained RoBERTaLarge and BERTLarge. Since InfoBERT can be applied to both standard training and adversarial training, we here use FreeLB as the adversarial training implementation. InfoBERT is fine-tuned for 2 epochs for the QA task, and 3 epochs for the NLI task. 
More implementation details such as α, β, ch, cl selection can be found in Appendix A.1." }, { "heading": "4.2 EXPERIMENTAL RESULTS", "text": "Evaluation on ANLI As ANLI provides an adversarial training dataset, we evaluate models in two settings: 1) training models on benign data (MNLI (Williams et al., 2018) + SNLI (Bowman et al., 2015)) only, which is the case when the adversarial threat model is unknown; 2) training models on both benign and adversarial training data (SNLI+MNLI+ANLI+FeverNLI), which assumes the threat model is known in advance.\nResults of the first setting are summarized in Table 1. The vanilla RoBERTa and BERT models perform poorly on the adversarial dataset. In particular, vanilla BERTLarge with standard training achieves the lowest robust accuracy of 26.5% among all the models. We also evaluate the robustness improvement by performing adversarial training during fine-tuning, and observe that adversarial training for language models can improve not only generalization but also robustness. In contrast, InfoBERT substantially improves robust accuracy in both standard and adversarial training. The robust accuracy of InfoBERT through standard training is even higher than the adversarial training baseline FreeLB for both RoBERTa and BERT, while the training time of InfoBERT is 1/3 ∼ 1/2 less than FreeLB. This is mainly because FreeLB requires multiple steps of PGD attacks to generate adversarial examples, while InfoBERT essentially needs only 1-step PGD attack for anchored feature selection.\nResults of the second setting are provided in Table 2, which shows InfoBERT can further improve robust accuracy for both standard and adversarial training. Specifically, when combined with adversarial training, InfoBERT achieves the state-of-the-art robust accuracy of 58.3%, outperforming all existing baselines. Note that although ALUM achieves higher accuracy for BERT on the dev set, it tends to overfit on the dev set, therefore performing worse than InfoBERT on the test set.\nEvaluation against TextFooler InfoBERT can defend against not only human-crafted adversarial examples (e.g., ANLI) but also those generated by adversarial attacks (e.g., TextFooler). Results are summarized in Table 3. We can see that InfoBERT barely affects model performance on the benign test data, and in the case of adversarial training, InfoBERT even boosts the benign test accuracy. Under the TextFooler attack, the robust accuracy of the vanilla BERT drops to 0.0% on both MNLI and SNLI datasets, while RoBERTa drops from 90% to around 20%. We observe that both adversarial training and InfoBERT with standard training can improve robust accuracy by a comparable large margin, while InfoBERT with adversarial training achieves the best performance among all models, confirming the hypothesis in Theorem 3.2 that combining adversarial training with IB regularizer can further minimize I(X ′i;T ′ i ), leading to better robustness than the vanilla one.\nEvaluation on Adversarial SQuAD Previous experiments show that InfoBERT can improve model robustness for NLI tasks. Now we demonstrate that InfoBERT can also be adapted to other NLP tasks such as QA in Table 4. Similar to our observation on NLI dataset, we find that InfoBERT barely hurts the performance on the benign test data, and even improves it in some cases. Moreover, InfoBERT substantially improves model robustness when presented with adversarial QA test sets (AddSent and AddOneSent). 
While adversarial training does help improve robustness, InfoBERT can further boost the robust performance by a larger margin. In particular, InfoBERT through standard training achieves the state-of-the-art robust F1/EM score as 78.5/72.9 compared to existing adversarial training baselines, and in the meantime requires only half the training time of adversarialtraining-based methods." }, { "heading": "4.3 ANALYSIS OF LOCAL ANCHORED FEATURES", "text": "We conduct an ablation study to further validate that our anchored feature regularizer indeed filters out nonrobust/unuseful information. As shown in Table 1 and 2, adding adversarial data in the training set can significantly improve model robustness. To find out what helps improve the robustness from the MI perspective, we first calculate the MI between anchored features and global features 1M ∑M j=1 I(Tkj ;Z) on the adversarial test data and benign test data, based on the model trained without adversarial training data (denoted by I ′R and IR). We then calculate the MI between nonrobust/unuseful features and global features 1M ′ ∑M ′ i=1 I(Tki ;Z) on the adversarial test data and\nbenign data as well (denoted by I ′N and IN). After adding adversarial examples into the training set and re-training the model, we find that the MI between the local features and the global features substantially increases on the adversarial test data, which accounts for the robustness improvement. We also observe that those local anchored features extracted by our anchored feature regularizer, as expected, contribute more to the MI improvement. As shown in Figure 1, the MI improvement of anchored features on adversarial test data ∆I ′R (red bar on the left) is higher than that of nonrobust/unuseful ∆I ′N (red bar on the right), thus confirming that local anchored features discovered by our anchored feature regularizer have a stronger impact on robustness than nonrobust/unuseful ones.\nWe conduct more ablation studies in Appendix §A.2, including analyzing the individual impact of two regularizers, the difference between global and local features for IB regularizer, hyper-parameter selection strategy and so on." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose a novel learning framework InfoBERT from an information theoretic perspective to perform robust fine-tuning over pre-trained language models. Specifically, InfoBERT consists of two novel regularizers to improve the robustness of the learned representations: (a) Information Bottleneck Regularizer, learning to extract the approximated minimal sufficient statistics and denoise the excessive spurious features, and (b) Local Anchored Feature Regularizer, which improves the robustness of global features by aligning them with local anchored features. Supported by our theoretical analysis, InfoBERT provides a principled way to improve the robustness of BERT and RoBERTa against strong adversarial attacks over a variety of NLP tasks, including NLI and QA tasks. Comprehensive experiments demonstrate that InfoBERT outperforms existing baseline methods and achieves new state of the art on different adversarial datasets. We believe this work will shed light on future research directions towards improving the robustness of representation learning for language models." }, { "heading": "6 ACKNOWLEDGEMENT", "text": "We gratefully thank the anonymous reviewers and meta-reviewers for their constructive feedback. 
We also thank Julia Hockenmaier, Alexander Schwing, Sanmi Koyejo, Fan Wu, Wei Wang, Pengyu Cheng, and many others for the helpful discussion. This work is partially supported by NSF grant No.1910100, DARPA QED-RML-FP-003, and the Intel RSA 2020." }, { "heading": "A APPENDIX", "text": "A.1 IMPLEMENTATION DETAILS
Model Details BERT is a transformer-based model (Vaswani et al., 2017) that is pretrained on large corpora without supervision. We use BERTLarge-uncased as the baseline model, which has 24 layers, 1024 hidden units, 16 self-attention heads, and 340M parameters. RoBERTaLarge shares the same architecture as BERT, but modifies key hyperparameters, removes the next-sentence pretraining objective and trains with much larger mini-batches and learning rates, which results in higher performance than the BERT model on GLUE, RACE and SQuAD. (We use the huggingface implementation https://github.com/huggingface/transformers for BERT and RoBERTa.)
Standard Training Details For both standard and adversarial training, we fine-tune InfoBERT for 2 epochs on the QA task, and for 3 epochs on the NLI task. The best model is selected based on performance on the development set. All fine-tuning experiments are run on Nvidia V100 GPUs. For the NLI task, we set the batch size to 256, the learning rate to 2 × 10−5, the max sequence length to 128 and the warm-up steps to 1000. For the QA task, we set the batch size to 32, the learning rate to 3 × 10−5 and the max sequence length to 384, without warm-up steps.
Adversarial Training Details Adversarial training introduces hyper-parameters including the adversarial learning rate, the number of PGD steps, and the adversarial norm. When combining adversarial training with InfoBERT, we use FreeLB as the adversarial training implementation (we follow the FreeLB implementation in https://github.com/zhuchen03/FreeLB), and set the adversarial learning rate to 10−1 or 4 × 10−2, the number of adversarial steps to 3, the maximal perturbation norm to 3 × 10−1 or 2 × 10−1 and the initial random perturbation norm to 10−1 or 0.
Information Bottleneck Regularizer Details For the information bottleneck, there are different ways to model p(t | x):
1. Assume that p(t | x) is unknown. We use a neural net parameterized by qθ(t | x) to learn the conditional distribution p(t | x). We assume the distribution is a Gaussian. The neural net qθ learns the mean and variance of the Gaussian given input x and representation t. With the reparameterization trick, the neural net can be backpropagated through to approximate the distribution from the training samples.
2. Assume that p(t | x) is known. Since t is the representation encoded by BERT, we actually already know the distribution p. We also denote it as qθ, where θ is the parameter of the BERT encoder fθ. If we assume the conditional distribution is a Gaussian N(t_i, σ) for input x_i, whose mean is the BERT representation t_i and whose variance is a fixed constant σ, then Eq. (6) becomes
L_LIB = (1/N) ∑_{i=1}^{N} ( [log qψ(y^(i) | t^(i))] − β ∑_{k=1}^{n} [ −c(σ) ||t′_k^(i) − t_k^(i)||_2^2 + (1/n) ∑_{j=1}^{n} c(σ) ||t_j^(i) − t_k^(i)||_2^2 ] ), (11)
where c(σ) is a positive constant related to σ. In practice, the sample t′_i from the conditional Gaussian N(t_i, σ) can be t_i with some Gaussian noise, an adversarial example of t_i, or t_i itself (assuming σ = 0).
We use the second way to model p(t | x) for InfoBERT in the end, as it gives a higher robustness improvement than the first way empirically (shown in the following §A.2). 
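For concreteness, the objective in Eq. (11) can be written compactly in PyTorch. The sketch below treats c(σ) as a constant scalar and takes t′ to be a perturbed copy of the benign local features t; the function and argument names are illustrative, not the released implementation.

```python
import torch
import torch.nn.functional as F

def local_ib_loss(logits, labels, t, t_prime, beta, c_sigma=1.0):
    # t, t_prime: (B, n, d) benign / perturbed local features; logits: (B, num_classes)
    ce = F.cross_entropy(logits, labels)                 # -E[log q_psi(y | t)]
    adv = -c_sigma * (t_prime - t).pow(2).sum(-1)        # -c(sigma) * ||t'_k - t_k||^2
    spread = c_sigma * torch.cdist(t, t).pow(2).mean(1)  # (1/n) sum_j c(sigma) * ||t_j - t_k||^2
    reg = (adv + spread).sum(dim=1).mean()               # sum over tokens k, average over batch
    return ce + beta * reg                               # minimizing this maximizes Eq. (11)
```
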
We suspect that the main reason is because the first way needs to approximate the distribution p(t | x) via another neural net which could present some difficulty in model training.\nInformation Bottleneck Regularizer also introduces another parameter β to tune the trad-off between representation compression I(Xi;Ti) and predictive power I(Y ;T ). We search for the optimal β via grid search, and set β = 5× 10−2 for RoBERTa, β = 10−3 for BERT on the NLI task. On the QA task, we set β = 5 × 10−5, which is substantially lower than β on NLI tasks, thus containing more word-level features. We think it is mainly because the QA task relies more on the word-level representation to predict the exact answer spans. More ablation results can be found in the following §A.2.\nAnchored Feature Regularizer Details Anchored Feature Regularizer uses α to weigh the balance between predictive power and importance of anchored feature. We set α = 5× 10−3 for both NLI and QA tasks. Anchored Feature Regularizer also introduces upper and lower threshold cl and ch for anchored feature extraction. We set ch = 0.9 and cl = 0.5 for the NLI task, and set ch = 0.95 and cl = 0.75 for the QA task. The neural MI estimator used by infoNCE uses two-layer fully connected layer to estimate the MI with the intermediate layer hidden size set to 300." }, { "heading": "A.2 ADDITIONAL EXPERIMENTAL RESULTS", "text": "" }, { "heading": "A.2.1 ABLATION STUDY ON INFORMATION BOTTLENECK REGULARIZER", "text": "Modeling p(t | x) As discussed in §A.1, we have two ways to model p(t | x): (i) using an auxiliary neural network to approximate the distribution; (ii) directly using the BERT encoder fθ to calculate the p(t | x). Thus we implemented these two methods and compare the robustness improvement in Table 5. To eliminate other factors such as Anchored Feature Regularizer and adversarial training, we set α = 0, β = 5 × 10−2 and conduct the following ablation experiments via standard training on standard datasets. We observe that although both modeling methods can improve the model robustness, modeling as BERT encoder gives a larger margin than the Auxiliary Net. Moreover, the second way barely sacrifices the performance on benign data, while the first way can hurt the benign accuracy a little bit. Therefore, we use the BERT Encoder fθ to model the p(t | x) in our main paper.\nLocal Features v.s. Global Features Information Bottlenck Regularizer improves model robustness by reducing I(X;T ). In the main paper, we use T as word-level local features. Here we consider T as sentence-level global features, and compare the robustness improvement with T as local features. To eliminate other factors such as Anchored Feature Regularizer and adversarial training, we set α = 0, β = 5× 10−2 and conduct the following ablation experiments via standard training.\nThe experimental results are summarized in Table 6. We can see that while both features can boost the model robustness, using local features yield higher robust accuracy improvement than global features, especially when adversarial training dataset is added.\nHyper-parameter Search We perform grid search to find out the optimal β so that the optimal trade-off between representation compression (“minimality”) and predictive power (“sufficiency”) is achieved. An example to search for the optimal β on QA dataset is shown in Fingure 3, which illustrates how β affects the F1 score on benign and adversarial datasets. 
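The search itself is straightforward; a minimal sketch follows, where train_and_eval is a hypothetical helper standing in for a full fine-tuning run, and the grid bounds and selection criterion are illustrative choices.

```python
import numpy as np

def train_and_eval(beta):
    """Hypothetical helper: fine-tune with this beta and return (benign_f1, robust_f1)."""
    raise NotImplementedError

betas = np.logspace(-7, -3, num=9)        # log-spaced grid, e.g., around 5e-5 for QA
results = {b: train_and_eval(b) for b in betas}
best_beta = max(results, key=lambda b: sum(results[b]))   # balance benign and robust F1
```
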
We can see that from a very small β, both the robust and benign F1 scores increase, demonstrating InfoBERT can improve both robustness and generalization to some extent. When we set β = 5 × 10−5 (log(β) = −9.9), InfoBERT achieves the best benign and adversarial accuracy. When we set a larger β to further minimize I(Xi;Ti), we observe that the benign F1 score starts to drop, indicating the increasingly compressed representation could start to hurt its predictive capability." }, { "heading": "A.2.2 ABLATION STUDY ON ANCHORED FEATURE REGULARIZER", "text": "Visualization of Anchored Words To explore which local anchored features are extracted, we conduct another ablation study to visualize the local anchored words. We follow the best hyperparameters of Anchored Feature Regularizer introduced in §A.1, use the best BERT model trained on benign datasets (MNLI + SNLI) only and test on the ANLI dev set. We visualize the local anchored words in Table 7 as follows. In the first example, we find that Anchored Features mainly focus on the important features such as quantity number “Two”, the verb “playing” and objects “card”/“poker” to make a robust prediction. In the second example, the matching robust features between hypothesis and premise, such as “people”, “roller” v.s. “park”, “flipped upside” v.s. “ride”, are aligned to infer the relationship of hypothesis and premise. These anchored feature examples confirm that Anchored Feature Regularizer is able to find out useful and stable features to improve the robustness of global representation." }, { "heading": "A.2.3 ABLATION STUDY ON DISENTANGLING TWO REGULARIZERS", "text": "To understand how two regularizers contribute to the improvement of robustness separetely, we apply two regularizers individually to both the standard training and adversarial training. We refer InfoBERT trained with IB regularizer only as “InfoBERT (IBR only)” and InfoBERT trained with Anchored Feature Regularizer only as “InfoBERT (AFR only)”. “InfoBERT (Both)” is the standard setting for InfoBERT, where we incorporate both regularizers during training. For “InfoBERT (IBR only)”, we set α = 0 and perform grid search to find the optimal β = 5 × 10−2. Similarly for “InfoBERT (AFR only)”, we set β = 0 and find the optimal parameters as α = 5× 10−3, ch = 0.9 and cl = 0.5.\nThe results are shown in Table 8. We can see that both regularizers improve the robust accuracy on top of vanilla and FreeLB to a similar margin. Applying one of the regularizer can achieve similar performance of FreeLB, but the training time of InfoBERT is only 1/31̃/2 less than FreeLB. Moreover, after combining both regularizers, we observe that InfoBERT achieves the best robust accuracy." }, { "heading": "A.2.4 EXAMPLES OF ADVERSARIAL DATASETS GENERATED BY TEXTFOOLER", "text": "We show some adversarial examples generated by TextFooler in Table 9. We can see most adversarial examples are of high quality and look valid to human while attacking the NLP models, thus confirming our adversarial dastasets created by TextFooler is a strong benchmark dataset to evaluate model robustness. However, as also noted in Jin et al. (2020), we observe that some adversarial examples look invalid to human For example, in the last example of Table 9, TextFooler replaces “stand” with “position”, losing the critical information that “girls are standing instead of kneeling” and fooling both human and NLP models. 
Therefore, we expect that InfoBERT should achieve better robustness when we eliminate such invalid adversarial examples during evaluation." }, { "heading": "A.3 PROOFS", "text": "" }, { "heading": "A.3.1 PROOF OF THEOREM 3.1", "text": "We first state two lemmas.\nLemma A.1. Given a sequence of random variables X1, X2, ..., Xn and a deterministic function f , then ∀ i, j = 1, 2, ..., n, we have\nI(Xi; f(Xi)) ≥ I(Xj ; f(Xi)) (12)\nProof. By the definition,\nI(Xi; f(Xi)) = H(f(Xi))−H(f(Xi) | Xi) (13) I(Xj ; f(Xi)) = H(f(Xi))−H(f(Xi) | Xj) (14)\nSince f is a deterministic function,\nH(f(Xi) | Xi) = 0 (15) H(f(Xi) | Xj) ≥ 0 (16)\nTherefore,\nI(Xi; f(Xi)) ≥ I(Xj ; f(Xi)) (17)\nLemma A.2. Let X = [X1;X2; ...;Xn] be a sequence of random variables, and T = [T1;T2; ...;Tn] = [f(X1); f(X2); ...; f(Xn)] be a sequence of random variables generated by a deterministic function f . Then we have\nI(X;T ) ≤ n n∑ i=1 I(Xi;Ti) (18)\nProof. Since X = [X1;X2; ...;Xn] and T = [T1;T2; ...;Tn] are language tokens with its corresponding local representations, we have\nI(X;T ) = I(X;T1, T2, ..., Tn) = n∑ i=1 [H(Ti | T1, T2, ..., Ti−1)−H(Ti | X,T1, T2, ..., Ti−1)]\n(19)\n≤ n∑ i=1 [H(Ti)−H(Ti | X)] = n∑ i=1 I(X;Ti) (20)\n≤ n∑ i=1 n∑ j=1 I(Xj ;Ti) ≤ n n∑ i=1 I(Xi;Ti), (21)\nwhere the first inequality follows because conditioning reduces entropy, and the last inequality is because I(Xi;Ti) ≥ I(Xj ;Ti) based on Lemma A.1.\nThen we directly plug Lemma A.2 into Theorem 3.1, we have the lower bound of LIB as\nI(Y ;T )− βI(X;T ) ≥ I(Y ;T )− nβ n∑ i=1 I(Xi;Ti). (22)" }, { "heading": "A.3.2 PROOF OF THEOREM 3.2", "text": "We first state an easily proven lemma,\nLemma A.3. For any a, b ∈ [0, 1],\n|a log(a)− b log(b)| ≤ φ(|a− b|), (23)\nwhere φ(·) : R+ → R+ is defined as\nφ(x) = 0 x = 0 x log( 1x ) 0 < x < 1 e\n1 e x > 1 e\n. (24)\nIt is easy to verify that φ(x) is a continuous, monotonically increasing, concave and subadditive function.\nNow, we can proceed with the proof of Theorem 3.2.\nProof. We use the fact that\n|I(Y ;T )− I(Y ;T ′)| ≤ |H(T | Y )−H(T ′ | Y )|+ |H(T )−H(T ′)| (25)\nand bound each of the summands on the right separately.\nWe can bound the first summand as follows: |H(T | Y )−H(T ′ | Y )| ≤ ∑ y p(y)|H(T | Y = y)−H(T ′ | Y = y)| (26)\n= ∑ y p(y)| ∑ t p(t | y) log(1/p(t | y))− ∑ t q(t | y) log(1/q(t | y))| (27)\n≤ ∑ y p(y) ∑ t |p(t | y) log p(t | y)− q(t | y) log q(t | y)| (28)\n≤ ∑ y p(y) ∑ t φ(|p(t | y)− q(t | y)|) (29)\n= ∑ y p(y) ∑ t φ(| ∑ x p(t | x)[p(x | y)− q(x | y)]|), (30)\nwhere\np(x | y) = p(y | x)p(x)∑ x p(y | x)p(x)\n(31)\nq(x | y) = p(y | x)q(x)∑ x p(y | x)q(x) . (32)\nSince ∑ x∈X∪X ′ p(x | y)− q(x | y) = 0 for any y ∈ Y , we have that for any scalar a,\n| ∑ x p(t | x)[p(x | y)− q(x | y)])| (33)\n= | ∑ x (p(t | x)− a)(p(x | y)− q(x | y))| (34)\n≤ √∑\nx\n(p(t | x)− a)2 √∑\nx\n(p(x | y)− q(x | y))2. (35)\nSetting a = 1|X−X ′| ∑ x∈X∪X ′ p(t | x) we get\n|H(T | Y )−H(T ′ | Y ) ≤ ∑ y p(y) ∑ t φ (√ V (p(t | x ∈ X ∪ X ′) · ||p(x | y)− q(x | y)||2 ) ,\n(36)\nwhere for any real-value vector a = (a1, ..., an), V (a) is defined to be proportional to the variance of elements of a:\nV (a) = n∑ i=1 (ai − 1 n n∑ j=1 aj) 2, (37)\np(t | x ∈ X ∪ X ′) stands for the vector in which entries are p(t | x) with different values of x ∈ X ∪X ′ for a fixed t, and p(x | y) and q(x | y) are the vectors in which entries are p(x | y) and q(x | y), respectively, with different values of x ∈ X ∪ X ′ for a fixed y. 
Since\n||p(x | y)− q(x | y)||2 ≤ ||p(x | y)− q(x | y)||1 ≤ 2, (38)\nit follows that\n|H(T | Y )−H(T ′ | Y )| ≤ ∑ y p(y) ∑ t φ ( 2 √ V (p(t | x ∈ X ∪ X ′)) ) (39)\nMoreover, we have\n√ V (p(t | x ∈ X ∪ X ′) ≤ √ V (p(t | x ∈ X )) + V (p(t | x ∈ X ′)) (40)\n≤ √ V (p(t | x ∈ X )) + √ V (p(t | x ∈ X ′)), (41)\nwhere the first inequality is because sample mean is the minimizer of the sum of the squared distances to each sample and the second inequality is due to the subadditivity of the square root function. Using the fact that φ(·) is monotonically increasing and subadditive, we get\n|H(T | Y )−H(T ′ | Y )| ≤ ∑ y p(y) ∑ t φ ( 2 √ V (p(t | x ∈ X )) ) + ∑ y p(y) ∑ t φ ( 2 √ V (p(t | x ∈ X ′)) ) (42)\nNow we explicate the process for establishing the bound for ∑ y p(y) ∑ t φ ( 2 √ V (p(t | x ∈ X )) ) and the one for ∑ y p(y) ∑ t φ ( 2 √ V (p(t | x ∈ X ′)) ) can be similarly derived.\nBy definition of V (·) and using Bayes’ theorem p(t | x) = p(t)p(x|t)p(x) for x ∈ X , we have that\n√ V (p(t | x ∈ X )) = p(t) √∑ x∈X (p(x | t) p(x) − 1 |X | ∑ x′∈X p(x′ | t) p(x′) ) 2 (43)\nDenoting 1 = (1, ..., 1), we have by the triangle inequality that√∑ x∈X (p(x | t) p(x) − 1 |X | ∑ x′∈X p(x′ | t) p(x′) ) 2 (44)\n≤ ||p(x | t) p(x) − 1||2 + √∑ x∈X ( 1− 1 |X | ∑ x′∈X p(x′ | t) p(x′) ) 2 (45)\n= ||p(x | t) p(x) − 1||2 + √ |X | ( 1− 1 |X | ∑ x′∈X p(x′ | t) p(x′) ) 2 (46)\n= ||p(x | t) p(x)\n− 1||2 + √ 1 |X | ( |X | − ∑ x′∈X p(x′ | t) p(x′) ) 2 (47)\n= ||p(x | t) p(x) − 1||2 + 1√ |X | | ∑ x′∈X (1− p(x ′ | t) p(x′) )| (48)\n≤ ||p(x | t) p(x) − 1||2 + 1√ |X | ||p(x | t) p(x) − 1||1 (49) ≤ (1 + 1√ |X | )||p(x | t) p(x) − 1||1 (50)\n≤ 2 minx∈X p(x) ||p(x | t)− p(x)||1 (51)\nFrom an inequality linking KL-divergence and the l1 norm, we have that ||p(x | t)− p(x)||1 ≤ √\n2 log(2)DKL[p(x | t)||p(x)] (52) Plugging Eq. (52) into Eq. (51) and using Eq. (43), we have the following bound:√\nV (p(t | x ∈ X )) ≤ B 2 p(t)\n√ dt, (53)\nwhere B = 4 √ 2 log(2)\nminx∈X p(x) and dt = DKL[p(x | t)||p(x)].\nWe will first proceed the proof under the assumption that Bp(t) √ dt ≤ 1e for any t. We will later see\nthat this condition can be discarded. If Bp(t) √ dt ≤ 1e , then∑\nt\nφ ( 2 √ V (p(t | x ∈ X )) ) (54)\n≤ ∑ t Bp(t) √ dt ( log( 1 B ) + log( 1 p(t)dt ) )\n(55)\n= B log( 1 B ) ∑ t p(t) √ dt +B ∑ t p(t) √ dt log( 1 p(t)dt ) (56)\n≤ B log( 1 B\n)||p(t) √ dt||1 +B|| √ p(t) √ dt||1, (57)\nwhere the last inequality is due to an easily proven fact that for any x > 0, x log( 1x ) ≤ √ x. We p(t) and d(t) are vectors comprising p(t) and dt with different values of t, respectively.\nUsing the following two inequalities: ||p(t) √ dt||1 ≤ √ |T |||p(t) √ dt||2 ≤ √ |T ||| √ p(t)dt||2 (58)\nand\n|| √ p(t) √ dt||1 ≤ √ |T ||| √ p(t) √ dt||2 (59)\n= √ T √ ||p(t) √ dt||1 ≤ |T |3/4 √ || √ p(t)dt||2 (60)\nwe have∑ t φ ( 2 √ V (p(t | x ∈ X )) ) ≤ B log( 1 B ) √ |T ||| √ p(t)dt||2 +B|T |3/4 √ || √ p(t)dt||2. (61)\nUsing the equality\n|| √ p(t)dt||2 = √ E[DKL[p(x|t)||p(x)]] = √ I(X;T ), (62)\nwe reach the following bound∑ t φ ( 2 √ V (p(t | x ∈ X )) ) (63)\n≤ B log( 1 B )|T |1/2I(X;T )1/2 +B|T |3/4I(X;T )1/4. (64)\nPlug Lemma A.2 into the equation above, we have∑ t φ ( 2 √ V (p(t | x ∈ X )) ) (65)\n≤ B log( 1 B )|T |1/2(n n∑ i=1 I(Xi;Ti)) 1/2 +B|T |3/4(n n∑ i=1 I(Xi;Ti)) 1/4 (66)\n≤ √ nB log( 1\nB )|T |1/2 n∑ i=1 I(Xi;Ti) 1/2 + n1/4B|T |3/4 n∑ i=1 I(Xi;Ti) 1/4 (67)\nWe now show the bound is trivial if the assumption that Bp(t) √ dt ≤ 1e does not hold. If the\nassumption does not hold, then there exists a t such that Bp(t) √ dt >\n1 e . 
Since√\nI(X;T ) = √∑ t p(t)dt ≥ ∑ t p(t) √ dt ≥ p(t) √ dt (68)\nfor any t, we get that √ I(X;T ) ≥ 1eB . Since |T | ≥ 1 and C ≥ 0, we get that our bound in Eq. (63) is at least\nB log( 1\nB )|T |1/2I(X;T )1/2 +B|T |3/4I(X;T )1/4 (69)\n≥ √ |T |( log(1/B)\ne + B1/2|T |1/4 e1/2 ) (70)\nLet f(c) = log(1/c)e + c1/2|T |1/4 e1/2 . It can be verifed that f ′(c) > 0 if c > 0. Since B > 4\n√ 2 log(2)\nby the definition of B, we have f(B) > f(4 √ 2 log(2)) > 0.746. Therefore, we have\nB log( 1\nB )|T |1/2I(X;T )1/2 +B|T |3/4I(X;T )1/4 (71) ≥ 0.746 √ |T | ≥ log(|T |) (72)\nTherefore, if indeed Bp(t) √ dt > 1 e for some t, then the bound in Eq. (63) is trivially\ntrue, since H(T | Y ) is within [0, log(|T |)]. Similarly, we can establish a bound for∑ t φ ( 2 √ V (p(t | x ∈ X ′)) ) as follows:∑\nt\nφ ( 2 √ V (p(t | x ∈ X ′)) ) ≤ √ nB′ log( 1\nB′ )|T |1/2 n∑ i=1 I(X ′i;T ′ i ) 1/2 + n1/4B′|T |3/4 n∑ i=1 I(X ′i;T ′ i ) 1/4,\n(73)\nwhere B′ = 4 √ 2 log(2)\nminx∈X′ q(x) .\nPlugging Eq. (73) and Eq. (65) into Eq. (42), we get\n|H(T | Y )−H(T ′ | Y )| ≤ √ nB log( 1\nB )|T |1/2 n∑ i=1 I(Xi;Ti) 1/2 + n1/4B|T |3/4 n∑ i=1 I(Xi;Ti) 1/4+\n√ nB′ log( 1\nB′ )|T |1/2 n∑ i=1 I(X ′i;T ′ i ) 1/2 + n1/4B′|T |3/4 n∑ i=1 I(X ′i;T ′ i ) 1/4\n(74)\nNow we turn to the third summand in Eq. (25), we have to bound |H(T )−H(T ′)|. Recall the definition of -bounded adversarial example. We denote the set of the benign data representation t that are within the -ball of t′ by Q(t′). Then for any t ∈ Q(t′), we have\n||t′i − ti|| ≤ , (75)\nfor i = 1, 2, ..., n. We also denote the number of the -bounded adversarial examples around the benign representation t by c(t). Then we have the distribution of adversarial representation t′ as follows:\nq(t′) = ∑\nt∈Q(t′)\np(t) c(t) (76)\n|H(T )−H(T ′)| (77) = | ∑ t p(t) log p(t)− ∑ t′ q(t′) log q(t′)| (78)\n= | ∑ t p(t) log p(t)− ∑ t′ [ ( ∑ t∈Q(t′) p(t) c(t) ) log( ∑ t∈Q(t′) p(t) c(t) ) ] | (79)\n≤ | ∑ t p(t) log p(t)− ∑ t′ ∑ t∈Q(t′) p(t) c(t) log( p(t) c(t) )| (80)\n= | ∑ t p(t) log p(t)− ∑ t c(t) p(t) c(t) log( p(t) c(t) )| (81)\n= | ∑ t p(t) log c(t)|, (82)\nwhere the inequality is by log sum inequality. If we denote the C = maxt c(t) which is the maximum number of -bounded textual adversarial examples given a benign representation t of a word sequence x, we have\n|H(T )−H(T ′)| (83) ≤ | ∑ t p(t) log c(t)| (84)\n≤ | ∑ t p(t) logC| = logC. (85)\nNote that given a word sequence x of n with representation t, the number of -bounded textual adversarial examples c(t) is finite given a finite vocabulary size. Therefore, if each word has at most k candidate word perturbations, then logC ≤ n log k can be viewed as some constants depending only on n and .\nNow, combining Eq. (25), Eq. (74) and Eq. (85), we prove the bound in Theorem 3.2." } ]
2021
INFOBERT: IMPROVING ROBUSTNESS OF LANGUAGE MODELS FROM AN INFORMATION THEORETIC PERSPECTIVE
SP:c2c3dfb15f6f05041cbbe6b4542f5dee3eb4e763
[ "This paper concerns the problem of image segmentation from referring expressions. Given an image and a query phrase about a particular object in the image, the goal is to locate the target object as a mask at the pixel level. The basic framework is U-Net, which consists of two branches: an image encoder and segmentation map decoder (connected at the bottom in a U-shape). The paper proposes to use language to modulate the image encoding and decoding process intensively, by applying auxiliary convolutional connections between the two branches and further condition the convolution kernel on the language embedding. Overall, the paper is easy to follow and has done a good literature review." ]
How to best integrate linguistic and perceptual processing in multimodal tasks is an important open problem. In this work we argue that the common technique of using language to direct visual attention over high-level visual features may not be optimal. Using language throughout the bottom-up visual pathway, going from pixels to high-level features, may be necessary. Our experiments on several English referring expression datasets show significant improvements when language is used to control the filters for bottom-up visual processing in addition to top-down attention.
[ { "affiliations": [], "name": "CESSING WITH" }, { "affiliations": [], "name": "REFERRING EXPRESSIONS" } ]
[ { "authors": [ "Peter Anderson", "Xiaodong He", "Chris Buehler", "Damien Teney", "Mark Johnson", "Stephen Gould", "Lei Zhang" ], "title": "Bottom-up and top-down attention for image captioning and visual question answering", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Peter Anderson", "Qi Wu", "Damien Teney", "Jake Bruce", "Mark Johnson", "Niko Sünderhauf", "Ian Reid", "Stephen Gould", "Anton van den Hengel" ], "title": "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018b", "year": 2018 }, { "authors": [ "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Dan Klein" ], "title": "Neural module networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Stanislaw Antol", "Aishwarya Agrawal", "Jiasen Lu", "Margaret Mitchell", "Dhruv Batra", "C. Lawrence Zitnick", "Devi Parikh" ], "title": "Vqa: Visual question answering", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Paul Bloom" ], "title": "How children learn the meanings of words", "venue": "MIT press,", "year": 2002 }, { "authors": [ "Bastien Boutonnet", "Gary Lupyan" ], "title": "Words jump-start vision: A label advantage in object recognition", "venue": "Journal of Neuroscience,", "year": 2015 }, { "authors": [ "Ding-Jie Chen", "Songhao Jia", "Yi-Chen Lo", "Hwann-Tzong Chen", "Tyng-Luh Liu" ], "title": "See-through-text grouping for referring image segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L Yuille" ], "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Y.-W. Chen", "Y.-H. Tsai", "T. Wang", "Y.-Y. Lin", "M.-H. 
Yang" ], "title": "Referring expression object segmentation with caption-aware consistency", "venue": "In British Machine Vision Conference (BMVC),", "year": 2019 }, { "authors": [ "Volkan Cirik", "Taylor Berg-Kirkpatrick", "Louis-Philippe Morency" ], "title": "Using syntax to ground referring expressions in natural images", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Charles E Connor", "Howard E Egeth", "Steven Yantis" ], "title": "Visual attention: bottom-up versus topdown", "venue": "Current biology,", "year": 2004 }, { "authors": [ "Maurizio Corbetta", "Gordon L Shulman" ], "title": "Control of goal-directed and stimulus-driven attention in the brain", "venue": "Nature reviews neuroscience,", "year": 2002 }, { "authors": [ "Harm De Vries", "Florian Strub", "Jérémie Mary", "Hugo Larochelle", "Olivier Pietquin", "Aaron C Courville" ], "title": "Modulating early visual processing by language", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Banchiamlack Dessalegn", "Barbara Landau" ], "title": "More than meets the eye: The role of language in binding and maintaining feature conjunctions", "venue": "Psychological science,", "year": 2008 }, { "authors": [ "Hugo Jair Escalante", "Carlos A Hernández", "Jesus A Gonzalez", "Aurelio López-López", "Manuel Montes", "Eduardo F Morales", "L Enrique Sucar", "Luis Villaseñor", "Michael Grubinger" ], "title": "The segmented and annotated iapr tc-12 benchmark", "venue": "Computer vision and image understanding,", "year": 2010 }, { "authors": [ "Mark Everingham", "Luc Van Gool", "Christopher KI Williams", "John Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes (voc) challenge", "venue": "International journal of computer vision,", "year": 2010 }, { "authors": [ "Chelsea Finn", "Ian Goodfellow", "Sergey Levine" ], "title": "Unsupervised learning for physical interaction through video prediction", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Vittorio Gallese", "George Lakoff" ], "title": "The brain’s concepts: The role of the sensory-motor system in conceptual knowledge", "venue": "Cognitive neuropsychology,", "year": 2005 }, { "authors": [ "Peng Gao", "Hongsheng Li", "Shuang Li", "Pan Lu", "Yikang Li", "Steven CH Hoi", "Xiaogang Wang" ], "title": "Question-guided hybrid convolution for visual question answering", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Kirill Gavrilyuk", "Amir Ghodrati", "Zhenyang Li", "Cees GM Snoek" ], "title": "Actor and action video segmentation from a sentence", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ross Girshick", "Jeff Donahue", "Trevor Darrell", "Jitendra Malik" ], "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Ronghang Hu", "Marcus Rohrbach", "Trevor Darrell" ], 
"title": "Segmentation from natural language expressions", "venue": "Lecture Notes in Computer Science, pp. 108–124,", "year": 2016 }, { "authors": [ "Ronghang Hu", "Huazhe Xu", "Marcus Rohrbach", "Jiashi Feng", "Kate Saenko", "Trevor Darrell" ], "title": "Natural language object retrieval. 2016", "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Ronghang Hu", "Marcus Rohrbach", "Jacob Andreas", "Trevor Darrell", "Kate Saenko" ], "title": "Modeling relationships in referential expressions with compositional modular networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017. doi: 10.1109/cvpr.2017.470. URL http://dx.doi.org/10.1109/CVPR.2017.470", "year": 2017 }, { "authors": [ "Zhiwei Hu", "Guang Feng", "Jiayu Sun", "Lihe Zhang", "Huchuan Lu" ], "title": "Bi-directional relationship inferring network for referring image segmentation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Drew A Hudson", "Christopher D Manning" ], "title": "Gqa: A new dataset for real-world visual reasoning and compositional question answering", "venue": "Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Tianrui Hui", "Si Liu", "Shaofei Huang", "Guanbin Li", "Sansi Yu", "Faxi Zhang", "Jizhong Han" ], "title": "Linguistic structure guided context modeling for referring image segmentation", "venue": null, "year": 2020 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Ray Jackendoff", "Ray S Jackendoff" ], "title": "Foundations of language: Brain, meaning, grammar, evolution", "venue": null, "year": 2002 }, { "authors": [ "Sahar Kazemzadeh", "Vicente Ordonez", "Mark Matten", "Tamara Berg" ], "title": "Referitgame: Referring to objects in photographs of natural scenes", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Jin-Hwa Kim", "Sang-Woo Lee", "Donghyun Kwak", "Min-Oh Heo", "Jeonghee Kim", "Jung-Woo Ha", "Byoung-Tak Zhang" ], "title": "Multimodal residual learning for visual qa", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Ranjay Krishna", "Yuke Zhu", "Oliver Groth", "Justin Johnson", "Kenji Hata", "Joshua Kravitz", "Stephanie Chen", "Yannis Kalantidis", "Li-Jia Li", "David A Shamma", "Michael Bernstein", "Li Fei-Fei" ], "title": "Visual genome: Connecting language and vision using crowdsourced dense image", "venue": "URL https://arxiv.org/abs/1602.07332", "year": 2016 }, { "authors": [ "Ruiyu Li", "Kaican Li", "Yi-Chun Kuo", "Michelle Shu", "Xiaojuan Qi", "Xiaoyong Shen", "Jiaya Jia" ], "title": "Referring image segmentation via recurrent refinement networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zhenyang Li", "Ran Tao", "Efstratios Gavves", "Cees GM Snoek", "Arnold WM Smeulders" ], "title": "Tracking by natural language specification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern 
Recognition,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Chenxi Liu", "Zhe L. Lin", "Xiaohui Shen", "Jimei Yang", "Xin Lu", "Alan L. Yuille" ], "title": "Recurrent multimodal interaction for referring image segmentation", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Daqing Liu", "Hanwang Zhang", "Feng Wu", "Zheng-Jun Zha" ], "title": "Learning to assemble neural module tree networks for visual grounding", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Jonathan Long", "Evan Shelhamer", "Trevor Darrell" ], "title": "Fully convolutional networks for semantic segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Jiasen Lu", "Jianwei Yang", "Dhruv Batra", "Devi Parikh" ], "title": "Hierarchical question-image co-attention for visual question answering", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Jiasen Lu", "Caiming Xiong", "Devi Parikh", "Richard Socher" ], "title": "Knowing when to look: Adaptive attention via a visual sentinel for image captioning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Andrew L Maas", "Awni Y Hannun", "Andrew Y Ng" ], "title": "Rectifier nonlinearities improve neural network acoustic models", "venue": "In Proc. icml,", "year": 2013 }, { "authors": [ "Mateusz Malinowski", "Marcus Rohrbach", "Mario Fritz" ], "title": "Ask your neurons: A neural-based approach to answering questions about images", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Junhua Mao", "Jonathan Huang", "Alexander Toshev", "Oana Camburu", "Alan Yuille", "Kevin Murphy" ], "title": "Generation and comprehension of unambiguous object descriptions", "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Edgar Margffoy-Tuay", "Juan C Pérez", "Emilio Botero", "Pablo Arbeláez" ], "title": "Dynamic multimodal instance segmentation guided by natural language queries", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Lotte Meteyard", "Bahador Bahrami", "Gabriella Vigliocco" ], "title": "Motion detection and motion verbs: Language affects low-level visual perception", "venue": "Psychological Science,", "year": 2007 }, { "authors": [ "Dipendra Misra", "Andrew Bennett", "Valts Blukis", "Eyvind Niklasson", "Max Shatkhin", "Yoav Artzi" ], "title": "Mapping instructions to actions in 3d environments with visual goal prediction", "venue": "arXiv preprint arXiv:1809.00786,", "year": 2018 }, { "authors": [ "Varun K. Nagaraja", "Vlad I. Morariu", "Larry S. 
Davis" ], "title": "Modeling context between objects for referring expression understanding", "venue": "Lecture Notes in Computer Science,", "year": 2016 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm de Vries", "Vincent Dumoulin", "Aaron C. Courville" ], "title": "Film: Visual reasoning with a general conditioning layer", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Friedemann Pulvermüller" ], "title": "Words in the brain’s language", "venue": "Behavioral and brain sciences,", "year": 1999 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2017 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical image computing and computerassisted intervention,", "year": 2015 }, { "authors": [ "Hengcan Shi", "Hongliang Li", "Fanman Meng", "Qingbo Wu" ], "title": "Key-word-aware network for referring expression image segmentation", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Xingjian SHI", "Zhourong Chen", "Hao Wang", "Dit-Yan Yeung", "Wai-kin Wong", "Wangchun WOO" ], "title": "Convolutional lstm network: A machine learning approach for precipitation nowcasting", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. 
", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Alane Suhr", "Mike Lewis", "James Yeh", "Yoav Artzi" ], "title": "A corpus of natural language for visual reasoning", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "year": 2017 }, { "authors": [ "Jan Theeuwes" ], "title": "Top–down and bottom–up control of visual selection", "venue": "Acta psychologica,", "year": 2010 }, { "authors": [ "Gabriella Vigliocco", "David P Vinson", "William Lewis", "Merrill F Garrett" ], "title": "Representing the meanings of object and action words: The featural and unitary semantic space hypothesis", "venue": "Cognitive psychology,", "year": 2004 }, { "authors": [ "Peng Wang", "Qi Wu", "Jiewei Cao", "Chunhua Shen", "Lianli Gao", "Anton van den Hengel" ], "title": "Neighbourhood watch: Referring expression comprehension via language-guided graph attention networks", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Terry Winograd" ], "title": "Understanding natural language", "venue": "Cognitive psychology,", "year": 1972 }, { "authors": [ "Huijuan Xu", "Kate Saenko" ], "title": "Ask, attend and answer: Exploring question-guided spatial attention for visual question answering", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Kelvin Xu", "Jimmy Ba", "Ryan Kiros", "Kyunghyun Cho", "Aaron Courville", "Ruslan Salakhudinov", "Rich Zemel", "Yoshua Bengio" ], "title": "Show, attend and tell: Neural image caption generation with visual attention", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Zichao Yang", "Xiaodong He", "Jianfeng Gao", "Li Deng", "Alex Smola" ], "title": "Stacked attention networks for image question answering", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Linwei Ye", "Mrigank Rochan", "Zhi Liu", "Yang Wang" ], "title": "Cross-modal self-attention network for referring image segmentation", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Licheng Yu", "Patrick Poirson", "Shan Yang", "Alexander C. Berg", "Tamara L. Berg" ], "title": "Modeling context in referring expressions", "venue": "Lecture Notes in Computer Science,", "year": 2016 }, { "authors": [ "Licheng Yu", "Zhe Lin", "Xiaohui Shen", "Jimei Yang", "Xin Lu", "Mohit Bansal", "Tamara L. Berg" ], "title": "Mattnet: Modular attention network for referring expression comprehension", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Rowan Zellers", "Yonatan Bisk", "Ali Farhadi", "Yejin Choi" ], "title": "From recognition to cognition: Visual commonsense reasoning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "As human beings, we can easily understand the surrounding environment with our visual system and interact with each other using language. Since the work of Winograd (1972), developing a system that understands human language in a situated environment is one of the long-standing goals of artificial intelligence. Recent successes of deep learning studies in both language and vision domains have increased the interest in tasks that combine language and vision (Antol et al., 2015; Xu et al., 2015; Krishna et al., 2016; Suhr et al., 2017; Anderson et al., 2018b; Hudson & Manning, 2019). However, how to best integrate linguistic and perceptual processing is still an important open problem. In this work we investigate whether language should be used to control the filters for bottom-up visual processing as well as top-down attention.\nIn the human visual system, attention is driven by both “top-down” cognitive processes (e.g. focusing on target’s color or location) and “bottom-up” salient, behaviourally relevant stimuli (e.g. fast moving objects) (Corbetta & Shulman, 2002; Connor et al., 2004; Theeuwes, 2010). Studies on embodied language explore the link between linguistic and perceptual representations (Pulvermüller, 1999; Vigliocco et al., 2004; Gallese & Lakoff, 2005) and it is often assumed that language has a high-level effect on perception and drives the “top-down” visual attention (Bloom, 2002; Jackendoff & Jackendoff, 2002; Dessalegn & Landau, 2008). However, recent studies from cognitive science point out that language comprehension also affects low-level visual processing (Meteyard et al., 2007; Boutonnet & Lupyan, 2015). Motivated by this, we propose a model1 that can modulate either or both of “bottom-up” and “top-down” visual processing with language conditional filters.\nCurrent deep learning systems for language-vision tasks typically start with low-level image processing that is not conditioned on language, then connect the language representation with high level visual features to control the visual focus. To integrate both modalities, concatenation (Malinowski et al., 2015), element-wise multiplication (Malinowski et al., 2015; Lu et al., 2016; Kim et al., 2016) or attention from language to vision (Xu et al., 2015; Xu & Saenko, 2016; Yang et al., 2016; Lu et al., 2017; Anderson et al., 2018a; Zellers et al., 2019) may be used. Specifically they do not condition low-level visual features on language. One exception is De Vries et al. (2017) which proposes conditioning the ResNet (He et al., 2016) image processing network with language conditioned batch normalization parameters at every stage. Our model differs from these architectures by having explicit “bottom-up” and “top-down” branches and allowing us to experiment with modulating one or both branches with language generated kernels.\n1We will release our code and pre-trained models along with a reproducible environment after the blind review process.\nWe evaluate our proposed model on the task of image segmentation from referring expressions where given an image and a natural language description, the model returns a segmentation mask that marks the object(s) described. We can contrast this with purely image based object detection (Girshick, 2015; Ren et al., 2017) and semantic segmentation (Long et al., 2015; Ronneberger et al., 2015; Chen et al., 2017) tasks which are limited to predefined semantic classes. 
Our task gives users more flexibility to interact with the system by allowing them to describe objects of interest in free form language. The language input may contain various visual attributes (e.g., color, shape), spatial information (e.g., “on the right”, “in front of”), actions (e.g., “running”, “sitting”) and interactions/relations between different objects (e.g., “arm of the chair that the cat is sitting in”). This makes the task both more challenging and suitable for comparing different strategies of language control.\nThe perceptual module of our model is based on the U-Net image segmentation architecture (Ronneberger et al., 2015). This architecture has clearly separated bottom-up and top-down branches which allows us to easily vary what parts are conditioned on language. The bottom-up branch starts from low level visual features and applies a sequence of contracting filters that result in successively higher level feature maps with lower spatial resolution. Following this is a top-down branch which takes the final low resolution feature map and applies a sequence of expanding filters that eventually result in a segmentation mask at the original image resolution. Information flows between branches through skip connections between contracting and expanding filters at the same level. We experiment with conditioning one or both of these branches with language.\nTo make visual processing conditional on language, we add language-conditional filters at each level of the architecture, similar to Misra et al. (2018). Our baseline only applies languageconditional filters on the top-down branch. Modulating only the top-down/expanding branch with language means the high level features extracted by the bottom-up/contracting branch cannot be language-conditional. Our model expands on this baseline by modulating both branches with language-conditional filters. Empirically, we find that adding language modulation to the bottomup/contracting branch has a significant positive improvement on the baseline model. Our proposed model achieves state-of-the art performance on three different English referring expression datasets." }, { "heading": "2 RELATED WORK", "text": "In this section, we review related work in several related areas: Semantic segmentation classifies the object category of each pixel in an image without language input. Referring expression comprehension locates a bounding box for the object(s) described in the language input. Image segmentation from referring expressions generates a segmentation mask for the object(s) described in the language input. We also cover work on language-conditional (dynamic) filters and studies that use them to modulate deep-learning models with language." }, { "heading": "2.1 SEMANTIC SEGMENTATION", "text": "Primitive semantic segmentation models are based on Fully Convolutional Networks (FCN) (Long et al., 2015). DeepLab (Chen et al., 2017) and U-Net (Ronneberger et al., 2015) are the most notable state-of-the-art semantic segmentation models related to our work. DeepLab replaces regular convolutions with atrous (dilated) convolutions in the last residual block of ResNets (He et al., 2016) and implements Atrous Spatial Pyramid Pooling (ASPP) which fuses multi-scale visual information. The U-Net architecture (Ronneberger et al., 2015) improves over the standard FCN by connecting contracting (bottom-up) and expanding (top-down) paths at the same resolution: the output of the encoder layer at each level is passed to the decoder at the same level." 
}, { "heading": "2.2 REFERRING EXPRESSION COMPREHENSION", "text": "Early models for this task were typically built using a hybrid LSTM-CNN architecture (Hu et al., 2016b; Mao et al., 2016). Newer models (Hu et al., 2017; Yu et al., 2016; 2018; Wang et al., 2019) use an Region-based CNN (R-CNN) variant (Girshick et al., 2014; Ren et al., 2017; He et al., 2017) as a sub-component to generate object proposals. Nagaraja et al. (2016) proposes a solution based on multiple instance learning. Cirik et al. (2018) implements a model based on Neural Module Networks (NMN) by using syntax information. Among the literature, Compositional Modular Network\n(CMN) (Hu et al., 2017), Modular Attention Network (MAttNet) (Yu et al., 2018) and Neural Module Tree Networks (NMTree) (Liu et al., 2019) are the most notable state-of-the-art methods, and all of them are based on NMN (Andreas et al., 2016)." }, { "heading": "2.3 IMAGE SEGMENTATION FROM REFERRING EXPRESSIONS", "text": "Notable models for this task include Recurrent Multimodal Interaction (RMI) model (Liu et al., 2017), Recurrent Refinement Networks (RRN) (Li et al., 2018), Dynamic Multimodal Network (DMN) (Margffoy-Tuay et al., 2018), Convolutional RNN with See-through-Text Embedding Pixelwise heatmaps (Step-ConvRNN or ConvRNN-STEM) (Chen et al., 2019a), Caption-aware Consistent Segmentation Model (CAC) (Chen et al., 2019b), Bi-directional Relationship Inferring Network (BRINet) Hu et al. (2020) and Linguistic Structure guided Context Modelling (LSCM) module Hui et al. (2020). RRN which has a structure similar to U-Net, is built on top of a Convolutional LSTM (ConvLSTM) (SHI et al., 2015) network. Unlike our model, ConvLSTM filters are not generated from language representation and the multi-modal representation is used only in the initial time step. DMN generates 1 x 1 language-conditional filters for language representation of each word. It performs convolution operation on visual representation with language-conditional filters to generate multi-modal representation for each word. Like RMI, word-level multi-modal representations are fed as input to a multi-modal RRN to obtain multi-modal representation for image/language pairs. Step-ConvRNN starts with a visual-textual co-embedding and uses a ConvRNN to iteratively refine a heatmap for image segmentation. Step-ConvRNN uses a bottom-up and top-down approach similar to this work, however, our model uses spatial language generated kernels within a simpler architecture. CAC also generates 1 x 1 language-conditional dynamic filters. Unlike our model, CAC applies these dynamic filters to single resolution / single feature map and additionally generates location-specific dynamic filters (e.g. left, bottom) to capture relations between the objects exist at the different parts of the image. BRINet implements two different attention mechanisms: language-guided visual attention and vision-guided linguistic attention. LSCM implements a dependency parsing guided bottom-up attention mechanism to predict masks." }, { "heading": "2.4 LANGUAGE-CONDITIONAL FILTERS", "text": "To control a deep learning model with language, early work such as Modulated ResNet (MODERN) (De Vries et al., 2017) and Feature-wise Linear Modulation (FiLM) (Perez et al., 2018) used conditional batch normalization layers with only language-conditioned coefficients rather than customized filters. Finn et al. (2016) generates action-conditioned dynamic filters. Li et al. 
(2017) is the first work which generates dynamic language-conditional filters. Gao et al. (2018) proposes a VQA solution method which has a group convolutional layer whose filters are generated from the question input. Gavrilyuk et al. (2018) introduces a new task called as actor and action segmentation and to solve this task, proposes an architecture which uses dynamic filters for multiple resolutions. Similar to our work, Misra et al. (2018) adds language conditional filters to a U-Net based architecture for the task of mapping instructions to actions in virtual environments. ? also uses an architecture based on U-Net and Misra et al. (2018) to solve a navigation and spatial reasoning problem. Those models only modulate top-down visual processing with language.\nReferring expression models that incorporate language-conditional filters into the architecture include (Chen et al., 2019b; Margffoy-Tuay et al., 2018). Margffoy-Tuay et al. (2018) generates language-conditional filters for words individually rather than whole sentence. Chen et al. (2019b) generates 1 x 1 language-conditional filters from expressions. To make 1 x 1 language-conditional filters spatially aware, different filters are generated for different image regions (e.g. top, left, right, bottom).\nOur main contribution in this work is an explicit evaluation of language conditional filters for bottom-up visual processing in comparison to only using language for top-down attention control." }, { "heading": "3 MODEL", "text": "Figure 1 shows an overview of our proposed architecture. For a given referring expression S and an input image I , the task is predicting a segmentation mask M that covers the object(s) referred to. First, the model extracts a 64×64×1024 tensor of low-level features using a backbone convolutional\nneural network and encodes the referring expression S to a vector representation r using a long shortterm memory (LSTM) network (Hochreiter & Schmidhuber, 1997). Starting with the visual feature tensor, the model generates feature maps in a contracting and an expanding path where the final map represents the segmentation mask, similar to U-Net (Ronneberger et al., 2015). 3x3 convolutional filters generated from the language representation r (language kernels) are used to modulate both the contracting and the expanding paths. Our experiments show that modulating both paths improves the performance dramatically." }, { "heading": "3.1 LOW-LEVEL IMAGE FEATURES", "text": "Given an input image I , we extract visual features from the fourth layer of the DeepLab ResNet101v2 network (Chen et al., 2017) pre-trained on the Pascal VOC dataset (Everingham et al., 2010). We set W = H = 512 as the image size for our experiments. Thus, the output of the fourth convolutional layer of DeepLab ResNet101-v2 produces a feature map with the size of (64, 64) and 1024 channels for this setup. We concatenate 8-D location features to this feature map following previous work (Hu et al., 2016b; Liu et al., 2017; Ye et al., 2019; Chen et al., 2019a). The final representation, I0, has 1032 channels, and the spatial dimensions are (64, 64)." }, { "heading": "3.2 LANGUAGE REPRESENTATION", "text": "Consider a referring expression S = [w1, w2, ..., wn] where wi represents the i’th word. In this work, each word wi is represented with a 300-dimensional GloVe embedding (Pennington et al., 2014), i.e. wi ∈ R300. 
We map the referring expression S to hidden states using a long short-term memory network (Hochreiter & Schmidhuber, 1997) as h_i = LSTM(h_{i−1}, w_i). We use the final hidden state of the LSTM as the textual representation, r = h_n. We set the size of the hidden states to 256, i.e. h_i ∈ R^256." }, { "heading": "3.3 SEGMENTATION MODEL", "text": "After generating the image (I_0) and language (r) representations, our model generates a segmentation mask M. We take the U-Net (Ronneberger et al., 2015) image segmentation model as the visual processing backbone. Our model extends the U-Net by conditioning both contracting and expanding branches on language using spatial language kernels.
Our model applies m convolutional modules to the image representation I_0. Each module, F_i, takes the concatenation of the previously generated feature map (Down_{i−1}) and its convolved version with a 3×3 language kernel K_i^d, and produces an output feature map (Down_i). Each F_i has a 2D convolution layer followed by batch normalization (Ioffe & Szegedy, 2015) and the ReLU activation function (Maas et al., 2013). The convolution layers have 5×5 filters with stride = 2 and padding = 2, halving the spatial resolution, and they all have the same number of output channels.
Following Misra et al. (2018), we split the textual representation r into m equal parts (t_i) to generate language-conditional filters (language kernels). We use each t_i to generate a language-conditional kernel (K_i^d):
K_i^d = AFFINE_i(DROPOUT(t_i)) (1)
Each AFFINE_i is an affine transformation followed by normalizing and reshaping to convert the output into a convolutional filter. DROPOUT is dropout regularization (Srivastava et al., 2014). After obtaining the kernel, we convolve it over the feature map obtained from the previous module to relate expressions to image features:
G_i^d = CONVOLVE(K_i^d, Down_{i−1}) (2)
Then, the concatenation of the resulting text-modulated features (G_i^d) and the previously generated features (Down_{i−1}) is fed into module F_i for the next step.
In the expanding branch, we generate m feature maps starting from the final output of the contracting branch as follows:
G_j^u = CONVOLVE(K_j^u, Down_j) (3)
Up_m = H_m(G_m^u) (4)
Up_j = H_j(G_j^u ⊕ Up_{j+1}) (5)
Similar to the bottom-up phase, G_j^u is the feature map modulated with language-conditional kernels generated as follows:
K_j^u = AFFINE_j(DROPOUT(t_j)) (6)
where AFFINE_j is again an affine transformation followed by normalizing and reshaping. Here, we convolve the kernel (K_j^u) over the feature maps from the contracting branch (Down_j). Each upsampling module H_j takes the concatenation (⊕) of the text-modulated features and the feature map (Up_{j+1}) generated by the previous module; only the first module, H_m, operates on just the convolved features. Each H_j consists of a 2D deconvolution layer followed by batch normalization and the ReLU activation function. The deconvolution layers have 5×5 filters with stride = 2 and padding = 2, doubling the spatial resolution, and they all have the same number of output channels.
After generating the final feature map Up_1, we apply a stack of layers (D_1, D_2, ..., D_m) to map Up_1 to the exact image size. Similar to the upsampling modules, each D_k is a 2D deconvolution layer followed by batch normalization and the ReLU activation. The deconvolutional layers have 5×5 filters with stride = 2 and padding = 2 to double the spatial size of the input. Each D_k preserves the number of channels except for the last one, which maps the features to a single channel for the mask prediction. There is no batch normalization or ReLU activation in the final module; instead, we apply a sigmoid function to turn the final features into probabilities (P ∈ R^{H×W}).
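To make Eqs. (1), (2) and (6) concrete, the following is a minimal PyTorch sketch of generating a language kernel from a slice of r and convolving it over a feature map; the class name is ours, and the per-example Python loop is a simplification (a grouped convolution achieves the same effect in a single call).

```python
import torch
import torch.nn.functional as F

class LangKernel(torch.nn.Module):
    """Sketch of K_i = AFFINE_i(DROPOUT(t_i)) and G_i = CONVOLVE(K_i, feat)."""
    def __init__(self, t_dim, channels, k=3, p_drop=0.2):
        super().__init__()
        self.affine = torch.nn.Linear(t_dim, channels * channels * k * k)
        self.drop = torch.nn.Dropout(p_drop)
        self.kshape = (channels, channels, k, k)

    def forward(self, t_i, feat):
        # t_i: (B, t_dim) slice of the language vector; feat: (B, C, H, W) feature map
        w = self.affine(self.drop(t_i))                   # affine transformation
        w = F.normalize(w, dim=1).view(-1, *self.kshape)  # normalize and reshape to kernels
        out = [F.conv2d(f.unsqueeze(0), w_b, padding=1)   # convolve each example's own kernel
               for f, w_b in zip(feat, w)]
        return torch.cat(out, dim=0)
```
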
{ "heading": "3.4 LEARNING", "text": "Given the probabilities (P ∈ RH×W) of each pixel belonging to the target object(s), and the ground-truth mask G ∈ RH×W, the main training objective is the pixel-wise Binary Cross-Entropy (BCE) loss: L(P, G) = −(1/(HW)) Σi,j [Gij log Pij + (1 − Gij) log(1 − Pij)]." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we first give the details of the datasets and our experimental configurations (Section 4.1). A detailed analysis of the contribution of our idea and of the different parts of the architecture is given in Section 4.2. Then we present our main results and compare our model with the state-of-the-art (Section 4.3). Finally, Section 4.4 shows some qualitative results." }, { "heading": "4.1 DATASETS AND EXPERIMENT SETUP", "text": "Datasets: We evaluate our model on the ReferIt (130.5k expressions, 19.9k images) (Kazemzadeh et al., 2014), UNC (142k expressions, 20k images), UNC+ (141.5k expressions, 20k images) (Yu et al., 2016), and Google-Ref (G-Ref) (104.5k expressions, 26.7k images) (Mao et al., 2016) datasets. Unlike in UNC, location-specific expressions are excluded in UNC+ by requiring annotators to describe objects by their appearance. The ReferIt, UNC, and UNC+ datasets were collected through a two-player game (Kazemzadeh et al., 2014) and have short expressions (avg. 4 words). G-Ref has longer and richer expressions, since its expressions were collected through Amazon Mechanical Turk instead of a two-player game. ReferIt images come from the IAPR TC-12 dataset (Escalante et al., 2010), and the others use images from the MS COCO dataset (Lin et al., 2014).\nEvaluation Metrics: Following previous work (Liu et al., 2017; Margffoy-Tuay et al., 2018; Ye et al., 2019; Chen et al., 2019a), we use overall intersection-over-union (IoU) and precision@X as evaluation metrics. Given the predicted segmentation mask and the ground truth, the IoU metric is the ratio between the intersection and the union of the two. The overall IoU is the total intersection over the total union across the test set. The second metric, precision@X, is the percentage of test examples that have an IoU score higher than the threshold X. In experiments, X ∈ {0.5, 0.6, 0.7, 0.8, 0.9}.
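Both metrics are simple to compute; the following minimal reference implementation (ours, for illustration) makes the distinction between overall IoU and per-example precision@X explicit:

import numpy as np

def overall_iou_and_precision(preds, gts, thresholds=(0.5, 0.6, 0.7, 0.8, 0.9)):
    # preds, gts: lists of boolean (H, W) masks over the test set.
    inter_total, union_total, per_example = 0, 0, []
    for p, g in zip(preds, gts):
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        inter_total += inter
        union_total += union
        per_example.append(inter / max(union, 1))  # per-example IoU
    overall_iou = inter_total / max(union_total, 1)  # total inter / total union
    prec = {x: float(np.mean([iou > x for iou in per_example])) for x in thresholds}
    return overall_iou, prec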
Implementation Details: Following (Liu et al., 2017; Margffoy-Tuay et al., 2018; Ye et al., 2019; Chen et al., 2019a), we limit the maximum length of expressions to 20. In all convolutional layers, we set the filter size, stride, and number of filters (ch) to (5, 5), 2, and 96, respectively. The depth is 4 in the U-Net part of the network. We set the dropout probability to 0.2 throughout the network. We use the Adam optimizer (Kingma & Ba, 2014) with default parameters. We freeze the DeepLab ResNet101-v2 weights. There are 60 examples in each minibatch. We train our model for 15 epochs on a Tesla V100 GPU, and each epoch takes at most two hours depending on the dataset." }, { "heading": "4.2 ABLATION RESULTS", "text": "We performed ablation studies to better understand the contributions of the different parts of our model. Table 1 shows the performance of the different architectures on the UNC validation split with the prec@X and overall IoU metrics. Unless otherwise specified, 3 × 3 language-conditional filters are used in our models.\nModulating both top-down and bottom-up visual processing: We implemented three models, Top-down Modulation, Bottom-up Modulation, and Dual Modulation, to show the effect of modulating language in the expanding and contracting visual branches. Since language information leaks through cross-connections between the visual branches, we also experimented with a bottom-up modulation model that has no connections between the branches. Bottom-up Modulation outperforms Top-down Modulation with a ≈4.6 IoU improvement. Modulating language in both visual branches yields the best results, improving on the Bottom-up Modulation model by a ≈1.2 IoU score.\nLanguage-conditional Spatial Filters: When we compare the performance of the Top-down Modulation w/ 1 × 1 filters and Top-down Modulation models, we see that the use of language-conditional spatial filters brings an additional improvement over the base model. Similarly, if we use 1 × 1 filters in our full model, the performance decreases significantly. We performed the same experiment on the G-Ref dataset and again observed a ≈1.3 IoU difference.\nFiLM layers vs. Language-conditional Filters: Another method for modulating with language is conditional batch normalization (De Vries et al., 2017) or its successor, FiLM layers. Thus, we also replaced the language-conditional filters with FiLM layers in the Top-down Modulation w/ 1 × 1 model and observed a ≈0.8 IoU improvement. Moreover, since we can take advantage of language-conditional spatial filters, the Top-down Modulation w/ 3 × 3 baseline outperforms its FiLM variation with a ≈1.9 IoU improvement." }, { "heading": "4.3 QUANTITATIVE RESULTS", "text": "Table 2 shows the comparison of our model with previous work. Our model outperforms all previous models on all datasets. When we compare our model with the previous state-of-the-art model, Step-ConvRNN, the most significant improvement is on the G-Ref dataset.\nWe also compare our model with MAttNet and NMTree, which are referring expression comprehension models. Since they report segmentation results after predicting bounding boxes for objects, they are comparable with our work. Our model is significantly better than MAttNet and NMTree, which depend on an explicit object proposal network trained on more COCO images. This result shows the ability of our model to detect object regions and relate them to expressions.\nTable 1 presents the comparison of our model with the state-of-the-art in terms of prec@X scores. The difference between our model and the state-of-the-art increases as the threshold increases. This indicates that our model is better at both finding and segmenting the referred objects." }, { "heading": "4.4 QUALITATIVE RESULTS", "text": "In this section, we visualize some of the segmentation predictions of our model to gain better insight into the trained model.\nFigure 2 shows some of the cases that our model segments correctly. These examples demonstrate that our model can learn a variety of language and visual reasoning patterns. For example, the first two examples of the first row show that our model learns to relate superlative adjectives (e.g., longer, shorter) to visual comparison. Examples that include spatial prepositions (e.g., on right, on left, next to, behind, over, bottom) demonstrate the spatial reasoning ability of the model. We also see that the model can learn domain-specific nomenclature (catcher, batter) that is present in the dataset.
Lastly, we can see that the model can distinguish different actions (e.g., standing, squatting, sitting).\nFigure 3 shows some of the incorrect segmentation predictions of our model on the UNC validation set. In the figure, each group shows one of the error patterns observed in the examples. One pattern (a) is that our model tends to merge similar objects or their parts when they are hard to distinguish. Another source of errors is that some of the expressions are ambiguous (b): there are multiple objects that could correspond to the expression, and the model segments both candidate objects. Some of the examples (d) are hard to segment completely due to poor lighting or objects that occlude the referred objects. Finally, some of the annotations contain incorrect or incomplete ground-truth masks (c)." }, { "heading": "5 CONCLUSION", "text": "We showed that modulating not only top-down but also bottom-up visual processing with language input improves performance significantly. Our experiments showed that the proposed model achieves state-of-the-art results on 4 different benchmarks and performs significantly (≈ 6 IoU) better than a baseline that uses language only to direct top-down attention. Our future work will focus on using the model as a sub-component to solve a more complex language-and-vision task, such as mapping natural language instructions to sequences of actions." }, { "heading": "A APPENDIX", "text": "A.1 INCREMENTAL SEGMENTATION\nWe also analyzed the behaviour of our model with respect to incrementally given language input in Figure 4. In the initial step, our model only sees an unknown word token. In the second step, our model sees only the first word of the expression. At every step, our model sees one new word in addition to the previous ones. Figure 4 shows that our model can capture ambiguities in image and expression pairs. For the unknown-token input, our model captures all salient objects since there is no restriction. When the word man is fed, the model discards unrelated objects like the umbrella and the wheel. Additionally, when our model starts to see color words for the coat, it initially focuses on both men, since both coats have a black color. When it sees the final expression, it shifts its focus to the correct object." } ]
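For concreteness, the incremental evaluation protocol above amounts to re-running inference on growing expression prefixes. A minimal sketch (ours; `segment` is a hypothetical stand-in for the trained model):

# Incremental inference: re-run the model on growing expression prefixes.
# `segment` returns a (H, W) probability map for an (image, word list) pair;
# UNK is the unknown-word token used at step 0.
def incremental_masks(segment, image, words, UNK="<unk>"):
    masks = [segment(image, [UNK])]              # step 0: unknown token only
    for k in range(1, len(words) + 1):
        masks.append(segment(image, words[:k]))  # reveal one more word per step
    return masks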
2020
TENTION: MODULATING BOTTOM-UP VISUAL PRO-
SP:22d8012175584e1a71a2ebc6bb6d3103ff42f87d
[ "This paper investigated the problem of learning agents when there are execution delays. The authors (i) used a two-state MDP example to show some equivalence between execution delay and stochasticity of transitions; (ii) analyzed the action aggregation method, which cumulated all the history and then made decisions. They show a classic Policy Iteration (PI) method with the aggregation, unfortunately, has its iteration complexity exponentially depending on the delay time $m > 0$; (iii) formulated the Execution-Delay (ED)-MDP and showed that there exists a non-stationary Markov policy which attains optimal value, while any stationary policies will have suboptimal performance; (iv) proposed a model-based Q-learning method, delayed-Q, which used the predicted future state-action sequence to make decisions; (v) did experiments on Maze, CartPole, Acrobot and Atari tasks to verify the proposed delayed-Q method." ]
The standard Markov Decision Process (MDP) formulation hinges on the assumption that an action is executed immediately after it was chosen. However, this assumption is often unrealistic and can lead to catastrophic failures in applications such as robotic manipulation, cloud computing, and finance. We introduce a framework for learning and planning in MDPs where the decision-maker commits actions that are executed with a delay of m steps. The brute-force state-augmentation baseline, where the state is concatenated with the last m committed actions, suffers from an exponential complexity in m, as we show for policy iteration. We then prove that with execution delay, deterministic Markov policies in the original state-space are sufficient for attaining maximal reward, but need to be non-stationary. As for stationary Markov policies, we show they are sub-optimal in general. Consequently, we devise a non-stationary Q-learning style model-based algorithm that solves delayed execution tasks without resorting to state-augmentation. Experiments on tabular, physical, and Atari domains reveal that it converges quickly to high performance even for substantial delays, while standard approaches that either ignore the delay or rely on state-augmentation struggle or fail due to divergence. The code is available at https://github.com/galdl/rl_delay_basic.git.
[ { "affiliations": [], "name": "Esther Derman" }, { "affiliations": [], "name": "Gal Dalal" } ]
[ { "authors": [ "Avner Bar-Ilan", "Agnès Sulem" ], "title": "Explicit solution of inventory problems with delivery lags", "venue": "Mathematics of Operations Research,", "year": 1995 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Dimitri P Bertsekas" ], "title": "Dynamic programming and optimal control, volume 1", "venue": "Athena scientific Belmont, MA,", "year": 1995 }, { "authors": [ "Benjamin Bruder", "Huyên Pham" ], "title": "Impulse control problem on finite horizon with execution delay", "venue": "Stochastic Processes and their Applications,", "year": 2009 }, { "authors": [ "Jeffrey S Campbell", "Sidney N Givigi", "Howard M Schwartz" ], "title": "Multiple model q-learning for stochastic asynchronous rewards", "venue": "Journal of Intelligent & Robotic Systems,", "year": 2016 }, { "authors": [ "Baiming Chen", "Mengdi Xu", "Liang Li", "Ding Zhao" ], "title": "Delay-aware model-based reinforcement learning for continuous control", "venue": "arXiv preprint arXiv:2005.05440,", "year": 2020 }, { "authors": [ "Baiming Chen", "Mengdi Xu", "Zuxin Liu", "Liang Li", "Ding Zhao" ], "title": "Delay-aware multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:2005.05441,", "year": 2020 }, { "authors": [ "Luc Dugard", "Erik I Verriest" ], "title": "Stability and control of time-delay systems, volume 228", "venue": null, "year": 1998 }, { "authors": [ "Gabriel Dulac-Arnold", "Daniel Mankowitz", "Todd Hester" ], "title": "Challenges of real-world reinforcement learning", "venue": "arXiv preprint arXiv:1904.12901,", "year": 2019 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "arXiv preprint arXiv:1802.01561,", "year": 2018 }, { "authors": [ "John Fearnley" ], "title": "Exponential lower bounds for policy iteration", "venue": "In International Colloquium on Automata, Languages, and Programming,", "year": 2010 }, { "authors": [ "Vlad Firoiu", "Tina Ju", "Josh Tenenbaum" ], "title": "At human speed: Deep reinforcement learning with action delay", "venue": "arXiv preprint arXiv:1810.07286,", "year": 2018 }, { "authors": [ "Emilia Fridman" ], "title": "Introduction to time-delay systems: Analysis and control", "venue": null, "year": 2014 }, { "authors": [ "Thomas Dueholm Hansen", "Uri Zwick" ], "title": "Lower bounds for howard’s algorithm for finding minimum mean-cost cycles", "venue": "In International Symposium on Algorithms and Computation,", "year": 2010 }, { "authors": [ "Todd Hester", "Peter Stone" ], "title": "Texplore: real-time sample-efficient reinforcement learning for robots", "venue": "Machine learning,", "year": 2013 }, { "authors": [ "Romain Hollanders", "Jean-Charles Delvenne", "Raphaël M Jungers" ], "title": "The complexity of policy iteration is exponential for discounted markov decision processes", "venue": "IEEE 51st IEEE Conference on Decision and Control (CDC),", "year": 2012 }, { "authors": [ "Ronald A Howard" ], "title": "Dynamic programming and Markov processes", "venue": "John Wiley,", "year": 1960 }, { "authors": [ "Pooria Joulani", "Andras Gyorgy", "Csaba Szepesvári" ], "title": "Online learning under delayed 
feedback", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Konstantinos V Katsikopoulos", "Sascha E Engelbrecht" ], "title": "Markov decision processes with delays and asynchronous cost collection", "venue": "IEEE transactions on automatic control,", "year": 2003 }, { "authors": [ "Seung Wook Kim", "Yuhao Zhou", "Jonah Philion", "Antonio Torralba", "Sanja Fidler" ], "title": "Learning to simulate dynamic environments with gamegan", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Wei Niu", "Xiaolong Ma", "Yanzhi Wang", "Bin Ren" ], "title": "26ms inference time for resnet-50: Towards real-time execution of all dnns on smartphone", "venue": null, "year": 1905 }, { "authors": [ "Ciara Pike-Burke", "Shipra Agrawal", "Csaba Szepesvari", "Steffen Grünewälder" ], "title": "Bandits with delayed anonymous", "venue": "feedback. stat,", "year": 2017 }, { "authors": [ "Martin L Puterman" ], "title": "Markov Decision Processes.: Discrete Stochastic Dynamic Programming", "venue": null, "year": 2014 }, { "authors": [ "Simon Ramstedt", "Chris Pal" ], "title": "Real-time reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jean-Pierre Richard" ], "title": "Time-delay systems: an overview of some recent advances and open problems. automatica", "venue": null, "year": 2003 }, { "authors": [ "Bruno Scherrer" ], "title": "Improved and generalized upper bounds on the complexity of policy iteration", "venue": "Mathematics of Operations Research,", "year": 2016 }, { "authors": [ "Alessandro Toschi", "Mustafa Sanic", "Jingwen Leng", "Quan Chen", "Chunlin Wang", "Minyi Guo" ], "title": "Characterizing perception module performance and robustness in production-scale autonomous driving system", "venue": "In IFIP International Conference on Network and Parallel Computing,", "year": 2019 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "arXiv preprint arXiv:1509.06461,", "year": 2015 }, { "authors": [ "Thomas J Walsh", "Ali Nouri", "Lihong Li", "Michael L Littman" ], "title": "Learning and planning in environments with delayed feedback", "venue": "Autonomous Agents and Multi-Agent Systems,", "year": 2009 }, { "authors": [ "Ted Xiao", "Eric Jang", "Dmitry Kalashnikov", "Sergey Levine", "Julian Ibarz", "Karol Hausman", "Alexander Herzog" ], "title": "Thinking while moving: Deep reinforcement learning with concurrent control", "venue": null, "year": 2004 }, { "authors": [ "Hengyu Zhao", "Yubo Zhang", "Pingfan Meng", "Hui Shi", "Li Erran Li", "Tiancheng Lou", "Jishen Zhao" ], "title": "Towards safety-aware computing system design in autonomous vehicles", "venue": null, "year": 1905 } ]
[ { "heading": "1 INTRODUCTION", "text": "The body of work on reinforcement learning (RL) and planning problem setups has grown vast in recent decades. Examples for such distinctions are different objectives and constraints, assumptions on access to the model or logged trajectories, on-policy or off-policy paradigms, etc. (Puterman, 2014). However, the study of delay in RL remains scarce. It is almost always assumed the action is executed as soon as the agent chooses it. This assumption seldom holds in real-world applications (DulacArnold et al., 2019). Latency in action execution can either stem from the increasing computational complexity of modern systems and related tasks, or the infrastructure itself. The wide range of such applications includes robotic manipulation, cloud computing, financial trading, sensor feedback in autonomous systems, and more. To elaborate, consider an autonomous vehicle required for immediate response to a sudden hazard on the highway. Driving at high speed, it suffers from perception module latency when inferring the surrounding scene, as well as delay in actuation once a decision has been made. While the latter phenomenon is an instance of execution delay, the former corresponds to observation delay. These two types of delay are in fact equivalent and can thus be treated with the same tools (Katsikopoulos & Engelbrecht, 2003).\nRelated works. The notion of delay is prominent in control theory with linear time-invariant systems (Bar-Ilan & Sulem, 1995; Dugard & Verriest, 1998; Richard, 2003; Fridman, 2014; Bruder & Pham, 2009). While the delayed control literature is vast, our work intersects with it mostly in motivation. In the above control theory formulations, the system evolves according to some known diffusion or stochastic differential equation. Differently, the discrete-time MDP framework does not require any structural assumption on the transition function or reward.\n˚Equal contribution\nA few works consider a delay in the reward signal rather than in observation or execution. Delayed reward has been studied on multi-armed bandits for deterministic and stochastic latencies (Joulani et al., 2013) and for the resulting arm credit assignment problem (Pike-Burke et al., 2017). In the MDP setting, Campbell et al. (2016) proposed a Q-learning variant for reward-delay that follows a Poisson distribution. Katsikopoulos & Engelbrecht (2003) considered three types of delay: observation, execution, and reward. Chen et al. (2020b) studied execution delay on multi-agent systems. The above works on MDPs employed state-augmentation with a primary focus on empirical evaluation of the degradation introduced by the delay. In this augmentation method, all missing information is concatenated with the original state to overcome the partial observability induced by the delay. The main drawback of this embedding method is the exponential growth of the state-space with the delay value (Walsh et al., 2009; Chen et al., 2020a) and, in the case of (Chen et al., 2020b), an additional growth that is polynomial with the number of agents.\nWalsh et al. (2009) avoided state-augmentation in MDPs with delayed feedback via a planning approach. By assuming the transition kernel to be close to deterministic, their model-based simulation (MBS) algorithm relies on a most-likely present state estimate. 
Since the Delayed-Q algorithm we devise here resembles MBS in spirit, we highlight crucial differences between them: First, MBS is a conceptual algorithm that requires the state-space to be finite or discretized. This makes it highly sensitive to the state-space size, as we shall demonstrate in Sec. 7 [Fig. 5(c)], prohibiting it from running on domains like Atari. Differently, Delayed-Q works with the original, possibly continuous state-space. Second, MBS is an offline algorithm: it estimates a surrogate, non-delayed MDP from samples, and only then does it solve that MDP to obtain the optimal policy (Walsh et al., 2009)[Alg. 2, l. 16]. This is inapplicable to large continuous domains and is again in contrast to Delayed-Q.\nRecent studies considered a concurrent control setting where action sampling occurs simultaneously with the state transition (Ramstedt & Pal, 2019; Xiao et al., 2020). Both assumed a single action selection between two consecutive observations, thus reducing the problem to an MDP with execution delay of m = 1. Chen et al. (2020a) generalized it to an arbitrary number of actions between two observations. Hester & Stone (2013) addressed execution delay in the braking control of autonomous vehicles with a relatively low delay of m ≤ 3. All these works employ state-augmentation to preserve the Markov property of the process, whereas we are interested in whether this restriction can be lifted. Additionally, they studied policy-gradient (policy-based) methods, while we introduce a Q-learning style (value-based) algorithm. Likewise, Firoiu et al. (2018) proposed a modified version of the policy-based IMPALA (Espeholt et al., 2018) which is evaluated on a single video game with delay values of m ≤ 7. To the best of our knowledge, our work is the first to tackle a delayed variant of the popular Atari suite (Bellemare et al., 2013).\nContributions. Revisiting RL with execution delay both in theory and practice, we introduce:\n1. An analysis of a delayed MDP quantifying the trade-off between stochasticity and delay.\n2. The first tight upper and lower complexity bounds on policy iteration for action-augmented MDPs. We stress that this is also a contribution to the general RL theory of non-delayed MDPs.\n3. A new formalism of execution-delay MDPs that avoids action-embedding. Using it, we prove that out of the larger set of history-dependent policies, restricting to non-stationary deterministic Markov policies is sufficient for optimality in delayed MDPs. We also derive a Bellman-type recursion for a delayed value function.\n4. A model-based DQN-style algorithm that yields non-stationary Markov policies. Our algorithm outperforms the alternative standard and state-augmented DDQN in 39 of 42 experiments spanning 3 environment categories and delays of up to m = 25." }, { "heading": "2 PRELIMINARIES: NON-DELAYED STANDARD MDP", "text": "Here, we describe the standard non-delayed MDP setup. Later, in Sec. 5, we introduce its generalization to the delayed case. We follow and extend notations from (Puterman, 2014)[Sec. 2.1]. An infinite-horizon discounted MDP is a tuple (S, A, P, r, γ) where S and A are finite state and action spaces, P : S × A → ∆_S is a transition kernel, the reward r : S × A → ℝ is a bounded function, and γ ∈ [0, 1) is a discount factor. At time t, the agent is in s_t and draws an action a_t according to a decision rule d_t that maps past information to a probability distribution q_{d_t} over the action set.
Once a_t is taken, the agent receives a reward r(s_t, a_t).\nA decision rule can be history-dependent (H) or Markovian (M), and randomized (R) or deterministic (D). Denote by H_t the set of possible histories up to time t. Then, a history-dependent decision rule is given by d_t : H_t → ∆_A with h_t ↦ q_{d_t(h_t)}(·). A Markovian decision rule, on the other hand, maps states to actions, i.e., d_t : S → ∆_A with s ↦ q_{d_t(s)}(·). A policy π := (d_t)_{t≥0} is a sequence of decision rules whose type dictates that of the policy. It can be either Markovian deterministic (Π^MD) or randomized (Π^MR), history-dependent deterministic (Π^HD) or randomized (Π^HR). It is stationary if its decision rules do not depend on time, i.e., d_t = d for all t ≥ 0. This defines the smaller class of stationary policies: deterministic (Π^SD) and randomized (Π^SR). Note that stationary policies are inherently Markovian. Indeed, at time t = 0, d : H_0 → ∆_A is state-dependent because H_0 = S. Since the policy is stationary, i.e., d_t = d ∀t, subsequent decision rules are also state-dependent, thus Markovian. This makes Π^HR the most general set and Π^SD the most specific.\nWe denote the probability model by P^π_0, where the subscript 0 stands for the delay value m = 0. The related random variables are denoted by s̃_t ∈ S, ã_t ∈ A and h̃_t ∈ (S × A)^t × S. The value function given a policy π ∈ Π^HR is defined as v^π(s) = E^π_0 [ Σ_{t=0}^∞ γ^t r(s̃_t, ã_t) | s̃_0 = s ], where the expectation is taken with respect to (w.r.t.) P^π_0(· | s̃_0 = s). Let the optimal value function be\nv*(s) := max_{π ∈ Π^HR} v^π(s), ∀s ∈ S. (1)\nOur goal is to find a policy π* that yields v*, and it is known that focusing on stationary deterministic policies π ∈ Π^SD is sufficient for reaching the optimum in (1) (Puterman, 2014)[Thm. 6.2.10].\n3 MDPS WITH DELAY: A DEGRADATION EXAMPLE\nIn an MDP with execution delay m, any action chosen at time t is executed at t + m. (The exact terminology used by Katsikopoulos & Engelbrecht (2003) is action delay, while in (Bertsekas et al., 1995)[Section 1.4] it is time lag; we prefer the term execution delay since the action is itself decided instantaneously.) Therefore, at each step, the agent witnesses the current state and the action being executed, but selects a new action that will be applied in a future state. We assume that m decided actions are already awaiting execution at t = 0, so at any given time, the queue of pending actions is of constant length m. As we illustrate in the next example, having a delay generally comes at a price.\nExample 3.1 (Two-state MDP). Consider the MDP in Fig. 1. It has two states and two actions: S = {s_0, s_1}, A = {a_0, a_1}. The transition kernel is independent of the action: for all s, s′ ∈ S s.t. s ≠ s′, P(s′|s, a) = P(s′|s) = p where p ∈ [0.5, 1]. The reward is positive for one of the two actions only: r(s_0, a_0) = r(s_1, a_1) = 1, r(s_0, a_1) = r(s_1, a_0) = 0.\nWe inspect the return obtained from the commonly used set of stationary deterministic policies Π^SD. As expected, the highest possible return is attained when m = 0, but it monotonically decreases with the delay, m, and increases with the level of certainty, p. We analytically quantify this effect in the following and give a proof in Appx. A.1.\nProposition 3.1. For delay m ∈ ℕ and p ∈ [0.5, 1], the optimal return of π* ∈ Π^SD is (1 + (2p − 1)^m) / (2(1 − γ)).\nRemark 3.1. This result demonstrates a clear trade-off between stochasticity and delay. For p → 0.5 or m → ∞, the return goes to its minimal value of 0.5/(1 − γ). Contrarily, for p → 1 or m → 0, it goes to its maximal value of 1/(1 − γ).
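Prop. 3.1 is easy to verify numerically. The following Monte-Carlo check is our own illustrative script, not code from the paper: it plays the stationary policy that targets the most-likely state m steps ahead and compares the estimated return to (1 + (2p − 1)^m) / (2(1 − γ)).

import numpy as np
from collections import deque

def mc_return(p=0.8, m=5, gamma=0.9, horizon=200, episodes=5000, seed=0):
    rng = np.random.default_rng(seed)
    # Iterating the single-step argmax prediction m times flips the predicted
    # state for odd m (when p > 0.5) and keeps it for even m.
    flip = (m % 2 == 1) and (p > 0.5)
    est = 0.0
    for _ in range(episodes):
        past = deque([0])
        for _ in range(m):  # burn-in so an m-step-old state always exists
            past.append(1 - past[-1] if rng.random() < p else past[-1])
        s, ret = past[-1], 0.0
        for t in range(horizon):
            a = 1 - past[0] if flip else past[0]  # committed m steps earlier
            ret += gamma ** t * (a == s)          # reward 1 iff a matches s
            past.popleft()
            s = 1 - s if rng.random() < p else s  # the chain flips w.p. p
            past.append(s)
        est += ret
    return est / episodes

print(mc_return(), (1 + (2 * 0.8 - 1) ** 5) / (2 * (1 - 0.9)))  # both ~5.39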
" }, { "heading": "4 THE AUGMENTATION APPROACH", "text": "In this section, we consider state-augmentation for solving MDPs with execution delay. We begin by defining an equivalent MDP with a larger state space that memorizes all missing information for an informed decision. Due to the full observability, the resulting optimal augmented policy attains the optimal return in the original delayed MDP.\nDefinition 4.1 (m-AMDP). Given an MDP (S, A, P, r, γ) and m ∈ ℕ, an m-Augmented MDP (m-AMDP) is a tuple (X_m, A, F, g, γ) such that X_m := S × A^m is the augmented state-space, A the original action-space, F is the transition matrix given in Appx. B.1, (14), and g is the reward function given in Appx. B.1, (15).\nThe pending action queue is concatenated to the original state to form an augmented state x_t := (s_t, a_t^{-1}, ..., a_t^{-m}) ∈ X_m, where a_t^{-i} is the i-th pending action at time t. This means that in the following step, t + 1, action a_t^{-m} will be executed independently of the present action selection, the queue will shift to the right, and the newly selected action will be placed at the second coordinate. By construction, the m-AMDP is non-delayed; it directly accounts for execution delay through its state representation, as opposed to our coming formulation in Sec. 5. We further define a stationary deterministic policy π̄ ∈ Π̄^SD_m with corresponding decision rule d̄ : X_m → A and augmented value function v^π̄(x) := E^π̄ [ Σ_{t=0}^∞ γ^t g(x̃_t, ã_t) | x̃_0 = x ]. As in (1), our goal is to solve v̄*(x) = max_{π̄ ∈ Π̄^SD_m} v^π̄(x), ∀x ∈ X_m.\nWe now analyze the classical Policy Iteration (PI) algorithm (Howard, 1960) for m-augmented MDPs and provide a finite-time analysis of its convergence. We refer to it as mA-PI and provide its pseudo-code in Appx. B.2. We consider PI since it is a canonical representative upon which many other algorithms are built. Admittedly, we did not find any other formal result quantifying the effect of augmentation on a planning or learning algorithm, other than a PAC upper bound for R-max with ε-optimal policies (Walsh et al., 2009). A proof for the next result is given in Appx. B.4.\nTheorem 4.1 (Lower Bound for mA-PI). The number of iterations required for mA-PI to converge in an m-AMDP M_m is Ω(|X_m|) = Ω(|S||A|^m).\nThm. 4.1 does not take advantage of the special delay problem structure but rather is an application of our more general result to augmented MDPs (Appx. B.4). As pointed out in Scherrer et al. (2016), the lower-bound complexity of PI is considered an open problem, at least in the most general MDP formulation. Lower bounds have been derived in specific cases only, such as deterministic MDPs (Hansen & Zwick, 2010), the total reward criterion (Fearnley, 2010), or a high discount factor (Hollanders et al., 2012). Even though we did not intend to directly address this open question, our lower bound result seems to be a contribution on its own to the general theory of non-delayed MDPs.\nNext, we show that the above lower bound is tight (up to a factor of |A| and logarithmic terms) and that mA-PI is guaranteed to converge after Õ(|S||A|^{m+1}) iterations. A proof is given in Appx. B.5.\nTheorem 4.2 (mA-PI Convergence). The mA-PI algorithm converges to the optimal value-policy pair (v̄*, π̄*) in at most |S||A|^m (|A| − 1) ⌈ log(1/(1 − γ)) / log(1/γ) ⌉ iterations."
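As a concrete illustration of the augmented construction (not code from the paper), the following wrapper maintains the pending-action queue around a generic environment exposing reset()/step(a) -> (s_next, r, done); the transition F and reward g of Def. 4.1 are induced implicitly:

from collections import deque

class AugmentedDelayEnv:
    # Exposes the m-AMDP view: the observation is (s_t, pending action queue),
    # and each step executes the oldest pending action.
    def __init__(self, env, m, init_queue):
        self.env, self.m = env, m
        self.init_queue = list(init_queue)  # the m actions pending at t = 0

    def reset(self):
        self.queue = deque(self.init_queue, maxlen=self.m)
        s = self.env.reset()
        return (s, tuple(self.queue))

    def step(self, a):
        executed = self.queue.popleft()  # the oldest pending action fires
        self.queue.append(a)             # the new action joins the queue
        s_next, r, done = self.env.step(executed)
        return (s_next, tuple(self.queue)), r, done

The exponential blow-up of Thm. 4.1 is visible directly in this wrapper: the observation space has |S||A|^m distinct values.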
}, { "heading": "5 EXECUTION-DELAY MDP: A NEW FORMULATION", "text": "In this section, we introduce and study the stochastic process generated by an MDP with execution delay, without resorting to state-augmentation. In the ED-MDP we consider, the probability measure changes according to the delay value m. We assume that during the m initial steps, actions are sequentially executed according to a fixed queue ā :“ pā0, ¨ ¨ ¨ , ām´1q P Am. Unlike m-AMDPs, the initial queue of pending actions here plays the role of an exogenous variable that is not embedded into the state-space. A policy ⇡ P ⇧HR induces a probability measure P⇡m that is defined through a set of equations which, for brevity, we defer to Appx. C[(16)-(19)]. We note that for t † m, decision rules do not depend on the history, while for t • m, they depend on the history up to t ´ m only. Let µ be an initial state distribution and a Dirac distribution. Using this and the notations from Sec. 2, we can explicitly write the probability of a sample path. See proof in Appx. C.1. Proposition 5.1. For policy ⇡ :“ pd0, d1, ¨ ¨ ¨ q P ⇧HR, the probability of observing history ht :“ ps0, a0, s1, a1 ¨ ¨ ¨ , at´1, stq is given by:\nP⇡mps̃0 “ s0, ã0 “ a0, s̃1 “ s1, ã1 “ a1, ¨ ¨ ¨ , ãt´1 “ at´1, s̃t “ stq\n“ µps0q ˜ m´1π\nk“0 ākpakqppsk`1|sk, akq\n¸ ˜ t´1π\nk“m qdk´mphk´mqpakqppsk`1|sk, akq\n¸ .\nFrom Prop. 5.1 we deduce that, differently than the standard MDP setting where any Markov policy induces a Markov process, the delayed process is not Markovian even for stationary policies (see\nAppx. C.2 for a formal proof). Next, we show that for any history-dependent policy and starting state, there exists a Markov policy (not necessarily stationary) that generates the same process distribution. Consequently, despite execution delay, one can restrict attention to Markov policies without impairing performance. Theorem 5.1. Let ⇡ P ⇧HR be a history dependent policy. For all s0 P S, there exists a Markov policy ⇡1 P ⇧MR that yields the same process distribution as ⇡, i.e., P⇡1mps̃t´m “ s1, ãt “ a|s̃0 “ s0q “ P⇡mps̃t´m “ s1, ãt “ a|s̃0 “ s0q, @a P A, s1 P S, t • m.\nThe proof is given in Appx. C.3. It builds on the concept that for each history-dependent policy ⇡ P ⇧HR, one can choose a sequence of Markov decision rules that reconstruct the same timedependent action distribution in the process induced by ⇡.\nThis result proves attainability of the optimum over ⇧MR, but not how one can efficiently find an optimal policy. In Appx. C.5, (27), we formally define the delayed value function vµ0:µm´1,⇡m for policy ⇡ and initial action distribution queue µ0 : µm´1 :“ pµ0, . . . , µm´1q. In Thm. C.1 there, we show that it satisfies a non-stationary Bellman-type recursion. Though the question of how to efficiently find an optimal non-stationary Markov policy remains generally open, we partially answer it by proving that a deterministic Markov policy is sufficient for the optimal delayed value function. Theorem 5.2. For any action distribution queue µ0 : µm´1 :“ pµ0, . . . , µm´1q and s0 P S,\nmax ⇡P⇧MD\nv µ0:µm´1,⇡ m “ max ⇡P⇧MR v µ0:µm´1,⇡ m .\nDegradation due to stationarity. To complement the finding that a deterministic Markov policy can be optimal for any ED-MDP, we show that restricting to stationary policies impairs performance in general. Thus, while in non-delayed MDPs it is enough to focus on the latter, in ED-MDPs the restriction should be to the more general class of Markov policies. Proposition 5.2. 
Proposition 5.2. There exists an m-ED-MDP for which all stationary policies are sub-optimal.\nThis result follows from computing the optimal return for stationary and non-stationary policies in the ED-MDP from Example 3.1 using simulation. We elaborate on this further in Appx. C.4. There, we also confirm that our theoretical return from Prop. 3.1 matches closely with simulation. Lastly, a visualization of the results from this section is given in Fig. 2." }, { "heading": "6 A NEW ALGORITHM: DELAYED-Q", "text": "We now introduce an algorithm capable of successfully handling tasks with execution delay by inferring the future m-step state before each decision.\nAlgorithm Description. Fig. 3 depicts the algorithm. As a first stage, to select an action a_t to be executed in a future state s_{t+m}, we infer that future state ŝ_{t+m} using the current state s_t and the queue of pending actions (a_{t−m}, . . . , a_{t−1}). This is done by successively applying an approximate forward model m times: ŝ_{t+1} = f(s_t, a_{t−m}), . . . , ŝ_{t+m} = f(ŝ_{t+m−1}, a_{t−1}). More details on the forward models are given in Sec. 7. The approximate model here is simpler than in other model-based algorithms such as tree-search methods, because it does not require access to the reward function. Also, only a single trajectory is sampled rather than exponentially many w.r.t. the horizon length. We do note that this method benefits from the environment not being entirely stochastic (Walsh et al., 2009). Still, as we show next, it performs well even on noisy environments. As a second stage, we select an action according to a policy, a_t = π(ŝ_{t+m}). The two stages of this procedure can be represented as a non-stationary Markov policy π_t(s_t), where the non-stationarity stems from the time-dependency of the action queue, and the Markov property from the policy being applied on s_t and no prior history. Notably, the Q-function here does not take past actions as input, in contrast to the augmentation approach in Sec. 4. To better stress the non-stationarity, we note that applying the policy on the same state at different times can output different actions. Lastly, for training, we maintain a sample-buffer of length m which we use to shift action a_t into the tuple (s_{t+m}, r_{t+m}, a_t, s_{t+m+1}) prior to each insertion into the replay buffer. During the course of this work, we also experimented with a model-free variant. Instead of 'un-delaying' the Q-function with the forward-model, we defined a delayed Q-function trained on sequences whose actions were shifted m steps forward. However, the obtained results were unsatisfactory, seemingly because the Q-function is unable to implicitly learn the m-step transition.
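The two stages above fit in a few lines. The sketch below is our own schematic of the action-selection and replay-shift logic (q_net and forward_model are placeholder callables; this is not the released implementation):

from collections import deque

def delayed_q_act(q_net, forward_model, s_t, pending):
    # Stage 1: roll the approximate 1-step model m times through the queue of
    # pending actions (a_{t-m}, ..., a_{t-1}) to estimate where a_t executes.
    s_hat = s_t
    for a in pending:
        s_hat = forward_model(s_hat, a)
    # Stage 2: act greedily on the predicted future state.
    return int(q_net(s_hat).argmax())

class ShiftedReplay:
    # Pairs each observed transition with the action *executed* at that step,
    # i.e., the action committed m steps earlier ('un-delaying' the Q-function).
    def __init__(self, init_queue):
        self.actions = deque(init_queue)  # committed, not yet executed
        self.storage = []

    def on_commit(self, a):
        self.actions.append(a)

    def on_transition(self, s, r, s_next):
        a_exec = self.actions.popleft()   # the action that just fired
        self.storage.append((s, r, a_exec, s_next))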
Point-Estimate Approaches. For completeness, we mention alternatives to using a 'most-likely' state estimate, such as an expected future state. To demonstrate why point-estimate prediction can be devastating, consider an MDP where s = (x, t): position and time, respectively. Starting from s_0 = (0, 0), t progresses deterministically, while x behaves like a random walk with momentum; i.e., if x > 0, then x + 1 is more likely than x − 1, and vice versa. The process obviously diverges with time. Consider two actions: one is good when |x| is big, and the other when |x| is small. For a large delay m, the PDF of the state is bi-modal and symmetric around (Z, m) and (−Z, m) for some finite Z. But a point estimate (e.g., ML or MAP) would yield a value of (0, m). In addition to this example, we observe that in our Ex. 3.1, any alternative to a 'most-likely' state estimate is worse: there, the optimal policy applies actions based on the most-likely state (see proof of Prop. 3.1), while it is easy to see that any other policy weighing future state probabilities leads to lower reward.\n7 EXPERIMENTS\nWe perform experiments in a wide range of domains: tabular, physical, and image-based Atari. All of them include stochasticity: In the maze we inject noise into actions; in the physical domains we perturb the masses at each step; and Atari is stochastic by nature. We compare our algorithm with two baselines: Oblivious-Q and Augmented-Q. Oblivious-Q is standard Q-learning that ignores delay and assumes each decision to be immediately executed. Augmented-Q acts on the m-AMDP introduced in Def. 4.1. We test all domains on delays m ∈ {0, 5, 15, 25} with 5 seeds per run. All results are summarized in Fig. 10, and are provided in more detail with std. in Appx. D.1, Table 2.\nTabular Maze Domain. We begin with testing Delayed-Q on a Maze domain (Brockman et al., 2016)[tinyurl.com/y34tmfm9]. It is based on tabular Q-learning and enables us to study the merits of our method decoupled from the coming DDQN added complexities. Moreover, it conveys the exponential complexity of Augmented-Q. The forward-model we construct is naturally tabular as well: it predicts a state s′ according to the highest visitation frequency given (s, a). The objective in Maze is to find the shortest path from a start position to a goal state in a randomly generated N × N maze. Reaching the goal yields a reward of 1, and −1/(10N²) per step otherwise. The maximal episode length is 10N² steps, so the cumulative reward is in [−1, 1]. We also create a Noisy Maze environment that perturbs each action with probability p ∈ [0, 0.5]. Convergence plots are given in Fig. 6. Delayed-Q outperforms the rest for all delay values m, while Oblivious-Q fails in all runs for m > 0. Since the augmented state-space grows exponentially with m, Augmented-Q converges more slowly as m increases. In fact, for m > 15 the simulation fails to run due to memory incapacity for the Q-table; this explains its absence in Figs. 6-10. To confirm the exponential complexity growth of Augmented-Q and compare it with Delayed-Q, we trained both agents with increasing delay values, and reported the number of training episodes each one required before reaching a cumulative reward of 0.5. Fig. 4 clearly demonstrates the exponential (resp. linear) dependence of Augmented-Q (resp. Delayed-Q) on the delay value. The linear dependence of Delayed-Q on m is not surprising: Delayed-Q is algorithmically identical to Q-learning, except for the m-step forward-model calls and the replay buffer shift of m samples. To further analyze its sensitivity to the state-space size, we ran tabular Delayed-Q on increasing maze sizes, for a fixed m = 5. As Fig. 5(c) shows, the performance drops exponentially, suggesting high sensitivity to the state-space size and highlighting one shortcoming of MBS (Walsh et al., 2009) (see Sec. 1).
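The tabular forward model described above reduces to a visitation-count table. A minimal sketch (ours; the fallback behavior for unseen (s, a) pairs is an assumption):

from collections import defaultdict

class TabularForwardModel:
    # Most-likely next-state predictor for the maze: f(s, a) is the successor
    # observed most often after taking a in s.
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, s, a, s_next):
        self.counts[(s, a)][s_next] += 1

    def predict(self, s, a):
        succ = self.counts[(s, a)]
        if not succ:
            return s  # assumed fallback for unseen pairs: no movement
        return max(succ, key=succ.get)  # highest visitation frequency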
Physical Domains. Next, we test our approach on two continuous domains: CartPole and Acrobot. (Since CartPole fails within ∼10 steps if the initial actions are random, we initialize the m-length action queue with optimal actions using a pretrained non-delayed model. We wait for 2m steps before starting to append samples to the replay buffer to avoid an unfair advantage due to these actions.) The CartPole task requires balancing a pole connected to a cart that actuates left or right. In Acrobot, one needs to swing the lower of two links connected by a joint up above a certain height. The agent receives a reward of 1 if the pole stays above a certain angle in CartPole, and in Acrobot it receives −1 until it reaches the goal. The episode length is 500 steps in both tasks. We also create noisy versions of both tasks: At each step, normal additive noise is independently added to each physical component's mass, with a std of 0.1 of the nominal mass.\nWe extend the famous DDQN algorithm (Van Hasselt et al., 2015) and compare to it, though our method is general and can be seamlessly integrated into any Q-learning based algorithm. Our one-step forward-model is implemented with a neural network (NN) of the same architecture as the Q-network. Namely, it consists of two hidden layers, each of width 24, with ReLU activations. The input of the forward-model NN is the concatenation of (s, a) and its output is s′. Training the forward-model NN is conducted together with the Q-network training, with the same hyperparameters and sample batches; this makes the implementation easy and simple. For Augmented-Q, a concatenation of the pending actions to the state is fed to the Q-network.\nFig. 6 depicts the performance of the three algorithms for different values of m on Noisy CartPole. As expected from a physical domain, ignoring delay gives catastrophic results even for m = 5. Augmented-Q performs moderately up to m = 15, but fails for larger delays. Delayed-Q performs the best for all m values, and performs well even on the challenging task of balancing a noisy pole with m = 25. We observe similar behavior in all CartPole and Acrobot experiments, as shown in Fig. 10. Moreover, in Fig. 7, we demonstrate the relative robustness of Delayed-Q to different delay values. All tested environments exhibit superior performance of Delayed-Q for a wide range of delays. In Noisy Acrobot, Delayed-Q performs better for m = 25 than the alternatives do for m = 2. Figs. 5(a)-5(b) show a clear trade-off between noise and delay, as we also discuss in Rmk. 3.1. For high delays, the agent is much more sensitive to an increase in stochasticity.\nTo quantify the dependence of Delayed-Q on the model accuracy, we compare the learned model to a perfect one, i.e., the environment itself. Fig. 9 shows performance is impaired more as the delay increases and suggests a better model can potentially improve reward by 20-30%. Further, we test the robustness of Delayed-Q to a misspecified delay by training it with m = 10 and evaluating on other delay values. Fig. 8 shows the evaluation performance for m ∈ {5, . . . , 15}. It demonstrates the robustness of our method – varying performance in evaluation (for good or bad) does not stem from delay misspecification. Instead, the delay is 'forgotten' after training, and Fig. 8 depicts the general effect of execution delay on performance. For a shorter delay than the training one, i.e., m < 10, performance even improves. The reason is that, first, during training, the Q-function is 'un-delayed' due to the replay buffer shift that relates the actions to the correct execution time. Second, the forward-model is trained based on single-step transitions, and only during inference is it queried m times. Thus, these two networks composing the agent are oblivious to the delay they were trained on.
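For the physical domains, the 1-step model is a small MLP. The sketch below mirrors the stated architecture (two hidden layers of width 24, ReLU); the one-hot action encoding, the MSE loss, and the joint-training snippet in the comment are our assumptions:

import torch

class ForwardModelNN(torch.nn.Module):
    # 1-step dynamics model f(s, a) -> s' with the capacity stated in the text:
    # two hidden layers of width 24 with ReLU activations.
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.n_actions = n_actions
        self.net = torch.nn.Sequential(
            torch.nn.Linear(state_dim + n_actions, 24), torch.nn.ReLU(),
            torch.nn.Linear(24, 24), torch.nn.ReLU(),
            torch.nn.Linear(24, state_dim),
        )

    def forward(self, s, a):
        # s: (B, state_dim) float tensor; a: (B,) Long tensor of action indices.
        one_hot = torch.nn.functional.one_hot(a, self.n_actions).float()
        return self.net(torch.cat([s, one_hot], dim=-1))

# Trained jointly with the Q-network on the same sampled batches, e.g.:
# loss = ((model(s_batch, a_batch) - s_next_batch) ** 2).mean()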
Atari Domains. We run the last set of experiments on the Atari Learning Environment (Bellemare et al., 2013). We inspect 8 games from those that were successfully tackled with the original Q-network architecture and hyperparameters of DDQN (Van Hasselt et al., 2015). Since a learned forward-model for images conditioned on actions is an open question at the research frontier, we leave it for future work and use the simulator itself for prediction. It is stochastic in nature and thus encompasses approximation error. For Augmented-Q, we concatenate the action queue to the output of the CNN part of the Q-network; the extended vector is then fed into the subsequent fully-connected part. We train all games for 1M steps. Fig. 6 shows convergence plots for MsPacman. Delayed-Q is consistently better than Augmented-Q for all m values, which is, in turn, better than Oblivious-Q. Although the gap between all three algorithms is small for m = 5, it increases with m. For m = 25, the delay is too large for the augmentation to have a positive effect compared to Oblivious-Q, and they perform the same. This behavior is representative of all Atari games, as can be seen in Fig. 10. Lastly, we compared Delayed-Q with a fourth algorithm which uses an RNN policy that is unaware of the delay value. The results are given in Appx. D.2, showing that a recurrent policy does not improve upon Augmented-Q or Oblivious-Q. This result is not surprising though: as stated in Thm. 5.1, the history sequence s_{t−m}, s_{t−m−1}, . . . does not aid the policy any further than only using s_{t−m}." }, { "heading": "8 DISCUSSION", "text": "In this work, we found that non-stationary deterministic Markov policies are optimal in delayed MDPs. Though more expressive, the standard state augmentation approach is intractable for all but the shortest delays, while the oblivious approach that ignores delay suffers from inferior performance. We derived a Q-learning based algorithm that generates a Markov policy by combining a transition forward model with a Q-network. The forward-model produces a simple future-state estimate. Incorporating probabilistic estimates and other improvements, such as the integration of image-based action-dependent learned forward-models (Kim et al., 2020), is left for future research. Extensions of our work toward real-world applications include unknown or varying delays. In the first case, a good prior for the delay value can often be used, e.g., for autonomous vehicles, as the latency statistics of the different hardware and software components are well studied (Zhao et al., 2019; Niu et al., 2019), while in production systems, they are almost constant (Toschi et al., 2019). Our algorithm is also readily extendable to the second case of varying delay. Differently from the augmentation approach, our 1-step forward-model decouples the algorithm from the delay used for training, as Fig. 8 depicts. Also, quantization of the delay is not essential as long as the forward model can operate with variable delay values. Finally, our framework can be extended to policy-gradient-based methods, which are particularly useful for continuous control, where observation delay is inherent." }, { "heading": "ACKNOWLEDGEMENTS", "text": "The authors would like to thank Daniel J. Mankowitz and Timothy A. Mann for motivating this work." } ]
2021
ACTING IN DELAYED ENVIRONMENTS WITH NON-STATIONARY MARKOV POLICIES
SP:90db8e0421db85e4e43b6fbed1cb68aab5c414e7
[ "This work proposes variants of robust OT/p-wasserstein-dist (3)/(4), where the ground cost is in some sense the maximum over costs with (prefixed) groups of features. The motivation is similar to that for feature selection: where perhaps only few of these groups of features are critical/sufficient for OT purposes. So it can also be understood as joint feature-group selection with OT. The resulting convex problem is proposed to be solved using FW, whose details are presented (including convergence)." ]
Optimal transport is a machine learning problem with applications including distribution comparison, feature selection, and generative adversarial networks. In this paper, we propose feature-robust optimal transport (FROT) for high-dimensional data, which solves high-dimensional OT problems using feature selection to avoid the curse of dimensionality. Specifically, we find a transport plan with discriminative features. To this end, we formulate the FROT problem as a min–max optimization problem. We then propose a convex formulation of the FROT problem and solve it using a Frank–Wolfe-based optimization algorithm, whereby the subproblem can be efficiently solved using the Sinkhorn algorithm. Since FROT finds the transport plan from selected features, it is robust to noise features. To show the effectiveness of FROT, we propose using the FROT algorithm for the layer selection problem in deep neural networks for semantic correspondence. By conducting synthetic and benchmark experiments, we demonstrate that the proposed method can find a strong correspondence by determining important layers. We show that the FROT algorithm achieves state-of-the-art performance in real-world semantic correspondence datasets.
[]
[ { "authors": [ "David Alvarez-Melis", "Tommi Jaakkola", "Stefanie Jegelka" ], "title": "Structured optimal transport", "venue": "In AISTATS,", "year": 2018 }, { "authors": [ "David Alvarez-Melis", "Youssef Mroueh", "Tommi S Jaakkola" ], "title": "Unsupervised hierarchy matching with optimal transport over hyperbolic spaces", "venue": null, "year": 2020 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Mathieu Blondel", "Vivien Seguy", "Antoine Rolet" ], "title": "Smooth and sparse optimal transport", "venue": "In AISTATS,", "year": 2018 }, { "authors": [ "Charlotte Bunne", "David Alvarez-Melis", "Andreas Krause", "Stefanie Jegelka" ], "title": "Learning generative models across incomparable spaces", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Marco Cuturi" ], "title": "Sinkhorn distances: Lightspeed computation of optimal transport", "venue": "In NIPS,", "year": 2013 }, { "authors": [ "Marco Cuturi", "Arnaud Doucet" ], "title": "Fast computation of wasserstein barycenters", "venue": null, "year": 2014 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Ishan Deshpande", "Yuan-Ting Hu", "Ruoyu Sun", "Ayis Pyrros", "Nasir Siddiqui", "Sanmi Koyejo", "Zhizhen Zhao", "David Forsyth", "Alexander G Schwing" ], "title": "Max-sliced Wasserstein distance and its use for GANs", "venue": null, "year": 2019 }, { "authors": [ "Sofien Dhouib", "Ievgen Redko", "Tanguy Kerdoncuff", "Rémi Emonet", "Marc Sebban" ], "title": "A swiss army knife for minimax optimal transport", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Steven N Evans", "Frederick A Matsen" ], "title": "The phylogenetic kantorovich–rubinstein metric for environmental sequence samples", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2012 }, { "authors": [ "Marguerite Frank", "Philip Wolfe" ], "title": "An algorithm for quadratic programming", "venue": "Naval research logistics quarterly,", "year": 1956 }, { "authors": [ "Bolin Gao", "Lacra Pavel" ], "title": "On the properties of the softmax function with application in game theory and reinforcement learning", "venue": "arXiv preprint arXiv:1704.00805,", "year": 2017 }, { "authors": [ "Arthur. Gretton", "Kenji. Fukumizu", "C. Hui. Teo", "Le. Song", "Bernhard. 
Schölkopf", "Alex Smola" ], "title": "A kernel statistical test of independence", "venue": "In NIPS,", "year": 2007 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Paul Hongsuck Seo", "Jongmin Lee", "Deunsol Jung", "Bohyung Han", "Minsu Cho" ], "title": "Attentive semantic alignment with offset-aware correlation kernels", "venue": null, "year": 2018 }, { "authors": [ "Martin Jaggi" ], "title": "Revisiting frank-wolfe: Projection-free sparse convex optimization", "venue": "In ICML,", "year": 2013 }, { "authors": [ "Hicham Janati", "Marco Cuturi", "Alexandre Gramfort" ], "title": "Wasserstein regularization for sparse multitask regression", "venue": "In AISTATS,", "year": 2019 }, { "authors": [ "Soheil Kolouri", "Yang Zou", "Gustavo K Rohde" ], "title": "Sliced wasserstein kernels for probability distributions", "venue": null, "year": 2016 }, { "authors": [ "Soheil Kolouri", "Kimia Nadjahi", "Umut Simsekli", "Roland Badeau", "Gustavo Rohde" ], "title": "Generalized sliced wasserstein distances", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Tam Le", "Makoto Yamada", "Kenji Fukumizu", "Marco Cuturi" ], "title": "Tree-sliced approximation of wasserstein distances", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Yanbin Liu", "Makoto Yamada", "Yao-Hung Hubert Tsai", "Tam Le", "Ruslan Salakhutdinov", "Yi Yang" ], "title": "Lsmi-sinkhorn: Semi-supervised squared-loss mutual information estimation with optimal transport", "venue": null, "year": 1909 }, { "authors": [ "Yanbin Liu", "Linchao Zhu", "Makoto Yamada", "Yi Yang" ], "title": "Semantic correspondence as an optimal transport problem", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Juhong Min", "Jongmin Lee", "Jean Ponce", "Minsu Cho" ], "title": "Hyperpixel flow: Semantic correspondence with multi-layer neural features", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Juhong Min", "Jongmin Lee", "Jean Ponce", "Minsu Cho" ], "title": "Spair-71k: A large-scale benchmark for semantic correspondence", "venue": "arXiv preprint arXiv:1908.10543,", "year": 2019 }, { "authors": [ "Yu Nesterov" ], "title": "Smooth minimization of non-smooth functions", "venue": "Mathematical programming,", "year": 2005 }, { "authors": [ "François-Pierre Paty", "Marco Cuturi" ], "title": "Subspace robust wasserstein distances", "venue": "In ICML,", "year": 2019 }, { "authors": [ "François-Pierre Paty", "Marco Cuturi" ], "title": "Regularized optimal transport is ground cost adversarial", "venue": null, "year": 2020 }, { "authors": [ "Ignacio Rocco", "Relja Arandjelovic", "Josef Sivic" ], "title": "Convolutional neural network architecture for geometric matching", "venue": null, "year": 2017 }, { "authors": [ "Ignacio Rocco", "Relja Arandjelović", "Josef Sivic" ], "title": "End-to-end weakly-supervised semantic alignment", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Ignacio Rocco", "Mircea Cimpoi", "Relja Arandjelović", "Akihiko Torii", "Tomas Pajdla", "Josef Sivic" ], "title": "Neighbourhood consensus networks", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Yossi Rubner", "Carlo Tomasi", "Leonidas J Guibas" ], "title": "The earth mover’s distance as a metric for image retrieval", "venue": "International journal of computer vision,", "year": 2000 }, { "authors": [ "Paul-Edouard Sarlin", "Daniel DeTone", "Tomasz Malisiewicz", "Andrew Rabinovich" ], "title": "SuperGlue: Learning feature 
matching with graph neural networks", "venue": "arXiv preprint arXiv:1911.11763,", "year": 2019 }, { "authors": [ "Ryoma Sato", "Makoto Yamada", "Hisashi Kashima" ], "title": "Fast unbalanced optimal transport on tree", "venue": "In NeurIPS,", "year": 2020 }, { "authors": [ "Meyer Scetbon", "Laurent Meunier", "Jamal Atif", "Marco Cuturi" ], "title": "Handling multiple costs in optimal transport: Strong duality and efficient computation", "venue": "arXiv preprint arXiv:2006.07260,", "year": 2020 }, { "authors": [ "Hongteng Xu", "Dixin Luo", "Lawrence Carin" ], "title": "Scalable gromov-wasserstein learning for graph partitioning and matching", "venue": "arXiv preprint arXiv:1905.07645,", "year": 2019 }, { "authors": [ "Hongteng Xu", "Dixin Luo", "Hongyuan Zha", "Lawrence Carin Duke" ], "title": "Gromov-wasserstein learning for graph matching and node embedding", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Yuguang Yan", "Wen Li", "Hanrui Wu", "Huaqing Min", "Mingkui Tan", "Qingyao Wu" ], "title": "Semi-supervised optimal transport for heterogeneous domain adaptation", "venue": null, "year": 2018 }, { "authors": [ "Ming Yuan", "Yi Lin" ], "title": "Model selection and estimation in regression with grouped variables", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2006 }, { "authors": [ "Mikhail Yurochkin", "Sebastian Claici", "Edward Chien", "Farzaneh Mirzazadeh", "Justin M Solomon" ], "title": "Hierarchical optimal transport for document representation", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Learning deep features for discriminative localization", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Optimal transport (OT) is a machine learning problem with several applications in the computer vision and natural language processing communities. The applications include Wasserstein distance estimation (Peyré et al., 2019), domain adaptation (Yan et al., 2018), multitask learning (Janati et al., 2019), barycenter estimation (Cuturi & Doucet, 2014), semantic correspondence (Liu et al., 2020), feature matching (Sarlin et al., 2019), and photo album summarization (Liu et al., 2019). The OT problem is extensively studied in the computer vision community as the earth mover’s distance (EMD) (Rubner et al., 2000). However, the computational cost of EMD is cubic and highly expensive. Recently, the entropic regularized EMD problem was proposed; this problem can be solved using the Sinkhorn algorithm with a quadratic cost (Cuturi, 2013). Owing to the development of the Sinkhorn algorithm, researchers have replaced the EMD computation with its regularized counterparts. However, the optimal transport problem for high-dimensional data has remained unsolved for many years.\nRecently, a robust variant of the OT was proposed for high-dimensional OT problems and used for divergence estimation (Paty & Cuturi, 2019; 2020). In the robust OT framework, the transport plan is computed with the discriminative subspace of the two data matrices X ∈ Rd×n and Y ∈ Rd×m. The subspace can be obtained using dimensionality reduction. An advantage of the subspace robust approach is that it does not require prior information about the subspace. However, given prior information such as feature groups, we can consider a computationally efficient formulation. The computation of the subspace can be expensive if the dimensionality of data is high, for example, 104.\nOne of the most common prior information items is a feature group. The use of group features is popular in feature selection problems in the biomedical domain and has been extensively studied in Group Lasso (Yuan & Lin, 2006). The key idea of Group Lasso is to prespecify the group variables and select the set of group variables using the group norm (also known as the sum of `2 norms). For example, if we use a pretrained neural network as a feature extractor and compute OT using the features, then we require careful selection of important layers to compute OT. Specifically, each\nlayer output is regarded as a grouped input. Therefore, using a feature group as prior information is a natural setup and is important for considering OT for deep neural networks (DNNs).\nIn this paper, we propose a high-dimensional optimal transport method by utilizing prior information in the form of grouped features. Specifically, we propose a feature-robust optimal transport (FROT) problem, for which we select distinct group feature sets to estimate a transport plan instead of determining its distinct subsets, as proposed in (Paty & Cuturi, 2019; 2020). To this end, we formulate the FROT problem as a min–max optimization problem and transform it into a convex optimization problem, which can be accurately solved using the Frank–Wolfe algorithm (Frank & Wolfe, 1956; Jaggi, 2013). The FROT’s subproblem can be efficiently solved using the Sinkhorn algorithm (Cuturi, 2013). An advantage of FROT is that it can yield a transport plan from high-dimensional data using feature selection, using which the significance of the features is obtained without any additional cost. Therefore, the FROT formulation is highly suited for high-dimensional OT problems. 
Through synthetic experiments, we first demonstrate that the proposed FROT is robust to noise dimensions (see Figure 1). Furthermore, we apply FROT to a semantic correspondence problem (Liu et al., 2020) and show that the proposed algorithm achieves state-of-the-art performance." }, { "heading": "Contribution:", "text": "• We propose the feature-robust optimal transport (FROT) problem and derive a simple and efficient Frank–Wolfe based algorithm. Furthermore, we propose the feature-robust Wasserstein distance (FRWD).

• We apply FROT to a high-dimensional feature selection problem and show that FROT is consistent with the Wasserstein distance-based feature selection algorithm at a lower computational cost than the original algorithm.

• We use FROT for the layer selection problem in a semantic correspondence task and show that the proposed algorithm outperforms existing baseline algorithms." }, { "heading": "2 BACKGROUND", "text": "In this section, we briefly introduce the OT problem.

Optimal transport (OT): The following are given: independent and identically distributed (i.i.d.) samples X = {x_i}_{i=1}^n ∈ R^{d×n} from a d-dimensional distribution p, and i.i.d. samples Y = {y_j}_{j=1}^m ∈ R^{d×m} from a d-dimensional distribution q. In the Kantorovich relaxation of OT, admissible couplings are defined by the set of transport plans

U(µ, ν) = {Π ∈ R_+^{n×m} : Π 1_m = a, Π^T 1_n = b},

where Π ∈ R_+^{n×m} is called the transport plan, 1_n is the n-dimensional vector whose elements are ones, and a = (a_1, a_2, …, a_n)^T ∈ R_+^n and b = (b_1, b_2, …, b_m)^T ∈ R_+^m are the weights. The OT problem between two discrete measures µ = Σ_{i=1}^n a_i δ_{x_i} and ν = Σ_{j=1}^m b_j δ_{y_j} determines the optimal transport plan of the following problem:

min_{Π ∈ U(µ,ν)} Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i, y_j),   (1)

where c(x, y) is a cost function; for example, the squared Euclidean distance c(x, y) = ‖x − y‖_2^2. Solving Eq. (1) (also known as the earth mover's distance) by linear programming requires O(n^3) computation (for n = m), which is expensive. To address this, the entropic-regularized optimal transport was proposed (Cuturi, 2013):

min_{Π ∈ U(µ,ν)} Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i, y_j) + ε H(Π),

where ε ≥ 0 is the regularization parameter and H(Π) = Σ_{i=1}^n Σ_{j=1}^m π_{ij} (log(π_{ij}) − 1) is the entropic regularization. If ε = 0, the regularized OT problem reduces to the EMD problem. Owing to the entropic regularization, the regularized OT problem can be accurately solved using Sinkhorn iterations (Cuturi, 2013) with an O(nm) computational cost (see Algorithm 1).

Wasserstein distance: If the cost function is defined as c(x, y) = d(x, y) with d(x, y) a distance function and p ≥ 1, then we define the p-Wasserstein distance between two discrete measures µ = Σ_i a_i δ_{x_i} and ν = Σ_j b_j δ_{y_j} as

W_p(µ, ν) = ( min_{Π ∈ U(µ,ν)} Σ_{i=1}^n Σ_{j=1}^m π_{ij} d(x_i, y_j)^p )^{1/p}.

Recently, a robust variant of the Wasserstein distance, called the subspace robust Wasserstein distance (SRW), was proposed (Paty & Cuturi, 2019). The SRW computes the OT problem in a discriminative subspace, which is determined by solving a dimensionality-reduction problem. Owing to this robustness, it can compute the Wasserstein distance from noisy data. The SRW is given as

SRW(µ, ν) = min_{Π ∈ U(µ,ν)} max_{U ∈ R^{d×k}, U^T U = I_k} ( Σ_{i=1}^n Σ_{j=1}^m π_{ij} ‖U^T x_i − U^T y_j‖_2^2 )^{1/2},   (2)

where U is the projection matrix with k ≤ d and I_k ∈ R^{k×k} is the identity matrix. The SRW or its relaxed problem can be efficiently estimated using either eigenvalue decomposition or the Frank–Wolfe algorithm.
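To make the entropic OT solver concrete, the following is a minimal NumPy sketch of the Sinkhorn iterations summarized above (listed as Algorithm 1 in the next section). The function name, initialization, and the simple convergence check are our additions, not part of the paper.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, t_max=1000, tol=1e-9):
    """Entropic-regularized OT (a sketch of Algorithm 1).

    a: (n,) source weights, b: (m,) target weights, C: (n, m) cost matrix.
    Returns the plan Pi = diag(u) K diag(v) with K = exp(-C / eps).
    """
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(t_max):
        u_prev = u
        u = a / (K @ v)        # row scaling
        v = b / (K.T @ u)      # column scaling
        if np.max(np.abs(u - u_prev)) < tol:  # crude stopping rule (ours)
            break
    return u[:, None] * K * v[None, :]
```

For example, with `a = b = np.ones(n) / n` and `C` the pairwise squared Euclidean distances between two point clouds, `sinkhorn(a, b, C)` returns a dense approximation of the EMD plan; smaller `eps` approaches the unregularized plan at the price of slower, less stable iterations.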
}, { "heading": "3 PROPOSED METHOD", "text": "This paper proposes FROT. We assume that the vectors are grouped as x = (x(1) > , . . . ,x(L) > )> and y = (y(1) > , . . . ,y(L) > )>. Here, x(`) ∈ Rd` and y(`) ∈ Rd` are the d` dimensional vectors, where ∑L `=1 d` = d. This setting is useful if we know the explicit group structure for the feature vectors a priori. In an application in L-layer neural networks, we consider x(`) and y(`) as outputs of the `th layer of the network. If we do not have a priori information, we can consider each feature independently (i.e., d1 = d2 = . . . = dL = 1 and L = d). All proofs in this section are provided in the Appendix." }, { "heading": "3.1 FEATURE-ROBUST OPTIMAL TRANSPORT (FROT)", "text": "The FROT formulation is given by\nmin Π∈U(µ,ν) max α∈ΣL\nn∑\ni=1\nm∑\nj=1\nπij\nL∑\n`=1\nα`c(x (`) i ,y (`) j ), (3)\nwhere ΣL = {α ∈ RL+ : α>1L = 1} is the probability simplex. The underlying concept of FROT is to estimate the transport plan Π using distinct groups with large distances between {x(`)i }ni=1 and {y(`)j }mj=1. We note that determining the transport plan in nondistinct groups is difficult because the data samples in {x(`)i }ni=1 and {y (`) j }mj=1 overlap. By contrast, in distinct groups, {x (`) i }ni=1 and {y(`)j }mj=1 are different, and this aids in determining an optimal transport plan. This is an intrinsically similar idea to the subspace robust Wasserstein distance (Paty & Cuturi, 2019), which estimates the transport plan in the discriminative subspace, while our approach selects important groups. Therefore, FROT can be regarded as a feature selection variant of the vanilla OT problem in Eq. (1), whereas the subspace robust version uses dimensionality-reduction counterparts.\nAlgorithm 1 Sinkhorn algorithm. 1: Input: a, b,C, , tmax 2: Initialize K = e−C/ ,u = 1n,v =\n1m, t = 0 3: while t ≤ tmax and not converge do 4: u = a/(Kv) 5: v = b/(K>u) 6: t = t+ 1 7: end while 8: return Π = diag(u)Kdiag(v)\nAlgorithm 2 FROT with the Frank–Wolfe. 1: Input: {xi}ni=1, {yj}mj=1, η, and . 2: Initialize Π, compute {C`}L`=1. 3: for t = 0 . . . T do 4: Π̂ = argminΠ∈U(µ,ν)〈Π,MΠ(t)〉 +\nH(Π)\n5: Π(t+1) = (1− γ)Π(t) + γΠ̂ 6: with γ = 22+t . 7: end for 8: return Π(T )\nUsing FROT, we can define a p-feature robust Wasserstein distance (p-FRWD).\nProposition 1 For the distance function d(x,y),\nFRWDp(µ, ν) = min\nΠ∈U(µ,ν) max α∈ΣL\nn∑\ni=1\nm∑\nj=1\nπij\nL∑\n`=1\nα`d(x (`) i ,y (`) j ) p\n 1/p\n, (4)\nis a distance for p ≥ 1.\nNote that we can show that 2-FRWD is a special case of SRW with d(x,y) = ‖x − y‖2 (See Appendix). The key difference between SRW and FRWD is that FRWD can use any distance, while SRW can only use d(x,y) = ‖x− y‖2." }, { "heading": "3.2 FROT OPTIMIZATION", "text": "Here, we propose two FROT algorithms based on the Frank–Wolfe algorithm and linear programming.\nFrank–Wolfe: We propose a continuous variant of the FROT algorithm using the Frank–Wolfe algorithm, which can be fully differentiable. To this end, we introduce entropic regularization for α and rewrite the FROT as a function of Π. Therefore, we solve the following problem for α:\nmin Π∈U(µ,ν) max α∈ΣL\nJη(Π,α),with Jη(Π,α) = n∑\ni=1\nm∑\nj=1\nπij\nL∑\n`=1\nα`c(x (`) i ,y (`) j )− ηH(α),\nwhere η ≥ 0 is the regularization parameter, and H(α) = ∑L`=1 α`(log(α`) − 1) is the entropic regularization for α. 
An advantage of the entropic regularization is that the nonnegativity constraint is naturally satisfied and the entropic regularizer is a strongly convex function.

Lemma 2 The optimal solution of the optimization problem

α* = argmax_{α ∈ Σ_L} J_η(Π, α), with J_η(Π, α) = Σ_{ℓ=1}^L α_ℓ φ_ℓ − η H(α),

with a fixed admissible transport plan Π ∈ U(µ, ν), is given by

α*_ℓ = exp(φ_ℓ/η) / Σ_{ℓ′=1}^L exp(φ_{ℓ′}/η), with J_η(Π, α*) = η log( Σ_{ℓ=1}^L exp(φ_ℓ/η) ) + η.

Using Lemma 2 (or Lemma 4 in Nesterov (2005)) together with the setting φ_ℓ = Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i^{(ℓ)}, y_j^{(ℓ)}) = ⟨Π, C_ℓ⟩, where [C_ℓ]_{ij} = c(x_i^{(ℓ)}, y_j^{(ℓ)}), the global problem is equivalent to

min_{Π ∈ U(µ,ν)} G_η(Π), with G_η(Π) = η log( Σ_{ℓ=1}^L exp( ⟨Π, C_ℓ⟩/η ) ).   (5)

Note that this is known as a smoothed max operator (Nesterov, 2005; Blondel et al., 2018); the regularization parameter η controls the "smoothness" of the maximum.

Proposition 3 G_η(Π) is a convex function with respect to Π.

The derived FROT optimization problem is convex; therefore, we can determine a globally optimal solution. Note that the SRW optimization problem is not jointly convex (Paty & Cuturi, 2019) in the projection matrix and the transport plan. In this study, we employ the Frank–Wolfe algorithm (Frank & Wolfe, 1956; Jaggi, 2013), in which we approximate G_η(Π) by its linearization at Π^{(t)} and move Π toward the optimal solution within the convex set (see Algorithm 2).

The derivative of the loss function G_η(Π) at Π^{(t)} is given by

∂G_η(Π)/∂Π |_{Π = Π^{(t)}} = Σ_{ℓ=1}^L α_ℓ^{(t)} C_ℓ = M_{Π^{(t)}}, with α_ℓ^{(t)} = exp( ⟨Π^{(t)}, C_ℓ⟩/η ) / Σ_{ℓ′=1}^L exp( ⟨Π^{(t)}, C_{ℓ′}⟩/η ).

Then, we update the transport plan by solving the EMD problem:

Π^{(t+1)} = (1 − γ) Π^{(t)} + γ Π̂, with Π̂ = argmin_{Π ∈ U(µ,ν)} ⟨Π, M_{Π^{(t)}}⟩,

where γ = 2/(2 + t). Note that M_{Π^{(t)}} is a weighted sum of the per-group cost matrices; thus, we utilize multiple features to estimate the transport plan Π for the relaxed problem in Eq. (5).

Using the Frank–Wolfe algorithm, we can obtain the optimal solution. However, solving the EMD problem requires a cubic computational cost, which can be expensive if n and m are large. To address this, we can instead solve the regularized OT subproblem, which requires O(nm). We denote the Frank–Wolfe algorithm with EMD as FW-EMD and the Frank–Wolfe algorithm with Sinkhorn as FW-Sinkhorn.

Computational complexity: The proposed method depends on the Sinkhorn algorithm, which requires an O(nm) operation. The computation of the cost matrices in each subproblem needs an O(Lnm) operation, where L is the number of groups. Therefore, the entire complexity is O(TLnm), where T is the number of Frank–Wolfe iterations (in general, T = 10 is sufficient).

Proposition 4 For each t ≥ 1, the iterate Π^{(t)} of Algorithm 2 satisfies

G_η(Π^{(t)}) − G_η(Π*) ≤ 4 σ_max(Φ^T Φ) / (η (t + 2)) · (1 + δ),

where σ_max(Φ^T Φ) is the largest eigenvalue of the matrix Φ^T Φ, Φ = (vec(C_1), vec(C_2), …, vec(C_L))^T, and δ ≥ 0 is the accuracy to which the internal linear subproblems are solved.

Based on Proposition 4, the number of iterations depends on η, ε, and the number of groups. If we set a small η, convergence requires more iterations. In addition, if we use entropic regularization with a large ε, the δ in Proposition 4 can be large. Finally, if we use more groups, the largest eigenvalue of the matrix Φ^T Φ can be larger. Note that the constant term of the upper bound is large; in practice, however, the Frank–Wolfe algorithm converges quickly.
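For concreteness, the following is a minimal NumPy sketch of Algorithm 2 in its FW-Sinkhorn variant, reusing the sinkhorn routine sketched in Section 2. The variable names and the uniform initial plan np.outer(a, b) are our choices; the paper only states that Π is initialized.

```python
import numpy as np

def frot_fw_sinkhorn(a, b, C_list, eta=1.0, eps=0.1, T=10):
    """A sketch of Algorithm 2 (FW-Sinkhorn).

    C_list: list of L per-group cost matrices, each of shape (n, m).
    Returns the transport plan Pi and the group weights alpha.
    """
    Pi = np.outer(a, b)  # a feasible initial plan (our choice)
    for t in range(T):
        # alpha^(t): softmax of <Pi, C_l> / eta, the closed form of Lemma 2
        scores = np.array([np.sum(Pi * C) for C in C_list]) / eta
        alpha = np.exp(scores - scores.max())  # shift for numerical stability
        alpha /= alpha.sum()
        # linearization M_{Pi^(t)} and entropic-regularized OT subproblem
        M = sum(al * C for al, C in zip(alpha, C_list))
        Pi_hat = sinkhorn(a, b, M, eps=eps)  # from the earlier sketch
        gamma = 2.0 / (2.0 + t)
        Pi = (1.0 - gamma) * Pi + gamma * Pi_hat
    return Pi, alpha
```

Note how cheap the gradient step is: Lemma 2 gives α in closed form, so each Frank–Wolfe iteration costs one weighted sum of the L cost matrices plus one Sinkhorn solve.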
Linear programming: Because lim_{η→0+} G_η(Π) = max_{ℓ ∈ {1,2,…,L}} Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i^{(ℓ)}, y_j^{(ℓ)}), the FROT problem can also be written as

min_{Π ∈ U(µ,ν)} max_{ℓ ∈ {1,2,…,L}} Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i^{(ℓ)}, y_j^{(ℓ)}).   (6)

Because the objective is a maximum of linear functions, it is convex with respect to Π, and we can solve the problem via linear programming:

min_{Π ∈ U(µ,ν), t} t, s.t. ⟨Π, C_ℓ⟩ ≤ t, ℓ = 1, 2, …, L.   (7)

This optimization can be easily solved using an off-the-shelf LP package. However, the computational cost of this LP problem is high in general (i.e., O(n^3) for n = m)." }, { "heading": "3.3 APPLICATION: SEMANTIC CORRESPONDENCE", "text": "We applied our proposed FROT algorithm to semantic correspondence. Semantic correspondence is the problem of determining the matching of objects in two images. That is, given an input image pair (A, B) with common objects, we formulate the semantic correspondence problem as estimating the transport plan from the key points in A to those in B; this framework was proposed in (Liu et al., 2020). Figure 2 shows an overview of our proposed framework.

Cost matrix computation C_ℓ: In our framework, we employ a pretrained convolutional neural network to extract dense feature maps at each convolutional layer. The dense feature map of the ℓ-th layer output of the s-th image is given by

f_{q+(r−1)h_s}^{(ℓ,s)} ∈ R^{d_ℓ}, q = 1, 2, …, h_s, r = 1, 2, …, w_s, ℓ = 1, 2, …, L,

where w_s and h_s are the width and height of the s-th image, respectively, and d_ℓ is the dimension of the ℓ-th layer's feature map. Note that because the dimension of the dense feature map differs across layers, we resample each feature map to the size of the first layer's feature map (i.e., h_s × w_s). The ℓ-th layer's cost matrix for images s and s′ is given by

[C_ℓ]_{ij} = ‖f_i^{(ℓ,s)} − f_j^{(ℓ,s′)}‖_2^2, i = 1, 2, …, w_s h_s, j = 1, 2, …, w_{s′} h_{s′}.
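The sketch below shows this per-layer cost-matrix computation for an image pair, assuming the feature maps have already been resampled to a common spatial grid as described above. The unit-norm option anticipates the feature normalization introduced after Figure 2; the function name and interface are ours.

```python
import numpy as np

def layer_cost_matrices(feats_s, feats_t, normalize=True):
    """Per-layer squared-Euclidean cost matrices (a sketch).

    feats_s, feats_t: lists of L arrays of shape (h*w, d_l), one per layer,
    already resampled to a common spatial resolution.
    """
    C_list = []
    for Fs, Ft in zip(feats_s, feats_t):
        if normalize:
            # unit-norm features; see the normalization paragraph below
            Fs = Fs / np.linalg.norm(Fs, axis=1, keepdims=True)
            Ft = Ft / np.linalg.norm(Ft, axis=1, keepdims=True)
            C = 2.0 - 2.0 * Fs @ Ft.T  # squared distance for unit vectors
        else:
            C = ((Fs[:, None, :] - Ft[None, :, :]) ** 2).sum(-1)
        C_list.append(C)
    return C_list
```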
[Figure 2: Semantic correspondence framework based on FROT. The source and target images are passed through a CNN; the per-layer features x^{(ℓ)} and y^{(ℓ)} yield cost matrices C_ℓ, CAMs provide the weights a and b, and FROT solves min_{Π ∈ U(a,b)} log Σ_{ℓ=1}^L exp(⟨Π, C_ℓ⟩/η).]

A potential problem with FROT is that the estimation depends significantly on the magnitude of the cost of each layer (i.e., each group). Hence, normalizing each cost matrix is important. Therefore, we normalize each feature vector as f_i^{(ℓ,s)} ← f_i^{(ℓ,s)} / ‖f_i^{(ℓ,s)}‖_2. Consequently, the cost matrix is given by [C_ℓ]_{ij} = 2 − 2 f_i^{(ℓ,s)T} f_j^{(ℓ,s′)}. We can also use other distances, such as the L1 distance." }, { "heading": "Computation of a and b with staircase re-weighting:", "text": "For semantic correspondence, setting a ∈ R^{h_s w_s} and b ∈ R^{h_{s′} w_{s′}} is important because semantic correspondence can be affected by background clutter. Therefore, we generate class activation maps (CAMs) (Zhou et al., 2016) for the source and target images and use them as a and b, respectively. For the CAM, we choose the class with the highest classification probability and normalize the map to the range [0, 1]." }, { "heading": "4 RELATED WORK", "text": "OT algorithms: The Wasserstein distance can be determined by solving the OT problem. An advantage of the Wasserstein distance is its robustness to noise; moreover, we can obtain the transport plan, which is useful for many machine learning applications. To reduce the computation cost of the Wasserstein distance, the sliced Wasserstein distance is useful (Kolouri et al., 2016).
Recently, a tree variant of the Wasserstein distance was proposed (Evans & Matsen, 2012; Le et al., 2019; Sato et al., 2020); the sliced Wasserstein distance is a special case of this algorithm.

The approach most closely related to FROT is a robust variant of the Wasserstein distance, called the subspace robust Wasserstein distance (SRW) (Paty & Cuturi, 2019). SRW computes the OT problem in a discriminative subspace, which is obtained by solving dimensionality-reduction problems. Owing to this robustness, SRW can successfully compute the Wasserstein distance from noisy data. The max-sliced Wasserstein distance (Deshpande et al., 2019) and its generalized counterpart (Kolouri et al., 2019) can also be regarded as subspace-robust Wasserstein methods. Note that SRW (Paty & Cuturi, 2019) is a min–max based approach, while the max-sliced Wasserstein distances (Deshpande et al., 2019; Kolouri et al., 2019) are max–min approaches. FROT is a feature-selection variant of the Wasserstein distance, whereas the subspace approaches are dimensionality-reduction variants.

As a parallel work, a general minimax optimal transport problem called the robust Kantorovich problem (RKP) was recently proposed (Dhouib et al., 2020). RKP uses a cutting-set method for a general min–max optimal transport problem that includes the FROT problem as a special case. The approaches are technically similar; however, our problem and that of Dhouib et al. (2020) are intrinsically different. Specifically, we aim to solve a high-dimensional OT problem using feature selection and apply it to semantic correspondence problems, while the RKP approach focuses on providing a general framework and uses it for color transformation problems. As a technical difference, the cutting-set method may not converge to an optimal solution if the regularized OT is used (Dhouib et al., 2020). By contrast, because we use a Frank–Wolfe algorithm, our algorithm converges to the true objective even with regularized OT solvers. Multiobjective optimal transport (MOT) (Scetbon et al., 2020) is another parallel work. The key difference between FROT and MOT is that MOT uses a weighted sum of cost functions, while FROT considers the worst case. Moreover, as applications, we focus on cost matrices computed from subsets of features, while MOT considers cost matrices with different distance functions." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 SYNTHETIC DATA", "text": "We compare FROT with a standard OT using synthetic datasets. In these experiments, we first generate two-dimensional vectors x ∼ N(µ_x, Σ_x) and y ∼ N(µ_y, Σ_y). Here, we set µ_x = (−5, 0)^T, µ_y = (5, 0)^T, Σ_x = Σ_y = ((5, 1)^T, (4, 1)^T). Then, we concatenate z_x ∼ N(0_8, I_8) and z_y ∼ N(0_8, I_8) to x and y, respectively, to give x̃ = (x^T, z_x^T)^T and ỹ = (y^T, z_y^T)^T. For FROT, we set η = 1.0 and the number of Frank–Wolfe iterations to T = 10. The regularization parameter is set to ε = 0.02 for all methods. As a proof of concept, we set the true features as one group and the remaining noise features as another group.
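Before turning to the results, the following sketch reproduces this data-generation protocol. The sample sizes are our choice (the text does not state them), and because the covariance matrix as printed is not symmetric, we interpret it as a mixing matrix A and draw x = µ + Az with z ∼ N(0, I_2); this interpretation is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = m = 50  # sample sizes are our choice; not specified in the text

mu_x, mu_y = np.array([-5.0, 0.0]), np.array([5.0, 0.0])
A = np.array([[5.0, 4.0], [1.0, 1.0]])  # columns (5,1)^T and (4,1)^T

def sample(mu, size):
    z = rng.standard_normal((size, 2))
    return mu + z @ A.T  # x = mu + A z, so Cov = A A^T (our reading)

x, y = sample(mu_x, n), sample(mu_y, m)

# append 8 noise dimensions z ~ N(0_8, I_8)
x_tilde = np.hstack([x, rng.standard_normal((n, 8))])
y_tilde = np.hstack([y, rng.standard_normal((m, 8))])

# two groups: true features (first 2 dims) and noise features (last 8)
groups = [np.arange(2), np.arange(2, 10)]
C_list = [((x_tilde[:, None, g] - y_tilde[None, :, g]) ** 2).sum(-1)
          for g in groups]
```

Feeding `C_list` into the `frot_fw_sinkhorn` sketch above (with uniform weights a and b) should drive α toward the true-feature group, mirroring the α_1 ≈ 1 reported below.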
Moreover, we compared the objective scores of the FROT with LP, FW-EMD, and FW-Sinkhorn ( = 0.1). Figure 3a shows the objective scores of FROTs with the different solvers, and both FW-EMD and FW-Sinkhorn can achieve almost the same objective score with a relatively small η. Moreover, Figure 3b shows the mean squared error between the LP method and the FW counterparts. Similar to the objective score cases, it can yield a similar transport plan with a relatively small η. Finally, we evaluated the FW-Sinkhorn by changing the regularization parameter η. In this experiment, we set η = 1 and varied the values. The result shows that we can obtain an accurate transport plan with a relatively small ." }, { "heading": "5.2 SEMANTIC CORRESPONDENCE", "text": "We evaluated our FROT algorithm for semantic correspondence. In this study, we used the SPair71k (Min et al., 2019b). The SPair-71k dataset consists of 70, 958 image pairs with variations in viewpoint and scale. For evaluation, we employed a percentage of accurate key points (PCK), which counts the number of accurately predicted key points given a fixed threshold (Min et al., 2019b). All semantic correspondence experiments were run on a Linux server with NVIDIA P100.\nFor the optimal transport based frameworks, we employed ResNet101 (He et al., 2016) pretrained on ImageNet (Deng et al., 2009) for feature and activation map extraction. The ResNet101 consists of 34 convolutional layers and the entire number of features is d = 32, 576. Note that we did not fine-tune the network. We compared the proposed method with several baselines (Min et al., 2019b) and the SRW1. Owing to the computational cost and the required memory size for SRW, we used the first and the last few convolutional layers of ResNet101 as the input of SRW. In our experiments, we empirically set T = 3 and = 0.1 for FROT and SRW, respectively. For SRW, we set the number of latent dimension as k = 50 for all experiments. HPF (Min et al., 2019a) and OT-HPF (Liu et al., 2020) are state-of-the-art methods for semantic correspondence. HPF and OT-HPF required the validation dataset to select important layers, whereas SRW and FROT did not require the validation dataset. OT is a simple optimal transport-based method that does not select layers.\nTable 1 lists the per-class PCK results obtained using the SPair-71k dataset. FROT (η = 0.3) outperforms most existing baselines, including HPF and OT. Moreover, FROT (η = 0.3) is consistent with OT-HPF (Liu et al., 2020), which requires the validation dataset to select important layers. In this experiment, setting η < 1 results in favorable performance (See Table 3 in the Appendix). The computational costs of FROT is 0.29, while SRWs are 8.73, 11.73, 15.76, respectively. Surprisingly, FROT outperformed SRWs. However, this is mainly due to the used input layers. Therefore, scaling up SRW would be an interesting future work.\nWe further evaluated FROT by tuning hyperparameters η and using validation sets, where the maximum search ranges for η and are set to 0.2 to 2.0 and 0.1 to 0.6 with intervals of 0.1, respectively. Figure 6 in Appendix shows the average PCK scores for (η, ) pairs on the validation split of SPair71k. By using hyperparameter search, we selected (η = 0.2, = 0.4) as an optimal parameter. 
The FROT with the optimal parameters outperforms the state-of-the-art method (Liu et al., 2020).

1https://github.com/francoispierrepaty/SubspaceRobustWasserstein" }, { "heading": "5.3 FEATURE SELECTION EXPERIMENTS", "text": "Since FROT finds both the transport plan and the discriminative features between X and Y, we can use FROT as a feature-selection method. We consider X ∈ R^{d×n} and Y ∈ R^{d×m} as sets of samples from classes 1 and 2, respectively. The optimal feature importance is given by

$$\hat{\alpha}_\ell = \frac{\exp\big(\tfrac{1}{\eta}\langle \hat{\Pi}, C_\ell \rangle\big)}{\sum_{\ell'=1}^{d} \exp\big(\tfrac{1}{\eta}\langle \hat{\Pi}, C_{\ell'} \rangle\big)}, \quad \text{with} \quad \hat{\Pi} = \mathop{\mathrm{argmin}}_{\Pi \in U(\mu,\nu)} \; \eta \log\Big(\sum_{\ell=1}^{d} \exp\big(\tfrac{1}{\eta}\langle \Pi, C_\ell \rangle\big)\Big),$$

where $[C_\ell]_{ij} = (x_i^{(\ell)} - y_j^{(\ell)})^2$. Finally, we select the top K features by ranking α̂. Hence, α̂ approaches a one-hot vector for a small η and α̂_k ≈ 1/L for a large η (a sketch of this computation is given at the end of this subsection). Here, we compared FROT with several baseline algorithms on feature-selection problems. In this study, we employed high-dimensional, small-sample datasets with two-class classification tasks (see Table 2). All feature selection experiments were run on a Linux server with an Intel Xeon CPU E7-8890 v4 at 2.20 GHz and 2 TB of RAM.

In our experiments, we first randomly split the data into two sets (75% for training and 25% for testing) and used the training set for feature selection and for building a classifier; each feature was standardized using the training set, and the remaining set was used for testing. The trial was repeated 50 times, and we report the classification accuracy averaged over all trials. As baseline methods, we computed the Wasserstein distance, the maximum mean discrepancy (MMD) (Gretton et al., 2007), and the linear correlation2 for each dimension and sorted them in descending order. Note that the one-dimensional Wasserstein distance is computed via sorting, which is computationally more efficient than the Sinkhorn algorithm when d = 1. We then selected the top K features as important features. For FROT, we computed the feature importance and selected the features with the largest importance scores. In our experiments, we set η = 1.0 and T = 10. We then trained a two-class SVM3 with the selected features.

Fig. 4 shows the average classification accuracy relative to the number of selected features. From Figure 4, FROT is consistent with the Wasserstein distance-based feature selection and outperforms the linear correlation method and MMD on two datasets. Table 2 shows the computational times of the methods. FROT is about two orders of magnitude faster than the Wasserstein distance and is also faster than MMD. Note that although MMD is as fast as the proposed method, it cannot determine the correspondence between samples.
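As promised above, the following is a minimal NumPy sketch of the feature-importance computation, assuming a transport plan P_hat has already been obtained (e.g., with a Frank–Wolfe procedure such as the one sketched in Section 5.1); all names are illustrative.

import numpy as np

def feature_importance(P_hat, X, Y, eta=1.0):
    # P_hat: (n, m) transport plan; X: (d, n) and Y: (d, m) class-wise samples.
    # The importance of feature l is a softmax over <P_hat, C_l>, where
    # [C_l]_{ij} = (x_i^(l) - y_j^(l))^2.
    scores = np.array([
        np.sum(P_hat * (X[l][:, None] - Y[l][None, :]) ** 2)
        for l in range(X.shape[0])
    ]) / eta
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    return alpha / alpha.sum()

# Top-K selection: indices of the K largest importance scores, e.g.
# top_k = np.argsort(-feature_importance(P_hat, X, Y, eta=1.0))[:K]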
" }, { "heading": "6 CONCLUSION", "text": "In this paper, we proposed FROT for high-dimensional data. This approach jointly solves feature selection and OT problems. An advantage of FROT is that it is a convex optimization problem whose globally optimal solution can be determined accurately using the Frank–Wolfe algorithm. We used FROT for high-dimensional feature selection and semantic correspondence problems. Through extensive experiments, we demonstrated that the proposed algorithm is consistent with state-of-the-art algorithms in both feature selection and semantic correspondence.

2https://scikit-learn.org/stable/modules/feature_selection.html 3https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html" }, { "heading": "APPENDIX", "text": "" }, { "heading": "RELATED WORK", "text": "In addition to accelerating the computation, structured optimal transport incorporates structural information directly into OT problems (Alvarez-Melis et al., 2018). Specifically, they formulate a submodular optimal transport problem and solve it using a saddle-point mirror prox algorithm. Recently, more complex structured information has been introduced into OT problems, including hierarchical structure (Alvarez-Melis et al., 2020; Yurochkin et al., 2019). These approaches successfully incorporate structural information about data samples into OT problems. By contrast, FROT incorporates structural information about features.

OT applications: OT has received significant attention for use in several computer vision tasks. Applications include Wasserstein distance estimation (Peyré et al., 2019), domain adaptation (Yan et al., 2018), multitask learning (Janati et al., 2019), barycenter estimation (Cuturi & Doucet, 2014), semantic correspondence (Liu et al., 2020), feature matching (Sarlin et al., 2019), photo album summarization (Liu et al., 2019), generative models (Arjovsky et al., 2017; Bunne et al., 2019), and graph matching (Xu et al., 2019a;b)." }, { "heading": "PROOF OF PROPOSITION 1", "text": "For the distance function d(x, y), we prove that

$$\mathrm{FRWD}_p(\mu, \nu) = \min_{\Pi \in U(\mu,\nu)} \max_{\alpha \in \Sigma_L} \left( \sum_{i=1}^{n} \sum_{j=1}^{m} \pi_{ij} \sum_{\ell=1}^{L} \alpha_\ell \, d\big(x_i^{(\ell)}, y_j^{(\ell)}\big)^p \right)^{1/p}$$

is a distance for p ≥ 1. Symmetry can be read directly from the definition, since the coordinate distances used are symmetric. For the identity of indiscernibles, suppose $\mathrm{FRWD}_p(\mu,\nu) = 0$ with the optimal α and Π. There exists ℓ such that $\alpha_\ell > 0$ (as α lies in the simplex). Because of the max in the definition and $\sum_{ij} \pi_{ij} \alpha_\ell \, d(x_i^{(\ell)}, y_j^{(\ell)})^p = 0$, we obtain $\forall \ell,\; \sum_{ij} \pi_{ij}\, d(x_i^{(\ell)}, y_j^{(\ell)})^p = 0$, and hence $\forall \ell,\; \mu^{(\ell)} = \nu^{(\ell)}$. Therefore, $\mu = \nu$ whenever $\mathrm{FRWD}_p(\mu, \nu) = 0$.

Conversely, when µ = ν, we have $x_i = y_i$, $a_i = b_i$ for all i, and n = m, so $d(x_i, y_j) = 0$ for i = j. Thus, for any $\alpha_\ell \ge 0$, the optimal transport plan satisfies $\pi_{ii} > 0$ where $d(x_i, y_j) = 0$ and $\pi_{ij} = 0$ where $d(x_i, y_j) > 0$. Therefore, µ = ν implies $\mathrm{FRWD}_p(\mu, \nu) = 0$.

TRIANGLE INEQUALITY

Let $\mu = \sum_{i=1}^{n} a_i \delta_{x_i}$, $\nu = \sum_{j=1}^{m} b_j \delta_{y_j}$, $\gamma = \sum_{k=1}^{r} c_k \delta_{z_k}$, and $\alpha \in \Sigma_L$. We prove that

$$\mathrm{FRWD}_p(\mu, \gamma) \le \mathrm{FRWD}_p(\mu, \nu) + \mathrm{FRWD}_p(\nu, \gamma).$$

To simplify the notation in this proof, we define $D_\ell$ as the distance "matrix" such that $[D_\ell]_{ij} = d(x_i^{(\ell)}, y_j^{(\ell)})$, $[D_\ell]_{jk} = d(y_j^{(\ell)}, z_k^{(\ell)})$, and $[D_\ell]_{ik} = d(x_i^{(\ell)}, z_k^{(\ell)})$, the indices making clear which pair of measures is involved. Moreover, $D_\ell^p$ denotes the "matrix" whose entries are those of $D_\ell$ raised to the power p.

Let $P \in U(\mu, \nu)$ be the optimal transport plan of $\mathrm{FRWD}_p(\mu, \nu)$, and $Q \in U(\nu, \gamma)$ that of $\mathrm{FRWD}_p(\nu, \gamma)$. Similar to the proof for the Wasserstein distance in (Peyré et al., 2019), let $S = P \,\mathrm{diag}(1/\tilde{b})\, Q$, where $\tilde{b}$ is the vector with $\tilde{b}_j = b_j$ if $b_j > 0$ and $\tilde{b}_j = 1$ otherwise.
We can show that $S \in U(\mu, \gamma)$. Then:

$$\left( \min_{R \in U(\mu,\gamma)} \sum_{\ell=1}^{L} \alpha_\ell \langle R, D_\ell^p \rangle \right)^{1/p} \le \left( \sum_{\ell=1}^{L} \alpha_\ell \langle S, D_\ell^p \rangle \right)^{1/p} = \left( \sum_{\ell=1}^{L} \alpha_\ell \sum_{ik} S_{ik} [D_\ell]_{ik}^p \right)^{1/p} = \left( \sum_{\ell=1}^{L} \alpha_\ell \sum_{ijk} [D_\ell]_{ik}^p \, \frac{p_{ij} q_{jk}}{\tilde{b}_j} \right)^{1/p} \le \left( \sum_{\ell=1}^{L} \alpha_\ell \sum_{ijk} \big( [D_\ell]_{ij} + [D_\ell]_{jk} \big)^p \, \frac{p_{ij} q_{jk}}{\tilde{b}_j} \right)^{1/p},$$

where the last step uses the triangle inequality for d. By letting $g_{ijk\ell} = [D_\ell]_{ij} (\alpha_\ell p_{ij} q_{jk} / \tilde{b}_j)^{1/p}$ and $h_{ijk\ell} = [D_\ell]_{jk} (\alpha_\ell p_{ij} q_{jk} / \tilde{b}_j)^{1/p}$, the right-hand side can be rewritten and, by the Minkowski inequality, bounded as

$$\left( \sum_{\ell,i,j,k} (g_{ijk\ell} + h_{ijk\ell})^p \right)^{1/p} \le \left( \sum_{\ell,i,j,k} g_{ijk\ell}^p \right)^{1/p} + \left( \sum_{\ell,i,j,k} h_{ijk\ell}^p \right)^{1/p} = \left( \sum_{\ell=1}^{L} \alpha_\ell \sum_{ijk} [D_\ell]_{ij}^p \, \frac{p_{ij} q_{jk}}{\tilde{b}_j} \right)^{1/p} + \left( \sum_{\ell=1}^{L} \alpha_\ell \sum_{ijk} [D_\ell]_{jk}^p \, \frac{p_{ij} q_{jk}}{\tilde{b}_j} \right)^{1/p}.$$

Summing out k in the first term (using $\sum_k q_{jk} = \tilde{b}_j$ whenever $b_j > 0$) and i in the second term (using $\sum_i p_{ij} = \tilde{b}_j$ whenever $b_j > 0$) yields

$$\left( \min_{R \in U(\mu,\gamma)} \sum_{\ell=1}^{L} \alpha_\ell \langle R, D_\ell^p \rangle \right)^{1/p} \le \left( \sum_{\ell=1}^{L} \alpha_\ell \sum_{ij} [D_\ell]_{ij}^p \, p_{ij} \right)^{1/p} + \left( \sum_{\ell=1}^{L} \alpha_\ell \sum_{jk} [D_\ell]_{jk}^p \, q_{jk} \right)^{1/p} \le \max_{\alpha \in \Sigma_L} \left( \sum_{\ell=1}^{L} \alpha_\ell \sum_{ij} [D_\ell]_{ij}^p \, p_{ij} \right)^{1/p} + \max_{\alpha \in \Sigma_L} \left( \sum_{\ell=1}^{L} \alpha_\ell \sum_{jk} [D_\ell]_{jk}^p \, q_{jk} \right)^{1/p} = \mathrm{FRWD}_p(\mu, \nu) + \mathrm{FRWD}_p(\nu, \gamma).$$

This inequality is valid for all α; taking the max over α (with the min over R) on the left-hand side, we obtain

$$\mathrm{FRWD}_p(\mu, \gamma) \le \mathrm{FRWD}_p(\mu, \nu) + \mathrm{FRWD}_p(\nu, \gamma).$$" }, { "heading": "FROT WITH LINEAR PROGRAMMING", "text": "Linear Programming: FROT is a convex piecewise-linear minimization because its objective is the max of linear functions. Thus, we can solve the FROT problem via linear programming:

$$\min_{\Pi \in U(\mu,\nu),\; t} \; t, \quad \text{s.t.} \quad \langle \Pi, C_\ell \rangle \le t, \; \ell = 1, 2, \dots, L.$$

This optimization can be easily solved using an off-the-shelf LP package. However, the computational cost of this LP problem is high in general (i.e., $O(n^3)$ for n = m).

The FROT problem can be written as

$$\min_{\Pi} \max_{\ell \in \{1,2,\dots,L\}} \langle \Pi, C_\ell \rangle, \quad \text{s.t.} \quad \Pi 1_m = a, \; \Pi^\top 1_n = b, \; \Pi \ge 0.$$

This problem can be transformed into an equivalent linear program by first forming the epigraph problem

$$\min_{\Pi, t} \; t, \quad \text{s.t.} \quad \max_{\ell \in \{1,2,\dots,L\}} \langle \Pi, C_\ell \rangle \le t, \quad \Pi 1_m = a, \; \Pi^\top 1_n = b, \; \Pi \ge 0,$$

so that the linear program for FROT is

$$\min_{\Pi, t} \; t \quad \text{s.t.} \quad \langle \Pi, C_\ell \rangle \le t \;(\ell = 1,\dots,L), \quad \Pi 1_m = a, \; \Pi^\top 1_n = b, \; \Pi \ge 0.$$

Next, we transform this linear program into canonical form. For a matrix $\Pi = (\pi_1\, \pi_2\, \dots\, \pi_n)^\top \in \mathbb{R}^{n \times m}$ with rows $\pi_i \in \mathbb{R}^m$, we vectorize the matrix row-wise: $\mathrm{vec}(\Pi) = (\pi_1^\top\, \pi_2^\top\, \dots\, \pi_n^\top)^\top \in \mathbb{R}^{nm}$. Using this vectorization, the constraints $\langle \Pi, C_\ell \rangle \le t$ can be written as

$$\begin{pmatrix} \mathrm{vec}(C_1)^\top & -1 \\ \vdots & \vdots \\ \mathrm{vec}(C_L)^\top & -1 \end{pmatrix} \begin{pmatrix} \mathrm{vec}(\Pi) \\ t \end{pmatrix} \le 0_L,$$

where $0_L \in \mathbb{R}^L$ is the zero vector. For the constraints $\Pi 1_m = a$ and $\Pi^\top 1_n = b$, we define vectors $q_1, \dots, q_n \in \mathbb{R}^{nm}$ and $r_1, \dots, r_m \in \mathbb{R}^{nm}$ such that $q_i^\top \mathrm{vec}(\Pi) = a_i$ and $r_j^\top \mathrm{vec}(\Pi) = b_j$:

$$q_1 = (1_m^\top, 0_m^\top, \dots, 0_m^\top)^\top, \quad q_2 = (0_m^\top, 1_m^\top, \dots, 0_m^\top)^\top, \quad \dots, \quad q_n = (0_m^\top, 0_m^\top, \dots, 1_m^\top)^\top,$$

and $r_j$ has a 1 at positions $j, m+j, 2m+j, \dots, (n-1)m+j$ and zeros elsewhere; for instance, $r_1 = (1, 0_{m-1}^\top, 1, 0_{m-1}^\top, \dots, 1, 0_{m-1}^\top)^\top$ and $r_m = (0_{m-1}^\top, 1, 0_{m-1}^\top, 1, \dots, 0_{m-1}^\top, 1)^\top$. Collecting these vectors, the vectorized constraints read

$$(Q^\top \; 0_n) \begin{pmatrix} \mathrm{vec}(\Pi) \\ t \end{pmatrix} = a, \qquad (R^\top \; 0_m) \begin{pmatrix} \mathrm{vec}(\Pi) \\ t \end{pmatrix} = b.$$

Thus, we can rewrite the linear program as

$$\min_{u} \; e^\top u \quad \text{s.t.} \quad (A^\top \;\; {-1_L})\, u \le 0_L, \quad (Q^\top \; 0_n)\, u = a, \quad (R^\top \; 0_m)\, u = b, \quad u \ge 0,$$

where $u = (\mathrm{vec}(\Pi)^\top\; t)^\top \in \mathbb{R}^{nm+1}$, $e = (0_{nm}^\top\; 1)^\top \in \mathbb{R}^{nm+1}$ is the unit vector whose (nm+1)-th element is 1, $A = (\mathrm{vec}(C_1), \dots, \mathrm{vec}(C_L)) \in \mathbb{R}^{nm \times L}$, $Q = (q_1, \dots, q_n) \in \mathbb{R}^{nm \times n}$, and $R = (r_1, \dots, r_m) \in \mathbb{R}^{nm \times m}$.
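As a concrete illustration of this canonical form, the following is a minimal sketch that assembles the LP and hands it to scipy.optimize.linprog; it is a didactic construction (dense matrices, small n, m, L), not an efficient or reference solver.

import numpy as np
from scipy.optimize import linprog

def frot_lp(C, a, b):
    # C: (L, n, m) per-group cost matrices; solves min_Pi max_l <Pi, C_l>.
    L, n, m = C.shape
    A = C.reshape(L, n * m)                     # rows are vec(C_l)^T (row-wise)
    A_ub = np.hstack([A, -np.ones((L, 1))])     # <Pi, C_l> - t <= 0
    b_ub = np.zeros(L)
    Q = np.kron(np.eye(n), np.ones((1, m)))     # row-sum operator: Pi 1_m = a
    R = np.kron(np.ones((1, n)), np.eye(m))     # column-sum operator: Pi^T 1_n = b
    A_eq = np.vstack([np.hstack([Q, np.zeros((n, 1))]),
                      np.hstack([R, np.zeros((m, 1))])])
    b_eq = np.concatenate([a, b])
    c = np.zeros(n * m + 1)
    c[-1] = 1.0                                 # minimize t
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n * m + 1), method="highs")
    return res.x[:-1].reshape(n, m), res.x[-1]  # transport plan and objective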
" }, { "heading": "PROOF OF LEMMA 2", "text": "We optimize the function with respect to α:

$$\max_{\alpha} \; J(\alpha) \quad \text{s.t.} \quad \alpha^\top 1_L = 1, \; \alpha_1, \dots, \alpha_L \ge 0,$$

where

$$J(\alpha) = \sum_{\ell=1}^{L} \alpha_\ell \phi_\ell - \eta \sum_{\ell=1}^{L} \alpha_\ell (\log \alpha_\ell - 1). \quad (8)$$

Because the entropic regularizer is strongly convex and its negation therefore strongly concave, this maximization is a concave optimization problem. We consider the objective with a Lagrange multiplier λ:

$$\tilde{J}(\alpha) = \sum_{\ell=1}^{L} \alpha_\ell \phi_\ell - \eta \sum_{\ell=1}^{L} \alpha_\ell (\log \alpha_\ell - 1) + \lambda (\alpha^\top 1_L - 1).$$

Note that, owing to the entropic regularization, the nonnegativity constraint is automatically satisfied. Taking the derivative with respect to $\alpha_\ell$, we have

$$\frac{\partial \tilde{J}(\alpha)}{\partial \alpha_\ell} = \phi_\ell - \eta \left( \log \alpha_\ell - 1 + \alpha_\ell \cdot \frac{1}{\alpha_\ell} \right) + \lambda = \phi_\ell - \eta \log \alpha_\ell + \lambda = 0.$$

Thus, the optimal $\alpha_\ell$ has the form $\alpha_\ell = \exp(\phi_\ell / \eta) \exp(\lambda / \eta)$. Since α must satisfy the sum-to-one constraint, $\exp(\lambda / \eta) = 1 / \sum_{\ell'=1}^{L} \exp(\phi_{\ell'} / \eta)$, and hence the optimal $\alpha_\ell$ is

$$\alpha_\ell = \frac{\exp(\phi_\ell / \eta)}{\sum_{\ell'=1}^{L} \exp(\phi_{\ell'} / \eta)}.$$

Substituting this into Eq. (8), we have

$$J(\alpha^*) = \sum_{\ell=1}^{L} \alpha^*_\ell \phi_\ell - \eta \sum_{\ell=1}^{L} \alpha^*_\ell \left( \frac{\phi_\ell}{\eta} - \log \sum_{\ell'=1}^{L} \exp\big(\phi_{\ell'}/\eta\big) - 1 \right) = \eta \log \left( \sum_{\ell=1}^{L} \exp\big(\phi_\ell / \eta\big) \right) + \eta.$$

Therefore, the final objective value is $J(\alpha^*) = \eta \log \big( \sum_{\ell=1}^{L} \exp(\phi_\ell / \eta) \big) + \eta$." }, { "heading": "PROOF OF PROPOSITION 3", "text": "Proof: For $0 \le \theta \le 1$ and $\eta > 0$, we have

$$\sum_{\ell=1}^{L} \exp\left( \frac{1}{\eta} \langle \theta \Pi_1 + (1-\theta) \Pi_2, D_\ell \rangle \right) = \sum_{\ell=1}^{L} \exp\left( \frac{1}{\eta} \langle \Pi_1, D_\ell \rangle \right)^{\theta} \exp\left( \frac{1}{\eta} \langle \Pi_2, D_\ell \rangle \right)^{1-\theta} \le \left( \sum_{\ell=1}^{L} \exp\left( \frac{1}{\eta} \langle \Pi_1, D_\ell \rangle \right) \right)^{\theta} \left( \sum_{\ell=1}^{L} \exp\left( \frac{1}{\eta} \langle \Pi_2, D_\ell \rangle \right) \right)^{1-\theta},$$

where we use Hölder's inequality with $p = 1/\theta$, $q = 1/(1-\theta)$, and $1/p + 1/q = 1$. Applying a logarithm on both sides and multiplying by η, we obtain

$$\eta \log \left( \sum_{\ell=1}^{L} \exp\left( \frac{1}{\eta} \langle \theta \Pi_1 + (1-\theta) \Pi_2, D_\ell \rangle \right) \right) \le \theta\, \eta \log \left( \sum_{\ell=1}^{L} \exp\left( \frac{1}{\eta} \langle \Pi_1, D_\ell \rangle \right) \right) + (1-\theta)\, \eta \log \left( \sum_{\ell=1}^{L} \exp\left( \frac{1}{\eta} \langle \Pi_2, D_\ell \rangle \right) \right).$$" }, { "heading": "PROOF OF PROPOSITION 4", "text": "Theorem 5 (Jaggi, 2013) For each $t \ge 1$, the iterates $\Pi^{(t)}$ of Algorithms 1, 2, 3, and 4 in (Jaggi, 2013) satisfy

$$f(\Pi^{(t)}) - f(\Pi^*) \le \frac{2 C_f}{t + 2} (1 + \delta),$$

where $\Pi^* \in \mathcal{D}$ is an optimal solution to the problem, $\Pi^* = \mathrm{argmin}_{\Pi \in \mathcal{D}} f(\Pi)$; $C_f$ is the curvature constant, defined as

$$C_f := \sup_{\substack{\Pi, \hat{\Pi} \in \mathcal{D},\; \gamma \in [0,1] \\ \Pi' = \Pi + \gamma(\hat{\Pi} - \Pi)}} \; \frac{2}{\gamma^2} \big( f(\Pi') - f(\Pi) - \langle \Pi' - \Pi, \nabla f(\Pi) \rangle \big);$$

and $\delta \ge 0$ is the accuracy with which the internal linear subproblems are solved.

Lemma 6 (Jaggi, 2013) Let f be a convex and differentiable function whose gradient $\nabla f$ is Lipschitz-continuous w.r.t. some norm $\|\cdot\|$ over the domain $\mathcal{D}$ with Lipschitz constant $L > 0$. Then $C_f \le \mathrm{diam}_{\|\cdot\|}(\mathcal{D})^2 L$.

Definition 7 The softmax function is given by

$$\sigma(z) = \frac{1}{\sum_{\ell'=1}^{L} \exp(\lambda z_{\ell'})} \big( \exp(\lambda z_1), \exp(\lambda z_2), \dots, \exp(\lambda z_L) \big)^\top,$$

where $\lambda > 0$ is referred to as the inverse temperature constant.

Lemma 8 (Gao & Pavel, 2017) The softmax function $\sigma(\cdot)$ is L-Lipschitz with respect to $\|\cdot\|_2$ with $L = \lambda$; that is, for all $z, z' \in \mathbb{R}^L$, $\|\sigma(z) - \sigma(z')\|_2 \le \lambda \|z - z'\|_2$, where λ is the inverse temperature constant.

The derivative of $G_\eta(\Pi)$ is given by

$$\frac{\partial G_\eta(\Pi)}{\partial \Pi} = \sum_{\ell=1}^{L} \frac{\exp\big( \frac{1}{\eta} \langle \Pi, C_\ell \rangle \big)}{\sum_{\ell'=1}^{L} \exp\big( \frac{1}{\eta} \langle \Pi, C_{\ell'} \rangle \big)} \, C_\ell = M_\Pi.$$

Thus, we have $\mathrm{vec}(M_\Pi) = \Phi\, p_\Pi$, where $\Phi = (\mathrm{vec}(C_1), \mathrm{vec}(C_2), \dots, \mathrm{vec}(C_L)) \in \mathbb{R}^{nm \times L}$ and

$$p_\Pi = \left( \frac{\exp\big( \frac{1}{\eta} \langle \Pi, C_1 \rangle \big)}{\sum_{\ell'=1}^{L} \exp\big( \frac{1}{\eta} \langle \Pi, C_{\ell'} \rangle \big)}, \; \dots, \; \frac{\exp\big( \frac{1}{\eta} \langle \Pi, C_L \rangle \big)}{\sum_{\ell'=1}^{L} \exp\big( \frac{1}{\eta} \langle \Pi, C_{\ell'} \rangle \big)} \right)^\top \in \mathbb{R}^L.$$

Here, $p_\Pi$ is the softmax function $\sigma(z)$ with $z_\ell = \langle \Pi, C_\ell \rangle$.
We have

$$\|\nabla G_\eta(\Pi) - \nabla G_\eta(\Pi')\|_2 = \|\Phi p_\Pi - \Phi p_{\Pi'}\|_2 \le \|\Phi\|_{\mathrm{op}} \|p_\Pi - p_{\Pi'}\|_2 \le \frac{1}{\eta} \|\Phi\|_{\mathrm{op}} \|\Phi^\top \mathrm{vec}(\Pi) - \Phi^\top \mathrm{vec}(\Pi')\|_2 \le \frac{1}{\eta} \|\Phi\|_{\mathrm{op}} \|\Phi^\top\|_{\mathrm{op}} \|\mathrm{vec}(\Pi) - \mathrm{vec}(\Pi')\|_2,$$

where the second inequality is Lemma 8 with $\lambda = 1/\eta$ and $\|\cdot\|_{\mathrm{op}}$ is the operator norm. We have $\|\Phi\|_{\mathrm{op}} = \|\Phi^\top\|_{\mathrm{op}}$, $\|\Phi^\top \Phi\|_{\mathrm{op}} = \|\Phi\|_{\mathrm{op}}^2$, and $\|\mathrm{vec}(\Pi) - \mathrm{vec}(\Pi')\|_2 \le \sqrt{2}$. Therefore, the Lipschitz constant of the gradient is $L = \frac{1}{\eta} \|\Phi\|_{\mathrm{op}}^2 = \frac{1}{\eta} \sigma_{\max}(\Phi^\top \Phi)$, and the curvature constant is bounded above by $C_f \le 2L$, where $\sigma_{\max}(\Phi^\top \Phi)$ is the largest eigenvalue of the matrix $\Phi^\top \Phi$. Plugging $C_f$ into Theorem 5, we have

$$G_\eta(\Pi^{(t)}) - G_\eta(\Pi^*) \le \frac{4\, \sigma_{\max}(\Phi^\top \Phi)}{\eta\, (t + 2)} (1 + \delta).$$" }, { "heading": "MAX/MIN FORMULATION", "text": "We define the max–min formulation of FROT as

$$\max_{\alpha \in \Sigma_L} \sum_{\ell=1}^{L} \alpha_\ell \min_{\Pi \in U(a, b)} \sum_{i=1}^{n} \sum_{j=1}^{m} \pi_{ij}\, c\big(x_i^{(\ell)}, y_j^{(\ell)}\big),$$

where $\Sigma_L = \{\alpha \in \mathbb{R}_+^L : \alpha^\top 1_L = 1\}$ is the probability simplex, the set of probability vectors in $\mathbb{R}^L$. This problem can be solved by computing the group that maximizes the optimal transport distance, $k^* = \mathrm{argmax}_k\, W_1(\mu^{(k)}, \nu^{(k)})$, and then taking $\alpha^* = \delta_{k^*}$ as a one-hot vector.

This formulation provides an intuitive idea (the same as for the robust Wasserstein methods): we maximize over the group (instead of the subspace) that provides the optimal result. However, the formulation requires solving the OT problem L times, which may not be suitable when L is large. Moreover, the argmax function is generally not differentiable.

Relation to the max-sliced Wasserstein distance: The max-sliced Wasserstein-2 distance can be defined as (Deshpande et al., 2019)

$$\text{max-}W_2(\mu, \nu) = \max_{w \in \Omega} \left( \min_{\Pi \in U(a, b)} \sum_{i=1}^{n} \sum_{j=1}^{m} \pi_{ij} \big( w^\top x_i - w^\top y_j \big)^2 \right)^{1/2},$$

where $\Omega \subset \mathbb{R}^d$ is the set of all possible directions on the unit sphere. The max-sliced Wasserstein is a max–min approach: for each w, it requires solving an OT problem. The max–min approach is suited to simply measuring the divergence between two distributions, but it is difficult to interpret features with the max-sliced Wasserstein, whereas interpretability is the key motivation of FROT.

Relation to Subspace Robust Wasserstein (Paty & Cuturi, 2019): Here, we show that 2-FRWD with $d(x, y) = \|x - y\|_2$ is a special case of SRW. Let us define $U = (\sqrt{\alpha_1} e_1, \sqrt{\alpha_2} e_2, \dots, \sqrt{\alpha_d} e_d)^\top \in \mathbb{R}^{d \times d}$, where $e_\ell \in \mathbb{R}^d$ is the one-hot vector whose ℓ-th element is 1, $\alpha^\top 1 = 1$, and $\alpha_\ell \ge 0$. Then, the objective function of SRW can be written as

$$\sum_{i=1}^{n} \sum_{j=1}^{m} \pi_{ij} \|U^\top x_i - U^\top y_j\|_2^2 = \sum_{i=1}^{n} \sum_{j=1}^{m} \pi_{ij} (x_i - y_j)^\top U U^\top (x_i - y_j) = \sum_{i=1}^{n} \sum_{j=1}^{m} \pi_{ij} (x_i - y_j)^\top \mathrm{diag}(\alpha) (x_i - y_j) = \sum_{i=1}^{n} \sum_{j=1}^{m} \pi_{ij} \sum_{\ell=1}^{d} \alpha_\ell \big( x_i^{(\ell)} - y_j^{(\ell)} \big)^2.$$

Therefore, SRW and 2-FRWD are equivalent if we set $U = (\sqrt{\alpha_1} e_1, \sqrt{\alpha_2} e_2, \dots, \sqrt{\alpha_d} e_d)^\top$ and $d(x, y) = \|x - y\|_2$." }, { "heading": "ADDITIONAL SEMANTIC CORRESPONDENCE EXPERIMENTS", "text": "Figure 5a shows an example of key points matched using the FROT algorithm, and Fig. 5b shows the corresponding feature importance. The lower the η value, the smaller the number of layers used. An interesting finding here is that the selected important layer in this case is the third layer from the last. More qualitative results are presented in Figure 7." } ]
2020
FEATURE-ROBUST OPTIMAL TRANSPORT
SP:0b4980e5cb0ed470b3a93111e76fef1035438077
[ "The paper describes an approach to counteract Byzantine attacks for distributed stochastic gradient descent by using the momentum of the gradient computed at the workers, which relies only on memory of the previous momentum. This seems to thwart current attacks in the majority of scenarios tested. The theoretical analysis seems appropriate. The empirical results are accompanied by precise details and should facilitate reproducibility (given a week of computation time). " ]
Byzantine-resilient Stochastic Gradient Descent (SGD) aims at shielding model training from Byzantine faults, be they ill-labeled training datapoints, exploited software/hardware vulnerabilities, or malicious worker nodes in a distributed setting. Two recent attacks have been challenging state-of-the-art defenses though, often successfully precluding the model from even fitting the training set. The main identified weakness in current defenses is their requirement of a sufficiently low variance-norm ratio for the stochastic gradients. We propose a practical method which, despite increasing the variance, reduces the variance-norm ratio, mitigating the identified weakness. We assess the effectiveness of our method over 736 different training configurations, comprising the 2 state-of-the-art attacks and 6 defenses. For confidence and reproducibility purposes, each configuration is run 5 times with specified seeds (1 to 5), totalling 3680 runs. In our experiments, when the attack is effective enough to decrease the highest observed top-1 cross-accuracy by at least 20% compared to the unattacked run, our technique systematically increases back the highest observed accuracy, and is able to recover at least 20% in more than 60% of the cases.
[ { "affiliations": [], "name": "El-Mahdi El-Mhamdi" } ]
[ { "authors": [ "Dan Alistarh", "Zeyuan Allen-Zhu", "Jerry Li" ], "title": "Byzantine stochastic gradient descent", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Moran Baruch", "Gilad Baruch", "Yoav Goldberg" ], "title": "A little is enough: Circumventing defenses for distributed learning", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Jeremy Bernstein", "Jiawei Zhao", "Kamyar Azizzadenesheli", "Anima Anandkumar" ], "title": "signsgd with majority vote is communication efficient and fault tolerant", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Peva Blanchard", "El-Mahdi El-Mhamdi", "Rachid Guerraoui", "Julien Stainer" ], "title": "Machine learning with adversaries: Byzantine tolerant gradient descent", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Lingjiao Chen", "Hongyi Wang", "Zachary B. Charles", "Dimitris S. Papailiopoulos" ], "title": "DRACO: byzantine-resilient distributed training via redundant gradients", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yudong Chen", "Lili Su", "Jiaming Xu" ], "title": "Distributed statistical machine learning in adversarial settings: Byzantine gradient descent", "venue": null, "year": 2017 }, { "authors": [ "Ashok Cutkosky", "Francesco Orabona" ], "title": "Momentum-based variance reduction in nonconvex SGD", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Georgios Damaskinos", "El-Mahdi El-Mhamdi", "Rachid Guerraoui", "Rhicheek Patra", "Mahsa Taziki" ], "title": "Asynchronous byzantine machine learning (the case of SGD)", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "El-Mahdi El-Mhamdi", "Rachid Guerraoui", "Sébastien Rouault" ], "title": "The hidden vulnerability of distributed learning in byzantium", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "El-Mahdi El-Mhamdi", "Rachid Guerraoui", "Arsany Guirguis", "Lê Nguyên Hoang", "Sébastien Rouault" ], "title": "Genuinely distributed byzantine machine learning", "venue": "PODC ’20: ACM Symposium on Principles of Distributed Computing, Virtual Event,", "year": 2020 }, { "authors": [ "Gabriel Goh" ], "title": "Why momentum really works. Distill, 2017", "venue": "doi: 10.23915/distill.00006. URL http://distill.pub/2017/momentum", "year": 2017 }, { "authors": [ "Bumsoo Kim" ], "title": "Best cifar-10, cifar-100 results with wide-residual networks using pytorch", "venue": "https://github.com/meliketoy/wide-resnet.pytorch,", "year": 2020 }, { "authors": [ "Leslie Lamport", "Robert E. Shostak", "Marshall C. Pease" ], "title": "The byzantine generals problem", "venue": "ACM Trans. Program. Lang. Syst.,", "year": 1982 }, { "authors": [ "Mu Li", "David G. Andersen", "Jun Woo Park", "Alexander J. Smola", "Amr Ahmed", "Vanja Josifovski", "James Long", "Eugene J. 
Shekita", "Bor-Yiing Su" ], "title": "Scaling distributed machine learning with the parameter server", "venue": "In 11th USENIX Symposium on Operating Systems Design and Implementation,", "year": 2014 }, { "authors": [ "Yujun Lin", "Song Han", "Huizi Mao", "Yu Wang", "Bill Dally" ], "title": "Deep gradient compression: Reducing the communication bandwidth for distributed training", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kuang Liu" ], "title": "Train cifar-10 with pytorch, 2019", "venue": "URL https://github.com/kuangliu/ pytorch-cifar/blob/ab908327d44bf9b1d22cd333a4466e85083d3f21/main.py#L33", "year": 2019 }, { "authors": [ "Luis Muñoz-González", "Kenneth T Co", "Emil C Lupu" ], "title": "Byzantine-robust federated machine learning through adaptive model averaging", "venue": "arXiv preprint arXiv:1909.05125,", "year": 2019 }, { "authors": [ "Yurii Nesterov" ], "title": "A method for solving a convex programming problem with convergence rate o(1/k2)", "venue": "Soviet Mathematics Doklady,", "year": 1983 }, { "authors": [ "Boris Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "USSR Computational Mathematics and Mathematical Physics, 4:1–17,", "year": 1964 }, { "authors": [ "Shashank Rajput", "Hongyi Wang", "Zachary Charles", "Dimitris Papailiopoulos" ], "title": "Detox: A redundancy-based framework for faster and more robust gradient aggregation", "venue": "Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Fred B Schneider" ], "title": "Implementing fault-tolerant services using the state machine approach: A tutorial", "venue": "ACM Computing Surveys (CSUR),", "year": 1990 }, { "authors": [ "WANG TianXiang", "ZhongLong ZHENG", "TANG ChangBing", "PENG Hao" ], "title": "Aggregation rules based on stochastic gradient descent in byzantine consensus", "venue": "IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC),", "year": 2019 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Cong Xie" ], "title": "Zeno++: robust asynchronous SGD with arbitrary number of byzantine workers", "venue": "CoRR, abs/1903.07020,", "year": 2019 }, { "authors": [ "Cong Xie", "Oluwasanmi Koyejo", "Indranil Gupta" ], "title": "Generalized byzantine-tolerant SGD", "venue": "CoRR, abs/1802.10116,", "year": 2018 }, { "authors": [ "Cong Xie", "Oluwasanmi Koyejo", "Indranil Gupta" ], "title": "Phocas: dimensional byzantine-resilient stochastic gradient descent", "venue": "CoRR, abs/1805.09682,", "year": 2018 }, { "authors": [ "Cong Xie", "Oluwasanmi Koyejo", "Indranil Gupta" ], "title": "Fall of empires: Breaking byzantinetolerant SGD by inner product manipulation", "venue": "In Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence,", "year": 2019 }, { "authors": [ "Cong Xie", "Sanmi Koyejo", "Indranil Gupta" ], "title": "Zeno: Distributed stochastic gradient descent with suspicion-based fault-tolerance", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Zhixiong Yang", "Waheed U Bajwa" ], "title": "Bridge: Byzantine-resilient decentralized gradient descent", "venue": "arXiv preprint arXiv:1908.08098,", "year": 2019 }, { "authors": [ "Zhixiong Yang", "Waheed U Bajwa" ], 
"title": "Byrdie: Byzantine-resilient distributed coordinate descent for decentralized learning", "venue": "IEEE Transactions on Signal and Information Processing over Networks,", "year": 2019 }, { "authors": [ "Zhixiong Yang", "Arpita Gang", "Waheed U Bajwa" ], "title": "Adversary-resilient inference and machine learning: From distributed to decentralized", "venue": "arXiv preprint arXiv:1908.08649,", "year": 2019 }, { "authors": [ "Dong Yin", "Yudong Chen", "Kannan Ramchandran", "Peter L. Bartlett" ], "title": "Byzantine-robust distributed learning: Towards optimal statistical rates", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Baruch" ], "title": "Which defense: Krum, Median, Trimmed Mean, Phocas, MeaMed , or Bulyan • How many Byzantine workers (an half or a quarter) • Where momentum is computed (server or workers", "venue": null, "year": 2019 }, { "authors": [ "Xie" ], "title": "2019a), [d] nesterov momentum under attack", "venue": null, "year": 2019 }, { "authors": [ "Xie" ], "title": "Momentum at the server", "venue": null, "year": 2019 }, { "authors": [ "Xie" ], "title": "Momentum at the server", "venue": null, "year": 2019 }, { "authors": [ "Xie" ], "title": "CIFAR-100 dataset and convolutional model, with n = 25, f = 11 and αt = 0.01 if t < 1500 else αt = 0.001, under attack", "venue": null, "year": 2019 }, { "authors": [ "Xie" ], "title": "Momentum at the server", "venue": null, "year": 2019 }, { "authors": [ "Xie" ], "title": "Momentum at the server", "venue": null, "year": 2019 }, { "authors": [ "Xie" ], "title": "Momentum at the server", "venue": null, "year": 2019 } ]
[ { "heading": "1 Introduction", "text": "Stochastic Gradient Descent (SGD) is one of the main optimization algorithm used throughout machine learning. Scaling SGD can mean aggregating more but inevitably less well-sanitized data, and distributing the training over several machines, making SGD even more vulnerable to Byzantine faults: corrupted/malicious training datapoints, software vulnerabilities, etc. Many Byzantine-resilient techniques have been proposed to keep SGD safer from these faults, e.g. Alistarh et al. (2018); Damaskinos et al. (2018); Yang & Bajwa (2019b); TianXiang et al. (2019); Bernstein et al. (2019); Yang & Bajwa (2019a); Yang et al. (2019); Rajput et al. (2019); Muñoz-González et al. (2019). These techniques mainly use the same adversarial model (Figure 2): a central, trusted parameter server distributing gradient computations to several workers, a minority of which is controlled by an adversary and can submit arbitrary gradients.\nTwo families of defense techniques can be distinguished. The first employs redundancy schemes, inspired by coding theory. This approach has strong resilience guarantees, but\n*Author list written in alphabetical order, as for all the papers from the DCL at EPFL.\nits requirement to share data between workers makes this approach unsuitable for several classes of applications, e.g. when data cannot be shared for privacy, scalability or legal reasons. The second family uses statistically-robust aggregation schemes, and is the focus of this paper. The underlying idea is simple. At each training step, the server aggregates the stochastic gradients computed by the workers into one gradient, using a function called a Byzantine-resilient Gradient Aggregation Rule (GAR). These statistically-robust GARs are designed to produce at each step a gradient that is expected to decrease the loss.\nIntuitively, one can think of this second family as different formulations of the multivariate median. In particular, if the non-Byzantine gradients were all equal at each step, any different (adversarial) gradient would be rejected by each of these medians, and no attack would succeed. But due to their stochastic nature, the non-Byzantine gradients are different: their variance is strictly positive. Formal guarantees on any given statistically-robust GAR typically require that the variance-norm ratio, the ratio between the variance of the nonByzantine gradients and the norm of the expected non-Byzantine gradient, remains below a certain constant (constant which depends on the GAR itself and fixed hyperparameters). Intuitively, this notion of variance-norm ratio can be comprehended quite analogously to the inverse of the signal-to-noise ratio (i.e. the “noise-to-signal” ratio) in signal processing.\nHowever, Baruch et al. (2019) noted that an attack could send gradients that are close to non-Byzantine outlier gradients, building an apparent majority of gradients that could be sufficiently far from the expected non-Byzantine gradient to increase the loss. This can happen against most statistically-robust GARs in practice, as the variance-norm ratio is often too large for them. Two recent attacks (Baruch et al., 2019; Xie et al., 2019a) were able to exploit this fact to substantially hamper the training process (which our experiments confirm).\nThe work presented here aims at (substantially) improving the resilience of statistically robust GARs “also in practice”, by reducing the variance-norm ratio of the gradients received by the server. 
We do that by taking advantage of an old technique normally used for acceleration: momentum. This technique is regularly applied at the server, but we instead propose to confer it upon each distributed worker, effectively making the Byzantine-resilient GAR aggregate accumulated gradients. Crucially, our reformulation adds no computational cost: it only reorders operations in existing (distributed) algorithms.

Contributions. Our main contributions can be summarized as follows:

• A reformulation of classical/Nesterov momentum which can significantly improve the effectiveness (Figure 1) of any statistically-robust Gradient Aggregation Rule (GAR). We formally analyze the impact of our reformulation on the variance-norm ratio of the aggregated gradients, a ratio on which the studied GARs assume an upper bound.

• An extensive and reproducible1 set of experiments substantiating the effectiveness of our reformulation of momentum in improving existing defenses against state-of-the-art attacks.

Paper Organization. Section 2 provides the necessary background. Section 3 presents our distributed momentum scheme and provides some intuitions on its effects. Formal developments of these intuitions are given in the appendix. Section 4 describes our experimental settings in detail, before presenting and analysing some of our experimental results. The appendix reports on the entirety of our experiments, and details how they can be reproduced (in one command, graphs included). Section 5 discusses related and future work." }, { "heading": "2 Background", "text": "" }, { "heading": "2.1 Byzantine Distributed SGD", "text": "Stochastic Gradient Descent (SGD). We consider the classical problem of optimizing a non-convex, differentiable loss function $Q : \mathbb{R}^d \to \mathbb{R}$, where $Q(\theta_t) \triangleq \mathbb{E}_{x \sim D}[q(\theta_t, x)]$ for a fixed data distribution D. Ideally, we seek $\theta^*$ such that $\theta^* = \mathrm{argmin}_\theta\, Q(\theta)$. We employ mini-batch SGD optimization. Starting from an initial parameter $\theta_0 \in \mathbb{R}^d$, at every step $t \ge 0$, b samples $(x_t^{(1)}, \dots, x_t^{(b)})$ are drawn from D to estimate one stochastic gradient $g_t \triangleq \frac{1}{b} \sum_{k=1}^{b} \nabla q(\theta_t, x_t^{(k)}) \approx \nabla Q(\theta_t)$. This stochastic gradient is then used to update the parameters: $\theta_{t+1} = \theta_t - \alpha_t\, g_t$. The sequence $\alpha_t > 0$ is called the learning rate.

Classical and Nesterov momentum. One field-tested amendment to mini-batch SGD is classical momentum (Polyak, 1964), where each gradient keeps an exponentially-decreasing effect on every subsequent update. Formally: $\theta_{t+1} = \theta_t - \alpha_t \sum_{u=0}^{t} \mu^{t-u} g_u$, with $0 < \mu < 1$. Nesterov (1983) proposed another revision. Denoting by $v_t$ the velocity vector, with $v_0 = 0$:

$$v_{t+1} = \mu\, v_t + \frac{1}{b} \sum_{k=1}^{b} \nabla q\big(\theta_t - \alpha_t\, \mu\, v_t,\; x_t^{(k)}\big), \qquad \theta_{t+1} = \theta_t - \alpha_t\, v_{t+1}.$$

Compared to classical momentum, the gradient is estimated at $\theta_t - \alpha_t\, \mu\, v_t$ instead of $\theta_t$.

Distributed SGD with Byzantine workers. We follow the parameter server model (Li et al., 2014): one single process (the parameter server) holds the parameter vector $\theta_t \in \mathbb{R}^d$, and n other processes (the workers) estimate gradients. Among these n workers, up to $f < n$ are said to be Byzantine, i.e. adversarial. Unlike the other $n - f$ honest workers, these f Byzantine workers can submit arbitrary gradients (Figure 2).

At each step t, the parameter server receives n different gradients $g_t^{(1)}, \dots, g_t^{(n)}$, among which f are arbitrary (submitted by the Byzantine workers). So the update equation becomes $\theta_{t+1} = \theta_t - \alpha_t\, G_t$, where:

$$G_t \triangleq \sum_{u=0}^{t} \mu^{t-u}\, F\big(g_u^{(1)}, \dots, g_u^{(n)}\big) \quad (1)$$
1Namely: 736 × 5 = 3680 seeded runs, and one single script to reproduce all of our results.

Function F is called a Gradient Aggregation Rule (GAR). In non-Byzantine settings, averaging is used; formally: $F(g_t^{(1)}, \dots, g_t^{(n)}) = \frac{1}{n} \sum_{i=1}^{n} g_t^{(i)}$. In the presence of Byzantine workers, a more robust aggregation is performed with a Byzantine-resilient GAR. Sections 2.2 and 2.3 respectively describe the 6 existing GARs and the 2 attacks studied in this paper.

Adversarial Model. The goal of the adversary is to impede the learning process, which is defined as the maximization of the loss Q or, more judiciously for the image classification tasks tackled in this paper, as the minimization2 of the model's top-1 cross-accuracy. The adversary cannot directly overwrite $\theta_t$ at the parameter server. The adversary only submits f arbitrary gradients to the server per step, via the f Byzantine workers it controls3. We assume an omniscient adversary. In particular, the adversary knows the GAR used by the parameter server and, at each step, the adversary can generate Byzantine gradients dependent on the honest gradients submitted at the same step and at any previous step.

2Exempli gratia, with 10 classes, the worst possible final accuracy is arguably 0.1. 3Said otherwise, the f Byzantine workers can collude." }, { "heading": "2.2 Byzantine-resilient GARs", "text": "We briefly present below the 6 studied Gradient Aggregation Rules (GARs). These GARs are Byzantine-resilient (Section A), a notion first introduced by Blanchard et al. (2017) under the name (α, f)-Byzantine resilience. When used within its operating assumptions, a Byzantine-resilient GAR guarantees convergence even in an adversarial setting.

Let n be the number of gradients the parameter server received from the n workers (Figure 2), and let f be the maximum number of Byzantine gradients the GAR must tolerate.

Krum (Blanchard et al., 2017). Each received gradient is assigned a score. The score of gradient x is the sum of the squared ℓ2-distances between x and the n − f − 2 closest gradients to x. The aggregated gradient is then the arithmetic mean of the n − f − 2 gradients with the smallest scores. This variant is called Multi-Krum in the original paper. To be proven (α, f)-Byzantine resilient, Krum requires the variance of the honest gradients $\mathbb{E}\|G_t - \mathbb{E}G_t\|^2$ to be bounded above as follows:

$$2 \cdot \left( n - f + \frac{f(n-f-2) + f^2(n-f-1)}{n - 2f - 2} \right) \cdot \mathbb{E}\|G_t - \mathbb{E}G_t\|^2 < \|\mathbb{E}G_t\|^2 \quad (2)$$

Median (Yin et al., 2018). The coordinate-wise median of the n received gradients. Median is proven (α, f)-Byzantine resilient under the following condition on the variance-norm ratio:

$$(n - f) \cdot \mathbb{E}\|G_t - \mathbb{E}G_t\|^2 < \|\mathbb{E}G_t\|^2 \quad (3)$$

Trimmed Mean (Yin et al., 2018). The coordinate-wise trimmed mean of the n received gradients. The trimmed mean of n values is the arithmetic mean of the remaining values after the f smallest and the f largest values have been discarded. From Theorem 1 of Xie et al. (2018b), we can derive the following condition on the variance-norm ratio:

$$\frac{2(f+1)(n-f)}{(n-2f)^2} \cdot \mathbb{E}\|G_t - \mathbb{E}G_t\|^2 < \|\mathbb{E}G_t\|^2 \quad (4)$$

Phocas (Xie et al., 2018b). The coordinate-wise arithmetic mean of the n − f closest values to the coordinate-wise trimmed mean. From Theorem 2 of Xie et al. (2018b):

$$\left( 4 + \frac{12(f+1)(n-f)}{(n-2f)^2} \right) \mathbb{E}\|G_t - \mathbb{E}G_t\|^2 < \|\mathbb{E}G_t\|^2 \quad (5)$$

MeaMed (Xie et al., 2018a). Same as Phocas, but with the median replacing the trimmed mean. Theorem 5 of Xie et al. (2018a) provides the following condition:

$$10(n - f) \cdot \mathbb{E}\|G_t - \mathbb{E}G_t\|^2 < \|\mathbb{E}G_t\|^2 \quad (6)$$
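To fix ideas, here is a minimal NumPy sketch of two of these rules, the coordinate-wise Median and Trimmed Mean, applied to a stack of worker gradients; it is an illustrative re-implementation, not the authors' code.

import numpy as np

def median_gar(grads):
    # grads: (n, d) array, one flattened gradient per worker.
    return np.median(grads, axis=0)

def trimmed_mean_gar(grads, f):
    # Coordinate-wise: drop the f smallest and f largest values, average the rest.
    s = np.sort(grads, axis=0)
    return s[f:grads.shape[0] - f].mean(axis=0)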
Bulyan (El-Mhamdi et al., 2018). This is a composite GAR, iterating on another GAR in a first selection phase. In the remainder of this paper, Bulyan uses Krum, so the first phase selects n − 2f − 2 gradients, at each iteration removing the highest-scoring gradient. The aggregated gradient is the coordinate-wise arithmetic mean of the n − 4f − 2 closest values to the (coordinate-wise) median of the selected gradients. The theoretical requirements on the variance-norm ratio are the same as those of the underlying GAR; that is, in this paper, the same as Krum (Equation 2)." }, { "heading": "2.3 Studied Attacks", "text": "The two state-of-the-art attacks, which recently appeared in the literature, follow the same core principle. Let $\varepsilon \in \mathbb{R}_{\ge 0}$ be a non-negative factor, and $a_t \in \mathbb{R}^d$ an attack vector whose value depends on the actual attack used (see below). At each step t, each of the f Byzantine workers submits the same Byzantine gradient: $g_t + \varepsilon\, a_t$, where $g_t$ is an approximation of the real gradient $\nabla Q(\theta_t)$ at step t. The value of ε is fixed (see below).

A Little is Enough (Baruch et al., 2019). In this attack, each Byzantine worker submits $g_t + \varepsilon\, a_t$, with $a_t \triangleq -\sigma_t$ the opposite of the coordinate-wise standard deviation of the honest gradient distribution $G_t$. Our experiments use ε = 1.5, as proposed by the original paper.

Fall of Empires (Xie et al., 2019a). Each Byzantine worker submits $(1 - \varepsilon)\, g_t$, i.e., $a_t \triangleq -g_t$. The original paper tested its factor $\epsilon \in \{-10, -1, 0, 0.1, 0.2, 0.5, 1, 10, 100\}$; our experiments use4 ε = 1.1, corresponding in the notation of the original paper to $\epsilon \triangleq -(1 - \varepsilon) = -(1 - 1.1) = 0.1$." }, { "heading": "3 Momentum at the Workers", "text": "Intuitively, the Byzantine-resilient GARs (Section 2.2) rely on the honest gradients being sufficiently clumped (formalized in e.g. Equation 2 to Equation 6). In the edge case where every honest gradient is equal (i.e. no stochastic noise), no attack can affect the learning: there is by assumption a strict majority of identical honest gradients. On the contrary, when the honest gradients are "spread", i.e. their variance is large enough compared to their norms, the attack vectors can form a majority by relying on a few outlier (but honest) gradients (Baruch et al., 2019), and so substantially influence the aggregated gradient.

Momentum makes the parameters $\theta_t$ travel down the loss function with inertia, accumulating both the real gradient $\nabla Q(\theta_t)$ and the error (i.e. here, the stochastic noise) $g_t - \nabla Q(\theta_t)$. Intuitively, the accumulation of errors grows at a moderate rate, as past errors can be partially compensated by future ones. But when consecutive real gradients $\nabla Q(\theta_t)$ make sufficiently small angles with one another, past real gradients do not compensate future real gradients: the norm of $G_t$ can grow "faster" (at each new step) than its variance, mitigating the potential impact of an attack." }, { "heading": "3.1 Formulation", "text": "From the formulation of momentum SGD in a distributed setting (Equation 1), $G_t \triangleq \sum_{u=0}^{t} \mu^{t-u} F(g_u^{(1)}, \dots, g_u^{(n)})$, we instead confer the momentum computation on the workers:

$$G_t \triangleq F\Big( \underbrace{\sum_{u=0}^{t} \mu^{t-u} g_u^{(1)}}_{G_t^{(1)}},\; \dots,\; \underbrace{\sum_{u=0}^{t} \mu^{t-u} g_u^{(n)}}_{G_t^{(n)}} \Big) \quad (7)$$

4This factor made this attack consistently successful in the original paper.

Notations. In the remainder of this paper, we call the original formulation (momentum) at the server, and the proposed, revised formulation (momentum) at the worker(s). The quantities $G_t^{(1)}, \dots, G_t^{(n)}$ are called the submitted gradients (at step t).
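Concretely, this reformulation only moves the exponential accumulation from the server to each worker. Below is a minimal sketch of one training step under Equation 7, assuming some Byzantine-resilient rule gar (e.g., one of the rules of Section 2.2) and per-worker gradient oracles; all names are illustrative, and only honest workers are shown (Byzantine workers would submit arbitrary vectors instead).

import numpy as np

def step_momentum_at_workers(theta, grad_fns, momenta, gar, lr, mu=0.99):
    # grad_fns: list of n callables, each returning a stochastic gradient at theta.
    # momenta: list of n accumulated vectors G_{t-1}^{(i)} (zero vectors at t = 0).
    submitted = []
    for i, grad_fn in enumerate(grad_fns):
        momenta[i] = mu * momenta[i] + grad_fn(theta)  # G_t^{(i)}, Equation 7
        submitted.append(momenta[i])
    # The server aggregates the *accumulated* gradients and applies them directly.
    theta = theta - lr * gar(np.stack(submitted))
    return theta, momenta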
At step t, the variance-norm ratio is computed on the honest subset of $g_t^{(1)}, \dots, g_t^{(n)}$ if momentum at the server is employed, and on the honest subset of $G_t^{(1)}, \dots, G_t^{(n)}$ if momentum at the workers is used instead.

Formal analysis. The formal analysis of the impact of our technique on the variance-norm ratio of the aggregated gradients is available in the appendix, Section B." }, { "heading": "4 Experiments", "text": "Our experiments cover 2 models, 4 datasets, the 6 studied defenses under each of the 2 state-of-the-art attacks5, different fractions of Byzantine workers (either a half or a quarter), using Nesterov instead of classical momentum, plus unattacked settings where each worker is honest and the GAR is mere averaging. Since our theoretical results (Section B) suggest that smaller learning rates may reduce the variance-norm ratio, two learning rate schedules (an optimal one and a smaller one) are also tested. For reproducibility and confidence in the empirical benefits of our reformulation, we test every combination of the hyperparameters mentioned above, and each combination is repeated 5 times with specified seeds (1 to 5), totalling 3680 runs.

The tools we developed to implement our reformulation capture ∼20 metrics, including the evolution of the average loss, top-1 cross-accuracy and variance-norm ratio of the submitted gradients. In this section and Section E, we specifically report on these 3 metrics." }, { "heading": "4.1 Experimental Setup", "text": "We use a compact notation to define the models: L(#outputs) for a fully-connected linear layer, R for ReLU activation, S for log-softmax, C(#channels) for a 2D-convolutional layer (kernel size 3, padding 1, stride 1), M for 2D-maxpool (kernel size 2), B for batch-normalization, and D for dropout (with fixed probability 0.25).

We use the models from respectively Baruch et al. (2019) and Xie et al. (2019a). The fully connected model is (784)-L(100)-R-L(10)-R-S, trained on MNIST and Fashion MNIST (83 samples/gradient), with n = 51 workers and f ∈ {24, 12}. The convolutional model is (3, 32×32)-C(64)-R-B-C(64)-R-B-M-D-C(128)-R-B-C(128)-R-B-M-D-L(128)-R-D-L(10)-S, trained on CIFAR-10 and CIFAR-100 (50 samples/gradient), with n = 25 workers and f ∈ {11, 5}.

For model training, we use the negative log likelihood loss and respectively $10^{-4}$ and $10^{-2}$ ℓ2-regularization for the fully connected and convolutional models. We also clip gradients, ensuring their norms remain respectively below 2 and 5 for the fully connected and convolutional models. Regarding evaluation, we use the top-1 cross-accuracy over the whole test set.

Datasets are pre-processed before training. MNIST receives the same pre-processing as in Baruch et al. (2019): an input image normalization with mean 0.1307 and standard deviation 0.3081. Fashion MNIST, CIFAR-10 and CIFAR-100 are all expanded with horizontally flipped images. For both CIFAR-10 and CIFAR-100, a per-channel normalization with means 0.4914, 0.4822, 0.4465 and standard deviations 0.2023, 0.1994, 0.2010 (Liu, 2019) has been applied.

We set the number of Byzantine workers f either to the maximum for which Krum can be used (roughly a half: $f = \lfloor (n-3)/2 \rfloor$), or to the maximum for Bulyan (roughly a quarter: $f = \lfloor (n-3)/4 \rfloor$). The attack factors $\varepsilon_t$ (Section 2.3) are set to constants proposed in the literature, namely $\varepsilon_t = 1.5$ for Baruch et al. (2019) and $\varepsilon_t = 1.1$ for Xie et al. (2019a).

5To the best of our knowledge, putting aside simple attacks (e.g. sending attack gradients sampled from a Gaussian distribution) tested in each defense paper, no other attack has been published.
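For concreteness, the fully connected model in the compact notation above corresponds to the following PyTorch sketch (our own transcription of the notation, not the authors' code):

import torch.nn as nn

# "(784)-L(100)-R-L(10)-R-S": 784 inputs, two linear layers with ReLU,
# and a log-softmax output (paired with the NLL loss used for training).
fully_connected = nn.Sequential(
    nn.Linear(784, 100), nn.ReLU(),
    nn.Linear(100, 10), nn.ReLU(),
    nn.LogSoftmax(dim=1),
)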
We also experiment with two different learning rates. The first and largest is selected so as to maximize the performance (highest final cross-accuracy and accuracy gain per step) of the model trained without Byzantine workers. The second and smallest is chosen so as to minimize the performance loss under attack, without substantially impacting the final accuracy when trained without Byzantine workers. The fully connected and convolutional models are trained respectively with µ = 0.9 and µ = 0.99. These values were obtained by trial and error, to maximize the accuracy gain per step when there is no attack.

Reproducibility. Particular care has been taken to make our results reproducible. Each of the 5 runs per experiment is respectively seeded with seeds 1 to 5. For instance, this implies that two experiments with the same seed and the same model also start with the same parameters θ0. To further reduce the sources of non-determinism, the CuDNN backend is configured in deterministic mode (our experiments ran on two GeForce GTX 1080 Ti) with benchmark mode turned off. We also used log-softmax + NLL loss, which is equal to softmax + cross-entropy loss, but with improved numerical stability on PyTorch. We provide our code along with a script reproducing all of our results, both the experiments and the graphs, in one command. Details, including software and hardware dependencies, are available in Section C." }, { "heading": "4.2 Experimental Results", "text": "This section reports on the evolution of the average loss, top-1 cross-accuracy and variance-norm ratio of the submitted gradients. Section E in the appendix reports on the entirety of our experimental results, and Section D additionally experiments with a much larger model.

One first important remark is that our new formulation obtains either similar or (substantially) increased maximum top-1 cross-accuracy, compared to the standard formulation in the exact same settings. Namely, in only 4 pairs of runs (0.23% of all the tested pairs) did our formulation lead to a decreased maximum top-1 cross-accuracy. Also, these decreases were only observed with the fully connected model, using Krum against Xie et al. (2019a), and for each of these 4 runs, using any of the 4 other seeds made the decrease disappear.

In all of our experiments, we observe a strong correlation between higher top-1 cross-accuracies and lower average losses; e.g. see Figure 4. The two state-of-the-art attacks decreased the accuracy by at least 20%, compared to the unattacked case (see "No attack" in Figure 3), in 25.80% and 70.80% of the runs with respectively the fully connected and convolutional models.

Focusing on the convolutional model, when roughly a half of the workers are Byzantine, both attacks actually succeed in decreasing the accuracy by at least 20% in 100% of our runs. Our technique manages to recover at least 10% and 20% in respectively 79.75% and 49.25% of these runs. When roughly a quarter of the workers are Byzantine, the attacks decrease the accuracy by at least 20% in 46.46% of our runs. Our technique then manages to recover at least 20% in 95.07% of these runs. Figure 3 shows a fraction of these runs.

Technically, our reformulation aims at reducing the variance-norm ratio of the aggregated gradients. Intuitively, this ratio is expected to increase as the loss decreases; more precisely, as the norm of the gradient decreases.
For instance, Figure 5 displays the variance-norm ratios of Trimmed Mean and Bulyan using the same settings as in Figure 4. At least before the final cross-accuracy is reached, our technique consistently decreases the variance-norm ratio of the aggregated gradients. Also, we consistently observed in the experiments that reducing the learning rate indeed reduces the variance-norm ratio (e.g. Figure 5, t ≥ 1500)." }, { "heading": "5 Related and future work", "text": "Alternative Byzantine-resilient Approaches. The Byzantine abstraction is a very general fault model that has long been studied in distributed computing (Lamport et al., 1982). The standard, golden solution for Byzantine fault tolerance is the state machine replication approach (Schneider, 1990). This approach is however based on replication, which is known to be unsuitable for distributed machine learning and stochastic gradient descent.

Chen et al. (2018) was the first Byzantine-resilient mechanism based on a redundancy scheme rather than statistical robustness. While the proposed mechanism is not vulnerable to the attacks discussed in this paper, it induces (due to the redundancy) substantial computational costs compared to statistically-robust techniques. Rajput et al. (2019) combines Chen et al. (2018) with statistically-robust GARs into a hybrid system, and achieves an improved aggregation time. Under the requirements of both redundancy-based schemes and statistically-robust GARs, Rajput et al. (2019) can significantly decrease the voting power of the adversary, and can consequently also deflect attacks in these cases. Xie et al. (2019b), and its follow-up (Xie, 2019), introduced the concept of suspicion-based fault tolerance: each gradient is assigned a score based on the loss obtained by descending with this gradient only. The lowest-scoring gradients are then filtered out, and the remaining gradients are averaged and used to update the model. Compared to statistically-robust approaches, such a scheme does not rely on sufficiently low variance-norm ratios, and gradients can be processed independently. This advantage comes at the expense of a substantial computational load on the parameter server, as the server has to compute several forward passes for each received gradient, basically doing half the work of each worker again (the workers also perform backpropagation passes). Most importantly, the success of such a technique is conditioned on the quality of the loss estimations at the server: if the estimation is biased (e.g. the server's dataset is "different" from the workers' ones), if it has a high variance (e.g. the server's batch size is "too" small), or if the "hard threshold" used is too large (accepting Byzantine gradients) or too small (refusing even honest gradients), the defense might be ineffective or harmful.

Momentum-based Variance Reduction. Our algorithm is different from Cutkosky & Orabona (2019): instead of reducing the variance of the gradients, we actually increase it (Equation 8). What we seek to reduce is the variance-norm ratio, which is the key quantity for any Byzantine-resilient GAR approximating a high-dimensional median, e.g. Krum, Median, as well as Yang & Bajwa (2019b;a); Chen et al. (2017); Muñoz-González et al. (2019)6. Some of the ideas introduced in Cutkosky & Orabona (2019) could nevertheless help further improve Byzantine resilience.
For instance, introducing an adaptive learning rate which decreases depending on the curvature of the parameter trajectory is an appealing approach to further reduce the variance-norm ratio (Equation 10). The computation of momentum at the workers has also been used in the literature for the purpose of gradient compression (Lin et al., 2018). These techniques are nevertheless not (meant to be) Byzantine-resilient.

Future Work. The theoretical condition for reducing the variance-norm ratio of the submitted gradients (compared to the variance-norm ratio of the sampled gradients at the same step), in Section B, shows that momentum at the workers is a double-edged sword. The problem is that $s_t$ can become negative: the norm of the momentum gradient would then be decreased, increasing the variance-norm ratio. While the ability to cross narrow local minima is recognized as an accelerator (Goh, 2017), for the purpose of Byzantine resilience we want to ensure that momentum at the workers does not increase the variance-norm ratio (compared to the variance-norm ratio of the sampled gradients at the same step). The theoretical condition for this purpose is given in Equation 9. One simple amendment would then be to use momentum at the workers when Equation 9 is satisfied, and fall back to computing it at the server otherwise. A more involved possible future approach could be to dynamically adapt the momentum factor µ, decreasing it as the curvature increases.

Asynchronous SGD. We focused in this work on the synchronous setting, which has received most of the attention in the Byzantine-resilience literature. Yet, we do not see any issue that would prevent our work from being applied in asynchronous settings. Specifically, combining our idea with a filtering scheme such as Kardam (Damaskinos et al., 2018) is in principle possible, as this filter and momentum commute. However, further analysis of the interplay between the dynamics of stale gradients and the dynamics of momentum remains necessary.

Byzantine Servers. While most of the research on Byzantine-resilient gradient descent has focused on the workers' side, assuming a reliable server, recent efforts have started tackling Byzantine servers (El-Mhamdi et al., 2020). Our reduction of the variance-norm ratio strengthens the gradient aggregation phase, which is necessary whether we deal with Byzantine workers or Byzantine servers. An interesting open question is whether the dynamics of momentum could positively affect the model drift between different parameter servers in a Byzantine context. Any quantitative answer to this question could enable the use of our method in fully decentralised Byzantine-resilient gradient descent." }, { "heading": "6 Acknowledgments", "text": "This work has been supported in part by the Swiss National Science Foundation (FNS grant N°200021 182542).

We would also like to acknowledge here and thank the anonymous individuals who partook in the review of this work and its code, in this final version and previous drafts, for their valuable time and inputs." }, { "heading": "A Byzantine Resilience: Definition", "text": "Let n be the number of gradients the parameter server received from the n workers (Figure 2), and let f be the maximum number of Byzantine gradients the GAR must tolerate. Some other notations below are reused from Section 2.1.

Definition 1. Without loss of generality, let $g_t^{(1)}, \dots, g_t^{(n-f)} \sim G_t$ be $n - f$ independent, "honest" gradients following the same distribution $G_t$, and let $g_t^{(n-f+1)}, \dots, g_t^{(n)} \in (\mathbb{R}^d)^f$
be arbitrary gradients, each possibly dependent on $G_t$ and the "honest" gradients. A GAR F is said to be (α, f)-Byzantine resilient if and only if $g_t \triangleq F(g_t^{(1)}, \dots, g_t^{(n)})$ satisfies:

1. $\langle \mathbb{E}\, g_t,\; \mathbb{E}\, g_t^{(1)} \rangle > 0$;

2. $\forall r \in \{2, 3, 4\}$, $\mathbb{E}\|g_t\|^r$ is bounded above by a linear combination of the terms $\mathbb{E}\|g_t^{(1)}\|^{r_1} \cdots \mathbb{E}\|g_t^{(1)}\|^{r_k}$, with $(k, r_1, \dots, r_k) \in (\mathbb{N}^*)^{k+1}$ and $r_1 + \dots + r_k = r$." }, { "heading": "B Momentum at the Workers: Effects", "text": "We compare the variance-norm ratio of the non-Byzantine subset of the sampled gradients $g_t^{(1)}, \dots, g_t^{(n)}$ against the variance-norm ratio of the non-Byzantine subset of the submitted gradients $G_t^{(1)}, \dots, G_t^{(n)}$ when classical momentum is computed at the workers.

We denote by $\mathbb{E} G_t \triangleq \nabla Q(\theta_t)$ the "real" gradient7 at step t. Let $\lambda_t \triangleq \|\mathbb{E} G_t\| > 0$ be the real gradient's norm at step t, and let $\sigma_t \triangleq \sqrt{\mathbb{E}\|G_t - \mathbb{E} G_t\|^2}$ be the standard deviation of the real gradient at step t. The variance-norm ratio of the non-Byzantine subset of the sampled gradients at step t is:

$$r_t^{(s)} \triangleq \frac{\sigma_t^2}{\lambda_t^2}$$

7In this analysis, the expectations are by default conditioned on the past randomness (everything that happened up to step t) from the $(n-f) \cdot b$ data-points sampled, at each past step, by the $(n-f)$ honest workers to estimate their respective gradients.

We will now compute the variance-norm ratio of the non-Byzantine subset of the submitted gradients. Let $G_t^{(i)}$, with $G_{-1}^{(i)} \triangleq 0$, be the gradient sent by any honest worker i at step t, i.e.:

$$G_t^{(i)} \triangleq \sum_{u=0}^{t} \mu^{t-u} g_u^{(i)}$$

The numerator of the variance-norm ratio is, for any two honest worker identifiers $i \ne j$:

$$\mathbb{E}\big\|G_t^{(i)} - G_t^{(j)}\big\|^2 = \mathbb{E}\big\|g_t^{(i)} + \mu G_{t-1}^{(i)} - g_t^{(j)} - \mu G_{t-1}^{(j)}\big\|^2 = \mathbb{E}\big\|g_t^{(i)} - g_t^{(j)}\big\|^2 + \mu^2\, \mathbb{E}\big\|G_{t-1}^{(i)} - G_{t-1}^{(j)}\big\|^2 + 2\mu\, \underbrace{\big(\mathbb{E} g_t^{(i)} - \mathbb{E} g_t^{(j)}\big)}_{= \mathbb{E}G_t - \mathbb{E}G_t = 0} \cdot \big(\mathbb{E} G_{t-1}^{(i)} - \mathbb{E} G_{t-1}^{(j)}\big) = 2\sigma_t^2 + \mu^2 \big( 2\sigma_{t-1}^2 + \mu^2 ( 2\sigma_{t-2}^2 + \mu^2 (\dots) ) \big) = 2 \sum_{u=0}^{t} \mu^{2(t-u)} \sigma_u^2 \quad (8)$$

$$= 2\, \mathbb{E}\big\|G_t^{(i)} - \mathbb{E} G_t^{(i)}\big\|^2$$

And the denominator of the variance-norm ratio is:

$$\big\|\mathbb{E} G_t^{(i)}\big\|^2 = \big\|\mathbb{E} g_t^{(i)} + \mu\, \mathbb{E} G_{t-1}^{(i)}\big\|^2 = \big\|\mathbb{E} g_t^{(i)}\big\|^2 + 2\mu\, \mathbb{E} g_t^{(i)} \cdot \mathbb{E} G_{t-1}^{(i)} + \mu^2 \big\|\mathbb{E} G_{t-1}^{(i)}\big\|^2 = \sum_{u=0}^{t} \mu^{2(t-u)} \left( \lambda_u^2 + 2 \sum_{v=0}^{u-1} \mu^{u-v}\, \mathbb{E} G_u \cdot \mathbb{E} G_v \right)$$

Thus, assuming the honest expected gradients $\mathbb{E} G_t^{(i)}$ do not become null:

$$r_t^{(w)} \triangleq \frac{\Omega_t^2}{\Lambda_t^2} = \frac{\sum_{u=0}^{t} \mu^{2(t-u)} \sigma_u^2}{\sum_{u=0}^{t} \mu^{2(t-u)} \big( \lambda_u^2 + s_u \big)}$$

where the expected "straightness" of the gradient trajectory at step u is defined by:

$$s_u \triangleq 2 \sum_{v=0}^{u-1} \mu^{u-v}\, \mathbb{E} G_u \cdot \mathbb{E} G_v$$

$s_u$ quantifies what is intuitively the curvature of the gradient trajectory. Straight trajectories can make $s_u$ grow up to $(1-\mu)^{-1} > 1$ times the expected squared norm of the honest gradients, while highly "curved" trajectories (e.g. close to a local minimum) can make $s_u$ negative.

This observation stresses that this formulation of momentum can sometimes be harmful for the purpose of Byzantine resilience. We measured $s_u$ for every step $u > 0$ in our experiments, and we always observed that this quantity is positive and increases for a short window of (dozens of) steps (depending on $\alpha_t$), and then oscillates between positive and negative values. While the empirical impact (decreased or cancelled loss in accuracy) is concrete, we believe there is room for further improvements, as discussed in Section 5.

The purpose of using momentum at the workers is to reduce the variance-norm ratio $r_t^{(w)}$, compared to $r_t^{(s)}$.
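Both quantities at play here are easy to estimate from logged per-worker gradients. Below is a minimal NumPy sketch computing the empirical variance-norm ratio of a set of honest gradients and the straightness s_u from a sequence of (estimated) real gradients; it is an illustrative monitoring tool, not the authors' instrumentation.

import numpy as np

def variance_norm_ratio(honest_grads):
    # honest_grads: (n_honest, d) gradients from honest workers at one step.
    mean = honest_grads.mean(axis=0)
    variance = np.mean(np.sum((honest_grads - mean) ** 2, axis=1))
    return variance / np.sum(mean ** 2)

def straightness(real_grads, u, mu=0.99):
    # s_u = 2 * sum_{v<u} mu^(u-v) * <EG_u, EG_v>, with real_grads[v] ~ EG_v.
    return 2.0 * sum(mu ** (u - v) * np.dot(real_grads[u], real_grads[v])
                     for v in range(u))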
Since g (i) 0 = G (i) 0 , we verify that r (u) 0 = r (w) 0 . Then ∀t > 0, assuming Ωt−1 > 0 and σt > 0, we have:\nr (w) t ≤ r (s) t ⇔\nσt 2 + µ2 Ωt−1\n2\nλt 2 + st + µ2 Λt−1\n2 ≤ σt\n2\nλt 2\n⇔ µ2 Ωt−1 2 λt 2 ≤ ( st + µ 2 Λt−1 2 ) σt 2\n⇔ st ≥ µ2 Λt−1 2\n( r (w) t−1\nr (s) t\n− 1 ) (9)\nThe condition for decreasing r (w) t can be obtained similarly, assuming Ωt−1 > 0 and σt > 0:\nr (w) t ≤ r (w) t−1 ⇔ st ≥ λt 2\n( r (s) t\nr (w) t−1\n− 1\n)\nTo study the impact of a lower learning rate αt on st, we will assume that the real gradient ∇Q is l-Lipschitz. Namely:\n∀ (t, u) ∈ N2, u < t, ‖EGt − EGu‖ 2 ≤ l2 ‖θt − θu‖ 2 ≤ l2 ∥∥∥∥∥ t−1∑ v=u αv Gv ∥∥∥∥∥ 2\nThen, ∀ (t, u) ∈ N2, u < t, we can rewrite: ‖EGt − EGu‖ 2 = ‖EGt‖\n2︸ ︷︷ ︸ λt 2 + ‖EGu‖ 2︸ ︷︷ ︸ λu 2 −2 EGt · EGu\nAnd finally, we can lower-bound st as: t−1∑ u=0 µt−u ‖EGt − EGu‖ 2\n= t−1∑ u=0 µt−u ( λt 2 + λu 2 ) − 2 t−1∑ u=0\nµt−u EGt · EGu︸ ︷︷ ︸ st\n≤ t−1∑ u=0 µt−u l2 ∥∥∥∥∥ t−1∑ v=u αv Gv ∥∥∥∥∥ 2\n⇔ st ≥ t−1∑ u=0 µt−u λt2 + λu2 − l2 ∥∥∥∥∥ t−1∑ v=u αv Gv ∥∥∥∥∥ 2 (10)\n≥ 1− µ t\n1− µ λt\n2 + t−1∑ u=0 µt−u λu2 − l2 ∥∥∥∥∥ t−1∑ v=u αv Gv ∥∥∥∥∥ 2 \nWhen the real gradient ∇Q is (locally) Lipschitz continuous, reducing the learning rate αt can suffice to ensure st satisfies the conditions laid above for decreasing the variance-norm ratio r (w) t ; the purpose of momentum at the workers. Importantly this last lower bound, namely Equation 10, sets how the practitioner should choose two hyperparameters, µ and αt, for the purpose of Byzantine-resilience. Basically, as long as it does not harm the training without adversary, µ should be set as high and αt as low as possible." }, { "heading": "C Reproducing the results", "text": "Our contributed code is available at https://github.com/LPD-EPFL/ByzantineMomentum, or as a ZIP archive from OpenReview (https://openreview.net/forum?id=H8UHdhWG6A3).\nSoftware dependencies. Python 3.7.3 has been used, over several GNU/Linux distributions (Debian 10, Ubuntu 18). Besides the standard libraries associated with Python 3.7.3, our scripts also depend on8:\nLibrary Version numpy 1.19.1 torch 1.6.0 torchvision 0.7.0 pandas 1.1.0 matplotlib 3.0.2 PIL 7.2.0\nLibrary Version requests 2.21.0 urllib3 1.24.1 chardet 3.0.4 certifi 2018.08.24 idna 2.6 six 1.15.0\nLibrary Version pytz 2020.1 dateutil 2.8.1 pyparsing 2.2.0 cycler 0.10.0 kiwisolver 1.0.1 cffi 1.13.2\nHardware dependencies. We list below the hardware components used:\n• 1 Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz • 2 Nvidia GeForce GTX 1080 Ti • 64 GB of RAM\nC.1 Command\nOur results are reproducible in one command. In the root directory of the ZIP file:\n$ python3 reproduce.py\nOn our hardware, reproducing the results takes a bit less than a week. Please be aware this script will require non-negligible disk space: 2.1 GB of run data, and 132 MB of graphs.\nDepending on the hardware, instructing the script to launch several runs per available GPU may reduce the total runtime. For instance, to push up to 4 concurrent runs per GPU:\n$ python3 reproduce.py --supercharge 4" }, { "heading": "D Larger models", "text": "To assess our method on even larger models, we consider the “wide-resnet” model family implemented by Kim (2020). We use the same model-specific parameters as the ones proposed by the original author, namely: 28 (depth), 10 (widen factor), 0.3 (dropout rate), and 10 output classes (for CIFAR-10). 
This model contains 36 489 290 trainable parameters, almost 28 times more than the 1 310 922 trainable parameters of the convolutional model.\nWe employ the same hyperparameters as in our main experiments with the convolutional model (Section 4.1), except for the number of workers (set to n = 11), the mini-batch size per worker (set to 20), and the learning rate schedule (0.02 for t < 8000, 0.004 for 8000 ≤ t < 16000, 0.0008 for t ≥ 16000). The training procedure lasts for 20 000 steps and only employs Nesterov momentum, as proposed by the original author (Kim, 2020). We report the maximum observed top-1 cross-accuracy in Figure 6 and the evolution of the top-1 cross-accuracy in Figures 7 and 8.\nThese results are also reproducible in one command. In the root directory of the ZIP file:\n$ python3 reproduce-appendix.py\nOn our hardware, reproducing these results takes several weeks. Some of the 6 presented GARs could not be employed, as they repeatedly trigger out-of-memory errors on our GPUs.\n8This list was automatically generated (see get_loaded_dependencies() in tools/misc.py)." }, { "heading": "E More experimental results", "text": "This section reports on the entirety of the main experiments, completing Section 4 of the main paper. For every model-dataset pair, the following parameters vary:\n• Which attack: Baruch et al. (2019) or Xie et al. (2019a) • Which defense: Krum, Median, Trimmed Mean, Phocas, MeaMed, or Bulyan • How many Byzantine workers (a half or a quarter) • Where momentum is computed (server or workers) • Which flavor of momentum is used (classical or Nesterov) • Which learning rate is used (larger or smaller)\nEvery possible combination is tested9, leading to a total of 736 different experiment setups. Each setup is tested 5 times, each run with a fixed seed from 1 to 5, enabling verbatim reproduction of our results10. In this specific section, we report on:\n• the maximum observed top-1 cross-accuracy with each of the 6 studied GARs, • the evolution of the average and standard deviation of the top-1 cross-accuracy for every tested setup.\nThe results regarding the maximum observed top-1 cross-accuracy are laid out in “blocks” of 4 experiment setups, among which only the flavor of momentum and the attack used differ. Namely: [a] classical momentum under attack from Baruch et al. (2019), [b] Nesterov momentum under attack from Baruch et al. (2019), [c] classical momentum under attack from Xie et al. (2019a), [d] Nesterov momentum under attack from Xie et al. (2019a).\nThe results regarding the evolution of the top-1 cross-accuracy are laid out in “blocks” of 4 experiment setups sharing the same model, dataset, learning rate schedule, number of Byzantine workers, and attack. These results are presented from Figure 25 to Figure 56.\n9Along with baselines using averaging without attack. 10Despite our best efforts, there may still exist minor sources of non-determinism, like race conditions in the evaluation of certain functions (e.g., parallel additions) on a GPU. Nevertheless, we believe these should not affect the results in any significant way." } ]
2021
Distributed Momentum for Byzantine-resilient Stochastic Gradient Descent
SP:5c55df2071e1ac64d07205434359816ed60f92f9
[ "The paper proposed Online Push-Sum (OPS) method, which aims at solving decentralized federated optimization problems under a social network scenario where the centralized authority does not exist in a federated learning (FL) system. A social network application scenario is assumed by OPS where the graph is of single-sided trust. The author further extends the proposed OPS method to the online setting and provide regret analysis. The experimental study indicates that OPS is effective and converges faster than other decentralized online methods." ]
Federated learning has become increasingly important for modern machine learning, especially for data privacy-sensitive scenarios. Existing federated learning mostly adopts the central server-based architecture or centralized architecture. However, in many social network scenarios, centralized federated learning is not applicable (e.g., a central agent or server connecting all users may not exist, or the communication cost to the central server is not affordable). In this paper, we consider a generic setting: 1) the central server may not exist, and 2) the social network is unidirectional or of single-sided trust (i.e., user A trusts user B but user B may not trust user A). We propose a central server free federated learning algorithm, named Online Push-Sum (OPS) method, to handle this challenging but generic scenario. A rigorous regret analysis is also provided, which shows interesting results on how users can benefit from communication with trusted users in the federated learning scenario. This work builds upon the fundamental algorithm framework and theoretical guarantees for federated learning in the generic social network scenario.
[]
[ { "authors": [ "Y. Aono", "T. Hayashi", "L. Wang", "S Moriai" ], "title": "Privacy-preserving deep learning via additively homomorphic encryption", "venue": "IEEE Transactions on Information Forensics and Security,", "year": 2017 }, { "authors": [ "M. Assran", "N. Loizou", "N. Ballas", "M. Rabbat" ], "title": "Stochastic gradient push for distributed deep learning", "venue": "arXiv preprint arXiv:1811.10792.", "year": 2018 }, { "authors": [ "M. Assran", "M. Rabbat" ], "title": "Asynchronous subgradient-push", "venue": "arXiv preprint arXiv:1803.08950.", "year": 2018 }, { "authors": [ "A. Bellet", "R. Guerraoui", "M. Taziki", "M. Tommasi" ], "title": "Personalized and private peer-to-peer machine learning", "venue": "arXiv preprint arXiv:1705.08435.", "year": 2017 }, { "authors": [ "A. Bellet", "R. Guerraoui", "M. Taziki", "M. Tommasi" ], "title": "Personalized and private peer-topeer machine learning", "venue": "International Conference on Artificial Intelligence and Statistics, pages 473–481.", "year": 2018 }, { "authors": [ "S. Caldas", "V. Smith", "A. Talwalkar" ], "title": "Federated Kernelized Multi-Task Learning", "venue": "The Conference on Systems and Machine Learning, page 3.", "year": 2018 }, { "authors": [ "J.C. Duchi", "A. Agarwal", "M.J. Wainwright" ], "title": "Dual averaging for distributed optimization: Convergence analysis and network scaling", "venue": "IEEE Transactions on Automatic control, 57(3):592–606.", "year": 2011 }, { "authors": [ "B. Gharesifard", "J. Cortés" ], "title": "When does a digraph admit a doubly stochastic adjacency matrix? In Proceedings of the 2010 American Control Conference, pages 2440–2445", "venue": "IEEE.", "year": 2010 }, { "authors": [ "E Hazan" ], "title": "Introduction to online convex optimization", "venue": "Foundations and Trends R", "year": 2016 }, { "authors": [ "L. He", "A. Bian", "M. Jaggi" ], "title": "Cola: Decentralized linear learning", "venue": "Advances in Neural Information Processing Systems, pages 4541–4551.", "year": 2018 }, { "authors": [ "M. Jaggi", "V. Smith", "M. Takác", "J. Terhorst", "S. Krishnan", "T. Hofmann", "M.I. Jordan" ], "title": "Communication-efficient distributed dual coordinate ascent", "venue": "Advances in neural information processing systems, pages 3068–3076.", "year": 2014 }, { "authors": [ "M. Kamp", "M. Boley", "D. Keren", "A. Schuster", "I. Sharfman" ], "title": "Communication-efficient distributed online prediction by dynamic model synchronization", "venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 623–639. Springer.", "year": 2014 }, { "authors": [ "J. Konečnỳ", "H.B. McMahan", "F.X. Yu", "P. Richtárik", "A.T. Suresh", "D. Bacon" ], "title": "Federated learning: Strategies for improving communication efficiency", "venue": "arXiv preprint arXiv:1610.05492.", "year": 2016 }, { "authors": [ "J. Konečný", "B. McMahan", "D. Ramage" ], "title": "Federated Optimization:Distributed Optimization Beyond the Datacenter", "venue": "arXiv:1511.03575 [cs, math]. arXiv: 1511.03575.", "year": 2015 }, { "authors": [ "J. Konečný", "H.B. McMahan", "F.X. Yu", "P. Richtárik", "A.T. Suresh", "D. Bacon" ], "title": "Federated Learning: Strategies for Improving Communication Efficiency", "venue": "arXiv:1610.05492 [cs]. arXiv: 1610.05492.", "year": 2016 }, { "authors": [ "S. Lee", "A. Nedić", "M. 
Raginsky" ], "title": "Coordinate dual averaging for decentralized online optimization with nonseparable global objectives", "venue": "IEEE Transactions on Control of Network Systems, 5(1):34–44.", "year": 2016 }, { "authors": [ "Y. Li", "M. Yu", "S. Li", "S. Avestimehr", "N.S. Kim", "A. Schwing" ], "title": "Pipe-sgd: A decentralized pipelined sgd framework for distributed deep net training", "venue": "Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, Advances in Neural Information Processing Systems 31, pages 8056–8067. Curran Associates, Inc.", "year": 2018 }, { "authors": [ "X. Lian", "C. Zhang", "H. Zhang", "Hsieh", "C.-J.", "W. Zhang", "J. Liu" ], "title": "Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent", "venue": "Advances in Neural Information Processing Systems, pages 5330–5340.", "year": 2017 }, { "authors": [ "X. Lian", "W. Zhang", "C. Zhang", "J. Liu" ], "title": "Asynchronous decentralized parallel stochastic gradient descent", "venue": "International Conference on Machine Learning.", "year": 2018 }, { "authors": [ "T. Lin", "S.U. Stich", "M. Jaggi" ], "title": "Don’t Use Large Mini-Batches, Use Local SGD", "venue": "arXiv:1808.07217 [cs, stat]. arXiv: 1808.07217.", "year": 2018 }, { "authors": [ "C. Ma", "V. Smith", "M. Jaggi", "M.I. Jordan", "P. Richtárik", "M. Takáč" ], "title": "Adding vs", "venue": "Averaging in Distributed Primal-Dual Optimization. arXiv:1502.03508 [cs]. arXiv: 1502.03508.", "year": 2015 }, { "authors": [ "B. McMahan", "D. Ramage" ], "title": "Google AI Blog: Federated Learning: Collaborative Machine Learning without Centralized Training Data", "venue": null, "year": 2017 }, { "authors": [ "H.B. McMahan", "E. Moore", "D. Ramage", "S. Hampson", "Arcas", "B.A. y." ], "title": "CommunicationEfficient Learning of Deep Networks from Decentralized Data", "venue": "arXiv:1602.05629 [cs]. arXiv: 1602.05629.", "year": 2016 }, { "authors": [ "M. Nasr", "R. Shokri", "A. Houmansadr" ], "title": "Comprehensive privacy analysis of deep learning: Stand-alone and federated learning under passive and active white-box inference attacks", "venue": "arXiv preprint arXiv:1812.00910.", "year": 2018 }, { "authors": [ "A. Nedić", "A. Olshevsky" ], "title": "Distributed optimization over time-varying directed graphs", "venue": "IEEE Transactions on Automatic Control, 60(3):601–615.", "year": 2014 }, { "authors": [ "A. Nedić", "A. Olshevsky" ], "title": "Distributed optimization over time-varying directed graphs", "venue": "IEEE Transactions on Automatic Control, 60(3):601–615.", "year": 2015 }, { "authors": [ "A. Nedić", "A. Olshevsky" ], "title": "Stochastic gradient-push for strongly convex functions on time-varying directed graphs", "venue": "IEEE Transactions on Automatic Control, 61(12):3936–3947.", "year": 2016 }, { "authors": [ "S.S. Ram", "A. Nedić", "V.V. Veeravalli" ], "title": "Distributed stochastic subgradient projection algorithms for convex optimization", "venue": "Journal of optimization theory and applications, 147(3):516– 545.", "year": 2010 }, { "authors": [ "S. Shahrampour", "A. Jadbabaie" ], "title": "Distributed online optimization in dynamic environments using mirror descent", "venue": "IEEE Transactions on Automatic Control, 63(3):714–725.", "year": 2017 }, { "authors": [ "S Shalev-Shwartz" ], "title": "Online learning and online convex optimization", "venue": "Foundations and Trends R", "year": 2012 }, { "authors": [ "S. 
Shalev-Shwartz", "T. Zhang" ], "title": "Stochastic dual coordinate ascent methods for regularized loss minimization", "venue": "Journal of Machine Learning Research, 14(Feb):567–599.", "year": 2013 }, { "authors": [ "Z. Shen", "A. Mokhtari", "T. Zhou", "P. Zhao", "H. Qian" ], "title": "Towards more efficient stochastic decentralized learning: Faster convergence and sparse communication", "venue": "Dy, J. and Krause, A., editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4624–4633, Stockholmsmässan, Stockholm Sweden. PMLR.", "year": 2018 }, { "authors": [ "R. Shokri", "V. Shmatikov" ], "title": "Privacy-preserving deep learning", "venue": "Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, pages 1310–1321. ACM.", "year": 2015 }, { "authors": [ "V. Smith", "Chiang", "C.-K.", "M. Sanjabi", "A.S. Talwalkar" ], "title": "Federated multi-task learning", "venue": "Advances in Neural Information Processing Systems, pages 4424–4434.", "year": 2017 }, { "authors": [ "V. Smith", "Chiang", "C.-K.", "M. Sanjabi", "A.S. Talwalkar" ], "title": "Federated multi-task learning", "venue": "Advances in Neural Information Processing Systems, pages 4424–4434.", "year": 2017 }, { "authors": [ "V. Smith", "S. Forte", "C. Ma", "M. Takac", "M.I. Jordan", "M. Jaggi" ], "title": "CoCoA: A General Framework for Communication-Efficient Distributed Optimization", "venue": "arXiv:1611.02189 [cs]. arXiv: 1611.02189.", "year": 2016 }, { "authors": [ "S.U. Stich" ], "title": "Local SGD Converges Fast and Communicates", "venue": null, "year": 2018 }, { "authors": [ "K.I. Tsianos", "S. Lawlor", "M.G. Rabbat" ], "title": "Push-sum distributed dual averaging for convex optimization", "venue": "2012 IEEE 51st IEEE Conference on Decision and Control (CDC), pages 5453– 5458. IEEE.", "year": 2012 }, { "authors": [ "K.I. Tsianos", "M.G. Rabbat" ], "title": "Distributed dual averaging for convex optimization under communication delays", "venue": "2012 American Control Conference (ACC), pages 1067–1072. IEEE.", "year": 2012 }, { "authors": [ "P. Vanhaesebrouck", "A. Bellet", "M. Tommasi" ], "title": "Decentralized collaborative learning of personalized models over networks", "venue": "International Conference on Artificial Intelligence and Statistics (AISTATS).", "year": 2017 }, { "authors": [ "J. Wang", "G. Joshi" ], "title": "Cooperative SGD: A unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms", "venue": null, "year": 2018 }, { "authors": [ "T. Wu", "K. Yuan", "Q. Ling", "W. Yin", "A.H. Sayed" ], "title": "Decentralized consensus optimization with asynchrony and delays", "venue": "IEEE Transactions on Signal and Information Processing over Networks, PP:1–1.", "year": 2017 }, { "authors": [ "Q. Yang", "Y. Liu", "T. Chen", "Y. Tong" ], "title": "Federated machine learning: Concept and applications", "venue": "ACM Transactions on Intelligent Systems and Technology (TIST), 10(2):12.", "year": 2019 }, { "authors": [ "T. Yang" ], "title": "Trading computation for communication: Distributed stochastic dual coordinate ascent", "venue": "Advances in Neural Information Processing Systems, pages 629–637.", "year": 2013 }, { "authors": [ "T. Yang", "S. Zhu", "R. Jin", "Y. Lin" ], "title": "Analysis of distributed stochastic dual coordinate ascent", "venue": "arXiv preprint arXiv:1312.1031.", "year": 2013 }, { "authors": [ "H. Yu", "S. Yang", "S. 
Zhu" ], "title": "Parallel Restarted SGD with Faster Convergence and Less Communication: Demystifying Why Model Averaging Works for Deep Learning", "venue": "arXiv:1807.06629 [cs, math]. arXiv: 1807.06629.", "year": 2018 }, { "authors": [ "Y. Zhao", "C. Yu", "P. Zhao", "J. Liu" ], "title": "Decentralized online learning: Take benefits from others’ data without sharing your own to track global trend", "venue": "arXiv preprint arXiv:1901.10593.", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Federated learning has been well recognized as a framework able to protect data privacy Konečnỳ et al. (2016); Smith et al. (2017a); Yang et al. (2019). State-of-the-art federated learning adopts the centralized network architecture where a centralized node collects the gradients sent from child agents to update the global model. Despite its simplicity, the centralized method suffers from communication and computational bottlenecks in the central node, especially for federated learning, where a large number of clients are usually involved. Moreover, to prevent reverse engineering of the user’s identity, a certain amount of noise must be added to the gradient to protect user privacy, which partially sacrifices the efficiency and the accuracy Shokri and Shmatikov (2015).\nTo further protect the data privacy and avoid the communication bottleneck, the decentralized architecture has been recently proposed Vanhaesebrouck et al. (2017); Bellet et al. (2018), where the centralized node has been removed, and each node only communicates with its neighbors (with mutual trust) by exchanging their local models. Exchanging local models is usually favored to the data privacy protection over sending private gradients because the local model is the aggregation or mixture of quite a large amount of data while the local gradient directly reflects only one or a batch of private data samples. Although advantages of decentralized architecture have been well recognized over the state-of-the-art method (its centralized counterpart), it usually can only be run on the network with mutual trusts. That is, two nodes (or users) can exchange their local models only if they trust each other reciprocally (e.g., node A may trust node B, but if node B does not trust node A, they cannot communicate). Given a social network, one can only use the edges with mutual trust to run decentralized federated learning algorithms. Two immediate drawbacks will be: (1) If all mutual trust edges do not form a connected network, the federated learning does not apply; (2) Removing all single-sided edges from the communication network could significantly reduce the efficiency of communication. These drawbacks lead to the question: How do we effectively utilize the single-sided trust edges under decentralized federated learning framework?\nIn this paper, we consider the social network scenario, where the centralized network is unavailable (e.g., there does not exist a central node that can build up the connection with all users, or the centralized communication cost is not affordable). We make a minimal assumption on the social\nnetwork: The data may come in a streaming fashion on each user node as the federated learning algorithm runs; the trust between users may be single-sided, where user A trusts user B, but user B may not trust user A (“trust” means “would like to send information to”).\nFor the setting mentioned above, we develop a decentralized learning algorithm called online pushsum (OPS) which possesses the following features:\n• Only models rather than local gradients are exchanged among clients in our algorithm. This scheme can reduce the risk of exposing clients’ data privacy Aono et al. (2017).\n• Our algorithm removes some constraints imposed by typical decentralized methods, which makes it more flexible in allowing arbitrary network topology. 
Each node only needs to know its out neighbors instead of the global topology.\n• We provide the rigorous regret analysis for the proposed algorithm and specifically distinguish two components in the online loss function: the adversary component and the stochastic component, which can model clients’ private data and internal connections between clients, respectively.\nNotation We adopt the following notation in this paper:\n• For random variable ξ(i)t subject to distribution D (i) t , we use Ξn,T and Dn,T to denote the\nset of random variables and distributions, respectively: Ξn,T = { ξ (i) t } 1≤i≤n,1≤t≤T , Dn,T = { D (i) t } 1≤i≤n,1≤t≤T .\nNotation Ξn,T ∼ Dn,T implies ξ(i)t ∼ D (i) t for any i ∈ [n] and t ∈ [T ].\n• For a decentralized network with n nodes, we use W ∈ Rn×n to present the confusion matrix, where Wij ≥ 0 is the weight that node i sends to node j (i, j ∈ [n]). N outi = {j ∈ [n] : Wij > 0} and N ini = {k ∈ [n] : Wki > 0} are also used for denoting the sets of in neighbors of and out neighbors of node i respectively.\n• Norm ‖ · ‖ denotes the `2 norm ‖ · ‖2 by default." }, { "heading": "2 RELATED WORK", "text": "The concept of federated learning was first proposed in McMahan et al. (2016), which advocates a novel learning setting that learns a shared model by aggregating locally-computed gradient updates without centralizing distributed data on devices. Early examples of research into federated learning also include Konečný et al. (2015; 2016), and a widespread blog article posted by Google AI McMahan and Ramage (2017). To address both statistical and system challenges, Smith et al. (2017b) and Caldas et al. (2018) propose a multi-task learning framework for federated learning and its related optimization algorithm, which extends early works SDCA Shalev-Shwartz and Zhang (2013); Yang (2013); Yang et al. (2013) and COCOA Jaggi et al. (2014); Ma et al. (2015); Smith et al. (2016) to the federated learning setting. Among these optimization methods, Federated Averaging (FedAvg), proposed by McMahan et al. (2016), beats conventional synchronized mini-batch SGD regarding communication rounds as well as converges on non-IID and unbalanced data. Recent rigorous theoretical analysis Stich (2018); Wang and Joshi (2018); Yu et al. (2018); Lin et al. (2018) shows that FedAvg is a special case of averaging periodic SGD (also called “local SGD”) which allows nodes\nto perform local updates and infrequent synchronization between them to communicate less while converging quickly. However, they cannot be applied to the single-sided trust network (asymmetric topology matrix).\nDecentralized learning is a typical parallel strategy where each worker is only required to communicate with its neighbors, which means the communication bottleneck (in the parameter server) is removed. It has already been proved that decentralized learning can outperform the traditional centralized learning when the worker number is comparably large under a poor network condition Lian et al. (2017). There are two main types of decentralized learning algorithms: fixed network topology He et al. (2018), and time-varying Nedić and Olshevsky (2015); Lian et al. (2018) during training. Wu et al. (2017); Shen et al. (2018) shows that the decentralized SGD would converge with a comparable convergence rate to the centralized algorithm with less communication to make large-scale model training feasible. Li et al. 
(2018) provides a systematic analysis of the decentralized learning pipeline.\nOnline learning has been studied for decades. It is well known that the lower bounds of online optimization methods are O( √ T ) and O(log T ) for convex and strongly convex loss functions respectively Hazan et al. (2016); Shalev-Shwartz et al. (2012). In recent years, due to the increasing volume of data, distributed online learning, especially decentralized methods, has attracted much attention. Examples of these works include Kamp et al. (2014); Shahrampour and Jadbabaie (2017); Lee et al. (2016). Notably, Zhao et al. (2019) shares a similar problem definition and theoretical result as our paper. However, single-sided communication is not allowed in their setting, restricting their results." }, { "heading": "3 PROBLEM SETTING", "text": "In this paper, we consider federated learning with n clients (a.k.a., nodes). Each client can be either an edge server or some other kind of computing device such as smart phone, which has local private data and the local machine learning model xi stored on it. We assume the topological structure of the network of these n nodes can be represented by a directed graph G = (nodes : [n], edges : E) with vertex set [n] = {1, 2, . . . , n} and edge set E ⊂ [n]× [n]. If there exist an edge (u, v) ∈ E, it means node u and node v have network connection and u can directly send messages to v.\nLet x(i)t denote the local model on the i-th node at iteration t. In each iteration, node i receives a new sample and computes a prediction for this new sample according to the current model x(i)t (e.g., it may recommend some items to the user in the online recommendation system). After that, a loss function, fi,t(·) associated with that new sample is received by node i. The typical goal of online learning is to minimize the regret, which is defined as the difference between the summation of the losses incurred by the nodes’ prediction and the corresponding loss of the global optimal model x∗:\nR̃T := T∑ t=1 n∑ i=1 ( fi,t(x (i) t )− fi,t(x∗) ) ,\nwhere x∗ = arg minx ∑T t=1 ∑n i=1 fi,t(x) is the optimal solution.\nHowever, here we consider a more general online setting: the loss function of the i-th node at iteration t is fi,t(·; ξi,t), which is additionally parametrized by a random variable ξi,t. This ξi,t is drawn from the distribution Di,t, and is mutually independent in terms of i and t, and we call this part as the stochastic component of loss function fi,t(·; ξi,t). The stochastic component can be utilized to characterize the internal randomness of nodes’ data, and the potential connection among different nodes. For example, music preference may be impacted by popular trends on the Internet, which can be formulated by our model by letting Di,t ≡ Dt for all i ∈ [n] with some time-varying distribution Dt. On the other hand, function fi,t(·; ·) is the adversarial component of the loss, which may include, for example, user’s profile, location, etc. Therefore, the objective regret naturally becomes the expectation of all the past losses:\nRT := E Ξn,T∼Dn,T { T∑ t=1 n∑ i=1 ( fi,t(x (i) t ; ξ (i) t )− fi,t(x∗; ξ (i) t ) )}\n(1)\nwith x∗ = arg minx EΞn,T∼Dn,T ∑T t=1 ∑n i=1 fi,t(x; ξ (i) t ).\nOne benefit of the above formulation is that it partially resolves the non-I.I.D. issue in federated learning. 
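As a toy illustration of this decomposition (a sketch of ours, not from the paper; the quadratic loss and the Gaussian trend are assumptions), each node's loss below mixes a fixed, node-specific adversarial part with a stochastic sample drawn from a shared time-varying distribution Dt, mirroring the popular-trend example:

import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 5

user_bias = rng.normal(size=(n, d))   # adversarial component: fixed, node-specific data

def sample_xi(t):
    # stochastic component: xi_{i,t} ~ D_t, with D_{i,t} = D_t identical across nodes
    trend = np.sin(0.1 * t) * np.ones(d)           # a popular trend shared by all users
    return trend + rng.normal(scale=0.1, size=d)

def loss(i, t, x):
    # f_{i,t}(x; xi_{i,t}) = 0.5 * ||x - (user_bias_i + xi_{i,t})||^2
    return 0.5 * float(np.sum((x - (user_bias[i] + sample_xi(t))) ** 2))

x = np.zeros(d)
print(sum(loss(i, 0, x) for i in range(n)))   # total loss over all nodes at round t = 0

Here only the stochastic part is shared across nodes, so communication can help track the common trend, while each per-user bias cannot be learned from other nodes; this matches the discussion above.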
A fundamental assumption in many traditional distributed machine learning methods is that the data samples stored on all nodes are I.I.D., which fails to hold for federated learning since the data on each user’s device is highly correlated to that user’s preferences and habits. However, our formulation does not require the I.I.D. assumption to hold for the adversarial component at all. Even though the random samples for the stochastic component still need to be independent, they are allowed to be drawn from different distributions.\nFinally, one should note that online optimization also includes stochastic optimization (i.e., data samples are drawn from a fixed distribution) and offline optimization (i.e., data are already collected before optimization begins) as its typical cases Shalev-Shwartz et al. (2012). Hence, our setting covers a wide range of applications." }, { "heading": "4 ONLINE PUSH-SUM ALGORITHM", "text": "In this section, we define the construction of the confusion matrix and introduce the proposed algorithm." }, { "heading": "4.1 CONSTRUCTION OF CONFUSION MATRIX", "text": "One important parameter of the algorithm is the confusion matrix W. W is a matrix depending on the network topology G, which means Wij = 0 if there is no directed edge (i, j) in G. If the value of Wij is large, the node i will have a stronger impact on node j. However, W still allows flexibility where users can specify their weights associated with existing edges, meaning that even if there is a physical connection between two nodes, the nodes can decide against using the channel. For example, even if (i, j) ∈ E, user still can set Wij = 0 if user i thinks node j is not trustworthy and therefore chooses to exclude the channel from i to j.\nOf course, there are still some constraints over W. W must be a row stochastic matrix (i.e., each entry in W is non-negative, and the summation of each row is 1). This assumption is different from the one in classical decentralized distributed optimization, which typically assumes W is symmetric and doubly stochastic (e.g., Duchi et al. (2011)) (i.e., the summations of both rows and columns are all 1). Such a requirement is quite restrictive, because not all networks admit a doubly stochastic matrix (Gharesifard and Cortés (2010)), and relinquishing double stochasticity can introduce bias in optimization Ram et al. (2010); Tsianos and Rabbat (2012). As a comparison, our assumption that W is row stochastic will avoid such concerns since any non-negative matrix with at least one positive entry on each row (which is already implied by the connectivity of the graph) can be easily normalized into row stochastic. The relaxation of this assumption is crucial for federated learning, considering that the federated learning system usually involves complex network topology due to its large number of clients. Moreover, since each node only needs to make sure the summation of its out-weights is 1, there is no need for it to be aware of the global network topology, which significantly benefits the implementation of the federated learning system. Meanwhile, requiring W to be symmetric rules out the possibility of using asymmetric network topology and adopting sing-sided trust, while our method does not have such restriction." }, { "heading": "4.2 ALGORITHM DESCRIPTION", "text": "The proposed online push-sum algorithm is presented in Algorithm 1. The algorithm design mainly follows the pattern of push-sum algorithm Tsianos et al. 
(2012), but here we further generalize it into the online setting.\nThe algorithm mainly consists of three steps:\n1. Local update: each client i applies the current local model x(i)t to obtain the loss function, based on which an intermediate local model z(i)\nt+ 12 is computed;\n2. Push: the weighted variable Wijz (i)\nt+ 12 is sent to j for all its out neighbors j;\n3. Sum: all the received Wjiz (j) t+ 12 is summed and normalized to obtain the new model x(i)t+1.\nAlgorithm 1 Online Push-Sum (OPS) Algorithm\nRequire: Learning rate γ, number of iterations T , and the confusion matrix W.\n1: Initialize x(i)0 = z (i) 0 = 0, ω (i) 0 = 1 for all\ni ∈ [n] 2: for t = 0, 1, ..., T − 1 do 3: // For all users (say the i-th node i ∈\n[n]) 4: Apply local model x(i)t and suffer loss fi,t(x (i) t ; ξ (i) t ) 5: Locally computes the intermedia variable\nz (i)\nt+ 12 = z\n(i) t − γ∇fi,t\n( x\n(i) t ; ξ (i) t\n)\n6: Send ( Wijz (i)\nt+ 12 ,Wijω\n(i) t\n) to all j ∈\nN outi 7: Update\nz (i) t+1 = ∑ k∈N ini Wkiz (k) t+ 12\nω (i) t+1 = ∑ k∈N ini Wkiω (k) t\nx (i) t+1 =\nz (i) t+1 ω (i) t+1\n8: end for 9: return x(i)T to node i\nIt should be noted an auxiliary variables z(i) t+ 12 and z(i)t+1 are used in the algorithm. Actually, they are used in the algorithm to clarify the description but may be easily removed in the practical implementation. Besides, another variable ω(i)t+1 is also introduced, which is the normalizing factor of z\n(i) t+1. ω (i) t+1 plays an important role in the push-sum algorithm, since W is not doubly stochastic in our setting, and it is possible that the total weight i receives does not equal to 1. The introduction of the normalizing factor ω(i)t helps the algorithm avoid issues brought by that W is not doubly stochastic. Furthermore, when W becomes doubly stochastic, it can be easily verified that ω(i)t ≡ 1 and x(i)t ≡ z (i) t for any i and t, then Algorithm 1 reduces to the distributed online gradient method proposed by Zhao et al. (2019).\nIn the algorithm, the local data, which is encoded in the gradient fi,t(x (i) t ; ξt) Shokri and Shmatikov (2015), is only utilized in updating local model. What neighboring nodes exchanges are only limited to the local models." }, { "heading": "4.3 REGRET ANALYSIS", "text": "In this subsection, we provide regret bound analysis of OPS algorithm. Due to the limitation of space, the detail proof is deferred to the appendix. For convenience, we first denote\nFi,t(x) := E ξi,t∼Di,t fi,t(x; ξi,t).\nTo carry out the analysis, the following assumptions are required: Assumption 1. We make the following assumptions throughout this paper: (1) The topological graph G is strongly connected; W is row stochastic; (2) For any i ∈ [n] and t ∈ [T ], the loss function fi,t(x; ξi,t) is convex in x; (3) The problem domain is bounded such that for any two vectors x and y we always have ‖x− y‖2 ≤ R; (4) The norm of the expected gradient ∇Fi,t(·) is bounded, i.e., there exist constant G > 0 such that ‖∇Fi,t(x)‖2 ≤ G2 for any i, t and x; (5) The gradient variance is also bounded by σ2, namely,\nE ξi,t∼Di,t ‖∇fi,t(x; ξi,t)−∇Fi,t(x)‖2 ≤ σ2.\nHere constant G provides an upper bound for the adversarial component. On the other hand, σ measures the magnitude of stochasticity brought by the stochastic component. When σ = 0, the problem setting simply reduces back to normal distributed online learning. The strong connectivity assumption is necessary to ensure that the information can be exchanged between any two nodes. 
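Before proceeding to the analysis, we note that Algorithm 1 admits a very compact implementation. The following minimal NumPy sketch is ours and purely illustrative, not the released experiment code: the quadratic local losses and the augmented directed-ring confusion matrix are assumptions chosen to keep the example short.

import numpy as np

rng = np.random.default_rng(0)
n, d, T, gamma = 5, 3, 300, 0.05

# Row stochastic confusion matrix on a directed ring: node i sends to itself and
# to node i+1 (single-sided trust); node 0 additionally trusts node 2, which makes
# W row stochastic but *not* doubly stochastic, so the push-sum weights matter.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.5
W[0, [0, 1, 2]] = 1.0 / 3.0

targets = rng.normal(size=(n, d))   # node i's loss: f_{i,t}(x; xi) = 0.5 * ||x - a_i - xi||^2

Z = np.zeros((n, d))                # row i stores z^(i)_t
omega = np.ones(n)                  # normalizing weights omega^(i)_t
X = np.zeros((n, d))                # row i stores the local model x^(i)_t

for t in range(T):
    grads = (X - targets) + rng.normal(scale=0.1, size=(n, d))  # stochastic gradients at x^(i)_t
    Z_half = Z - gamma * grads      # local update: z^(i)_{t+1/2} = z^(i)_t - gamma * gradient
    Z = W.T @ Z_half                # sum step: z^(i)_{t+1} = sum_k W_{ki} z^(k)_{t+1/2}
    omega = W.T @ omega             # omega^(i)_{t+1} = sum_k W_{ki} omega^(k)_t
    X = Z / omega[:, None]          # x^(i)_{t+1} = z^(i)_{t+1} / omega^(i)_{t+1}

# All local models approach the minimizer of the summed losses (the mean target),
# up to noise on the order of gamma.
print(np.linalg.norm(X - targets.mean(axis=0), axis=1))

Note that only out-weights are needed to build each row of W, and the division by omega implements the normalization that compensates for W not being doubly stochastic.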
As for the convexity and the domain boundedness assumptions, they are quite common in online learning literature, such as Hazan et al. (2016).\nEquipped with these assumptions, now we are ready to present the convergence result:\nTheorem 2. If we set\nγ =\n√ nR\nσ √ 1 + nC2 +G √ nC1T , (2)\nthe regret of OPS can be bounded by: RT ≤ O ( nGR √ T + σR ( 1 + √ nC2 )√ nT ) , (3)\nwhere C1 and C2 are two constants defined in the appendix.\nNote that when n = 1 and σ = 0, where the problem setting just reduces to normal online optimization, the implied regret bound O(GR √ T ) exactly matches the lower bound of online optimization Hazan et al. (2016). Moreover, our result also matches the convergence rate of centralized online learning where q = 0 for fully connected networks. Hence, we can conclude that the OPS algorithm has optimal dependence on T .\nThis bound has a linear dependence on the number of nodes n, but it is easy to understand. First, we have defined the regret to be the summation of the losses on all the nodes. Increasing n makes the regret naturally larger. Second, our federated learning setting is different from the typical distributed learning in that I.I.D. assumption does not hold here. Each node contains distinct local data that may be drawn from totally different distributions. Therefore, adding more nodes is not helpful for decreasing the regret of existing clients.\nMoreover, we also prove that the difference of the model x(i)t on each worker could be bounded using the following theorem:\nTheorem 3. If we set γ as (2), the difference of the model x(i)t on each worker admits a faster convergence rate than regret:\n1\nT n∑ i T∑ t=0 ∥∥∥x(i)t+1 − zt+1∥∥∥2 ≤O(nGR+ nRσT ) .\nHence, the models on all clients’ devices will finally converge to the same one with rate O(1/T )." }, { "heading": "4.4 PRIVACY PROTECTION", "text": "Our proposed algorithm has several advantages concerning privacy protection.\nFirst, as we have mentioned, OPS runs in a decentralized way and exchanges models instead of gradients or training samples, which is already proven effective for reducing the risk of privacy leakage Bellet et al. (2017). Second, OPS runs in a decentralized and asymmetric fashion. These properties create difficulties for many attacking methods such as Nasr et al. (2018). In order to infer the data of other clients, the attacker needs to know the reactions of other nodes after the attack is injected, which is impossible when the connections are single-sided. Even though the attack will spread among the whole network and finally return to the attacker, it is still hard for the attacker to distinguish whether the information he receives from its neighbors is already affected by the attack or not, since he is unaware of the global topology." }, { "heading": "5 EXPERIMENTS", "text": "We compare the performance of our proposed Online Push-Sum (OPS) method with that of Decentralized Online Gradient method (DOL) and Centralized Online Gradient method (COL), and then evaluate the effectiveness of OPS in different network size and network topology density settings." }, { "heading": "5.1 IMPLEMENTATION AND SETTINGS", "text": "We consider online logistic regression with squared `2 norm regularization:\nfi,t (x; ξi,t) = log ( 1 + exp ( −yi,tA>i,tx )) + λ\n2 ‖x‖2,\nwhere regularization coefficient λ is set to 10−4. ξi,t is the stochastic component of the function fi,t introduced in Section § 3, which is encoded in the random data sample (Ai,t,yi,t). 
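In code, this per-round loss and its gradient look as follows (a short sketch of ours for illustration; the actual released implementation is in the supplementary materials):

import numpy as np

lam = 1e-4  # regularization coefficient lambda, as set above

def loss(x, a, y):
    # f_{i,t}(x; xi_{i,t}) = log(1 + exp(-y * a^T x)) + (lam / 2) * ||x||^2
    return float(np.log1p(np.exp(-y * (a @ x))) + 0.5 * lam * (x @ x))

def grad(x, a, y):
    # gradient: -y * sigmoid(-y * a^T x) * a + lam * x
    s = 1.0 / (1.0 + np.exp(y * (a @ x)))
    return -y * s * a + lam * x

a = np.array([0.5, -1.2, 0.3])   # feature vector A_{i,t}
y = 1.0                          # label y_{i,t} in {-1, +1}
x = np.zeros(3)
x = x - 0.1 * grad(x, a, y)      # one local gradient step, as in Algorithm 1
print(loss(x, a, y))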
We evaluate the learning performance by measuring the average loss\n$\frac{1}{nT}\,\mathbb{E}_{\Xi_{n,T}}\sum_{i=1}^{n}\sum_{t=1}^{T} f_{i,t}(x_{i,t};\xi_{i,t})$,\ninstead of using the dynamic regret (1) directly, since the optimal reference point x∗ is the same for all the methods. The learning rate γ in Algorithm 1 is tuned to be optimal for each dataset separately. The experiment implementation is based on Python 3.7.0, PyTorch 1.2.0, NetworkX 2.3, and scikit-learn 0.20.3. The source code, along with other information concerning the experiments such as the hyper-parameter settings, is provided in the supplementary materials.\nDataset Experiments were run on two real-world public datasets: SUSY1 and Room-Occupancy2. SUSY and Room-Occupancy are both large-scale binary classification datasets, containing 5,000,000 and 20,566 samples, respectively. Each dataset is split into two subsets: the stochastic data and the adversarial data. The stochastic data is generated by allocating a fraction of samples (e.g., 50% of the whole dataset) to nodes randomly and uniformly. The adversarial data is generated by clustering the remaining samples into n clusters and then allocating every cluster to a node. As we analyzed previously, only the scattered stochastic data can boost the model performance through inter-node communication. For each node, this pre-acquired data is transformed into streaming data to simulate online learning.\n1https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#SUSY\n2https://archive.ics.uci.edu/ml/datasets/Occupancy+Detection+" }, { "heading": "5.2 COMPARISON WITH DOL AND COL", "text": "To compare OPS with DOL and COL, network sizes of 128 nodes and 20 nodes are selected for SUSY and Room-Occupancy, respectively. For COL, its confusion matrix W is fully connected (a doubly stochastic matrix). DOL and OPS are run with the same network topology and the same row stochastic matrix (an asymmetric confusion matrix) to maintain a fair comparison. Such an asymmetric confusion matrix is constructed by setting each node's number of neighbors to a random value smaller than a fixed upper bound, while also ensuring the strong connectivity of the whole network (this upper-bound neighbor number is set to 32 for the SUSY dataset, while 10 is set for the Room-Occupancy dataset); a code sketch of this construction is given below. Since DOL typically requires a symmetric, doubly stochastic confusion matrix, DOL is run in two settings for comparison. In the first setting, in order to meet the assumptions of symmetry and double stochasticity, all unidirectional connections are removed from the confusion matrix so that the row stochastic confusion matrix degenerates into a doubly stochastic matrix. This setting is labeled as DOL-Symm in Figure 2. In the other setting, DOL is forced to run on the asymmetric network, where each node naively aggregates its received models without considering whether its sending weights are equal to its receiving weights. DOL-Asymm is used to label this setting in Figure 2.\nAs illustrated in Figure 2, on both datasets, OPS outperforms DOL-Symm under the row stochastic confusion matrix. This demonstrates that incorporating unidirectional communication can help to boost the model performance. In other words, OPS gains better performance in the single-sided trust network under the setting of federated learning. OPS also works better than DOL-Asymm. Although DOL-Asymm utilizes additional unidirectional connections, in some cases its performance is even worse than that of DOL-Symm (e.g., Figure 2a). 
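One simple way to generate the asymmetric, row stochastic confusion matrix described above is sketched here (ours; the directed-ring backbone is an assumed, easy way to guarantee strong connectivity and is not necessarily the construction used in the released code):

import numpy as np

def random_confusion_matrix(n, max_neighbors, seed=0):
    rng = np.random.default_rng(seed)
    W = np.eye(n)                              # every node keeps some weight on itself
    for i in range(n):
        W[i, (i + 1) % n] = 1.0                # directed ring backbone: guarantees strong connectivity
        k = int(rng.integers(0, max_neighbors))  # random number of extra out-neighbors, below the bound
        extra = rng.choice(n, size=k, replace=False)
        W[i, extra] = 1.0                      # additional single-sided edges
    return W / W.sum(axis=1, keepdims=True)    # row-normalize: each node's out-weights sum to 1

W = random_confusion_matrix(n=20, max_neighbors=10)
print(np.allclose(W.sum(axis=1), 1.0))         # True: W is row stochastic
print(np.allclose(W, W.T))                     # generally False: unidirectional edges remain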
The occasional underperformance of DOL-Asymm is most likely attributable to its naive aggregation pattern, which breaks down once the doubly stochastic matrix assumption is removed. These two observations confirm the effectiveness of OPS under a row stochastic confusion matrix, which is consistent with our theoretical analysis.\nComparing Figure 2c and Figure 2d, we also observe that increasing the ratio of the stochastic component makes the average loss (regret) smaller. It is reasonable that OPS achieves slightly worse performance than COL, because OPS works in a sparsely connected network where much less information is exchanged than in COL. We use COL as the baseline in all experiments.\nOnly the number of iterations, instead of the actual running time, is reported in the experiments. Because the centralized method requires more time for each iteration due to network congestion at the central node, OPS usually outperforms COL in terms of running time." }, { "heading": "5.3 EVALUATION ON DIFFERENT NETWORK SIZES", "text": "Figures 3a and 3b summarize the evaluation of OPS on different network sizes (for the SUSY dataset, sizes of 128, 256, 512, and 1024 are used). The upper-bound neighbor number is aligned to the same value across different network sizes to isolate its impact. As we can see, for every dataset, the average loss (regret) curves for different network sizes are close to each other. These observations demonstrate that OPS is robust to the network size. Furthermore, the average loss (regret) is smaller for larger network sizes (i.e., the curve for the n = 1024 network is lower than the others), which also demonstrates that the additional stochastic samples provided by more nodes can naturally accelerate convergence. Due to the limitation of space, the results on the other dataset are deferred to the appendix." }, { "heading": "5.4 EVALUATION ON NETWORK DENSITY", "text": "We also evaluate the performance of OPS under different network densities. We fix the network size to 512 for the SUSY dataset. Network density is defined as the ratio of the upper-bound random neighbor number per node to the size of the network (e.g., a ratio of 0.5 on SUSY means 256 is set as the upper-bound neighbor number for each node). We can see from Figures 3c and 3d that as the network density increases, the average loss (regret) decreases. This observation also shows that our proposed OPS algorithm works well under different network densities, and gains more benefit from a denser row stochastic matrix. This benefit can also be understood intuitively: in a federated learning network, a user's model performance will improve if it communicates with more users." }, { "heading": "6 CONCLUSIONS", "text": "Decentralized federated learning with single-sided trust is a promising framework for solving a wide range of problems. In this paper, the online push-sum algorithm is developed for this setting, which is able to handle complex network topologies and is proven to have an optimal convergence rate. The regret-based online problem formulation also extends its applications. We tested the proposed OPS algorithm in various experiments, which have empirically justified its efficiency." }, { "heading": "A PROOFS", "text": "Notations: Below we use the following notation in our proof\n• ∇Ft(Xt) := [ ∇F1,t ( x (1) t ) , · · · ,∇Fn,t ( x (n) t )] • Xt := [ x (1) t ,x (2) t , ...,x (n) t\n] • Gt := [ ∇f1,t(x1t ; ξ1t ), . . . 
,∇fn,t(xnt ; ξnt )\n] Here we first present the proof Theorem 2, then we will present some key lemmas along with the proof of Theorem 3. The following theorem is the key to prove Theorem 2:\nTheorem 4. For the online push-sum algorithm with step size γ > 0, it holds that\nRT ≤ G2TnγC1 + σ2Tγ(1 + nC2) + nR2\n2γ , (4)\nwhere\nC1 := 8Cq\nδmin(1− q) + 1, C2 :=\n2Cq\nδmin(1− q) ,\nand C, q and δmin are some constants defined in later lemmas.\nProof. Since the loss function fi,t(·) is assumed to be convex, which leads to\nEt n∑ i=1 fi,t ( x (i) t ; ξ (i) t ) − nFt(x∗)\n=Et n∑ i=1 ( fi,t ( x (i) t ; ξ (i) t ) − fi,t ( x∗; ξ (i) t )) ≤Et\nn∑ i=1 〈 ∇fi,t ( x (i) t ; ξ (i) t ) ,x (i) t − x∗ 〉 =Et\nn∑ i=1 〈 ∇fi,t ( x (i) t ; ξ (i) t ) ,x (i) t − zt 〉 ︸ ︷︷ ︸\n:=I1t\n+Et n∑ i=1 〈 ∇fi,t ( x (i) t ; ξ (i) t ) , zt − x∗ 〉 ︸ ︷︷ ︸\n:=I2t\n.\nFor I2t, we have\nEt n∑ i=1 〈 ∇fi,t ( x (i) t ; ξ (i) t ) , zt − x∗ 〉 = n\nγ Et\n〈 γ\nn n∑ i=1 ∇fi,t ( x (i) t ; ξ (i) t ) , zt − x∗\n〉\n= n\n2γ Et ∥∥∥∥∥γn n∑ i=1 ∇fi,t ( x (i) t ; ξ (i) t )∥∥∥∥∥ 2 + ‖zt − x∗‖2 − ∥∥∥∥∥zt − x∗ − γn n∑ i=1 ∇fi,t ( x (i) t ; ξ (i) t )∥∥∥∥∥ 2 \n= n\n2γ Et ∥∥∥∥∥γn n∑ i=1 ∇fi,t ( x (i) t ; ξ (i) t )∥∥∥∥∥ 2 + ‖zt − x∗‖2 − ‖zt+1 − x∗‖2 \n≤ n 2γ\nEt ( γ2G2 + γ2σ2\nn + ‖zt − x∗‖2 − ‖zt+1 − x∗‖2\n)\nNotice that for COL, we have I1t = 0 because x (i) t = zt. So for DOL, in order to bound I1t, we need to bound the difference ∥∥∥x(i)t − zt∥∥∥ (using Lemma 8).\nEt n∑ i=1 〈 ∇fi,t ( x (i) t ; ξ (i) t ) ,x (i) t − zt 〉 =Et\nn∑ i=1 〈 ∇Fi,t(x(i)t ),x (i) t − zt 〉 ≤Et\nn∑ i=1 ( α ∥∥∥∇Fi,t (x(i)t )∥∥∥2 + 1α ∥∥∥x(i)t − zt∥∥∥2 ) .\nSumming up the inequality above from t = 1 to t = T , we get\nT∑ t=1 Et n∑ i=1 〈 ∇fi,t ( x (i) t ; ξ (i) t ) ,x (i) t − zt 〉 =\nT∑ t=1 Et n∑ i=1 〈 ∇Fi,t ( x (i) t ) ,x (i) t − zt 〉 ≤\nT∑ t=1 Et n∑ i=1 ( α ∥∥∥∇Fi,t (x(i)t )∥∥∥2 + 1α ∥∥∥x(i)t − zt∥∥∥2 )\n= T∑ t=1 ( αEt ‖∇Ft(Xt)‖2F + 1 α Et ‖Xt − zt‖2F )\n≤α T∑ t=1 Et ‖∇Ft (Xt)‖2F + 4γ2C2q2 αδ2min(1− q)2 T∑ t=1 Et ‖Gt‖2F\n≤α T∑ t=1 Et ‖∇Ft (Xt)‖2F + 4γ2C2q2 αδ2min(1− q)2 T∑ t=1 ( Et ‖∇Ft(Xt)‖2F + nσ 2 ) .\nChoosing α = 2γCqδmin(1−q) , we have\nT∑ t=1 Et n∑ i=1 〈 ∇fi,t ( x (i) t ; ξ (i) t ) ,x (i) t − zt 〉 ≤ 8nγCTqG 2 δmin(1− q) + 2nγCqσ2T δmin(1− q)\nSo we have\nT∑ t=1 Et n∑ i=1 fi,t ( z (i) t ; ξ (i) t ) − nF (x∗)\n≤8nγCTqG 2\nδmin(1− q) +\n2γCqσ2T\nδmin(1− q) +\nn\n2nγ T∑ t=1 ( γ2G2 + γ2σ2 n + Et ‖zt − x∗‖2 − Et ‖zt+1 − x∗‖2 )\n≤G2Tnγ (\n8Cq\nδmin(1− q) + 1\n) + σ2Tγ ( 1 +\n2nCq\nδmin(1− q)\n) + n\n2γ T∑ t=1 ( Et ‖zt − x∗‖2 − Et ‖zt+1 − x∗‖2 ) ≤G2Tnγ ( 8Cq\nδmin(1− q) + 1\n) + σ2Tγ ( 1 +\n2nCq\nδmin(1− q)\n) + nR2\n2γ\n=C1nG 2Tγ + (1 + nC2)σ 2Tγ + nR2\n2γ .\nNotice that Theorem 2 can be easily verified by setting γ = √ nR√\n(1+nC2)σ2+ √ nC1G2T\n.\nNext, we will present two lemmas for our proof of Lemma 8. The proofs of following two lemmas can be found in existing literature Nedić and Olshevsky (2014; 2016); Assran and Rabbat (2018); Assran et al. (2018).\nLemma 5. Under the Assumption 1, there exists a constant δmin > 0 such that for any t, the following holds\nn∑ j=1 [Wt>Wt>...W0>]ij ≥ δmin ≥ 1 nn , ∀i (5)\nwhere Wt is a row stochastic matrix.\nLemma 6. Under the Assumption 1, for any t, there always exists a stochastic vector ψ(t) and two constants C = 4 and q = 1−n−n < 1 such that for any s satisfying s ≤ t, the following inequality holds ∣∣[Wt>Wt> · · ·Ws+1>Ws>]ij − ψi(t)∣∣ ≤ Cqt−s,∀i, j where Wt is a row stochastic matrix, and ψ(t) is a vector with ψi(t) being its i-th entry. Lemma 7. 
Given two non-negative sequences {at}∞t=1 and {bt}∞t=1 that satisfying\nat = t∑ s=1 ρt−sbs, (6)\nwith ρ ∈ [0, 1), we have\nDk := k∑ t=1 a2t ≤ 1 (1− ρ)2 k∑ s=1 b2s.\nProof. From the definition, we have\nSk = k∑ t=1 t∑ s=1 ρt−sbs = k∑ s=1 k∑ t=s ρt−sbs = k∑ s=1 k−s∑ t=0 ρtbs ≤ k∑ s=1 bs 1− ρ , (7)\nDk = k∑ t=1 t∑ s=1 ρt−sbs t∑ r=1 ρt−rbr\n= k∑ t=1 t∑ s=1 t∑ r=1 ρ2t−s−rbsbr\n≤ k∑ t=1 t∑ s=1 t∑ r=1 ρ2t−s−r b2s + b 2 r 2\n= k∑ t=1 t∑ s=1 t∑ r=1 ρ2t−s−rb2s\n≤ 1 1− ρ k∑ t=1 t∑ s=1 ρt−sb2s\n≤ 1 (1− ρ)2 k∑ s=1 b2s.\nBased on the above three lemmas, we can obtain the following lemma. Lemma 8. Under the Assumption 1, the updating rule of Algorithm 1 leads to the following inequality\nn∑ i T∑ t=0 ∥∥∥x(i)t+1 − zt+1∥∥∥2 2 ≤ 4γ 2C2q2 δ2min(1− q)2 t∑ s=0 ‖Gs‖2F ,\nwhere γ is the step size, and C = 4, δmin ≥ n−n, q = 1− n−n are constants. Gs is the matrix for the stochastic gradient at time s (e.g., the i-th column is the stochastic gradient vector on node i at time s).\nProof. The updating rule of OPS can be formulated as\nZt+1 = (Zt − γGt)W ωt+1 = W >ωt\nXt+1 = Zt+1[diag(ωt+1)] −1\nwhere W is a row stochastic matrix. Xt = [x (1) t ,x (2) t , ...,x (n) t ] is a matrix whose each column is x (i) t . Gt is the matrix of gradient, whose each column is the stochastic gradient at z (i) t on node i. Zt = [z (1) t , ..., z (n) t ] is the matrix whose each column is z (i) t .\nAssuming X0 = O and ω0 = 1, then we have\nZt+1 = (Zt − γGt)W = ... = −γ t∑\ns=0\nGsW t−s+1, (8)\nzt+1 = zt − γgt = ... = − t∑\ns=0\nγgs, (9)\nωt+1 = W t+1>ω0, (10)\nwhere xt = Xt1 is the average of all variables on the n nodes, and gt = Gt1 is the averaged gradient. We have W1 = 1 since W is a row stochastic matrix.\nFor ωt+1, according to Lemma 6, we decompose it as follows\nωt+1 =W t+1>ω0 = [W t+1> − ψ(t)1>]ω0 + ψ(t)1>ω0 = [Wt+1> − ψ(t)1>]1 + nψ(t), (11)\nsince ω0 = 1.\nOn the other hand, according to Lemma 5, we also have\nω (i) t+1 = [W t+1>1]>ei = n∑ j=1 [Wt+1>]ij ≥ nδmin, (12)\nwhere ei is a vector with only the i-th entry being 1 and 0 for others.\nWe need to further bound the following term∥∥∥x(i)t+1 − zt+1∥∥∥ =γ ∥∥∥∥∥ z (i) t+1\nω (i) t+1\n− zt+1 ∥∥∥∥∥ =γ ∥∥∥∥∥ t∑\ns=0\n( GsW\nt−s+1ei 1>Wt+1ei − Gs1 n )∥∥∥∥∥ =γ ∥∥∥∥∥ t∑\ns=0\nnGsW t−s+1ei −Gs11>Wt+1ei\nnω (i) t+1 ∥∥∥∥∥ , where the second equality is by (8), (9), and (10). We turn to bound the following term∥∥∥∥∥ t∑ s=0 nGsW t−s+1ei −Gs11>Wt+1ei nω (i) t+1\n∥∥∥∥∥ ≤ 1 n2δmin ∥∥∥∥∥ t∑\ns=0\n( nGsW t−s+1ei −Gs11>Wt+1ei )∥∥∥∥∥ ,\nwhere the first inequality is accordng to (12). Therefore, combining the results above, we can have\nn∑ i=1 ∥∥∥x(i)t+1 − zt+1∥∥∥2 2 ≤ γ 2 n4δ2min n∑ i=1 ∥∥∥∥∥ t∑ s=0 ( nGsW t−s+1ei −Gs11>Wt+1ei )∥∥∥∥∥ 2\n2\n≤ γ 2\nn4δ2min\n∥∥∥∥∥ t∑\ns=0\n( nGsW t−s+1 −Gs11>Wt+1 )∥∥∥∥∥ 2\nF where the second inequality is due to ∑n i=1 ‖Aei‖22 = ‖A‖2F .\nIt remains to bound the following term∥∥∥∥∥ t∑\ns=0\n( nGsW t−s+1 −Gs11>Wt+1 )∥∥∥∥∥ 2\nF\n= ∥∥∥∥∥ t∑\ns=0\n( nGsW t−s+1 −Gs1[1>(Wt+1 − ψ(t)1>)> + nψ(t)>] )∥∥∥∥∥ 2\nF\n= ∥∥∥∥∥ t∑\ns=0\n( nGs[W t−s+1 − 1ψ(t)>]−Gs11>[Wt+1 − 1ψ(t)>] )∥∥∥∥∥ 2\nF\n≤\n( t∑\ns=0 ∥∥nGs[Wt−s+1 − 1ψ(t)>]∥∥F + t∑ s=0 ∥∥Gs11>[Wt+1 − 1ψ(t)>]∥∥F )2\n≤ ( n\nt∑ s=0 ∥∥Gs‖F ‖[Wt−s+1 − 1ψ(t)>]∥∥F + t∑ s=0 ∥∥Gs‖F ‖11>‖F ‖[Wt+1 − 1ψ(t)>]∥∥F )2\n≤n2 (\nt∑ s=0 ∥∥Gs‖F ‖[Wt−s+1 − 1ψ(t)>]∥∥F + t∑ s=0 ∥∥Gs‖F ‖[Wt+1 − 1ψ(t)>]∥∥F )2\n≤n2 (\nt∑ s=0 nCqt−s+1‖Gs‖F + t∑ s=0 nCqt+1‖Gs‖F\n)2\n≤4n4C2q2 (\nt∑ s=0 qt−s‖Gs‖F )2 where the third inequality is due to ‖11>‖F = n and the fourth inequality is by Lemma 6 and the fact that ‖A‖F ≤ n ·maxi,j |Aij | if A ∈ Rn×n. 
Therefore, combining all the above inequalities, we can obtain\n$\sum_{i=1}^{n} \left\| x^{(i)}_{t+1} - z_{t+1} \right\|_2^2 \le \frac{4\gamma^2 C^2 q^2}{\delta_{\min}^2} \left( \sum_{s=0}^{t} q^{t-s} \| G_s \|_F \right)^2 .$\nUsing Lemma 7, we have\n$\sum_{t=0}^{T} \left( \sum_{s=0}^{t} q^{t-s} \| G_s \|_F \right)^2 \le \frac{1}{(1-q)^2} \sum_{t=0}^{T} \| G_t \|_F^2 ,$\nwhich leads to\n$\sum_{t=0}^{T} \sum_{i=1}^{n} \left\| x^{(i)}_{t+1} - z_{t+1} \right\|_2^2 \le \frac{4\gamma^2 C^2 q^2}{\delta_{\min}^2 (1-q)^2} \sum_{t=0}^{T} \| G_t \|_F^2 ,$\nwhich completes the proof.\nActually, Theorem 3 is a corollary of Lemma 8, obtained by setting γ to the appropriate value." }, { "heading": "B EXTRA EXPERIMENT RESULTS", "text": "B.1 EVALUATION ON Room Occupancy DATASET\nDue to the limitation of space, we only present the experimental results on the SUSY dataset in Sections 5.3 and 5.4. The corresponding results on Room Occupancy are shown in Figure 4 and Figure 5.\nIn Figure 4, we vary the number of clients in the network from 6 to 20. In Figure 5, the network density is varied. All the results are consistent with the ones on SUSY.\nB.2 COMPARISON WITH LOCAL ONLINE GRADIENT DESCENT\nTo justify the necessity of communication, we also compare OPS with local online gradient descent (local OGD), where every node trains a local model without communicating with others. We run experiments with different ratios of the adversarial and stochastic components, based on the settings in Figure 2. As we can see in Figure 6, communication does have clear benefits in reducing regret. Moreover, as the ratio of the stochastic component increases, the regret of OPS decreases further. This also empirically shows that the stochastic component can benefit from communication while the adversarial component does not." } ]
2020
CENTRAL SERVER FREE FEDERATED LEARNING OVER SINGLE-SIDED TRUST SOCIAL NETWORKS
SP:30cbdae14b7b36f103023b56e32c30c8effbf5e6
[ "The paper formulates the problems of learning in invariant representations as a min-max game, exploring tradeoffs between accuracy and invariance of these representations via a geometric plane analysis. Specifically. the paper considers both classification (cross entropy loss) and regressions settings (squared loss).The related minimax problem is separable in the sense that for any fixed feature transformation, the optimization for the min and max agent are independent of each other, resulting is a simple, concise representation of the resulting optimization problem.  The symmetric nature of this description allows for a geometric description of the feasible set in regards to the actions of both min-max agents and the paper goes on to provide some characterizations of external points and other properties (e.g. convexity) of this region/set. The paper also derives a tight lower bound that for the Lagrangian form of accuracy and invariance." ]
Many machine learning applications involve learning representations that achieve two competing goals: To maximize information or accuracy with respect to a target while simultaneously maximizing invariance or independence with respect to a subset of features. Typical examples include privacy-preserving learning, domain adaptation, and algorithmic fairness, just to name a few. In fact, all of the above problems admit a common minimax game-theoretic formulation, whose equilibrium represents a fundamental tradeoff between accuracy and invariance. In this paper, we provide an information theoretic analysis of this general and important problem under both classification and regression settings. In both cases, we analyze the inherent tradeoffs between accuracy and invariance by providing a geometric characterization of the feasible region in the information plane, where we connect the geometric properties of this feasible region to the fundamental limitations of the tradeoff problem. In the regression setting, we also derive a tight lower bound on the Lagrangian objective that quantifies the tradeoff between accuracy and invariance. Our results shed new light on this fundamental problem by providing insights on the interplay between accuracy and invariance. These results deepen our understanding of this fundamental problem and may be useful in guiding the design of adversarial representation learning algorithms.
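As a worked rendering of the common minimax formulation mentioned above (our generic notation, not necessarily the paper's: X is the input, Y the target, A the subset of features to be invariant to, g the representation map, h the task predictor, h' the adversary, and λ ≥ 0 the tradeoff weight), the Lagrangian objective can be sketched in LaTeX as:

\min_{g,\,h}\ \max_{h'}\quad
\mathbb{E}\big[\ell\big(h(g(X)),\,Y\big)\big]
\;-\;\lambda\,\mathbb{E}\big[\ell'\big(h'(g(X)),\,A\big)\big]

For a fixed representation g, the inner maximization over h' and the minimization over h decouple, which is the separability property the summary above refers to.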
[]
[ { "authors": [ "Fabio Anselmi", "Lorenzo Rosasco", "Tomaso Poggio" ], "title": "On invariance and selectivity in representation learning", "venue": "Information and Inference: A Journal of the IMA,", "year": 2016 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Fernando Pereira" ], "title": "Analysis of representations for domain adaptation", "venue": "In Advances in neural information processing systems,", "year": 2007 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different domains", "venue": "Machine learning,", "year": 2010 }, { "authors": [ "Jake Bouvrie", "Lorenzo Rosasco", "Tomaso Poggio" ], "title": "On invariance in hierarchical models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "Maximin Coavoux", "Shashi Narayan", "Shay B Cohen" ], "title": "Privacy-preserving neural representations of text", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Sanghamitra Dutta", "Dennis Wei", "Hazar Yueksel", "Pin-Yu Chen", "Sijia Liu", "Kush R Varshney" ], "title": "An information-theoretic perspective on the relationship between fairness and accuracy", "venue": null, "year": 1910 }, { "authors": [ "Harrison Edwards", "Amos Storkey" ], "title": "Censoring representations with an adversary", "venue": "arXiv preprint arXiv:1511.05897,", "year": 2015 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Robert Gens", "Pedro M Domingos" ], "title": "Deep symmetry networks", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Jihun Hamm" ], "title": "Preserving privacy of continuous high-dimensional data with minimax filters", "venue": "In Artificial Intelligence and Statistics,", "year": 2015 }, { "authors": [ "Jihun Hamm" ], "title": "Minimax filter: learning to preserve privacy from inference attacks", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Fredrik Johansson", "Uri Shalit", "David Sontag" ], "title": "Learning representations for counterfactual inference", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Fredrik D Johansson", "Uri Shalit", "Nathan Kallus", "David Sontag" ], "title": "Generalization bounds and representation learning for estimation of potential outcomes and causal effects", "venue": "arXiv preprint arXiv:2001.07426,", "year": 2020 }, { "authors": [ "Stéphane Mallat" ], "title": "Group invariant scattering", "venue": "Communications on Pure and Applied Mathematics,", "year": 2012 }, { "authors": [ "Daniel McNamara", "Cheng Soon Ong", "Robert C Williamson" ], "title": "Costs and benefits of fair representation learning", "venue": "In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society,", "year": 2019 }, { "authors": [ "Aditya Krishna Menon", "Robert C Williamson" ], "title": "The cost of fairness in binary classification", "venue": "In Conference on Fairness, Accountability and Transparency,", "year": 2018 }, { "authors": [ "R Quian Quiroga", "Leila Reddy", "Gabriel Kreiman", "Christof Koch", 
"Itzhak Fried" ], "title": "Invariant visual representation by single neurons in the human brain", "venue": null, "year": 2005 }, { "authors": [ "Uri Shalit", "Fredrik D Johansson", "David Sontag" ], "title": "Estimating individual treatment effect: generalization bounds and algorithms", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Ravid Shwartz-Ziv", "Naftali Tishby" ], "title": "Opening the black box of deep neural networks via information", "venue": "arXiv preprint arXiv:1703.00810,", "year": 2017 }, { "authors": [ "Naftali Tishby", "Fernando C Pereira", "William Bialek" ], "title": "The information bottleneck method", "venue": "arXiv preprint physics/0004057,", "year": 2000 }, { "authors": [ "Taihong Xiao", "Yi-Hsuan Tsai", "Kihyuk Sohn", "Manmohan Chandraker", "Ming-Hsuan Yang" ], "title": "Adversarial learning of privacy-preserving and task-oriented representations", "venue": null, "year": 1911 }, { "authors": [ "Rich Zemel", "Yu Wu", "Kevin Swersky", "Toni Pitassi", "Cynthia Dwork" ], "title": "Learning fair representations", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Brian Hu Zhang", "Blake Lemoine", "Margaret Mitchell" ], "title": "Mitigating unwanted biases with adversarial learning", "venue": "In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society,", "year": 2018 }, { "authors": [ "Han Zhao", "Geoff Gordon" ], "title": "Inherent tradeoffs in learning fair representations", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Han Zhao", "Shanghang Zhang", "Guanhang Wu", "José MF Moura", "Joao P Costeira", "Geoffrey J Gordon" ], "title": "Adversarial multiple source domain adaptation", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Han Zhao", "Remi Tachet des Combes", "Kun Zhang", "Geoffrey J Gordon" ], "title": "On learning invariant representation for domain adaptation", "venue": "arXiv preprint arXiv:1901.09453,", "year": 2019 }, { "authors": [ "Han Zhao", "Amanda Coston", "Tameem Adel", "Geoffrey J Gordon" ], "title": "Conditional learning of fair representations", "venue": "arXiv preprint arXiv:1910.07162,", "year": 2019 } ]
[ { "heading": null, "text": "Many machine learning applications involve learning representations that achieve two competing goals: To maximize information or accuracy with respect to a target while simultaneously maximizing invariance or independence with respect to a subset of features. Typical examples include privacy-preserving learning, domain adaptation, and algorithmic fairness, just to name a few. In fact, all of the above problems admit a common minimax game-theoretic formulation, whose equilibrium represents a fundamental tradeoff between accuracy and invariance. In this paper, we provide an information theoretic analysis of this general and important problem under both classification and regression settings. In both cases, we analyze the inherent tradeoffs between accuracy and invariance by providing a geometric characterization of the feasible region in the information plane, where we connect the geometric properties of this feasible region to the fundamental limitations of the tradeoff problem. In the regression setting, we also derive a tight lower bound on the Lagrangian objective that quantifies the tradeoff between accuracy and invariance. Our results shed new light on this fundamental problem by providing insights on the interplay between accuracy and invariance. These results deepen our understanding of this fundamental problem and may be useful in guiding the design of adversarial representation learning algorithms." }, { "heading": "1 INTRODUCTION", "text": "One of the fundamental tasks in both supervised and unsupervised learning is to learn proper representations of data for various downstream tasks. Due to the recent advances in deep learning, there has been a surge of interest in learning so-called invariant representations. Roughly speaking, the underlying problem of invariant representation learning is to find a feature transformation of the data that balances two goals simultaneously. First, the features should preserve enough information with respect to the target task of interest, e.g., good predictive accuracy. On the other hand, the representations should be invariant to the change of a pre-defined attribute, e.g., in visual perceptions the representations should be invariant to the change of perspective or lighting conditions, etc. Clearly, in general there is often a tension between these two competing goals of error minimization and invariance maximization. Understanding the fundamental limits and tradeoffs therein remains an important open problem.\nIn practice, the problem of learning invariant representations is often formulated as solving a minimax sequential game between two agents, a feature encoder and an adversary. Under this framework, the goal of the feature encoder is to learn representations that could confuse a worst-case adversary in discriminating the pre-defined attribute. Meanwhile, the representations given by the feature encoder should be amenable for a follow-up predictor of target task. In this paper, we consider the situation where both the adversary and the predictor have infinity capacity, so that the tradeoff between accuracy and invariance solely depends on the representations given by the feature encoder. In particular, our results shed light on the best possible tradeoff attainable by any algorithm. 
This leads to a Lagrangian objective with a tradeoff parameter between these two competing goals, and we study the fundamental limitations of this tradeoff by analyzing the extremal values of this Lagrangian in both classification and regression settings. Our results shed new light on the fundamental tradeoff between accuracy and invariance, and give a crisp characterization of how the dependence between the target task and the pre-defined attribute affects the limits of representation learning.\nContributions We geometrically characterize the tradeoff between accuracy and invariance via an information plane analysis (Shwartz-Ziv & Tishby, 2017) under both classification and regression settings, where each feature transformation corresponds to a point on the information plane. For the classification setting, we provide a fundamental characterization of the feasible region in the information plane, including its boundedness, convexity, and extremal vertices. For the regression setting, we provide an analogous characterization of the feasible region by replacing mutual information with conditional variances. Finally, in the regression setting, we prove a tight information-theoretic lower bound on a Lagrangian objective that trades off accuracy and invariance. The proof relies on an interesting SDP relaxation, which may be of independent interest.\nRelated Work There are abundant applications of learning invariant representations in various downstream tasks, including domain adaptation (Ben-David et al., 2007; 2010; Ganin et al., 2016; Zhao et al., 2018), algorithmic fairness (Edwards & Storkey, 2015; Zemel et al., 2013; Zhang et al., 2018; Zhao et al., 2019b), privacy-preserving learning (Hamm, 2015; 2017; Coavoux et al., 2018; Xiao et al., 2019), invariant visual representations (Quiroga et al., 2005; Gens & Domingos, 2014; Bouvrie et al., 2009; Mallat, 2012; Anselmi et al., 2016), and causal inference (Johansson et al., 2016; Shalit et al., 2017; Johansson et al., 2020), just to name a few. To the best of our knowledge, no previous work studies the particular tradeoff problem in this paper. Closest to our work are results in domain adaptation (Zhao et al., 2019a) and algorithmic fairness (Menon & Williamson, 2018; Zhao & Gordon, 2019), showing a lower bound on the classification accuracy on two groups, e.g., source vs. target in domain adaptation and majority vs. minority in algorithmic fairness. Compared to these previous results, our work directly characterizes the tradeoff between accuracy and invariance using information-theoretic concepts in both classification and regression settings. Furthermore, we also give an approximation to the Pareto frontier between accuracy and invariance in both cases." }, { "heading": "2 BACKGROUND AND PRELIMINARIES", "text": "Notation We adopt the usual setup given (X, Y) ∈ X × Y, where Y is the response, X ∈ R^p represents the input vector, and we seek a classification/regression function f(X) that minimizes E ℓ(f(X), Y), where ℓ : Y × Y → R is some loss function depending on the context of the underlying problem. In this paper, we consider two typical choices of ℓ: (1) the cross-entropy loss, i.e. ℓ(y, y′) = −y log(y′) − (1 − y) log(1 − y′), which is typically used when Y is a discrete variable, as in classification; (2) the squared loss, i.e. ℓ(y, y′) = (y − y′)², which is suitable for Y continuous, as in regression. 
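To make the two loss choices concrete, here is a minimal sketch of both losses as plain functions; the clipping constant is a numerical-stability assumption of the sketch, not part of the paper's setup.

```python
# The two loss choices l(y, y') from the Notation paragraph:
# cross-entropy for binary classification, squared loss for regression.
import numpy as np

def cross_entropy(y, y_hat, eps=1e-12):
    """l(y, y') = -y log y' - (1 - y) log(1 - y'); y in {0,1}, y_hat in (0,1)."""
    y_hat = np.clip(y_hat, eps, 1.0 - eps)  # guard against log(0)
    return -y * np.log(y_hat) - (1.0 - y) * np.log(1.0 - y_hat)

def squared_loss(y, y_hat):
    """l(y, y') = (y - y')^2 for continuous responses."""
    return (y - y_hat) ** 2

print(cross_entropy(1, 0.9), squared_loss(2.0, 1.5))  # -> ~0.105, 0.25
```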
Throughout the paper, we will assume that all random variables have finite second-order moments.\nProblem Setup Apart from the input/output pairs, in our setting there is a third variable A, which corresponds to a variable that a predictor should be invariant to. Depending on the particular application, A could correspond to a protected attribute in algorithmic fairness, e.g., the ethnicity or gender of an individual, or A could be the domain index in domain adaptation, etc. In general, we assume that there is a joint distribution D over the triple (X, A, Y), from which our observational data are sampled. Upon receiving the data, the goal of the learner is twofold. On one hand, the learner aims to accurately predict the target Y. On the other hand, it also tries to be insensitive to variation in A. To achieve this dual goal, one standard approach in the literature (Zemel et al., 2013; Edwards & Storkey, 2015; Hamm, 2015; Ganin et al., 2016; Zhao et al., 2018) is through the lens of representation learning. Specifically, let Z = g(X), where g(·) is a (possibly randomized) transformation function that takes X as input and gives the corresponding feature encoding Z. The hope is that, by learning the transformation function g(·), Z contains as much information as possible about the target Y while at the same time filtering out information related to A. This problem is often phrased as an adversarial game:\n$$\min_{f, g} \max_{f'} \; \mathbb{E}_{\mathcal{D}}[\ell(f \circ g(X), Y)] - \lambda \cdot \mathbb{E}_{\mathcal{D}}[\ell(f' \circ g(X), A)], \quad (1)$$\nwhere the two competing agents are the feature transformation g and the adversary f′, and λ > 0 is a tradeoff hyperparameter between the task variable Y and the attribute A. For example, the adversary f′ could be understood as a domain discriminator in applications related to domain adaptation, or an auditor of the sensitive attribute in algorithmic fairness. In the above minimax game, the first term corresponds to the accuracy of the target task, and the second term is the loss incurred by the adversary (a minimal alternating-optimization sketch of this game is given after Example 2.1 below). It is worth pointing out that the minimax problem in (1) is separable for any fixed feature transformation g, in the sense that once g has been fixed, the optimizations of f and f′ are independent of each other. Formally, define $R^*_Y(g) := \inf_f \mathbb{E}_{\mathcal{D}}\,\ell(f(g(X)), Y)$ to be the optimal risk in predicting Y using Z = g(X) under loss ℓ, and similarly define $R^*_A(g)$. The separation structure of the problem leads to the following compact form:\n$$\mathrm{OPT}(\lambda) := \min_{g} R^*_Y(g) - \lambda \cdot R^*_A(g). \quad (2)$$\nThe minimization here is taken over a family of (possibly randomized) transformations g. Intuitively, (2) characterizes the situation where, for a given transformation Z = g(X), both f and f′ play their optimal responses. Hence this objective function characterizes a fundamental limit on the best possible representation we can hope to achieve for a fixed value of λ. In general, with 0 < λ < ∞, there is an inherent tension between the minimization of $R^*_Y(g)$ and the maximization of $R^*_A(g)$, and a choice of the tradeoff hyperparameter λ essentially corresponds to a realization of this tradeoff.\nMotivating Examples We discuss several examples to which the above framework is applicable. Example 2.1 (Privacy-Preservation). In privacy applications, the goal is to make it difficult to predict sensitive data, represented by the attribute A, while retaining information about Y (Hamm, 2015; 2017; Coavoux et al., 2018; Xiao et al., 2019). A way to achieve this is to pass information through Z, the “privatized” or “sanitized” data. 
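As referenced above, here is a minimal alternating-optimization sketch of the game in Eq. (1). The network sizes, optimizers, and update schedule are illustrative assumptions; the paper itself analyzes the game at the population level with infinite-capacity f and f′.

```python
# Alternating minimax updates for Eq. (1): encoder g and predictor f minimize,
# adversary f_adv maximizes (equivalently, minimizes its own loss on A).
import torch
import torch.nn as nn

dim_x, dim_z, lam = 10, 4, 1.0
g = nn.Sequential(nn.Linear(dim_x, dim_z), nn.ReLU())  # feature encoder g
f = nn.Linear(dim_z, 1)                                # target predictor f
f_adv = nn.Linear(dim_z, 1)                            # adversary f' auditing A
opt_main = torch.optim.Adam(list(g.parameters()) + list(f.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(f_adv.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def step(x, y, a):  # y, a: float tensors with values in {0, 1}
    # 1) adversary's best response: fit A on the frozen representation
    loss_adv = bce(f_adv(g(x).detach()).squeeze(1), a)
    opt_adv.zero_grad(); loss_adv.backward(); opt_adv.step()
    # 2) encoder/predictor descend Eq. (1): fit Y while confusing the adversary
    z = g(x)
    loss = bce(f(z).squeeze(1), y) - lam * bce(f_adv(z).squeeze(1), a)
    opt_main.zero_grad(); loss.backward(); opt_main.step()
```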
Example 2.2 (Algorithmic Fairness). In fairness applications, we seek to make predictions about the response Y without discriminating based on the information contained in the protected attributes A. For example, A may represent a protected class of individuals defined by, e.g. race or gender. This definition of fairness is also known as statistical parity in the literature, and has received increasing attention recently from an information-theoretic perspective (McNamara et al., 2019; Zhao & Gordon, 2019; Dutta et al., 2019). Example 2.3 (Domain Adaptation). In domain adaptation, our goal is to train a predictor using labeled data from the source domain that generalizes to the target domain. In this case, A corresponds to the identity of domains, and the hope here is to learn a domain-invariant representation Z that is informative about the target Y (Ben-David et al., 2007; 2010; Ganin et al., 2016; Zhao et al., 2018). Example 2.4 (Group Invariance). In many applications in computer vision, it is desirable to learn predictors that are invariant to the action of a group G on the input space. Typical examples include rotation, translation, and scale. By considering random variables A that take their values in G, one approach to this problem is to learn a representation Z that “ignores” changes in A (Quiroga et al., 2005; Gens & Domingos, 2014; Bouvrie et al., 2009; Mallat, 2012; Anselmi et al., 2016). Example 2.5 (Information bottleneck). The information bottleneck (Tishby et al., 2000) is the problem of finding a representation Z that minimizes the objective I(Z;Y ) − λI(Z;X) in an unsupervised manner. This is closely related to, but not the same as the problem we study, owing to the invariant attribute A." }, { "heading": "3 FEASIBLE REGION ON THE INFORMATION PLANE", "text": "We begin by defining the feasible region associated with the adversarial game (1) and discussing its relevance to the problem we study. Formally, we define the information plane to be the 2D coordinate plane with axes −R∗Y (g) and −R∗A(g), respectively. The feasible region then corresponds to the two-dimensional region defined by the pairs (−R∗Y (g),−R∗A(g)) over all possible representations Z = g(X) on the information plane. More concretely, in the classification and regression settings, these pairs can be given a more intuitive interpretation in terms of mutual information and conditional variances, respectively. In particular, it is easy to show the following:\n1. (Classification) Under cross-entropy loss, using standard information-theoretic identities the adversarial game (2) can be rewritten as\nmin Z=g(X) H(Y | Z)− λ ·H(A | Z) ⇐⇒ max Z=g(X) I(Y ;Z)− λ · I(A;Z). (3)\n2. (Regression) Under the least-squares loss, by the law of total variance, the adversarial game (2) can be rewritten as\nmin Z=g(X) E[Var(Y | Z)]−λ·E[Var(A | Z)] ⇐⇒ max Z=g(X) VarE[Y | Z]−λ·VarE[A | Z]. (4)\nThese equivalences motivate the following definitions: (Classification): RCE := {(I(Y ;Z), I(A;Z)) ∈ R2},\n(Regression): RLS := {(VarE[Y | Z],VarE[A | Z]) ∈ R2}.\nWe call bothRCE andRLS the feasible region for the classification and regression settings, respectively. See Fig. 1 for an illustration of the information planes and the feasible regions. At this point, it may not be immediately clear what the relevance of the feasible region is. To see this, recall that our high-level goal is to find representations Z that maximize accuracy (i.e. ED[`(f(Z), Y )]) while simultaneously maximizing invariance (i.e. 
ED[`(f ′(Z), A)]), and consider the four vertices (not necessarily a part of the feasible region) in Fig. 1. These four corners have intuitive interpretations:\n• (Red) The so-called “informationless” regime, in which all of the information regarding both Y and A is destroyed. This is achieved by choosing a constant representation Z ≡ c. • (Yellow) Here, we retain all of the information in A while removing all of the information\nabout Y . This is not a particularly interesting regime for the aforementioned applications. • (Blue) The full information regime, where Z = X and no information is lost. This is the\n“classical” setting, wherein information about A is allowed to leak into Y . • (Green) This is the ideal representation that we would like to attain: we preserve all the\nrelevant information about Y while simultaneously removing all the information about A.\nUnfortunately, in general, the ideal representation may not be attainable due to the potential correlation between Y and A. As a result, we are interested in characterizing how “close” we can get to attaining this ideal transformation given the distribution over (X,A, Y ). More precisely, we can describe the various extremal points on the boundary of the feasible region as follows:\n• E∗Y : This point corresponds to a representation Z that maximizes accuracy subject to a hard constraint on the invariance (cf. (5),(10)), i.e. there is no leakage of information aboutA into the representation Z. In classification, we enforce this via the mutual information constraint I(A;Z) = 0 and in regression via the conditional variance constraint VarE[A | Z] = 0. • E∗A: This point corresponds to a representation Z that maximizes invariance subject to a\nhard constraint on the accuracy (cf. (8), (12)), i.e. there is no loss of information about Y in the representation Z. In classification, we enforce this via the mutual information constraint I(Y ;Z) = H(Y ) and in regression we enforce this via the conditional variance constraint VarE[Y | Z] = Var(Y ). • As we vary λ ∈ (0,∞), we carve out a path OPT(λ) between E∗Y and E∗A that corresponds\nto the optimal values of (1). This is the Pareto frontier of the accuracy-invariance tradeoff, and represents the best possible tradeoff attainable for a given λ.\nDue to the symmetry between Y and A in (2), the feasible regions in both cases are symmetric with respect to the diagonal of the bounding rectangle. With the feasible region more clearly exposed, we can now concretely outline our objective: To analytically characterize the solutions to the extremal problems corresponding to the lower and upper right points on the boundaries, and to provide lower bounds on the objective OPT(λ). Due to the page limit, we defer all the detailed proofs to appendix and mainly focus on providing interpretations and insights of our results in the main paper." }, { "heading": "4 CLASSIFICATION", "text": "In order to understand the tradeoff between these two competing goals, it is the most interesting to study the case where the original input X contains full information to predict both Y and A, so that\nany loss of accuracy is not due to the noninformative input X . To this end, our following analysis focuses on the noiseless setting1: Assumption 4.1. There exist functions f∗Y (·) and f∗A(·), such that Y = f∗Y (X) and A = f∗A(X). 
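As a practical aside before the formal characterization, the position of a given representation on the information plane can be estimated from finite samples when Z, Y, A are discrete. The sketch below uses plug-in mutual information; the data-generating choices are illustrative, with Y independent of A and Z = Y, so the estimated point lands near the ideal green vertex (I(Y;Z), I(A;Z)) = (H(Y), 0).

```python
# Plug-in estimates of the information-plane coordinates (I(Y;Z), I(A;Z)).
import numpy as np
from collections import Counter

def plugin_mi(u, v):
    """Plug-in estimate of I(U;V) in bits from paired discrete samples."""
    n = len(u)
    joint, pu, pv = Counter(zip(u, v)), Counter(u), Counter(v)
    return sum((c / n) * np.log2(c * n / (pu[x] * pv[y]))
               for (x, y), c in joint.items())

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 50_000)
y = (a + rng.integers(0, 2, 50_000)) % 2  # Y ~ Bernoulli(1/2), independent of A
z = y                                     # candidate representation Z = Y
print(plugin_mi(y, z), plugin_mi(a, z))   # approx (1.0, 0.0) bits
```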
In order to characterize the feasible regionRCE, first note that from the data processing inequality, the following inequalities hold:\n0 ≤ I(Y ;Z) ≤ I(Y ;X) = H(Y ), 0 ≤ I(A;Z) ≤ I(A;X) = H(A), which means that for any transformation Z = g(X), the point (I(Y ;Z), I(A;Z)) must lie within a rectangle shown in Fig. 2a. The following lemma shows that the feasible regionRCE is convex: Lemma 4.1. RCE is convex. Here, the convexity ofRCE is guaranteed by a construction of randomized feature transformation. As we briefly discussed before, we know that two vertices of the bounding rectangle are attainable, i.e., the “informationless” origin and the “full information” diagonal vertex. Now with Lemma 4.1, it is clear that all the points on the diagonal of the bounding rectangle are also attainable." }, { "heading": "4.1 MAXIMAL MUTUAL INFORMATION UNDER THE INDEPENDENCE CONSTRAINT", "text": "In this section we explore the extremal point E∗Y . This means that we would like to maximize the mutual information of Z w.r.t Y and simultaneously being independent of A:\nmax Z I(Y ;Z), subject to I(A;Z) = 0. (5)\nFirst of all, realize that the optimal solution of (5) clearly depends on the coupling between A and Y . To see this, consider the following two extreme cases: Example 4.1. If A = Y almost surely, then I(A;Z) = 0 directly implies I(Y ;Z) = 0, hence maxZ I(Y ;Z) = 0 under the constraint that I(A;Z) = 0. Example 4.2. If A ⊥ Y , then Z = f∗Y (X) = Y satisfies the constraint that I(A;Z) = I(A;Y ) = 0. Furthermore, I(Y ;Z) = I(Y ;Y ) = H(Y ) ≥ I(Y ;Z ′), ∀Z ′ 6= Y . Hence maxZ I(Y ;Z) = H(Y ).\nThe above two examples show that the optimal solution of (5), if exists analytically, must include a quantity that characterizes the dependency between A and Y . We first define such a quantity: Definition 4.1. Define ∆Y |A := |PrD(Y = 1 | A = 0)− PrD(Y = 1 | A = 1)|. It is easy to verify that the following claims hold about ∆Y |A:\n0 ≤ ∆Y |A ≤ 1, and ∆Y |A = 0 ⇐⇒ A ⊥ Y, and ∆Y |A = 1 ⇐⇒ A = Y or A = 1− Y. (6) With this introduced notation, the following theorem gives an analytic solution to (5). Theorem 4.1. The optimal solution of optimization problem (5) is\nmax Z,I(A;Z)=0\nI(Y ;Z) = H(Y )−∆Y |A ·H(A). (7)\nLet us have a sanity check of this result: First, if A ⊥ Y , then ∆Y |A = 0, and in this case the optimal solution given by Theorem 4.1 reduces to H(Y ) − 0 · H(A) = H(Y ), which is consistent with Example 4.2. Next, consider the other extreme case where A = Y . In this case ∆Y |A = 1 and H(Y ) = H(A), therefore the optimal solution given by Theorem 4.1 becomes H(Y )− 1 ·H(A) = H(Y )−H(Y ) = 0. This is consistent with Example 4.1. Moreover, due to the symmetry between A and Y , we can now characterize the locations of the two extremal points on the lower and left boundaries. The updated figure is plotted in Fig. 2b. In Fig. 2b ∆A|Y is defined analogously as ∆Y |A by swapping Y and A." }, { "heading": "4.2 MINIMUM MUTUAL INFORMATION UNDER THE SUFFICIENT STATISTICS CONSTRAINT", "text": "Next, we characterize the other extremal point, i.e. E∗A. Again, by the symmetry between A and Y , it suffices to solve the following optimization problem, whose optimal solution is E∗A.\nmin Z I(A;Z), subject to I(Y ;Z) = H(Y ) (8)\n1Extensions to the general noisy setting are feasible, but the results are less interpretable. Hence we mainly focus on the noiseless setting in this paper.\nTheorem 4.2. The optimal solution of optimization problem (8) is\nmin Z,I(Y ;Z)=H(Y ) I(A;Z) = I(A;Y ). 
(9)\nClearly, if A and Y are independent, then the gap I(A;Y ) = 0, meaning that we can simultaneously preserve all the target related information and filter out all the information related to A. With the above result, we can now characterize the locations of the remaining two extremal points on the top and right boundaries of bounding rectangle. The updated figure is shown in Fig. 2c." }, { "heading": "4.3 THE INFORMATION PLANE IN LEARNING REPRESENTATIONS Z", "text": "To get the full picture, we combine our results in Section 4.1 and Section 4.2 and use the fact that RCE must be convex (Lemma 4.1). This allows us to complete the analysis by connecting the black dots on the four boundaries of the bounding rectangle, as shown in Fig. 2d. The feasible regionRCE is a convex polygon. Furthermore, both the constrained accuracy optimal solution and the constrained invariance optimal solution can be readily read from Fig. 2d as well.\nAs we mentioned before, ideally we would like to find a representation Z that attains the green vertex of the bounding rectangle. Unfortunately, due to the potential coupling between Y and A, this solution is not always feasible. Nevertheless, it is instructive to see the gaps between the optimal solutions we could hope to achieve and the ideal one:\n• Maximal information: The gap is given by ∆Y |A ·H(A). On one hand, if A ⊥ Y , then ∆Y |A = 0 so the gap is 0. On the other hand, ifA = Y , then ∆Y |A = 1 andH(A) = H(Y ), so the gap achieves the maximum value H(Y ). • Maximal invariance: The gap is given by I(A;Y ). On one hand, if A ⊥ Y , then I(A;Y ) =\n0 so the gap is 0. On the other hand, if A = Y , then I(A;Y ) = H(A), so again, the gap achieves the maximum value of H(A).\nOne open question that we do not answer here is whether the feasible regionRCE is strictly convex or not. That is, whether the Pareto-frontier between E∗Y and E ∗ A is strictly convex or not? On the other hand, for each value of λ, the line segment connecting E∗Y and E ∗ A forms a lower bound of\nthe Lagrangian. For the aforementioned applications, our approximation of the frontier is critical in order to be able to certify that a given model is not optimal. For example, given some practically computed representation Z and by using known optimal estimators of the mutual information, it is possible to estimate I(A;Y ), I(Y ;Z), I(A;Z) in order to directly bound the (sub)-optimality of Z using Fig. 2d, i.e., how far away the point (I(Y ;Z), I(A;Z)) is from the line segment between E∗Y and E∗A. This distance lower bounds the distance to the optimal representations on the Pareto frontier." }, { "heading": "5 REGRESSION", "text": "Similar to what we have before, in regression we assume a noiseless setting for better interpretability of our results. The generalization to the noisy setting is included in the appendix. Let H be an RKHS.\nAssumption 5.1. There exist functions f∗Y , f∗A ∈ H, such that Y = f∗Y (X) and A = f∗A(X). Let 〈·, ·〉 be the canonical inner product in RKHS H. Under this assumption, there exists a feature map ϕ(X) and a 6= 0, y 6= 0, such that Y = f∗Y (X) = 〈ϕ(X), y〉 and A = f∗A(X) = 〈ϕ(X), a〉. This feature map does not have to be finite-dimensional, and our analysis works for the case where f∗Y , f ∗ A are infinite-dimensional. 
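The following sketch instantiates this regression setup in the Gaussian case, where the identity E[ϕ(X) | Z] = ΣLᵀ(LΣLᵀ)⁻¹Z for a linear representation Z = Lϕ(X) (used later in the proof of Theorem C.5) makes the information-plane coordinates computable in closed form; all problem data below are random illustrative stand-ins.

```python
# Toy instance: Gaussian features phi(X) with covariance Sigma, targets
# Y = <y, phi(X)>, A = <a, phi(X)>, linear representation Z = L phi(X).
# Then Cov(E[phi|Z]) = Sigma L^T (L Sigma L^T)^{-1} L Sigma =: M, and the
# point (Var E[Y|Z], Var E[A|Z]) = (y^T M y, a^T M a) lies in the rectangle
# [0, Var(Y)] x [0, Var(A)].
import numpy as np

rng = np.random.default_rng(0)
d, k = 6, 2
B = rng.standard_normal((d, d))
Sigma = B @ B.T + np.eye(d)              # Cov(phi(X), phi(X)), PSD by construction
y, a = rng.standard_normal(d), rng.standard_normal(d)
L = rng.standard_normal((k, d))          # representation map

M = Sigma @ L.T @ np.linalg.inv(L @ Sigma @ L.T) @ L @ Sigma
print(y @ M @ y, y @ Sigma @ y)          # 0 <= Var E[Y|Z] <= Var(Y)
print(a @ M @ a, a @ Sigma @ a)          # 0 <= Var E[A|Z] <= Var(A)
```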
Next, by the law of total variance, the following inequalities hold:\n$$0 \le \mathrm{Var}\,\mathbb{E}[Y \mid Z] \le \mathrm{Var}\,\mathbb{E}[Y \mid X] = \mathrm{Var}(Y), \qquad 0 \le \mathrm{Var}\,\mathbb{E}[A \mid Z] \le \mathrm{Var}\,\mathbb{E}[A \mid X] = \mathrm{Var}(A),$$\nwhich means that for any transformation Z = g(X), the point (Var E[Y | Z], Var E[A | Z]) must lie within the rectangle shown in Fig. 3a. To simplify the notation, we define Σ := Cov(ϕ(X), ϕ(X)) to be the covariance operator of ϕ(X).\nAgain, if we consider all possible feature transformations Z = g(X), then the points (Var E[Y | Z], Var E[A | Z]) form a feasible region RLS. As in the classification case, the following lemma shows that the feasible region RLS is convex: Lemma 5.1. RLS is convex. The convexity of RLS is guaranteed by a construction of a randomized feature transformation. Similarly, both the “informationless” origin and the “full information” diagonal vertex are attainable." }, { "heading": "5.1 THE BOUNDING VERTICES ON THE PLANE", "text": "In this section we explore the extremal points E∗Y and E∗A for regression. For E∗Y, this means that we would like to maximize the variance of Z w.r.t. Y while simultaneously minimizing that of A:\n$$\max_{Z} \mathrm{Var}\,\mathbb{E}[Y \mid Z], \quad \text{subject to } \mathrm{Var}\,\mathbb{E}[A \mid Z] = 0. \quad (10)$$\nIt is clear that the optimal solution of (10) depends on the coupling between A and Y, and the following theorem precisely characterizes this relationship: Theorem 5.1. The optimal solution of optimization problem (10) is upper bounded by\n$$\max_{Z,\, \mathrm{Var}\,\mathbb{E}[A \mid Z] = 0} \mathrm{Var}\,\mathbb{E}[Y \mid Z] \le \mathrm{Var}(Y) - \left( \frac{2\langle y, \Sigma a \rangle}{\langle a, a \rangle} - \mathrm{Var}(A) \cdot \frac{\langle a, y \rangle}{\langle a, a \rangle^2} \right) \langle a, y \rangle. \quad (11)$$\nAgain, let us sanity check this result: First, if a is orthogonal to y, i.e., ⟨a, y⟩ = 0, then the gap is 0, and the optimal solution becomes Var(Y). Next, consider the other extreme case where a is parallel to y. In this case it can be readily verified that the optimal solution reduces to 0. With these two results, we can now characterize the locations of the two extremal points on the bottom and left boundaries of the bounding rectangle. The updated figure is plotted in Fig. 3b.\nSimilarly, for E∗A, it suffices to solve the following problem, whose optimal solution is E∗A:\n$$\min_{Z} \mathrm{Var}\,\mathbb{E}[A \mid Z], \quad \text{subject to } \mathrm{Var}\,\mathbb{E}[Y \mid Z] = \mathrm{Var}(Y). \quad (12)$$\nTheorem 5.2. The optimal solution of optimization problem (12) is lower bounded by\n$$\min_{Z,\, \mathrm{Var}\,\mathbb{E}[Y \mid Z] \ge \mathrm{Var}(Y)} \mathrm{Var}\,\mathbb{E}[A \mid Z] \ge \mathrm{Var}(Y) \cdot \frac{\langle a, y \rangle^2}{\langle y, y \rangle^2}. \quad (13)$$\nAgain, if a is orthogonal to y, then the optimal solution is 0, meaning that we can simultaneously preserve all the target variance and filter out all the variance related to A. On the other hand, if a is parallel to y, then Var(Y) · ⟨a, y⟩²/⟨y, y⟩² = Var(A). The updated plot is shown in Fig. 3c." }, { "heading": "5.2 A SPECTRAL LOWER BOUND OF THE LAGRANGIAN", "text": "Combining our results in Thm. 5.1 and Thm. 5.2, along with the fact that RLS must be convex, we plot the full picture of the feasible region in the regression setting in Fig. 3d. Both the constrained accuracy-optimal solution and the constrained invariance-optimal solution can be readily read off from Fig. 3d as well. In fact, in the regression setting, we can say even more: we can derive a tight lower bound on the Lagrangian problem $\mathrm{OPT}(\lambda) := \min_{Z = g(X)} \mathbb{E}[\mathrm{Var}(Y \mid Z)] - \lambda \cdot \mathbb{E}[\mathrm{Var}(A \mid Z)]$. Theorem 5.3. The optimal solution of the Lagrangian has the following lower bound:\n$$\mathrm{OPT}(\lambda) \ge \frac{1}{2} \left\{ \mathrm{Var}(Y) - \lambda \cdot \mathrm{Var}(A) - \sqrt{(\mathrm{Var}(Y) + \lambda \cdot \mathrm{Var}(A))^2 - 4\lambda \langle y, \Sigma a \rangle^2} \right\}. \quad (14)$$\nEvidently, the key quantity in the lower bound (14) is the quadratic term ⟨y, Σa⟩, which effectively measures the dependence between Y and A under the feature covariance Σ.\nThe proofs of Thms. 
5.1, 5.2 and 5.3 rely on a finite-dimensional SDP relaxation, and construct an explicit optimal solution to this relaxation. We re-formulate the objective as a linear functional of V := Cov(E[ϕ(X) | Z], E[ϕ(X) | Z]), which satisfies the semi-definite constraint $0 \preceq V \preceq \Sigma = \mathrm{Cov}(\varphi(X), \varphi(X))$. Therefore, the optimal value of the SDP is an upper/lower bound on the objective. Furthermore, we show that under certain regularity conditions, the SDP relaxation is exact. One particularly interesting setting where the regularity condition holds is when ϕ(X) follows a Gaussian distribution. More discussion of the tightness of our bounds is presented in Appendix C.5." }, { "heading": "6 CONCLUSION", "text": "We provide an information plane analysis to study the general and important problem of learning invariant representations in both classification and regression settings. In both cases, we analyze the inherent tradeoffs between accuracy and invariance by providing a geometric characterization of the feasible region on the information plane, in terms of its boundedness, convexity, and extremal vertices. Furthermore, in the regression setting, we also derive a tight lower bound for the Lagrangian form of accuracy and invariance. Given the wide applications of invariant representations in machine learning, we believe our theoretical results could contribute to a better understanding of the fundamental tradeoffs between accuracy and invariance under various settings, e.g., domain adaptation, algorithmic fairness, invariant visual representations, and privacy-preserving learning." }, { "heading": "A PROOFS FOR CLAIMS IN SECTION 3", "text": "In this section we give detailed arguments deriving the objective functions of Eq. (3) and (4), respectively, from the original minimax formulation in Eq. (1). First, let us consider the classification setting.\nClassification Given a fixed feature map Z = g(X), due to the symmetry between Y and A in Eq. (1), it suffices to consider the case of finding f that minimizes $\mathbb{E}_{\mathcal{D}}[\ell(f \circ g(X), Y)]$; an analogous result follows for the optimal f′ that minimizes $\mathbb{E}_{\mathcal{D}}[\ell(f' \circ g(X), A)]$. By definition of the cross-entropy loss, we have:\n$$\mathbb{E}_{\mathcal{D}}[\ell(f \circ g(X), Y)] = -\mathbb{E}_{\mathcal{D}}[\mathbb{I}(Y{=}0)\log(1 - f(g(X))) + \mathbb{I}(Y{=}1)\log(f(g(X)))] = -\mathbb{E}_{\mathcal{D}}[\mathbb{I}(Y{=}0)\log(1 - f(Z)) + \mathbb{I}(Y{=}1)\log(f(Z))] = -\mathbb{E}_Z \mathbb{E}_Y[\mathbb{I}(Y{=}0)\log(1 - f(Z)) + \mathbb{I}(Y{=}1)\log(f(Z)) \mid Z] = -\mathbb{E}_Z[\Pr(Y{=}0 \mid Z)\log(1 - f(Z)) + \Pr(Y{=}1 \mid Z)\log(f(Z))] = \mathbb{E}_Z[D_{\mathrm{KL}}(\Pr(Y \mid Z) \,\|\, f(Z))] + H(Y \mid Z) \ge H(Y \mid Z).$$\nIt is also clear from the above proof that the minimum value of the cross-entropy loss is achieved when f(Z) is a randomized classifier such that E[f(Z)] = Pr(Y = 1 | Z). This shows that\n$$\min_f \mathbb{E}_{\mathcal{D}}[\ell(f \circ g(X), Y)] = H(Y \mid Z), \quad \text{and} \quad \min_{f'} \mathbb{E}_{\mathcal{D}}[\ell(f' \circ g(X), A)] = H(A \mid Z).$$\nTo see the second part of Eq. (3), simply use the identities H(Y | Z) = H(Y) − I(Y;Z) and H(A | Z) = H(A) − I(A;Z), together with the fact that both H(Y) and H(A) are constants that depend only on the joint distribution D.\nRegression Again, given a fixed feature map Z = g(X), because of the symmetry between Y and A let us focus on finding f that minimizes $\mathbb{E}_{\mathcal{D}}[\ell(f \circ g(X), Y)]$. In this case, since ℓ(·, ·) is the mean squared error, it follows that\n$$\mathbb{E}_{\mathcal{D}}[\ell(f \circ g(X), Y)] = \mathbb{E}_{\mathcal{D}}[(f \circ g(X) - Y)^2] = \mathbb{E}_{\mathcal{D}}[(f(Z) - Y)^2] = \mathbb{E}_Z[(f(Z) - \mathbb{E}[Y \mid Z])^2] + \mathbb{E}_Z[\mathbb{E}_Y[(Y - \mathbb{E}[Y \mid Z])^2]] \ge \mathbb{E}_Z[\mathbb{E}_Y[(Y - \mathbb{E}[Y \mid Z])^2]] = \mathbb{E}[\mathrm{Var}(Y \mid Z)],$$\nwhere the third equality is due to the Pythagorean theorem. 
Furthermore, it is clear that the optimal mean-squared error is obtained by the conditional mean f(Z) = E[Y | Z]. This shows that min f ED[`(f ◦ g(X), Y )] = E[Var(Y | Z)], and min f ′ ED[`(f ′ ◦ g(X), A)] = E[Var(A | Z)].\nFor the second part, use the law of total variance Var(Y ) = E[Var(Y | Z)] + Var(E[Y | Z]) and Var(A) = E[Var(A | Z)] + Var(E[A | Z]). Realizing that both Var(Y ) and Var(A) are constants that only depend on the joint distribution D, we finish the proof." }, { "heading": "B MISSING PROOFS IN CLASSIFICATION (SECTION 4)", "text": "In what follows we first restate the propositions, lemmas and theorems in the main text, and then provide the corresponding proofs.\nB.1 CONVEXITY OF RCE Lemma 4.1. RCE is convex.\nProof. Let Zi = gi(X) for i ∈ {0, 1} with corresponding points (I(Y ;Zi), I(A;Zi)) ∈ RCE. Then we only need to prove that for ∀u ∈ [0, 1], (uI(Y ;Z0) + (1− u)I(Y ;Z1), uI(A;Z0) + (1−\nu)I(A;Z1)) ∈ RCE as well. For any u ∈ [0, 1], let S ∼ U(0, 1), the uniform distribution over (0, 1), such that S ⊥ (Y,A). Consider the following randomized transformation Z:\nZ = { Z0 If S ≤ u, Z1 otherwise.\n(15)\nTo compute I(Y ;Z), we have:\nI(Y ;Z) = E[I(Y ;Z | S)] = Pr(S ≤ u) · I(Y ;Z0) + Pr(S > u) · I(Y ;Z1) = uI(Y ;Z0) + (1− u)I(Y ;Z1).\nSimilar argument could be used to show that I(A;Z) = uI(A;Z0) + (1 − u)I(A;Z1). So by construction we now find a randomized transformation Z = g(X) such that (uI(Y ;Z0) + (1 − u)I(Y ;Z1), uI(A;Z0) + (1− u)I(A;Z1)) ∈ RCE.\nB.2 PROOF OF THEOREM 4.1\nWe proceed to provide the proof that the optimal value of (5) is the one given by Theorem 4.1. Theorem 4.1. The optimal solution of optimization problem (5) is\nmax Z,I(A;Z)=0\nI(Y ;Z) = H(Y )−∆Y |A ·H(A). (7)\nProof. For a joint distribution D over (X,A, Y ) and a function g : X → Z , in what follows we use g]D to denote the induced distribution of D under g over (Z,A, Y ). We first make the following claim: without loss of generality, for any joint distribution g]D over (Z,A, Y ), we could find (Z0, A\n′, Y ′) ∼ g]D and a deterministic function f , such that Y ′ = f(A′, Z0, S) where S ∼ U(0, 1), S ⊥ (A′, Z0) and I(Y ′;Z ′) ≥ I(Y ;Z) with Z ′ = (Z0, S). To see this, consider the following construction: A′, Z0 ∼ D(A,Z), S ∼ U(0, 1). Let (a, z, s) be the sample of the above sampling process and construct\nY ′ = { 1 If s ≤ E[Y | A = a, Z = z], 0 Otherwise.\nNow it is easy to verify that (Z0, A′, Y ′) ∼ g]D and Pr(Y ′ = 1 | A′ = a, Z0 = z) = E[Y | A = a, Z = z]. To see the last claim, we have the following inequality hold:\nI(Y ′;Z ′) = I(Y ′;Z0, S) ≥ I(Y ;Z0) = I(Y ;Z). Now to upper bound I(Y ;Z), we have\nI(Y ;Z) = H(Y )−H(Y | Z), hence it suffices to lower bound H(Y | Z). To this end, define\nD0 := {z, ε ∈ (0, 1) | f(0, z, ε) = 1}, D1 := {z, ε ∈ (0, 1) | f(1, z, ε) = 1}.\nThen,\nPr((z, ε) ∈ D0) = Pr(f(0, z, ε) = 1) = Pr(f(0, z, ε) = 1 | A = 0) = Pr(f(A, z, ε) = 1 | A = 0) = Pr(Y = 1 | A = 0).\nAnalogously, the following equation also holds:\nPr((z, ε) ∈ D1) = Pr(Y = 1 | A = 1).\nWithout loss of generality, assume that Pr(Y = 1 | A = 1) ≥ Pr(Y = 1 | A = 0), then Pr((z, ε) ∈ D1\\D0) ≥ Pr((z, ε) ∈ D1)− Pr((z, ε) ∈ D0) = |Pr(Y = 1 | A = 1)− Pr(Y = 1 | A = 0)|. 
But on the other hand, we know that if (z, ε) ∈ D1\\D0, then f(1, z, ε) = 1 and f(0, z, ε) = 0, and this implies that Y = A, hence: H(Y | Z) ≥ H(Y | Z, S)\n= Pr((z, ε) ∈ D1\\D0) ·H(Y | (z, ε) ∈ D1\\D0) + Pr((z, ε) 6∈ D1\\D0) ·H(Y | (z, ε) 6∈ D1\\D0) ≥ Pr((z, ε) ∈ D1\\D0) ·H(Y | (z, ε) ∈ D1\\D0) = Pr((z, ε) ∈ D1\\D0) ·H(A) ≥ |Pr(Y = 1 | A = 1)− Pr(Y = 1 | A = 0)| ·H(A),\nwhich implies that I(Y ;Z) ≤ H(Y )− |Pr(Y = 1 | A = 1)−Pr(Y = 1 | A = 0)| ·H(A) = H(Y )−∆Y |A ·H(A). To see that the upper bound could be attained, let us consider the following construction. Denote α := Pr(Y = 1 | A = 0) and β := Pr(Y = 1 | A = 1). Construct a uniformly random Z ∼ U(0, 1) and then sample A independently from Z according to the corresponding marginal distribution of A in D. Next, define:\nY = { 1 if Z ≤ α ∧A = 0 or Z ≤ β ∧A = 1, 0 otherwise.\nIt is easy to see that Z ⊥ A by construction. Furthermore, by the construction of Y , we also have A, Y ∼ D(A, Y ) hold. Since I(Y ;Z) = H(Y ) − H(Y | Z), we only need to verify H(Y | Z) = ∆Y |A ·H(A) in this case. Assume without loss of generality α ≤ β, there are three different cases depending on the value of Z:\n• Z ≤ α: In this case no matter what the value of A, we always have Y = 1. • Z > β: In this case no matter what the value of A, we always have Y = 0. • α < Z ≤ β: In this case Y = A, hence the conditional distribution of Y given Z ∈ (α, β]\nis equal to the conditional distribution of A given Z ∈ (α, β]. But by our construction, A is independent of Z, which means that in this case the conditional distribution of A given Z ∈ (α, β] is just the distribution of A.\nCombine all the above three cases, we have: H(Y | Z) = Pr(Z ≤ α) ·H(Y | Z ≤ α) + Pr(Z > β) ·H(Y | Z > β) + Pr(α < Z ≤ β) ·H(Y | α < Z ≤ β)\n= 0 + 0 + |β − α| ·H(A | α < Z ≤ β) = |Pr(Y = 1 | A = 1)− Pr(Y = 1 | A = 0)| ·H(A) = ∆Y |A ·H(A),\nwhich completes the proof.\nB.3 PROOF OF THEOREM 4.2\nTheorem 4.2. The optimal solution of optimization problem (8) is min\nZ,I(Y ;Z)=H(Y ) I(A;Z) = I(A;Y ). (9)\nProof. First, realize that H(Z) ≥ I(Y ;Z) = H(Y ) by our constraint. Furthermore, we also know that 0 ≤ H(Y | Z,A) ≤ H(Y | Z) = H(Y )− I(Y ;Z) = 0, which means H(Y | Z,A) = 0. With these two observations, we have:\nI(A;Z) = H(Z)−H(Z | A) ≥ H(Y )−H(Z | A) ≥ H(Y )−H(Y,Z | A) = H(Y )−H(Y | A)−H(Y | Z,A) = H(Y )−H(Y | A) = I(A;Y ).\nTo attain the equality, simply set Z = f∗Y (X) = Y . Specifically, this implies that 1-bit is sufficient to encode all the information for the optimal solution, which completes the proof." }, { "heading": "C MISSING PROOFS IN REGRESSION (SECTION 5)", "text": "C.1 CONVEXITY OF RLS Analogous to the classification setting, here we first show that the feasible regionRLS is convex: Lemma 5.1. RLS is convex.\nProof. Let Zi = gi(X) for i ∈ {0, 1} with corresponding points (VarE[Y | Zi],VarE[A | Zi]) ∈ RLS. Then it suffices if we could show for ∀u ∈ [0, 1], (uVarE[Y | Z0] + (1 − u) VarE[Y | Z1], uVarE[A | Z0]) + (1− u) VarE[A | Z1]) ∈ RLS as well. We give a constructive proof. Due to the symmetry betweenA and Y , we will only prove the result for Y , and the same analysis could be directly applied to A as well. For any u ∈ [0, 1], let U ∼ U(0, 1), the uniform distribution over (0, 1), such that U ⊥ (Y,A). 
Consider the following randomized transformation Z:\nZ = { Z0 If U ≤ u, Z1 otherwise.\n(16)\nTo compute VarE[Y | Z], define K := E[Y | Z], then by the law of total variance, we have: VarE[Y | Z] = Var(K) = E[Var(K | U)] + VarE[K | U ].\nWe first compute VarE[K | U ]: VarE[K | U ] = VarE[E[Y | Z] | U ]\n= VarE[Y | U ] (The law of total expectation) = VarE[Y ] (Y ⊥ U ) = 0.\nOn the other hand, for E[Var(K | U)], we have: E[Var(K | U)] = Pr(U = 0) ·Var(K | U = 0) + Pr(U = 1) ·Var(K | U = 1)\n= u ·Var(K | U = 0) + (1− u) ·Var(K | U = 1) = u ·VarE[Y | Z0] + (1− u) ·VarE[Y | Z1].\nCombining both equations above yields:\nVarE[Y | Z] = u ·VarE[Y | Z0] + (1− u) ·VarE[Y | Z1]. Similar argument could be used to show that VarE[A | Z] = u · VarE[A | Z0] + (1 − u) · VarE[A | Z1]. So by construction we now find a randomized transformation Z = g(X) such that (u · VarE[Y | Z0] + (1− u) · VarE[Y | Z1], u · VarE[A | Z0] + (1− u) · VarE[A | Z1]) ∈ RLS, which completes the proof.\nC.2 PROOF OF THEOREM 5.1 AND THEOREM 5.2\nIn this section, we will prove Theorem 5.1 and Theorem 5.2. We will provide proofs to both theorems in a generalized noisy setting, i.e., we no longer assume the noiseless condition so that the corresponding theorems in the noiseless setting follow as a special case. To this end, we first re-define\nf∗Y (X) := E[Y | X] (17) f∗A(X) := E[A | X] (18)\nand f∗Y , f ∗ A ∈ H. We reuse the notations a, y to denote\nf∗Y (X) = E[Y |X] = 〈y, ϕ(X)〉 (19) f∗A(X) = E[A|X] = 〈a, ϕ(X)〉. (20)\nIt is easy to see that the noiseless setting is indeed a special case where Y = E[Y |X], A = E[A|X] almost surely.\nFor readers’ convenience, we restate the Theorem 5.1 below:\nTheorem 5.1. The optimal solution of optimization problem (10) is upper bounded by\nmax Z,VarE[A|Z]=0\nVarE[Y | Z] ≤ Var(Y )− (\n2〈y,Σa〉 〈a, a〉 − Var(A) · 〈a, y〉 〈a, a〉2\n) 〈a, y〉. (11)\nThe following theorem is the generalized version of Theorem 5.1 in noisy setting: Theorem C.1. The optimal solution of optimization problem (10) is upper bounded by\nmax Z,VarE[A|Z]=0\nVarE[Y | Z] ≤ Var(E[Y |X])− (\n2〈y,Σa〉 〈a, a〉 − Var(E[A|X]) · 〈a, y〉 〈a, a〉2\n) 〈a, y〉. (21)\nIt is easy to see Theorem 5.1 is an immediate corollary of this result: under the noiseless assumption, we have Var(E[Y |X]) = Var(Y ) and Var(E[A|X]) = Var(A).\nProof. Using the law of total expectation,\nE[Y | Z] = E [E[Y | X] | Z] = ż\nX E[Y | X,Z] · p(X | Z) dX.\nSince Z = g(X) is a function of X , we have Z ⊥ Y | X , so E[Y | X,Z] = E[Y | X] = f∗Y (X). Therefore,\nE[Y |Z] = ż\nX E[Y | X,Z] · p(X | Z) dX\n=\nż\nX f∗Y (X) · p(X | Z) dX\n= E[f∗Y (X) | Z]. Hence,\nVar(E[Y | Z]) = Var(E[f∗Y (X) | Z]). (22) Therefore,\nVarE[Y | Z] = Var(E[f∗Y (X) | Z]) = VarE[〈y, ϕ(X)〉 | Z] = Var〈y,E[ϕ(X) | Z]〉 (Linearity of Expectation) = 〈y,Cov(E[ϕ(X) | Z],E[ϕ(X) | Z])y〉.\nSimilarly, for A = 〈a, ϕ(X)〉, we have: VarE[A | Z] = 〈a,Cov(E[ϕ(X) | Z],E[ϕ(X) | Z])a〉.\nTo simplify the notation, define V := Cov(E[ϕ(X) | Z],E[ϕ(X) | Z]). Then again, by the law of total variance, it is easy to verify that 0 V Σ = Cov(ϕ(X), ϕ(X)). Hence the original maximization problem could be relaxed as follows:\nmax Z\n〈y, V y〉, subject to 0 V Σ, 〈a, V a〉 = 0.\nTo proceed, we first decompose y orthogonally w.r.t. a:\ny = y⊥a + y‖a,\nwhere y⊥a is the component of y that is perpendicular to a and y‖a is the parallel component of y to a. 
Using this orthogonal decomposition, we have ∀V :\n〈y, V y〉 = 〈(y⊥a + y‖a), V (y⊥a + y‖a)〉 = 〈y⊥a, V y⊥a〉 (V 1/2y‖a = 0) ≤ 〈y⊥a,Σy⊥a〉 (V Σ),\nwhere the equality above can be attained by choosing V so that the corresponding eigenvalues of V along the direction of y⊥a coincide with those of Σ. Note that this is also feasible since the constraint\nof eigenvalues being 0 only applies to the direction y‖a, which is orthogonal to y⊥a. To complete the proof, realize that the vector y⊥a could be constructed as follows:\ny⊥a = (I − a0aT0 )y, where a0 = a/‖a‖ is the unit vector of a. The last step is to simplify the above equation as:\n〈y⊥a,Σy⊥a, 〉 = 〈(I − a0aT0 )y,Σ(I − a0aT0 )y〉 = Var(E[Y | X])− (\n2〈y,Σa〉 〈a, a〉 − Var(E[A | X]) · 〈a, y〉 〈a, a〉2\n) 〈a, y〉,\nby using the fact that Var(E[Y | X]) = 〈y,Σy〉 and Var(E[A | X]) = 〈a,Σa〉. To show when the equality is attained, let V ∗ be the optimal solution of (??), which could be constructed by first eigendecomposing Σ and then set all the eigenvalues of Σ to 0 whose corresponding eigenvectors are not orthogonal to a. It is worth pointing out that V ∗ is positive semidefinite but not necessarily invertible. Nevertheless, we could still define the projection matrix of V ∗ that projects to the column space of V ∗ as follows:\nPV ∗ := V ∗(V ∗TV ∗)†V ∗T ,\nwhere Q† denotes the Moore-Penrose pseudoinverse of matrix Q. With PV ∗ , it is easy to verify that the optimal transformation is given by Z such that\nE[ϕ(X) | Z] = PV ∗ϕ(X). To see this, we have:\nCov(E[ϕ(X) | Z],E[ϕ(X) | Z]) = VarE[ϕ(X) | Z] = Var(PV ∗ϕ(X))\n= PV ∗ Var(ϕ(X))P T V ∗\n= PV ∗ΣP T V ∗ = V ∗,\ncompleting the proof.\nNext, we will prove Theorem 5.2, restated below: Theorem 5.2. The optimal solution of optimization problem (12) is lower bounded by\nmin Z,VarE[Y |Z]≥Var(Y )\nVarE[A | Z] ≥ Var(Y ) · 〈a, y〉 2\n〈y, y〉2 . (13)\nThe following theorem is the generalized version of Theorem 5.2 in noisy setting: Theorem C.2. The optimal solution of optimization problem (12) is\nmin Z,VarE[Y |Z]=Var(Y )\nVarE[A | Z] = Var(E[Y |X]) · 〈a, y〉 2\n〈y, y〉2 . (23)\nIt is easy to see Theorem 5.2 is an immediate corollary of this result: under the noiseless assumption, we have Var(E[Y |X]) = Var(Y ) and Var(E[A|X]) = Var(A).\nProof. Due to the symmetry between Y and A, here we only prove the first part of the theorem. As in the proof of Theorem 5.1, we have the following identities hold:\nVarE[A | Z] = 〈a,Cov(E[ϕ(X) | Z],E[ϕ(X) | Z])a〉, VarE[Y | Z] = 〈y,Cov(E[ϕ(X) | Z],E[ϕ(X) | Z])y〉.\nAgain, let V := Cov(E[ϕ(X) | Z],E[ϕ(X) | Z]) so that we can relax the optimization problem as follows:\nmin Z\n〈a, V a〉, subject to 0 V Σ, 〈y, V y〉 = Var(Y ) = 〈y,Σy〉.\nTo proceed, we first decompose a orthogonally w.r.t. y:\na = a⊥y + a‖y,\nwhere a⊥y is the component of a that is perpendicular to y and a‖y is the parallel component of a to y. Using this orthogonal decomposition, we have ∀V :\n〈a, V a〉 = 〈(a⊥y + a‖y), V (a⊥y + a‖y)〉 ≥ 〈a‖y, V a‖y〉, (V 0),\nwhere the equality could be attained by choosing V such that V 1/2a⊥y = 0. On the other hand, it is clear that a‖y = 〈a, y0〉 · y0, where y0 = y/‖y‖ is the unit vector of y. Plug a‖y = 〈a, y0〉 · y0 into 〈a‖y, V a‖y〉 with the fact that 〈y, V y〉 = Var(E[Y |X]) = 〈y,Σy〉, we get\n〈a‖y, V a‖y〉 = Var(E[Y |X]) · 〈a, y〉 2\n〈y, y〉2 .\nAgain, to attain the equality, we should first construct the optimal V ∗ matrix by eigendecomposing Σ. Specifically, this time we set all the eigenvalues of Σ whose corresponding eigenvectors are perpendicular to y to 0. 
Similar to what we argue in the proof of Theorem 5.1, V ∗ is positive semidefinite but not necessarily invertible. Nevertheless, we could still define the projection matrix of V ∗ that projects to the column space of V ∗ as follows:\nPV ∗ := V ∗(V ∗TV ∗)†V ∗T ,\nwhere Q† denotes the Moore-Penrose pseudoinverse of matrix Q. With PV ∗ , it is easy to verify that the optimal transformation is given by Z such that\nE[ϕ(X) | Z] = PV ∗ϕ(X). To see this, we have:\nCov(E[ϕ(X) | Z],E[ϕ(X) | Z]) = VarE[ϕ(X) | Z] = Var(PV ∗ϕ(X))\n= PV ∗ Var(ϕ(X))P T V ∗\n= PV ∗ΣP T V ∗ = V ∗,\ncompleting the proof.\nC.3 PROOF OF THEOREM 5.3\nTo prove Theorem 5.3, we first introduce the following decompositions of the loss functions:\nThe following lemma is a more refined version of the Data-Processing Inequality, which gives an exact characterization of the Bayes optimality gap for a given Z. Recall that the Bayes error is EX [Var[Y |X]]. Lemma C.1 (L2 Error Decomposition).\nEZ [Var[Y |Z]]− EX [Var[Y |X]] = EZ Var (E [Y |X] |Z) ≥ 0. (24) Similarly,\nEZ [Var[A|Z]]− EX [Var[A|X]] = EZ Var (E [A|X] |Z) ≥ 0. (25)\nProof. Since Z = g(X) is a function ofX , we have p(y|x) = p(y|x, z), or equivalently, (Y ⊥ Z)|X By law of total variance,\nVar(Y |Z) = EX [Var(Y |X,Z)|Z] + Var (E [Y |X,Z] |Z) = EX [Var(Y |X)|Z] + Var (E [Y |X] |Z)\nTaking expectation over Z,\nEZ Var(Y |Z) = EZEX [Var(Y |X)|Z] + EZ Var (E [Y |X] |Z) = EXEZ [Var(Y |X)|Z] + EZ Var (E [Y |X] |Z) = EX Var(Y |X) + EZ Var (E [Y |X] |Z) ,\nwhere the last equality is due to the law of total expectation.\nThe following lemma is a direct consequence of the law of total variance. Lemma C.2 (L2 Invariance Decomposition).\nVar(A)− EZ Var(A|Z) = Var(E[A|Z]) ≥ 0.\nWe will prove a generalized version of Theorem 5.3 without noiseless assumption, stated below: Theorem C.3. The optimal solution of the Lagrangian has the following lower bound:\nOPT(λ) ≥ 1 2\n{ λVar(E[A|X]) + Var(E[Y |X])− √ (λVar(E[A|X]) + Var(E[Y |X]))2 − 4λ〈a,Σy〉2 } + (E[Var(Y |X)]− λVar(A)).\nWhen the noiseless assumption holds, we have Var(E[A|X]) = Var(A), Var(E[Y |X]) = Var(Y ), and E[Var(Y |X)] = 0, hence the bound above simplifies to:\n1\n2\n{ Var(Y )− λVar(A)− √ (λVar(A) + Var(Y ))2 − 4λ〈a,Σy〉2 } .\nwhich is exactly Theorem 5.3.\nProof of Theorem 5.3. By Lemma C.1 and Lemma C.2, we can decompose the objective as:\nE[Var(Y |Z)]− λE[Var(A|Z)] = (E[Var(Y |Z)]− E[Var(Y |X)]) + λ(Var(A)− E[Var(A|Z)]) + (E[Var(Y |X)]− λVar(A)) = EZ Var (E [Y |X] |Z) + Var(E[A|Z]) + (E[Var(Y |X)]− λVar(A))\nSince E[Var(Y |X)]− λVar(A) does not depend onZ, we will focus on the first two terms: min\nZ=g(X) EZ Var (E [Y |X] |Z) + λVar(E[A|Z]). (26)\nRecall that for the squared loss,\nf∗Y (X) = E [Y |X] , f∗A(X) = E [A|X] . We will first simplify the objective in (26). We have\nEVar (E [Y |X] |Z) = EVar (f∗Y (X)|Z) , (27) and using the law of total expectation,\nE(A|Z) = ż E(A|X,Z)p(X|Z)dX.\nSince Z = g(X) is a function of X , we have Z ⊥ A|X , so E(A|X,Z) = E(A|X) = f∗A(X). 
Therefore,\nE[A|Z] = ż E[A|X,Z]p(X|Z)dX\n=\nż\nf∗A(X)p(X|Z)dX = E[f∗A(X)|Z]\nHence,\nVar(E(A|Z)) = Var(E[f∗A(X)|Z]) (28)\nNow we substitute (27), (28) into (26), which gives the following equivalent form of (26): min\nZ=g(X) {EVar (f∗Y (X)|Z) + λVar(E[f∗A(X)|Z])}\nIn this case, the objective (29) becomes: EVar (f∗Y (X)|Z) + λVar(E[f∗A(X)|Z]) (29)\n=EVar (〈y, ϕ(X)〉|Z) + λVar(E[〈a, ϕ(X)〉|Z]) (30) =〈y,ECov(ϕ(X), ϕ(X)|Z)y〉+ (31) λ〈a,Cov(E[ϕ(X)|Z],E[ϕ(X)|Z])a〉 (32)\nBy the law of total covariance, ECov(ϕ(X), ϕ(X)|Z) + Cov(E[ϕ(X)|Z],E[ϕ(X)|Z]) = Cov(ϕ(X), ϕ(X)) = Σ\nLet V = Cov(E[ϕ(X)|Z],E[ϕ(X)|Z]) , which satisfies Σ V 0. Then, finding the feature transform Z = g(X) that minimizes (32) is equivalent to:\nmin V =Cov(E[ϕ(X)|Z],E[ϕ(X)|Z])\n〈y, (Σ− V )y〉+ λ〈a, V a〉\nThe key technique of our lower bound is to relax the constraint V = Cov(E[ϕ(X)|Z],E[ϕ(X)|Z]) by the semi-definite constraint Σ V 0.\nmin V :Σ V 0\n〈y, (Σ− V )y〉+ λ〈a, V a〉\nThis is an SDP whose optimal solution lower bounds the objective (26). Moreover, we can show that there is a simplified form for the SDP optimal solution using eigenvalues and eigenvectors:\n〈y, (Σ− V )y〉+ λ〈a, V a〉 =〈y,Σy〉+ 〈V, λaaT − yyT 〉 =〈y,Σy〉+ 〈Σ−1/2V Σ−1/2,Σ1/2(λaaT − yyT )Σ1/2〉.\nNote that I Q := Σ−1/2V Σ−1/2 0, and R := Σ1/2(λaaT − yyT )Σ1/2 is a matrix with rank at most 2.\nWhen the matrix R is positive definite or negative definite, the minimum is achieved at Q = 0 or I . Otherwise, the only possibility is that R is a rank-2 matrix with one positive eigenvalue and one negative eigenvalue. By Von-Neumann’s trace inequality,\n〈Q,R〉 ≥ d∑\ni=1\nσi(R)σd−i+1(Q).\nSince σ1(R) > 0 = σ2(R) = ... = σd−1(R) = 0 > σd(R) and 0 ≤ σd(Q) ≤ 1, we have 〈Q,R〉 ≥ σd(R) = σd(Σ1/2(λaaT − yyT )Σ1/2)\nThe minimizer is Q = wwT , V = Σ1/2wwT Σ1/2\nwhere w is the unit eigenvector of R with eigenvalue σd(R). By Lemma C.3,\nσd(R) = 1\n2\n{ λ〈a,Σa〉 − 〈y,Σy〉 − √ (λ〈a,Σa〉+ 〈y,Σy〉)2 − 4λ〈a,Σy〉2 } Therefore, OPT(λ) = 〈y, (Σ− V )y〉+ λ〈a, V a〉+ (E[Var(Y |X)]− λVar(A))\n≥ 〈y,Σy〉+ σd(R) + (E[Var(Y |X)]− λVar(A))\n≥ 1 2\n{ λ〈a,Σa〉+ 〈y,Σy〉 − √ (λ〈a,Σa〉+ 〈y,Σy〉)2 − 4λ〈a,Σy〉2 } + (E[Var(Y |X)]− λVar(A))\n= 1\n2\n{ λVar(E[A|X]) + Var(E[Y |X])− √ (λVar(E[A|X]) + Var(E[Y |X]))2 − 4λ〈a,Σy〉2 } + (E[Var(Y |X)]− λVar(A))\nHence we have completed the proof.\nC.4 EXPLICIT FORMULA FOR EIGENVALUES\nThe following lemma is used the in the last step of the proof of Theorem 5.3 to simplify the expression involving σd(R) = σd(Σ1/2(λaaT − yyT )Σ1/2). Lemma C.3. Let R = Σ1/2(λaaT − yyT )Σ1/2, then the eigenvalues of R are\nσ1(R) = 1\n2\n{ λ〈a,Σa〉 − 〈y,Σy〉+ √ (λ〈a,Σa〉+ 〈y,Σy〉)2 − 4λ〈a,Σy〉2 } ,\nσd(R) = 1\n2\n{ λ〈a,Σa〉 − 〈y,Σy〉 − √ (λ〈a,Σa〉+ 〈y,Σy〉)2 − 4λ〈a,Σy〉2 } σ2(R) = · · · = σd−1(R) = 0\nProof. Since rank(R) ≤ rank(λaaT − yyT ) ≤ 2, R has at most two non-zero eigenvalues σ1(R) and σd(R). 
Notice that\ntr(R) = d∑ i=1 σi(R) = σ1(R) + σd(R),\ntr(R2) = d∑ i=1 σ2i (R) = σ 2 1(R) + σ 2 d(R)\nWe can write tr(R) and tr(R2) explicitly:\ntr(R) = λ tr(Σ1/2aaT Σ1/2)− tr(Σ1/2yyT Σ1/2) = λ〈a,Σa〉 − 〈y,Σy〉\ntr(R2) = tr(Σ1/2(λaaT − yyT )Σ(λaaT − yyT )Σ1/2) = tr(Σ1/2 ( λ2(aT Σa)aaT − λaT Σy(aΣyT )− λyT Σa(yΣaT ) + (aT Σa)aaT ) Σ1/2)\n= λ2(aT Σa)2 − 2λ(aT Σy)2 + (yT Σy)2 Therefore,\nσ1(R) + σd(R) = λ〈a,Σa〉 − 〈y,Σy〉\nσ1(R)σd(R) = 1\n2\n( (σ1(R) + σd(R)) 2 − (σ21(R) + σ2d(R)) )\n= λ〈a,Σy〉2 − λ〈a,Σa〉〈y,Σy〉 Thus σ1(R) and σd(R) are the roots of the quadratic equation:\nx2 − (λ〈a,Σa〉 − 〈y,Σy〉)x+ λ〈a,Σy〉2 − λ〈a,Σa〉〈y,Σy〉 = 0 We complete the proof by solving this quadratic equation.\nC.5 ACHIEVABILITY OF LOWER BOUND\nIn the proof of Theorem 5.3, we showed a lower bound on the tradeoff via an SDP relaxation. Therefore, the lower bound is achievable whenever the SDP relaxation is tight. We state this as a regularity condition on (X,ϕ). Definition C.1. (X,ϕ) is called regular, if for any positive semidefinite matrix M : Σ M 0, there exists Z = g(X), such that\nCov(E[ϕ(X)|Z],E[ϕ(X)|Z]) = M. Theorem C.4. When (X,ϕ) is regular, the lower bound in Theorem 5.3 is achievable.\nProof. From the proof of Theorem 5.3, we can see that if there exists Z = g(X), such that\nCov(E[ϕ(X)|Z],E[ϕ(X)|Z]) = Σ1/2wwT Σ1/2, where w is the unit eigenvector of R with eigenvalue σd(R), then the equality is achievable. It is easy to see that\nΣ Σ1/2wwT Σ1/2 0. Therefore, choosing M = Σ1/2wwT Σ1/2 in the definition of regularity guarantees the existence of Z. Hence we have completed the proof.\nA sufficient condition on the regularity of (X,ϕ) is the Gaussianity of ϕ(X), in which case choosing g(X) as a linear transform is sufficient: Theorem C.5. (X,ϕ) is regular if ϕ(X) follows Gaussian distribution.\nProof. Note that when ϕ(X) is Gaussian, (ϕ(X), Lϕ(X)) is jointly Gaussian for any L ∈ Rk×d. Let Z = Lϕ(X), then the conditional distribution ϕ(X)|Z is Gaussian, with mean and covariance\nE[ϕ(X)|Z] = ΣLT (LΣLT )−1Z,Cov[ϕ(X), ϕ(X)|Z] = ΣLT (LΣLT )−1LΣ.\nHence,\nECov[ϕ(X), ϕ(X)|Z] = ΣLT (LΣLT )−1LΣ. We will prove that for any Σ M 0, there exists a linear transform L, such that M = ΣLT (LΣLT )−1LΣ.\nConsider the eigenvalue decomposition of Σ−1/2MΣ−1/2 = UTDU , k = rank(M), U ∈ Rk×d, D ∈ Rk×k, D is invertible. Then, let L = D−1/2UΣ−1/2, we have\nLΣLT = D−1,\nLΣ = D−1/2UΣ1/2,\nΣLT (LΣLT )−1LΣ = M.\nTherefore we have completed the proof.\nWe conjecture that this regularity condition holds for more general distributions beyond Gaussian." } ]
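A numerical sanity check of the closed-form eigenvalues in Lemma C.3 (and hence of the spectral lower bound in Theorem 5.3) is straightforward; the covariance, the vectors a and y, and λ below are random stand-ins, and any square-root factorization of Σ gives the same eigenvalues.

```python
# Check Lemma C.3: the nonzero eigenvalues of R = Sigma^{1/2}(lam*a a^T - y y^T)Sigma^{1/2}
# match the closed form derived from tr(R) and tr(R^2).
import numpy as np

rng = np.random.default_rng(1)
d, lam = 6, 0.5
B = rng.standard_normal((d, d))
Sigma = B @ B.T + np.eye(d)                 # generic PSD covariance
a, y = rng.standard_normal(d), rng.standard_normal(d)

S = np.linalg.cholesky(Sigma)               # Sigma = S S^T; S^T M S shares R's spectrum
R = S.T @ (lam * np.outer(a, a) - np.outer(y, y)) @ S

aSa, ySy, aSy = a @ Sigma @ a, y @ Sigma @ y, a @ Sigma @ y
disc = np.sqrt((lam * aSa + ySy) ** 2 - 4 * lam * aSy ** 2)
sigma_1 = 0.5 * (lam * aSa - ySy + disc)    # largest eigenvalue per Lemma C.3
sigma_d = 0.5 * (lam * aSa - ySy - disc)    # smallest eigenvalue per Lemma C.3

eig = np.linalg.eigvalsh((R + R.T) / 2)     # symmetrize for numerical hygiene
print(eig.max(), sigma_1)                   # agree up to numerical error
print(eig.min(), sigma_d)
```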
2,020
null
SP:4d39ce0230993594b0fddb3e2655f6f2cfdd308a
[ "The paper suggests a method for training binary neural networks. The proposed method is to partially train with full precision and then continue with binarized training using the straight-through estimator. The method is very simple and there is very limited technical contribution, so in order to be worthy of publication it needs to be supported with compelling experimental results. Unfortunately this is not the case." ]
Binarized neural networks, networks with weights and activations constrained to lie in a 2-element set, allow for more time- and resource-efficient inference than standard floating-point networks. However, binarized neural networks typically take more training to plateau in accuracy than their floating-point counterparts, in terms of both iteration count and wall clock time. We demonstrate a technique, partial pre-training, that allows for faster from-scratch training of binarized neural networks by first training the network as a standard floating-point network for a short amount of time, then converting the network to a binarized neural network and continuing to train from there. Without tuning any hyperparameters across four networks on three different datasets, partial pre-training is able to train binarized neural networks between 1.26× and 1.61× faster than when training a binarized network from scratch using standard low-precision training.
[]
[ { "authors": [ "Milad Alizadeh", "Javier Fernández-Marqués", "Nicholas D. Lane", "Yarin Gal" ], "title": "An empirical study of binary neural networks", "venue": "optimisation. In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ron Banner", "Yury Nahshan", "Daniel Soudry" ], "title": "Post training 4-bit quantization of convolutional networks for rapid-deployment", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Joseph Bethge", "Haojin Yang", "Marvin Bornstein", "Christoph Meinel" ], "title": "Back to simplicity: How to train accurate bnns from scratch", "venue": null, "year": 2019 }, { "authors": [ "Jungwook Choi", "Zhuo Wang", "Swagath Venkataramani", "Pierce I-Jen Chuang", "Vijayalakshmi Srinivasan", "Kailash Gopalakrishnan" ], "title": "Pact: Parameterized clipping activation for quantized neural networks, 2018", "venue": null, "year": 2018 }, { "authors": [ "Christopher De Sa", "Megan Leszczynski", "Jian Zhang", "Alana Marzoev", "Christopher R. Aberger", "Kunle Olukotun", "Christopher Ré" ], "title": "High-accuracy low-precision training, 2018", "venue": null, "year": 2018 }, { "authors": [ "Joshua Fromm", "Meghan Cowan", "Matthai Philipose", "Luis Ceze", "Shwetak Patel" ], "title": "Riptide: Fast end-to-end binarized neural networks", "venue": "In Proceedings of Machine Learning and Systems", "year": 2020 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour, 2017", "venue": null, "year": 2017 }, { "authors": [ "Suyog Gupta", "Ankur Agrawal", "Kailash Gopalakrishnan", "Pritish Narayanan" ], "title": "Deep learning with limited numerical precision", "venue": "In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37,", "year": 2015 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "F. Maxwell Harper", "Joseph A. Konstan" ], "title": "The movielens datasets: History and context", "venue": "ACM Trans. Interact. Intell. 
Syst.,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Xiangnan He", "Lizi Liao", "Hanwang Zhang", "Liqiang Nie", "Xia Hu", "Tat-Seng Chua" ], "title": "Neural collaborative filtering", "venue": "In Proceedings of the 26th International Conference on World Wide Web, WWW ’17,", "year": 2017 }, { "authors": [ "Itay Hubara", "Matthieu Courbariaux", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Binarized neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Wu", "Lingjie Xu", "Cliff Young", "Matei Zaharia" ], "title": "Mlperf training benchmark", "venue": "In Conference on Machine Learning and Systems,", "year": 2020 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Kopf", "Edward Yang", "Zachary DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Haotong Qin", "Ruihao Gong", "Xianglong Liu", "Xiao Bai", "Jingkuan Song", "Nicu Sebe" ], "title": "Binary neural networks: A survey", "venue": "Pattern Recognition,", "year": 2020 }, { "authors": [ "Mohammad Rastegari", "Vicente Ordonez", "Joseph Redmon", "Ali Farhadi" ], "title": "Xnor-net: Imagenet classification using binary convolutional neural networks", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Alex Renda", "Jonathan Frankle", "Michael Carbin" ], "title": "Comparing rewinding and fine-tuning in neural network pruning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": null, "year": 2014 }, { "authors": [ "Bram-Ernst Verhoef", "Nathan Laubeuf", "Stefan Cosemans", "Peter Debacker", "Ioannis Papistas", "Arindam Mallik", "Diederik Verkest" ], "title": "Fq-conv: Fully quantized convolution for efficient and accurate inference, 2019", "venue": null, "year": 2019 }, { "authors": [ "Tianyi Zhang", "Zhiqiu Lin", "Guandao Yang", "Christopher De Sa" ], "title": "Qpytorch: A low-precision arithmetic simulation framework, 2019", "venue": null, "year": 2019 }, { "authors": [ "Aojun Zhou", "Anbang Yao", "Yiwen Guo", "Lin Xu", "Yurong Chen" ], "title": "Incremental network quantization: Towards lossless cnns with low-precision weights", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Shuchang Zhou", "Yuxin Wu", "Zekun Ni", "Xinyu Zhou", "He Wen", "Yuheng Zou" ], "title": "Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth 
gradients", "venue": "CoRR, abs/1606.06160,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Quantizing neural networks (Gupta et al., 2015), constraining weights and activations to take on values within some small fixed set, is a popular set of techniques for reducing the storage (Han et al., 2016) or compute (Fromm et al., 2020) requirements of deep neural networks. Weights and activations can often be quantized down to as few as 8 bits with no loss in accuracy compared to a full-precision model. Further quantization often comes at the expense of accuracy: it is possible to binarize neural networks (Hubara et al., 2016; Rastegari et al., 2016), constraining weights and activations to take on values within a set of two elements (often {−1, 1}), but such binarization often lowers the accuracy of the resultant network, necessitating a tradeoff between desired compression and accuracy.\nIn the literature, there are two primary techniques for obtaining a quantized neural network: quantizing a pre-trained full-precision network (Banner et al., 2019; Han et al., 2016), and training a quantized network from scratch (Hubara et al., 2016; Gupta et al., 2015).\nFull-precision training. Quantizing a full-precision network requires few or even no additional training epochs on top of training that full-precision network. Typical procedures for quantizing a full-precision network range from data-blind procedures like selecting quantization bins to minimize distance from the original weights (Banner et al., 2019), to data-intensive procedures such as retraining the network to be more amenable to quantization (Han et al., 2016). However without significant additional training time, quantizing a pre-trained network often does not reach the highest accuracy possible for the quantized network architecture (Alizadeh et al., 2019). Further, achieving high accuracy with heavy quantization, such as binarization, often requires changing the network architecture, for instance by adding skip connections (Bethge et al., 2019); such architectural changes mean that the weights of a pre-trained full-precision network may not transfer to the new architecture.\nLow-precision training. Alternatively, training a quantized network from scratch allows for achieving high accuracy regardless of the availability of pre-trained full-precision weights (Alizadeh et al., 2019). Typical procedures for training a quantized network from scratch involve tracking and optimizing latent weights, weights which are quantized during the forward pass but treated as fullprecision during the backward pass (Hubara et al., 2016). However, training a quantized network from scratch can be costly. Quantized networks typically require more training iterations to plateau in accuracy (Hubara et al., 2016, Figure 1; Bethge et al., 2019, Figure 2). Further, since quantized networks are often trained by simulating the quantized operations in floating-point (Zhang et al., 2019), low-precision training can be even more computationally expensive than the full-precision equivalent.\nResearch question. In this paper, we explore the question:\nCan we accelerate training a binarized neural network from scratch to a given target accuracy?\nConcretely, we assume that a network architecture and standard training schedule are provided, but that pre-trained full-precision networks are not available. We also specifically focus on achieving accuracy in the early phase of training, exposing the tradeoff between training cost and accuracy.\nPartial pre-training. 
To answer the above research question, we evaluate a technique, partial pre-training, that allows for faster training of binarized neural networks by first training the network as a standard floating point network with standard full-precision training for a short amount of time, then converting the network to a binarized neural network and continuing to train from there with standard low-precision training for the remainder of the budgeted training time. We specifically evaluate partial pre-training’s speedup over standard low-precision training, when training a binarized neural network from scratch. We find that partial pre-training can train VGG, ResNet, and Neural Collaborative Filtering networks on CIFAR-10, ImageNet, and MovieLens-20m between 1.26× and 1.61× faster than standard low-precision training.\nContributions.\n• We present partial pre-training, which can train binarized neural networks from scratch between 1.26× and 1.61× faster than standard low-precision training. • We find that partial pre-training both requires fewer iterations to train to a given accuracy,\nand also that partial pre-training takes on average less time per iteration than standard low-precision training. • We analyze the sensitivity of partial pre-training to the choice of split between full-precision\nand low-precision training finding that an even split, though not always optimal, nearly matches the highest accuracy achievable by any other choice of split.\nAll together, we find that partial pre-training is a simple and effective approach for accelerating binarized neural network training. Partial pre-training is a step towards the goal of binarized neural network training procedures that can match the efficiency gains of binarized neural network inference." }, { "heading": "2 BACKGROUND", "text": "Binarized neural networks trade off accuracy for inference efficiency. However, binarized neural networks often take longer to train than the full-precision versions of the same network architecture, both in terms of training iterations until convergence and wall-clock time per iteration.\nTraining iterations. Binarized neural networks tend to take more iterations to train than the fullprecision versions of the same network architecture. For instance, Hubara et al. (2016, Figure 1) show a binarized neural network with a custom architecture requiring 4× as many training iterations to plateau in accuracy as a full-precision baseline on CIFAR-10. Bethge et al. (2019, Figure 2) similarly show a binarized ResNet-18 taking 2× as many training iterations to plateau in accuracy as a full-precision baseline on ImageNet.\nWall-clock time. Beyond requiring more iterations to train, binarized neural networks tend to take more wall-clock time to complete each iteration of training than full-precision networks do. This is because binarized neural networks are often trained by simulating low-precision operations with standard floating point operations (Zhang et al., 2019; Fromm et al., 2020), and require additional bookkeeping that full-precision networks do not require, such as performing the binarization of weights and activations (Hubara et al., 2016) and calculating scaling factors (Rastegari et al., 2016). While it is theoretically possible to accelerate binarized neural network training, we are not aware of any effort to exploit binarization during the training phase. It is also not clear what the maximum speedup possible from accelerating binarized neural network training would be: Fromm et al. 
(2020) show an acceleration of 6.33× for a VGG during inference; with the additional bookkeeping of binarized neural network training and potentially requiring higher precision gradients in the backward pass (Zhou et al., 2016), real training speedups would likely be lower." }, { "heading": "3 PARTIAL PRE-TRAINING", "text": "This paper focuses on accelerating the training time of binarized neural networks. Specifically, we aim to reduce both the training iterations and wall clock time of training binarized neural networks. We achieve this by leveraging the faster training time—both iteration-count and time-per-iteration—of full-precision networks. This section formalizes the design space of partial pre-training algorithms, describing the exact training methodology used in the experiments in Sections 5 and 6.\nPartial pre-training splits training into multiple phases. First, partial pre-training trains the network for a short amount of time at full precision (32 bits) using no quantization methods: weights, activations, and gradients are not quantized or clipped. Next, the binarization operators are added to the network (binarizing weights and activations). Partial pre-training then continues to train the binarized neural network using standard low-precision training.\nTo avoid requiring hyperparameter search for each different network, we prescribe a standard training schedule based on the original training schedule of the full-precision network (i.e., learning rate and associated decay, which is assumed to be provided). Each step of partial pre-training takes 50% of the allotted training time. Within the training time for each step of partial pre-training, the network is trained using the original learning rate schedule compressed to the allotted training time.\nThe partial pre-training algorithm is presented in Algorithm 1:\nAlgorithm 1 Partial pre-training. 1. Train the network at full precision, using the original learning rate schedule compressed\ndown to half of the desired training time. 2. Binarize the network, inserting quantization operators into the network. 3. Train the binarized network, using the original learning rate schedule compressed down to\nthe remaining half of the desired training time." }, { "heading": "4 EXPERIMENTAL METHODOLOGY", "text": "" }, { "heading": "4.1 DATASETS AND NETWORKS", "text": "We evaluate partial pre-training across a variety of datasets and networks. Specifically, we evaluate partial pre-training on a CIFAR-10 (Krizhevsky, 2009) ResNet-20 (He et al., 2016), a CIFAR-10 VGG-16 (Simonyan & Zisserman, 2014), an ImageNet (Russakovsky et al., 2015) ResNet-34, and a MovieLens 20M (Harper & Konstan, 2015) Neural Collaborative Filtering (NCF) (He et al., 2017) model. Following Bethge et al. (2019), our ResNet-20 and ResNet-34 are extended with additional skip connections past every quantized convolution, rather than past every block of quantized convolutions as is standard for ResNets; this architectural change facilitates training binarized ResNets from scratch. Our NCF model additionally has BatchNorm (Ioffe & Szegedy, 2015) layers after each binarized layer. The networks otherwise use standard architectures, data augmentation schemes, and training schedules drawn from the literature. Details about the networks and their respective training regimes are presented in Table 1." }, { "heading": "4.2 TRAINING DETAILS", "text": "All networks were trained on AWS GPU instances. 
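Returning to Algorithm 1 for a moment, a minimal sketch (ours; the original 80,000-step schedule and the budget below are hypothetical values) of the compressed-schedule, even-split training loop:

```python
# Sketch of Algorithm 1: split the budget evenly, and compress the original
# learning rate schedule into each phase by rescaling the step index.

def compressed_lr(step, phase_steps, schedule, original_steps):
    # Evaluate the original schedule as if `phase_steps` covered the full run.
    return schedule(step * original_steps / phase_steps)

def original_schedule(step):
    # Hypothetical step schedule: decay 10x at 50% and 75% of an 80k-step run.
    if step < 40_000:
        return 0.1
    if step < 60_000:
        return 0.01
    return 0.001

budget = 20_000                     # total budgeted training iterations
half = budget // 2                  # 50% full-precision, 50% binarized
for step in range(half):            # phase 1: full-precision training
    lr = compressed_lr(step, half, original_schedule, 80_000)
    # ... standard 32-bit SGD update at learning rate `lr` ...
# phase 2: insert the binarization operators (Section 4.3), then continue:
for step in range(half):
    lr = compressed_lr(step, half, original_schedule, 80_000)
    # ... low-precision (straight-through) SGD update at learning rate `lr` ...
```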
The CIFAR-10 and MovieLens-20M networks were trained on p3.2xlarge instances, using an NVIDIA V100 GPU. The ImageNet networks were trained on p3.8xlarge instances, training with 4 data-parallel NVIDIA V100 GPUs. The vision networks were trained using custom implementations of each network in TensorFlow 2.2.0 (Abadi et al., 2016). The NCF was trained using the PyTorch 0.4 (Paszke et al., 2019) implementation included in the MLPerf 0.5 training benchmark suite (Mattson et al., 2020)." }, { "heading": "4.3 BINARIZATION", "text": "We evaluate partial pre-training using standard approaches to neural network binarization:
Inputs. We binarize input activations using PACT (Choi et al., 2018), a gradient-based method of determining activation scale which is used in place of the ReLU activation in binarized neural networks. PACT introduces one trainable parameter per layer, α, which controls the scale of the activations. The PACT activation on binarized networks has the following form:
PACT(x) = 0 for x ∈ (−∞, α/2], and PACT(x) = α for x ∈ (α/2, ∞);
∂PACT(x)/∂α = 0 for x ∈ (−∞, α), and 1 for x ∈ [α, ∞);
∂PACT(x)/∂x = 1 for x ∈ [0, α], and 0 otherwise.
We initialize α = 3 for each layer, and control its magnitude with an L2 regularization penalty with coefficient 0.0002.
Weights. We binarize weights using the sign of the weight and the straight-through estimator:
sign(x) = −1 for x ≤ 0, and 1 for x > 0;
∂sign(x)/∂x = 1 for x ∈ [−1, 1], and 0 otherwise.
Full precision layers. Following standard practice (Hubara et al., 2016; Simons & Lee, 2019; Qin et al., 2020), we do not binarize inputs or weights to the first, last, or batch normalization layers, nor do we binarize projection layers in ResNets (Bethge et al., 2019)." }, { "heading": "4.4 EXPERIMENTS AND RESULTS", "text": "For each baseline and configuration of partial pre-training, we run three independent trials, plotting the minimum, median, and maximum accuracy achieved across all three trials." }, { "heading": "4.5 SPEEDUPS", "text": "The reported speedups of partial pre-training over low-precision training are calculated by sampling 100 evenly-spaced accuracies, from 50% of the maximum accuracy achievable by the binarized network (as a lower bound on acceptable accuracy) to the accuracy at which the techniques converge (with the exception of the NCF, where low-precision training does not converge to the same accuracy as partial pre-training). Speedups are then calculated by determining the time to train each technique to that accuracy (linearly interpolating between nearby points if there is no trial at exactly that accuracy), calculating the speedup as (low-precision training time) / (partial pre-training training time), and calculating the harmonic mean of the speedup across all target accuracies.
[Footnote 1] The source implementation (from MLPerf) trains the NCF to an accuracy threshold of 0.635 HR@10, taking a mean duration of 388000 iterations (Mattson et al., 2020, Figure 3a). Adding BatchNorm slightly decreases the accuracy of the full-precision network, but is necessary to train the binarized network." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "This section presents the accuracy achieved by partial pre-training across training times on several different networks. Section 5.1 shows the accuracy achieved by partial pre-training when training for different numbers of training iterations, finding that iteration-for-iteration partial pre-training trains faster than standard low-precision neural network training.
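Referring back to the binarization operators of Section 4.3, a minimal NumPy sketch (ours, not the paper's TensorFlow implementation) with the straight-through gradients written out explicitly; grad_out denotes the upstream gradient:

```python
import numpy as np

# Illustrative sketch of the PACT and sign operators above, with their
# straight-through gradients.

def pact_forward(x, alpha):
    # Binarized PACT activation: 0 below alpha/2, alpha above it.
    return np.where(x > alpha / 2.0, alpha, 0.0)

def pact_backward(x, alpha, grad_out):
    grad_x = grad_out * ((x >= 0.0) & (x <= alpha))  # dPACT/dx = 1 on [0, alpha]
    grad_alpha = np.sum(grad_out * (x >= alpha))     # dPACT/dalpha = 1 on [alpha, inf)
    return grad_x, grad_alpha

def sign_forward(w):
    return np.where(w > 0.0, 1.0, -1.0)

def sign_backward(w, grad_out):
    # Straight-through estimator: identity gradient on [-1, 1], zero outside.
    return grad_out * (np.abs(w) <= 1.0)

x = np.array([-0.5, 1.0, 2.5])
print(pact_forward(x, alpha=3.0))  # [0. 0. 3.]
```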
Section 5.2 extends the analysis to compare wall-clock time, finding that the overhead of simulated low-precision training leads to partial pre-training training even faster than low-precision training." }, { "heading": "5.1 ACCURACY V.S. TRAINING ITERATION", "text": "We find that partial pre-training can accelerate binarized neural network training iteration-for-iteration: binarized neural networks trained with partial pre-training require fewer training iterations to reach any target accuracy than binarized neural networks trained with standard low-precision training. Among the four networks presented in this section, we find a mean speedup of 1.30× of partial pre-training over standard low-precision training when compared iteration-for-iteration.
Methodology. Each plot shows the accuracy attainable when training a neural network for the specified number of training iterations t. Full-precision is the accuracy attained from training a full-precision network from scratch for the specified number of iterations, and is presented as an upper bound on training speed. Partial pre-training is the accuracy of the partial pre-training procedure. Low-precision is the accuracy attained from training a binarized network from scratch using standard low-precision training for the specified number of iterations. As specified in Section 4.4, each point on the graph is independent, and shows the median and min/max accuracy from three trials.
Results. Figure 1 presents the iteration-for-iteration accuracy of partial pre-training compared to full-precision and low-precision training. We find that partial pre-training matches or exceeds the accuracy of low-precision training for any training time, across all networks and datasets, except when training the CIFAR-10 VGG-16 for fewer than 10000 iterations. This speedup is most prominent with relatively little training time, but tapers off and eventually disappears when the network is trained for enough iterations with standard low-precision training; more analysis of this behavior is provided in Section 6." }, { "heading": "5.2 ACCURACY V.S. WALL-CLOCK TIME", "text": "This section presents the accuracy attained by partial pre-training across different wall-clock training times. Table 2 first presents the per-training-iteration slowdown of simulated low-precision optimization, the standard practice in the literature. Figure 2 then presents the accuracy attained at different wall-clock times, showing how the per-iteration accuracy gains from Section 5.1 and the full-precision speedups from Table 2 compound to lead to faster network training from partial pre-training.
Methodology. The training speeds in Table 2 are calculated by timing the mean time to complete a training epoch across 5 epochs, ignoring the first 3 batches in each epoch. The speeds are presented in training iterations per second, with each iteration corresponding to processing one batch of data according to the batch sizes presented in Table 1. The data in Figure 2 are from the same training runs as Figure 1, but with wall-clock time rather than iteration count presented on the x axis. Wall-clock time is measured from just before the start of the first iteration of training until just after the last iteration of training, and is inclusive of any overhead from the full-precision to low-precision projection in partial pre-training.
Results. Figure 2 presents the end-to-end wall-clock time of partial pre-training and baselines.
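As a concrete reading of the speedup metric from Section 4.5, the following sketch (ours; the curves below are placeholders) interpolates each method's time-to-accuracy curve at evenly-spaced accuracy targets and reports the harmonic mean of the per-target speedups:

```python
import numpy as np

# Sketch of the Section 4.5 speedup metric.

def time_to_accuracy(targets, accs, times):
    # `accs` must be increasing; returns interpolated training time per target.
    return np.interp(targets, accs, times)

def harmonic_mean_speedup(base, ours, lo, hi, n=100):
    # base, ours: (accuracies, wall-clock times) tuples for each method.
    targets = np.linspace(lo, hi, n)
    t_base = time_to_accuracy(targets, *base)
    t_ours = time_to_accuracy(targets, *ours)
    speedups = t_base / t_ours
    return len(speedups) / np.sum(1.0 / speedups)  # harmonic mean

low_precision = (np.array([0.5, 0.7, 0.8]), np.array([100.0, 400.0, 900.0]))
partial = (np.array([0.5, 0.7, 0.8]), np.array([80.0, 300.0, 700.0]))
print(harmonic_mean_speedup(low_precision, partial, lo=0.5, hi=0.8))
```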
Due to the combination of more accuracy per iteration from partial pre-training (Figure 1) and the faster training time of the full-precision training phase (Table 2), we find that partial pre-training trains binarized networks faster than standard low-precision training inclusive of all overhead from the partial pre-training process. Specifically, in the range of data shown in Figure 2, we find that partial pre-training has a harmonic mean speedup over standard low-precision training of: 1.28× for the CIFAR-10 ResNet-20, 1.26× for the CIFAR-10 VGG-16, 1.36× for the ImageNet ResNet-34, and 1.61× for the MovieLens-20M NCF.

[Figure 3: Sensitivity of partial pre-training to different combinations of full-precision training and low-precision training times. (a) Test accuracy when training with varying numbers of full-precision (32-bit) and low-precision (1-bit) training iterations. (b) Test accuracy when training with an equivalent total budget but a varying full-precision/low-precision split; one line per total number of training iterations.]" }, { "heading": "6 ANALYSIS", "text": "Section 5 demonstrated that partial pre-training allows for faster training of binarized neural networks than standard low-precision training. This section analyzes the hyperparameter choice baked into the partial pre-training algorithm of splitting the training time evenly between full-precision and low-precision training, finding that an even split between full-precision and low-precision training allows for generally good performance across training times. Due to compute limitations, this section only analyzes the CIFAR-10 ResNet-20. We find that while this even split is not optimal in all circumstances, it is a generally applicable hyperparameter choice that leads to accuracy competitive with the best choice of training time split within the given training budget.
Methodology. Figure 3a shows the accuracy from training a CIFAR-10 ResNet-20 with a modification of partial pre-training that has a non-even split between full-precision and low-precision training. Each point is the median accuracy across 3 trials when first training for the number of iterations specified on the x-axis with the original learning rate schedule compressed to the number of training iterations, then projecting to low-precision and training for the number of iterations specified on the y-axis again with the original learning rate schedule compressed to the number of training iterations. Figure 3b is generated from the same data as Figure 3a, showing the accuracy along the top-left to bottom-right diagonals of Figure 3a. That is, Figure 3b shows partial pre-training's accuracy as the split between full-precision and low-precision training is varied for a given training budget, with each different line representing a different training budget.
Results. Figure 3 shows that the even split between full-precision and low-precision training baked into partial pre-training is a generally applicable hyperparameter choice that leads to accuracy competitive with the best choice of training time split within the given training budget.
When training for a short duration (leading to lower overall accuracy, shown in the lower lines in Figure 3b), spending more time on the full-precision training phase leads to higher accuracy than spending more time on the low-precision training phase. When training for a longer duration (shown in the higher lines), spending more time on the low-precision phase leads to higher accuracy than spending more time on the full-precision phase. Across all training budgets, splitting time evenly between full-precision and low-precision training leads to competitive accuracy with the best choice of hyperparameters." }, { "heading": "7 LIMITATIONS", "text": "While we find that partial pre-training accelerates binarized neural network training time across the networks and datasets experimented on in Section 5, there are dimensions of the hyperparameter space that we do not explore. We do not evaluate binarization techniques other than the commonly used PACT activation scaling (Choi et al., 2018) and static weight scaling. We also do not evaluate learning rate schedules other than compressing the original learning rate schedule from the first phase of training; it is plausible that other learning rate schedules better tuned to the technique or network could train even faster (Renda et al., 2020), but we do not evaluate these alternative schedules." }, { "heading": "8 RELATED WORK", "text": "Several other papers discuss ideas related to partial pre-training. However, this paper is the first to propose partial pre-training as a method for training binarized neural networks from scratch, to provide a concrete partial pre-training algorithm, and to systematically evaluate partial pre-training as a method of speeding up the training of binarized neural networks.
Faster binarized network training. Partial pre-training's speedup is due to the slow training speed of low-precision network training, both in terms of iteration count and wall-clock speed per iteration; partial pre-training would be an ineffective training method given a method that could train low-precision networks at an equivalent or faster rate than their full-precision counterparts. Here we survey techniques that propose speeding up low-precision network training, showing that partial pre-training is not supplanted by better low-precision training techniques. Zhou et al. (2016) propose a technique that uses low-bit quantization in the backward pass in addition to the forward pass, but neither measure the iteration-count slowdown induced by gradient quantization nor measure the wall-clock speedup of gradient quantization. De Sa et al. (2018) propose a technique to reduce quantization error between latent and quantized weights to improve asymptotic training speed; however, the proposed algorithm is only evaluated on 8-bit networks. Alizadeh et al. (2019) suggest that commonly applied weight and gradient clipping techniques constrain the learning rate that can be used during training, and propose disabling gradient clipping to allow for larger learning rates and therefore faster training; however, they do not disable the quantization during training (still incurring overhead from quantization operators in the training graph), they do not evaluate the wall-clock speedup of the approach, and further the approach requires bespoke tuning of the accelerated training schedule per-network.
Full-precision pre-training. Alizadeh et al. (2019) and Bethge et al.
(2019) both find that binarizing a fully trained full-precision network to low-precision then training the binarized network for a short amount of time allows for rapid re-training of the low-precision network. However, Alizadeh et al. and Bethge et al. dismiss the approach because the low-precision network that is fine-tuned for a short amount of time does not achieve the same accuracy as a low-precision network trained from scratch for a longer duration. Alizadeh et al. and Bethge et al. also only evaluate fine-tuning from fully trained full-precision networks, rather than the partially-trained full-precision networks presented in this paper, and do not evaluate the wall-clock speedups of such an approach.\nMulti-phase binarized network training. Zhou et al. (2017) propose an approach that incrementally quantizes neural networks by partitioning weights into disjoint groups based on the magnitude of the pre-trained weights, first binarizing weights with large magnitudes then binarizing weights with small magnitudes. Zhou et al. start this process from a fully-trained full-precision network, and do not consider observed training speedups, only accuracy gains. Verhoef et al. (2019) gradually reduce the number of bits used for weight representation over the course of training, with the goal of improving accuracy; Verhoef et al. do not evaluate the training speedup achieved by the approach, nor does their technique allow for taking advantage of faster full-precision training." }, { "heading": "9 CONCLUSION", "text": "Binarizing neural networks makes them more efficient for inference, but binarized neural networks trained from scratch typically take more iterations and more wall-clock time to plateau in accuracy than their full-precision counterparts. In this paper we demonstrate a technique, partial pre-training, that allows for faster training of binarized neural networks by exploiting fast full-precision training then fine-tuning at low-precision. Without introducing any new hyperparameters, we find that partial pre-training can train VGG, ResNet, and NCF networks on CIFAR-10, ImageNet, and MovieLens20m between 1.26× and 1.61× faster than standard 1-bit training. All together, we find that partial pre-training is a simple and effective approach for accelerating binarized neural network training. Partial pre-training is a step towards the goal of binarized neural network training procedures that can match the efficiency gains of binarized neural network inference." } ]
2,020
FAST BINARIZED NEURAL NETWORK TRAINING
SP:95bf4c07cd691ae29a95ccffa8883f9c92c3eb02
[ "This paper presents a self-supervised learning task of shuffling input patches and demanding the network to learn to unshuffle. A related prior work, Noorozi and Favaro (2016) uses a fixed set of permutations to do this task for a given number of patches, and the current paper argues to expand this idea for the full set of permutations. To this end, the paper encodes a permutation as a number tuple, with the goal of the network to learn to produce the correct tuple that has the numbers in order. As the numbers are discrete and thus non-differentiable, the paper suggests differentiable soft-variants using stochastic perturbations and regularizations (Fenchel-Young loss). Experiments are provided on audio and video tasks and show promise over the method of Noorozi and Favaro (2016)." ]
Self-supervised pre-training using so-called “pretext” tasks has recently shown impressive performance across a wide range of tasks. In this work, we advance self-supervised learning from permutations, which consists in shuffling parts of an input and training a model to reorder them, improving downstream performance in classification. To do so, we overcome the main challenges of integrating permutation inversions (a discontinuous operation) into an end-to-end training scheme, heretofore sidestepped by casting the reordering task as classification, fundamentally reducing the space of permutations that can be exploited. We make two main contributions. First, we use recent advances in differentiable ranking to integrate the permutation inversion flawlessly into a neural network, enabling us to use the full set of permutations, at no additional computing cost. Our experiments validate that learning from all possible permutations improves the quality of the pre-trained representations over using a limited, fixed set. Second, we successfully demonstrate that inverting permutations is a meaningful pretext task in a diverse range of modalities beyond images, one that does not require modality-specific design. In particular, we improve music understanding by reordering spectrogram patches in the time-frequency space, as well as video classification by reordering frames along the time axis. We furthermore analyze the influence of the patches that we use (vertical, horizontal, 2-dimensional), as well as the benefit of our approach in different data regimes.
[]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for largescale machine learning", "venue": "In 12th {USENIX} symposium on operating systems design and implementation ({OSDI}", "year": 2016 }, { "authors": [ "Dario Amodei", "Sundaram Ananthanarayanan", "Rishita Anubhai", "Jingliang Bai", "Eric Battenberg", "Carl Case", "Jared Casper", "Bryan Catanzaro", "Qiang Cheng", "Guoliang Chen" ], "title": "Deep speech 2: End-toend speech recognition in english and mandarin", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Quentin Berthet", "Mathieu Blondel", "Olivier Teboul", "Marco Cuturi", "Jean-Philippe Vert", "Francis Bach" ], "title": "Learning with differentiable perturbed optimizers", "venue": "arXiv preprint arXiv:2002.08676,", "year": 2020 }, { "authors": [ "Mathieu Blondel", "André FT Martins", "Vlad Niculae" ], "title": "Learning with fenchel-young losses", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Mathieu Blondel", "Olivier Teboul", "Quentin Berthet", "Josip Djolonga" ], "title": "Fast differentiable sorting and ranking", "venue": "arXiv preprint arXiv:2002.08871,", "year": 2020 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Alexis Conneau", "Alexei Baevski", "Ronan Collobert", "Abdelrahman Mohamed", "Michael Auli" ], "title": "Unsupervised cross-lingual representation learning for speech recognition", "venue": "arXiv preprint arXiv:2006.13979,", "year": 2020 }, { "authors": [ "Marco Cuturi", "Olivier Teboul", "Jean-Philippe Vert" ], "title": "Differentiable ranking and sorting using optimal transport", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Jesse Engel", "Cinjon Resnick", "Adam Roberts", "Sander Dieleman", "Douglas Eck", "Karen Simonyan", "Mohammad Norouzi" ], "title": "Neural audio synthesis of musical notes with wavenet autoencoders, 2017", "venue": null, "year": 2017 }, { "authors": [ "Beat Gfeller", "Christian Frank", "Dominik Roblek", "Matt Sharifi", "Marco Tagliasacchi", "Mihajlo Velimirović" ], "title": "Spice: Self-supervised pitch estimation", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2020 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "arXiv preprint arXiv:1803.07728,", "year": 2018 }, { "authors": [ "Raghav Goyal", "Samira Ebrahimi Kahou", "Vincent Michalski", "Joanna Materzynska", "Susanne Westphal", "Heuna Kim", "Valentin Haenel", "Ingo Fruend", "Peter Yianilos", "Moritz Mueller-Freitag" ], "title": "The ”something something” video database for learning and evaluating visual common sense", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Aditya Grover", "Eric Wang", "Aaron Zweig", "Stefano Ermon" ], "title": "Stochastic optimization of sorting networks via continuous relaxations", "venue": "arXiv preprint arXiv:1903.08850,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Gustav Larsson", "Michael Maire", "Gregory Shakhnarovich" ], "title": "Colorization as a proxy task for visual understanding", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Hsin-Ying Lee", "Jia-Bin Huang", "Maneesh Singh", "Ming-Hsuan Yang" ], "title": "Unsupervised representation learning by sorting sequences", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Tie-Yan Liu" ], "title": "Learning to rank for information retrieval", "venue": "Springer Science & Business Media,", "year": 2011 }, { "authors": [ "Gonzalo Mena", "David Belanger", "Scott Linderman", "Jasper Snoek" ], "title": "Learning latent permutations with gumbel-sinkhorn networks", "venue": "arXiv preprint arXiv:1802.08665,", "year": 2018 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Pranav Rajpurkar", "Jeremy Irvin", "Kaylie Zhu", "Brandon Yang", "Hershel Mehta", "Tony Duan", "Daisy Ding", "Aarti Bagul", "Curtis Langlotz", "Katie Shpanskaya" ], "title": "Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning", "venue": "arXiv preprint arXiv:1711.05225,", "year": 2017 }, { "authors": [ "Morgane Rivière", "Armand Joulin", "Pierre-Emmanuel Mazaré", "Emmanuel Dupoux" ], "title": "Unsupervised pretraining transfers well across languages", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Michal Rolínek", "Vít Musil", "Anselm Paulus", "Marin Vlastelica", "Claudio Michaelis", "Georg Martius" ], "title": "Optimizing rank-based metrics with blackbox differentiation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Rodrigo Santa Cruz", "Basura Fernando", "Anoop Cherian", "Stephen Gould" ], "title": "Visual permutation learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Marin Vlastelica", "Anselm Paulus", "Vít Musil", "Georg Martius", "Michal Rolínek" ], "title": "Differentiation of blackbox combinatorial solvers", "venue": "arXiv preprint arXiv:1912.02175,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Supervised learning has achieved important successes on large annotated datasets (Deng et al., 2009; Amodei et al., 2016). However, most available data, whether images, audio, or videos are unlabelled. For this reason, pre-training representations in an unsupervised way, with subsequent fine-tuning on labelled data, has become the standard to extend the performance of deep architectures to applications where annotations are scarce, such as understanding medical images (Rajpurkar et al., 2017), recognizing speech from under-resourced languages (Rivière et al., 2020; Conneau et al., 2020), or solving specific language inference tasks (Devlin et al., 2018). Among unsupervised training schemes, self-supervised learning focuses on designing a proxy training objective, that requires no annotation, such that the representations incidentally learned will generalize well to the task of interest, limiting the amount of labeled data needed for fine-tuning. Such “pretext” tasks, a term coined by Doersch et al. (2015), include learning to colorize an artificially gray-scaled image (Larsson et al., 2017), inpainting removed patches (Pathak et al., 2016) or recognizing with which angle an original image was rotated (Gidaris et al., 2018). Other approaches for self-supervision include classification to original images after data augmentation (Chen et al., 2020) and clustering (Caron et al., 2018).\nIn this work, we consider the pretext task of reordering patches of an input, first proposed for images by Noroozi & Favaro (2016), the analogue of solving a jigsaw puzzle. In this setting, we first split an input into patches and shuffle them by applying a random permutation. We train a neural network to predict which permutation was applied, taking the shuffled patches as inputs. We then use the inner representations learned by the neural network as input features to a low-capacity supervised classifier (see Figures 1 and 2 for illustration). We believe that permutations provide a promising avenue for self-supervised learning, as they are conceptually general enough to be applied across a large range of modalities, unlike colorization (Larsson et al., 2017) or rotations (Gidaris et al., 2018) that are specific to images. The idea of using permutations was also explored in Santa Cruz et al.\n(2018) where they use a bi level optimization scheme which leverages sinkhorn iterations to learn visual reconstructions. Their method resorts to approximating the permuation matrix with such continuous methods. Our method relies on no such approximations and can efficiently represent all possible permutations. Moreover, the encouraging results of Noroozi & Favaro (2016) when transferring learned image features for object detection and image retrieval inspire us to advance this method a step forward. However, including permutations into an end-to-end differentiable pipeline is challenging, as permutations are a discontinuous operation. Noroozi & Favaro (2016) circumvent this issue by using a fixed set of permutations and casting the permutation prediction problem as a classification one. Given that the number of possible permutations of n patches is n!, this approach cannot scale to exploiting the full set of permutations, even when n is moderately small.\nIn this work, we leverage recent advances in differentiable ranking (Berthet et al., 2020; Blondel et al., 2020b) to integrate permutations into end-to-end neural training. 
This allows us to solve the permutation inversion task for the entire set of permutations, removing a bottleneck that was heretofore sidestepped in ways that could deteriorate downstream performance. Moreover, we successfully demonstrate for the first time the effectiveness of permutations as a pretext task on multiple modalities, with minimal modality-specific adjustments. In particular, we improve music understanding by learning to reorder spectrogram frames, over the time and frequency axes. We also improve video understanding by reordering video frames along time.
To summarize, we make the following two contributions.
- We integrate differentiable ranking into end-to-end neural network training for representation learning. This provides an efficient way to learn reordering tasks over all permutations, even for larger numbers of patches. We show that this drastic increase in the number of permutations improves the quality of learned representations for downstream tasks.
- We successfully demonstrate for the first time the effectiveness of permutations as a general-purpose self-supervision method, efficient on multiple modalities with extremely minimal modifications to the network. Additionally, the pre-trained representations perform well across diverse
The network, and the details of the data-processing pipeline generating the patches, are detailed in Section 2.3.\nAfter upstream training on the initial dataset, the upstream network weights can be used to generate representations. By truncating the network, removing some of the last layers, we can extract an embedding of any input vector. These representations can be used in a downstream task to train a new network, with its own weights, minimizing a loss (typically classification or regression) between its output and the downstream task labels (see Figures 1 and 2).\nWe mostly evaluate our methods on downstream performance: the accuracy after training of the downstream network, on a task that is unknown during upstream training. However, the pretext reordering task can be of interest in and of itself, as in learning-to-rank problems (Liu, 2011), and we also report generalization performance in this task.\nAs an aside, in the Jigsaw puzzle reassembly task, Noroozi & Favaro (2016) show that the choice of permutation set matters when it comes to performance. They make use of the Hamming distance to\nchoose a permutation set that it maximally separated. This permutation set is diverse but is not close to covering the entire permutation space. By supporting the full set of permutations, our approach does not suffer from this issue.\nThe downstream tasks are dataset-dependent. However, the network in the upstream reordering task requires minimal modification across tasks and modalities (see Section 3 for details). In this work, we demonstrate the effectiveness of our method on audio, video, and images, across several classification and regression tasks." }, { "heading": "2.2 DIFFERENTIABLE RANKING METHODOLOGY", "text": "Our methodology for representation learning relies importantly on the ability to incorporate ordering or ranking operations in an end-to-end differentiable pipeline. During training for the upstream task, the last two layers of the network consist of: a vector of score values θw(X) ∈ Rn, and network outputs fw(X) = y∗ε (θw(X)) ∈ Rn, using a differentiable ranking operator y∗ε that we detail here. The goal of the upstream task is to find good parameters w such that the network outputs fw(X) correctly recover the label y representing the permutation applied to the patches in X .\nIn earlier works using permutations as pretext task (Noroozi & Favaro, 2016; Lee et al., 2017), training with permutation labels is achieved by reducing the permutation inversion task to classification. More specifically, it is encoding a fixed number of permutations L n! as classes. Each class is represented by a one-hot vector, and network outputs are logits θw(X) of size L, leading to a prediction by taking a softmax among these L classes. This approach is obviously limited: representing all the permutations requires in principle n! classes, which is quickly not manageable, even for small values of n. Further, this does not take into account the similarity between permutations: with this encoding, permutations are orthogonal, no matter how similar they are.\nWe address these issues by having network outputs θw(X) of size only n, and interpreting their relative orders as the predicted permutation (e.g. y = (0, 1, 2, 3) if they are in decreasing order, predicting the identity permutation). The pretext labels also encode the permutations in this manner. This gives a unique encoding to each permutation, operates in dimension n, and can be computed with sorting operations in time O(n log n). 
Further, small distances in these encodings naturally represent similar permutations.\nHowever, applying directly a ranking operator y∗ on the last layer of the network would not allow for backpropagation of gradients: the function of the weights w 7→ L(y∗(θw(X)); y) is piece-wise constant. Small changes in the weights w induce either large jumps or no change in value at all, its gradients are 0 almost everywhere, and undefined otherwise. In order to overcome this matter, we consider instead two differentiable ranking operations, one using stochastic perturbations, introduced in Berthet et al. (2020) and another one using regularization (Blondel et al., 2020b). These operations, denoted here by y∗ε , map any vector of k values to a point in the convex hull of permutation encodings in dimension k (e.g. (0.1, 0.9, 2.2, 2.8) over 4 elements). They can be thought of analogous to the softmax operator for ranking. They share some of its properties: good approximation of the original function, differentiability in the input values θ with non-zero derivatives everywhere, ease of computation, and tuning by a temperature parameter ε (see Appendix A.1 for further details).\nAdjusting the network parameters w requires a notion of loss function between y∗ε (θ) and y. For the version of y∗ε (θ) based on stochastic perturbations (Berthet et al., 2020), we use the associated Fenchel–Young loss (Blondel et al., 2020a), that act directly on θ = θw(X) (outputs of the vector, and inputs of the sorting operations), written here as LFY(θ; y) (see Appendix A.1). This loss is convex in θ, smooth, equal to 0 if and only if y∗ε (θ) = y. Its gradients are given by ∇θLFY(θ; y) = y∗ε (θ)− y . We call this loss “Perturbed F-Y” in our empirical results. For the regularized version of y∗ε (θ) (Blondel et al., 2020b), we use\n1 2 ‖y∗ε (θ)− y‖2.\nWe call this loss “Fast Soft Ranking (FSR)” in our empirical results. We opt for these two losses for their good theoretical properties and O(n log n) complexity. Other choices (Mena et al., 2018; Cuturi et al., 2019; Vlastelica et al., 2019; Rolı́nek et al., 2020; Grover et al., 2019) are also possible, potentially with higher computational cost, or regions with zero gradient." }, { "heading": "2.3 IMPLEMENTATION AND ARCHITECTURE", "text": "Data-processing When constructing the self-supervised task, inputs that can be treated as images (i.e. with width and height) are sliced into patches. This slicing is controlled by two variables nx and ny , determining respectively the number of columns and rows used. In Noroozi & Favaro (2016), 9 square patches are used. Across modalities, different slicing scheme can lead to better performance. In video analysis, there is only one axis along which to permute: time. In audio processing, the downstream task may benefit from the pretext task using frequency slices of the spectrogram instead of time slices (see Figure 1 for illustration). We explore these questions Section 3.\nUpstream task. For the reordering pretext task, performed upstream, we use a Context Free Network (CFN) from Noroozi & Favaro (2016). This network uses an AlexNet (Krizhevsky et al., 2012) backbone which processes each patch individually while sharing weights across all patches as shown in Figure 1. By processing each patch independently, but with shared weights, the network cannot rely on global structure. After the shared backbone, the patches are passed together through two fully connected layers. The layer represents the predicted ranks of the input permutation. 
Although CFN was originally designed for images, one of our empirical contributions is to show its application to a number of separate modalities without any change to the training mechanism (Figure 1). In particular, we show important performance gains on audio tasks.\nDownstream task. For the downstream task, we use classifiers with low capacity: a 3-layer multilayer perceptron (MLP) for audio and video tasks, and a linear classifier for images. It is trained on embeddings extracted at the first aggregate fully connected layer of the pretext network (whose weights are frozen during this phase). The MLP's output layer is task dependent. For a regression downstream task, the output of the MLP is a single scalar and the downstream model is trained to minimize the mean-squared error. On the other hand, for a downstream classification task, the output of the MLP is a softmax over the class logits, and we train the downstream model by minimizing the cross-entropy between the predictions and the true labels." }, { "heading": "3 EXPERIMENTS", "text": "In this section, we demonstrate the effectiveness of permutation-based self-supervised learning as measured by the test accuracy in downstream tasks across several modalities. All experiments were carried out in Tensorflow (Abadi et al., 2016) and run on a single P100 GPU. For all modalities, in addition to the downstream task, we also report the performance on the upstream task using partial ranks, the proportion of patches ranked in the correct position. We will open-source our codebase for reproducibility and reusability." }, { "heading": "3.1 AUDIO", "text": "Experimental setup. The NSynth dataset (Engel et al., 2017) offers about 300,000 audio samples of musical notes, each with a unique pitch, timbre, and envelope, recorded from 1,006 different instruments. The recordings, sampled at 16kHz, are 4 seconds long and can be used for 3 downstream tasks: predicting the instrument itself (1,006 classes), predicting the instrument family (11 classes), and predicting the pitch of the note (128 values). Since pitch estimation is treated as a regression task, we report in our results the mean squared error (MSE) for this task.\nAudio processing systems usually take as input a log-compressed spectrogram of the recordings rather than the raw waveform, since it is a convenient and compact time-frequency representation of the input signal. Moreover, the 2D structure of the spectrogram allows us to use the same 2D convolutional networks as for images and videos.\nWe train our CFN with an AlexNet backbone on the upstream task of predicting the applied permutation for 1000 epochs, over mini-batches of size 32 and with an Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10−6. After convergence, we evaluate the downstream generalization performance over the 3 NSynth tasks, by replacing the last layers of the network with a 3-layer MLP and replacing the random permutation by the identity.
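For intuition, here is a minimal NumPy sketch of how a spectrogram can be sliced into frequency bands and shuffled to build one pretext example; the shapes and the function name are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def permuted_band_example(spec, n_bands=10, rng=None):
    """Slice a (freq, time) spectrogram into n_bands horizontal frequency
    bands, shuffle them, and return the rank label the network must predict."""
    rng = rng or np.random.default_rng()
    f = spec.shape[0] - spec.shape[0] % n_bands          # crop to a multiple of n_bands
    bands = spec[:f].reshape(n_bands, -1, spec.shape[1])
    perm = rng.permutation(n_bands)                      # drawn from all n_bands! permutations
    patches = bands[perm]                                # shuffled input: patch k is band perm[k],
    return patches, perm                                 # so its target rank is perm[k]

spec = np.random.randn(128, 400)                         # stand-in log spectrogram
patches, label = permuted_band_example(spec)
print(patches.shape, label)                              # (10, 12, 400) and e.g. [3 0 7 ...]
```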
To understand the quality of the produced embeddings, we vary the number of examples used to train the downstream task and report the results for different data regimes.\nWe compare the different variants of our method (number and nature of the patches) with 2 baseline alternatives: i) training the downstream architecture on an untrained encoder (referred to as Random Embedding) and ii) solving the same upstream task but using instead a finite set of 100 permutations as proposed by Noroozi & Favaro (2016) (Fixed Permutation).\nLastly, we compare different losses to train the permutation pretext task: a) cross entropy (XE) when learning over 100 permutations, b) MSE loss (MSE), c) soft ranking via perturbations (Perturbed FY) and d) soft ranking (Fast Soft Ranking).\nEmpirical results. First, we compare the different methods for different data regimes in the downstream task and report the results graphically in Figure 3, choosing the accuracy for the classification tasks and MSE for the regression one. We observe that in the low data regime our method strongly outperforms a fully supervised model trained end-to-end rather than on a fixed embedding. Moreover, even though the margin closes when adding more data, we observe that the features obtained greatly assist in the downstream learning process. We also observed that pretraining is particularly impactful for pitch estimation, which aligns with results found by Gfeller et al. (2020).\nWe report in Table 1 the results for 1000 downstream examples (see Tables 5 and 6 in Appendix A.2 for 500 and 5000). Those experiments were run using 10 frequency bands, which corresponds to 10! permutations. This number of potential classes rules out the use of a classification loss: from a very practical point of view, such a huge number of classes would create a weight matrix on the last dense layer that would not fit in memory.\nWe first observe that random embeddings perform poorly but provide a useful baseline for comparison. Second, when training the encoder with the fixed set of permutations, we observe a significant increase in performance that experimentally confirms the results reported by Noroozi & Favaro (2016). In the first three rows of Table 1, we report the results of our methods on the downstream metrics. We first observe that even with a mean squared error loss, the performance on the downstream task is comparable to or better than the fixed permutation method, and we show that using a ranking loss further increases the performance. Those results tend to confirm that i) permutation is an interesting pretext task, ii) considering all possible permutations helps build better representations, and iii) a ranking loss is the right choice of loss for such a task.\nFurthermore, Figure 4 shows the effect of the number of frequency bands on the downstream performance. It shows that, as the number of permutations grows, the performance over the downstream task increases, providing better representations. However, the performance eventually plateaus and can even decrease.\nWe report in the last row of Table 1 performance on the pretext task. Good performance on the downstream task is often connected to good performance on the pretext task. Here, we measure performance by the ability of the CFN network to reorder the shuffled inputs, reporting the proportion of items ranked in the correct position.\nTime-frequency structure and permutations.
Unlike images, the horizontal and vertical dimensions of a spectrogram are semantically different, respectively representing time and frequency. While Noroozi & Favaro (2016) only exploited square patches, experimenting with audio allows exploring permutations over frequency bands (horizontal patches), time frames (vertical patches) or square time-frequency patches, and comparing the resulting downstream performance. Table 2 reports a comparison between these three settings. Overall, shuffling along the frequency axis only is the best pre-training strategy. These results illustrate a particularity of the dataset: our inputs are single notes, many of them having a harmonic structure. In this context, learning the relation between frequency bands is meaningful both to recognize which instrument is playing, as well as which note (pitch). This also explains the poor performance of slicing along the time axis. Pitch is a time-independent characteristic, so the time structure is not relevant for this task. Moreover, musical notes have an easily identifiable time structure (fast attack and slow decay), which may make the task of reordering time frames trivial. We hypothesize that signals with a richer, non-stationary time structure, such as speech, would benefit more from shuffling time frames." }, { "heading": "3.2 VIDEO", "text": "Experimental setup. Another modality where shuffling can assist learning is video classification, as done by Lee et al. (2017). In this case, instead of slicing a single image into multiple patches, a time frame is treated as a patch and multiple frames are permuted. We sample these frames uniformly along the video, and use 20 frames. We experiment on the something-something dataset (Goyal et al., 2017), a dataset of videos labelled with 174 actions. We choose this dataset as many labels incorporate information about the dynamics (e.g. ”Pulling something from right to left”), so that classification can be improved by learning the natural order of frames.\nExperimental results. We report in Table 3 results on self-supervised learning with permutations on frames of video. Competitive standards for fully supervised performance (using the whole video) on this dataset are around 0.65. Our method, with a drastically reduced amount of training data (3000 videos, 1.4% of the dataset), operating on 20 frames, still reaches 0.21. Interestingly, when using fewer than 20 frames, the downstream performance does not improve above random chance. Hence, scaling to more patches is required in this setting, which makes using a fixed number of permutations challenging. This may explain the poorer performance of the Fixed Permutation baseline on this task, and points to permutations as a promising avenue of exploration for self-supervised learning on videos.\nIn addition to the potential for self-supervised learning, performance on the permutation task itself is interesting in the context of videos. We also report experimental results on the upstream reordering task itself in Table 3. In this case, the permutation acts on 20 frames, i.e. on the order of 2 × 10^18 possible permutations. Our results show that a significant fraction of elements are correctly reordered, even though this particular permutation has in all likelihood not been observed during training.\nIt is interesting to note that the methods that perform best on the pretext task also generally perform best on the downstream tasks.
For example, we observe that our methods achieve higher performance than MSE on the jigsaw puzzle reassembly task. Correspondingly, our methods also achieve higher performance on the downstream tasks." }, { "heading": "3.3 IMAGES", "text": "Experimental setup. We also applied our methodology to images, using the ImageNet and Pascal VOC 2012 datasets. In this experiment, we trained our CFN network on 128×128 images with 3×3 patches. We used a learning rate of 10−6, a batch size of 32, and the Adam optimizer. We trained the same CFN network on the jigsaw reassembly task using the ImageNet dataset. We then evaluated the quality of the embeddings via a downstream classification task on the PASCAL VOC 2012 dataset. We use the mean AUC metric to measure the performance of our method, which is appropriate due to the nature of the Pascal VOC classification task.\nExperimental results. Consistent with the other results, we find that the two differentiable ranking methods consistently outperform learning to reorder via classification over fixed permutations; their performance is on par with MSE, although with slightly lower variance. Additionally, we highlight the efficiency with which the upstream network learns the permutation inversion task. The results in Table 4 show the efficacy of our method on 1000 and 3000 downstream data points." }, { "heading": "4 CONCLUSION", "text": "We present in this paper a general self-supervised learning method that uses permutations to learn high-quality representations. We demonstrate that our method outperforms previous permutation learning schemes by incorporating fully differentiable ranking as a pretext loss, enabling us to take advantage of all n! permutations, instead of a small fixed set. In particular, we show significant improvements in low data regimes. We also demonstrate that our method can be applied seamlessly to improve the downstream performance of several classification and regression tasks across audio, video, and images." }, { "heading": "A APPENDIX", "text": "A.1 DIFFERENTIABLE RANKING AND FENCHEL-YOUNG LOSSES\nThe differentiable operators for ranking that we consider map a vector of values to the convex hull of ranking vectors, i.e. vectors whose entries are a permutation of 1 to n. This set C is referred to as the permutahedron (see, e.g., Blondel et al. (2020b)). Both our approaches rely on the fact that for a vector θ ∈ Rn, the vector of ranks of θ in ascending order is given by\ny∗(θ) = argmax_{y∈C} 〈y, θ〉.\nThe two variations on differentiable ranking that we introduce are based on modifications of this formulation. The approach in Berthet et al. (2020) is to introduce random perturbations Z, tuned by ε > 0, to the vector of values:\ny∗ε(θ) = E[argmax_{y∈C} 〈y, θ + εZ〉].\nIts value as well as its derivatives can be approximated by Monte-Carlo (MC) schemes (see Berthet et al., 2020, for details). In this work, we use a normal distribution for Z, and in learning experiments use ε = 0.1 and average over 10 MC samples. If we denote by Fε(θ) the expectation of the maximum (rather than the argmax), and εΩ its Fenchel dual over Rn, we also have that\ny∗ε(θ) = argmax_{y∈C} {〈y, θ〉 − εΩ(y)}.\nThis is the approach favored in Blondel et al. (2020b), where the function Ω is not defined as the dual of an expectation, but instead as an explicit convex regularizer over C. In this paper, we use Ω(y) = (1/2)‖y‖².
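Before turning to the Fenchel–Young loss below, here is a minimal NumPy sketch of the perturbed operator and its Monte-Carlo estimate, using the normal perturbations, ε = 0.1, and 10 samples mentioned above; the function names are ours, and this is only a sketch of the construction, not the released implementation:

```python
import numpy as np

def ranks(theta):
    """Hard ranking y*(theta): entry i receives the rank of theta_i in
    ascending order, a vertex of the permutahedron (entries 1..n)."""
    return np.argsort(np.argsort(theta)) + 1.0

def perturbed_ranks(theta, eps=0.1, n_samples=10, rng=None):
    """Monte-Carlo estimate of y*_eps(theta) = E[y*(theta + eps Z)], Z ~ N(0, I)."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal((n_samples, theta.size))
    return np.mean([ranks(t) for t in theta + eps * z], axis=0)

theta = np.array([0.2, 1.5, -0.3, 0.9])        # network scores theta_w(X)
y = ranks(np.array([0.0, 1.0, 2.0, 3.0]))      # target ranks for one permutation
print(perturbed_ranks(theta))                  # soft ranks in the permutahedron
print(perturbed_ranks(theta) - y)              # the Fenchel-Young loss gradient
```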
The corresponding Fenchel–Young loss (Blondel et al., 2020a), whose definition extends to problems other than ranking, is defined for such convex regularized problems by\nLFY(θ; y) = Fε(θ) + εΩ(y) − 〈θ, y〉,\nwhere we recall that Fε = (εΩ)∗ (the Fenchel dual). As described in the main text, its gradient with respect to the parameter θ is given by ∇θLFY(θ; y) = y∗ε(θ) − y. Stochastic gradients can therefore be obtained by Monte-Carlo approximation, which we use to minimize this loss in this work.\nA.2 ADDITIONAL EXPERIMENTAL RESULTS\nAppendix tables with additional information on the audio task." } ]
2020
null
SP:eed27a2d9c5d77bfc9aacb5d2ca5c7885b2e29f9
[ "The paper presents an approach that for every object identifies the factors that have a high impact on the models' uncertainty. The approach consists of i) disentangled representations ii) a classifier on the top of the trained representations iii) technique that select dimensions of representation of an objects' (factors), that impact the uncertainty of a model the most. The i) and ii) are done in a known way; The disentanglement is done via Deep Convolutional Inverse Graphics Network (2015), and a classifier is trained with a standard maximum likelihood approach." ]
In image-based object classification, the visual appearance of objects determines which class they are assigned to. External variables that are independent of the object, such as the perspective or the lighting conditions, can modify the object's appearance, resulting in ambiguous images that lead to misclassifications. Previous work has proposed methods for estimating the uncertainty of predictions and measuring their confidence. However, such methods do not indicate which variables are the potential sources of that uncertainty. In this paper, we propose a method for image-based object classification that uses disentangled representations to indicate which external variables contribute the most to the uncertainty of the predictions. This information can be used to identify the external variables that should be modified to decrease the uncertainty and improve the classification.
[]
[ { "authors": [ "Alessandro Achille", "Tom Eccles", "Loic Matthey", "Christopher P. Burgess", "Nick Watters", "Alexander Lerchner", "Irina Higgins" ], "title": "Life-long disentangled representation learning with cross-domain latent homologies", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Alejandro Barredo Arrieta", "Natalia Dı́az-Rodrı́guez", "Javier Del Ser", "Adrien Bennetot", "Siham Tabik", "Alberto Barbado", "Salvador Garcia", "Sergio Gil-Lopez", "Daniel Molina", "Richard Benjamins", "Raja Chatila", "Francisco Herrera" ], "title": "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI", "venue": "Information Fusion,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation Learning: A Review and New Perspectives", "venue": "URL http://arxiv.org/abs/1206.5538", "year": 1993 }, { "authors": [ "David M Blei", "Alp Kucukelbir", "Jon D Mcauliffe" ], "title": "Variational Inference : A Review for Statisticians", "venue": null, "year": 2018 }, { "authors": [ "Nicki Skafte Detlefsen", "Søren Hauberg" ], "title": "Explicit Disentanglement of Appearance and Perspective in Generative Models. (NeurIPS), 2019", "venue": "URL http://arxiv.org/abs/1906.11881", "year": 1906 }, { "authors": [ "Aviv Gabbay", "Yedid Hoshen" ], "title": "Demystifying Inter-Class Disentanglement", "venue": "In International Conference on Learning Representations, pp. 1–2,", "year": 2020 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q. Weinberger" ], "title": "On calibration of modern neural networks", "venue": "34th International Conference on Machine Learning, ICML 2017,", "year": 2017 }, { "authors": [ "Ryuhei Hamaguchi", "Ken Sakurada", "Ryosuke Nakamura" ], "title": "Rare event detection using disentangled representation learning", "venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "B-Vae: Learning Basic Visual Concepts With a Constrained Variational Framework", "venue": "Iclr 2017,", "year": 2016 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? (Nips), 2017", "venue": "ISSN 16653521. doi: 10.1109/TDEI.2009.5211872. URL http://arxiv.org/abs/1703.04977", "year": 2009 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih. Disentangling by Factorising." ], "title": "ISSN 21622701", "venue": "doi: 10. 1364/CLEO.2009.CMS6. URL http://arxiv.org/abs/1802.05983.", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Jonathan Krause", "Michael Stark", "Jia Deng", "Li Fei-Fei" ], "title": "3D Object Representations for FineGrained Categorization", "venue": "IEEE International Conference on Computer Vision Workshops, pp. 554–561. IEEE,", "year": 2013 }, { "authors": [ "Tejas D. Kulkarni", "Will Whitney", "Pushmeet Kohli", "Joshua B. Tenenbaum" ], "title": "Deep Convolutional Inverse Graphics Network", "venue": "pp. 
1–10,", "year": 2015 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations", "venue": null, "year": 2018 }, { "authors": [ "Romain Lopez", "Jeffrey Regier", "Michael I. Jordan", "Nir Yosef" ], "title": "Information constraints on autoencoding variational Bayes", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Emile Mathieu", "Tom Rainforth", "N. Siddharth", "Yee Whye Teh" ], "title": "Disentangling Disentanglement in Variational Autoencoders", "venue": null, "year": 2018 }, { "authors": [ "Ramprasaath R. Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization", "venue": "pp. 1–23,", "year": 2016 }, { "authors": [ "Scott Reed", "Ming Hsuan Yang", "Honglak Lee" ], "title": "Weakly-supervised disentangling", "venue": null, "year": 1905 }, { "authors": [ "Quan-shi Zhang", "Song-chun Zhu" ], "title": "Visual interpretability for deep learning: a survey", "venue": "Systems, 2015-Janua:1099–1107,", "year": 2015 } ]
[ { "heading": null, "text": "In image-based object classification, the visual appearance of objects determines which class they are assigned to. External variables that are independent of the object, such as the perspective or the lighting conditions, can modify the object’s appearance resulting in ambiguous images that lead to misclassifications. Previous work has proposed methods for estimating the uncertainty of predictions and measure their confidence. However, such methods do not indicate which variables are the potential sources that cause uncertainty. In this paper, we propose a method for image-based object classification that uses disentangled representations to indicate which are the external variables that contribute the most to the uncertainty of the predictions. This information can be used to identify the external variables that should be modified to decrease the uncertainty and improve the classification." }, { "heading": "1 INTRODUCTION", "text": "An object from the real world can be represented in terms of the data gathered from it through an observation/sensing process. These observations contain information about the properties of the object that allows their recognition, identification, and discrimination. In particular, one can obtain images from objects which represent its visual characteristics through photographs or rendering of images from 3D models.\nImage-based object classification is the task of assigning a category to images obtained from an object based on their visual appearance. The visual appearance of objects in an image is determined by the properties of the object itself (intrinsic variables) and the transformations that occur in the real world (extrinsic variables) (Kulkarni et al., 2015).\nProbabilistic classifiers based on neural networks can provide a measure for the confidence of a model for a given prediction in terms of a probability distribution over the possible categories an image can be classified into. However, they do not indicate what variable contributes to the uncertainty. In some cases the extrinsic variables can affect the visual appearance of objects in images in such way that the predictions are highly uncertain. A measure of the uncertainty in terms of these extrinsic features can improve interpretability of the output of a classifier.\nDisentangled representation learning is the task of crating low-dimensional representations of data that capture the underlying variability of the data and in particular this variability can be explained in terms of the variables involved in data generation. These representations can provide interpretable data representations that can be used for different tasks such as domain adaptation (Higgins et al., 2017),continuous learning (Achille et al., 2018), noise removal (Lopez et al., 2018), and visual reasoning (van Steenkiste et al., 2019).\nIn this paper we propose a method for the identification of the sources of uncertainty in imagebased object classification with respect to the extrinsic features that affect the visual appearance of objects in images by using disentangled data representations. Given an image of an object, our model identifies which extrinsic feature contributes the most to the classification output and provides information on how to modify such feature to reduce the uncertainty in the predictions." 
}, { "heading": "2 RELATED WORK", "text": "Achieving explainable results in predictive models is an important task, especially for critical applications in which the decisions can affect human life such as in health, security, law and defence Barredo Arrieta et al. (2020). Even though deep neural networks provide successful results for image classification, their predictions can’t be directly interpreted due to their complexity (Zhang & Zhu, 2018). In order to solve this different approaches have been proposed to provide visual interpretability to the results such as identification of the image regions that contribute to classification (Selvaraju et al., 2016) .\nThe uncertainty of predictions provides an extra level of interpretability to the predictions of a model by indicating the level of confidence in a prediction Kendall & Gal (2017). There are different methods to introduce uncertainty measures in classifiers which include bayesian neural networks, ensembles, etc.\nObtaining disentangled representations, that capture distinct sources of variation independently, is an important step towards interpretable machine learning systems Kim & Mnih (2018). Despite the lack of agreement on the definition, one description states that a disentangled representation should separate the distinct, informative factors of variations in the data Bengio et al. (2012).\nWithin deep generative models, disentanglement is achieved by using neural networks that approximate a conditional distribution on the data and a set of unobserved latent variables. Particularly variational autoencoders (VAEs) Kingma & Welling (2014) are heavily favored due to their ability to model a joint distribution while maintaining scalability and training stability Higgins et al. (2016). Therefore most of the methods are based on augmentations on original VAE framework Higgins et al. (2016); Kim & Mnih (2018); Kulkarni et al. (2015); Mathieu et al. (2018) .\nIn image-based object classification the variables that explain the visual characteristics of objects in data can be divided into those which represent the inherent properties of objects and those which represent its transformations. This explanation has been explored in Kulkarni et al. (2015) by describing the object’s properties as the intrinsic variables and the properties that describe the object transformations as the extrinsic variables.\nOther work refers to similar sets of variables and their disentanglement under different names but representing similar concepts. Hamaguchi et al. (2019) disentangles the variables corresponding to ambient variables with respect to object identity information on images. (Gabbay & Hoshen, 2020) proposes the learning of disentangled representations that express the intra-class variability in terms of the class and content. (Detlefsen & Hauberg, 2019) proposes the disentanglement of the appearance and the perspective. Separate the identity of cars from their pose (Yang et al., 2015)." }, { "heading": "3 SETTING", "text": "Consider a dataset of images that have been generated from the observations of an object and which should be classified into a certain category. We will assume that this category depends only on the properties of the object itself and not on its surroundings.\nWe use a neural network as a probabilistic classifier to assign each of the images to a category. Usually the output of a neural network can’t be directly interpreted in terms of the characteristics of the object have affected the confidence of the prediction. 
Disentanglement serves as a method to produce interpretable low-dimensional data representations that separate the variables describing the properties of the objects and of their surroundings.\nThe main idea is that one can train a probabilistic classifier on disentangled low-dimensional representations and identify which variables contribute to the uncertainty of the classification." }, { "heading": "3.1 PROBABILISTIC CLASSIFIERS ON IMAGES", "text": "A probabilistic classifier is a model which outputs a conditional probability density PY|x over the set of K ∈ N possible categories Y = {1, 2, . . . , K}, conditioned on an input image x ∈ X. The value PY|x(y) can be interpreted as the degree of confidence the model assigns for an image x ∈ X to be classified into category y ∈ Y. Throughout this work we will only consider probabilistic classifiers that use deep neural networks to obtain the predictions.\nOne can train a probabilistic classifier using a dataset of labeled images. Given a labeled datapoint x ∈ X with true label y∗ ∈ Y, the neural network's weights are optimized to reduce the cross entropy between the network's output distribution PY|x and the true label y∗. The cross entropy loss corresponds to\nL(PY|x, y∗) = −∑_{y∈Y} δ_{y,y∗} log PY|x(y), (1)\nwith δ_{y,y∗} the Kronecker delta.\nOne can measure the degree of uncertainty of the output by calculating the entropy of the output distribution PY|x for a given image. Higher entropy corresponds to higher uncertainty. The entropy is calculated as\nH(PY|x) = −∑_{y∈Y} PY|x(y) log PY|x(y). (2)\nEven though it is possible to provide a degree of uncertainty for the estimates of a probabilistic classifier trained on images, these estimates do not indicate which of the true generative variables that describe the image have contributed to the uncertainty. In this paper we assume that those sources of uncertainty are the extrinsic variables that participate in the data generation process." }, { "heading": "3.2 DATA GENERATION: INTRINSIC AND EXTRINSIC VARIABLES", "text": "We will assume that the underlying generative process for our data can be modelled by the joint probability distribution PX×V over the data space X and a set of true generative variables V. The true variables condition the data generative process and represent the properties of the real world. In our case, we will consider that these variables can be divided into two sets V = V^(I) × V^(E) that represent the intrinsic and extrinsic variables respectively.\nThe intrinsic variables are those which represent the properties of the object itself (e.g. its color, shape, size, material), while the extrinsic variables represent external factors (e.g. the light conditions or the relative position and orientation of the object with respect to the observer that generates the image). Intrinsic variables are invariant to the transformations described by the extrinsic variables.\nThe generative process consists of the independent sampling of an intrinsic variable v^(I) ∼ P_{V^(I)} and an extrinsic variable v^(E) ∼ P_{V^(E)}. Those generative variables together determine a conditional distribution over the data space from which a data point x ∼ P_{X|(v^(I),v^(E))} is sampled. We assume that the intrinsic variables are independent of the extrinsic variables. During data generation both variables are combined to produce the visual appearance of the objects following the formula for the joint probability density\nP_{X×V}(x, (v^(I), v^(E))) = P_{X|(v^(I),v^(E))}(x) P_{V^(I)}(v^(I)) P_{V^(E)}(v^(E)). (3)
We assume that the set of intrinsic and extrinsic variables can be separated into a finite product of M_I, M_E ∈ N variables such that V^(I) = V^(I)_1 × · · · × V^(I)_{M_I} and V^(E) = V^(E)_1 × · · · × V^(E)_{M_E}. The total number of true generative variables is then M = M_I + M_E." }, { "heading": "3.3 DISENTANGLEMENT FOR INTERPRETABILITY", "text": "Learning disentangled representations is the task of creating interpretable low-dimensional data representations that separate the information about the variables that are involved in the generation of the data (Locatello et al., 2018). There is no common agreement on the definition of a disentangled representation. However, two properties have been proposed for disentangled representations that will be useful for our goal of producing interpretable measures of uncertainty in classification predictions.\n• Modularity: No more than a single dimension of the data representation should encode information about a true generative variable.\n• Compactness: Each of the true generative variables is encoded by a single dimension of the data representation.\nIf a data representation is both modular and compact, we can identify for each true generative variable the corresponding dimension of the data representation that holds all information about it. Consider Z = Z_1 × Z_2 × · · · × Z_D as a D-dimensional data representation space. One can measure the compactness of a learned representation by measuring the mutual information between a data representation dimension Z_i and a true variable V_m as\nI(P_{Z_i}, P_{V_m}) = KL(P_{Z_i×V_m} || P_{Z_i} ⊗ P_{V_m}).\nA disentangled representation is modular and compact if for each generative variable V_m there is a unique latent dimension Z_i such that I(P_{Z_i}, P_{V_m}) ≠ 0 and I(P_{Z_j}, P_{V_m}) = 0 for all j ≠ i. If a disentangled data representation is obtained which fulfills both modularity and compactness, then we can separate the latent dimensions in such a way that there is a decomposition of the latent space Z = Z^(I) × Z^(E) into an intrinsic and an extrinsic set of latent dimensions. This would mean that a probabilistic classifier trained on the latent variables, P_{Y|Z}(y) = P_{Y|Z^(I)}(y), would only depend on the intrinsic variables.\nHowever, it has been proved in (Locatello et al., 2018; Mathieu et al., 2018) that without supervision disentanglement cannot be achieved. Therefore one might not be able to achieve perfect modularity or compactness. This means that some information about the true intrinsic variables might be contained in the extrinsic latent dimensions, such that the uncertainty also depends on those variables, as seen in Section 4.2." }, { "heading": "3.4 PROBABILISTIC DISENTANGLEMENT", "text": "Probabilistic methods for disentanglement propose a generative distribution that approximates the true generative process in terms of a set of unobserved latent variables that serve as the data representation space Z. For simplicity we will assume that Z corresponds to a D-dimensional vector space, e.g. Z = R^D. The generative probability density function P_{X×Z} over the data space X and latent space Z can be expressed as\nP_{X×Z}(x, z) = P_{X|z}(x) P_Z(z). (4)\nIn order to provide a good approximation to the true generative model, the data distribution determined by the true model and the one proposed in terms of the latent variables should match.
This is expressed via the marginalization of the generative probability densities over the set of true generative variables and latent variables respectively, i.e.\nP_X(x) = ∫_V P_{X|v}(x) P_V(v) dv = ∫_Z P_{X|z}(x) P_Z(z) dz. (5)\nOne problem is that the integral above is in most cases intractable. There are different approaches to solve this problem. In particular, variational inference offers a method for approximating the target probability density (Blei et al., 2018) by means of an approximation to the true posterior distribution.\nThe variational autoencoder (Kingma & Welling, 2014) implements variational inference using an encoding neural network that calculates the parameters of the approximate posterior Q_{Z|x}, and a decoder network that calculates the parameters of the distribution P_{X|z}, also called the decoder distribution. The network weights are optimized by maximizing a lower bound on the latent-variable model's distribution over the data, P_X.\nGiven a data point x ∈ X and the approximate posterior Q_{Z|x}, it is possible to define a function z : X → Z such that for a data point x ∈ X its representation in the latent space is given by\nz(x) = E_{z∼Q_{Z|x}}[z]. (6)" }, { "heading": "3.5 DISENTANGLEMENT FOR PROBABILISTIC CLASSIFIERS", "text": "A disentangled representation can be used to train a probabilistic classifier and provide interpretability to the results in terms of each of its latent dimensions. Since the information about the true generative variables is encoded in separate latent variables, one can estimate the uncertainty of such a classifier and measure the change in the uncertainty under small perturbations of each latent dimension.\nWe can train a probabilistic classifier with respect to the latent variables, i.e. P_{Y|z(x)}, and estimate the uncertainty of a prediction as H(P_{Y|z(x)}). If this probabilistic classifier is a neural network, we can obtain the gradient of the entropy with respect to the latent variables, ∇_Z H(P_{Y|z(x)}), via backpropagation. The gradient indicates the direction in the latent space that leads to a lower entropy in the prediction, i.e. to a latent variable with lower uncertainty.\nFor a given image x ∈ X, it is possible to find the latent variable that would decrease the uncertainty by using the function z∗ : X → Z defined by\nz∗(x) = z(x) − α ∇_Z H(P_{Y|z(x)}), (7)\nwhere α corresponds to the size of the step taken in the direction of the negative gradient. Alternatively, it is possible to estimate the latent dimension that provides the highest change in the uncertainty of the prediction. We can evaluate the new latent variable obtained by taking a step along only the d-th latent dimension as\nz′_d(x) = z(x) − α (∂H(P_{Y|z(x)})/∂z_d) ê_d, (8)\nwith ê_d the d-th canonical basis vector. Then we can evaluate Equation 8 for each of the dimensions that correspond to the extrinsic variables, to find the one that provides the highest decrease in entropy:\nz∗(x) = argmax_{z′_d ∈ Z^(E)} [H(P_{Y|z(x)}) − H(P_{Y|z′_d(x)})]. (9)\nThe generative variable associated with the latent dimension that produces the highest decrease in entropy can be considered as the one that contributes the most to the uncertainty (a code sketch of this probing procedure is given below)." }, { "heading": "4 EXPERIMENTS & RESULTS", "text": "In order to test the ideas presented in the previous section, we used a synthetic dataset where we could have control over a set of extrinsic and intrinsic variables.
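As referenced above, here is a minimal PyTorch sketch of the per-dimension entropy probe of Equations (7)–(9); the classifier (assumed to map a latent vector to class logits) and the list of extrinsic latent indices are assumptions supplied by the caller, and the function names are ours:

```python
import torch

def entropy(logits):
    """H(P_Y|z) = -sum_y p(y) log p(y), computed from classifier logits."""
    p = torch.softmax(logits, dim=-1)
    return -(p * p.log()).sum(-1)

def most_uncertain_extrinsic_dim(classifier, z, extrinsic_dims, alpha=0.1):
    """Probe each extrinsic latent dimension with a single-coordinate gradient
    step (Eq. 8) and return the one whose step reduces the entropy most (Eq. 9)."""
    z = z.detach().requires_grad_(True)
    h = entropy(classifier(z))
    (grad,) = torch.autograd.grad(h, z)             # dH/dz via backpropagation
    best_dim, best_drop = None, -float("inf")
    for d in extrinsic_dims:
        z_d = z.detach().clone()
        z_d[d] -= alpha * grad[d]                   # step along dimension d only
        drop = (h - entropy(classifier(z_d))).item()
        if drop > best_drop:
            best_dim, best_drop = d, drop
    return best_dim, best_drop
```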
In this case we decided to use a dataset of images generated from 3D models, where we could generate large amounts of data by varying different extrinsic variables.\nModelnet40 Aligned Dataset In order to investigate whether the disentangled representations of data can be useful to identify the sources of uncertainty, we have used a synthetic dataset where we have control over most of the variables involved in the data generation. In particular we generate a dataset of rendered images obtained from 3D models within the Modelnet40 aligned dataset, which contains 3D models from 40 different classes. Each 3D model is aligned to a certain predefined orientation which is unique to each class. Images were generated from only the car class by rendering the cars for different configurations of the camera and object properties: the relative azimuth and elevation of the camera with respect to the car, and the light intensity and location within the scene; see Figure 4. For the training set 40 car models were used, and 8 were used for validation and 8 for test. We have manually labeled each of the cars in each of the datasets into four categories for classification: SUV, sport, sedan, and hatchback.\nFor each 3D car model we generated 65 images by changing four extrinsic variables (azimuth, elevation, light intensity, location of light) and one intrinsic variable (color) over 6 possible values.\n1. Light Intensity: Indicates the intensity of the light source used, in Watts per square meter (W/m2). Values are from {0.5, 1.0, 1.5, 2.0, 3.0}.\n2. Light Location: Indicates the position of the light source along one axis; its values are {−3,−2,−1, 1, 2, 3}.\n3. Elevation: Determines the angular elevation of the camera with respect to the ground; the values used are {18.4◦, 22.6◦, 26.5◦, 30.25◦, 33.7◦, 36.9◦}.\n4. Azimuth: Indicates the relative angle of rotation of the camera with respect to the front of the car when the camera rotates about an axis perpendicular to the floor; the values taken are {0◦, 36◦, 72◦, 108◦, 144◦, 180◦}.\n5. Color: This variable corresponds to an intrinsic feature of the car. Different hues were chosen for each car: green, cyan, blue, magenta, red, and yellow." }, { "heading": "4.1 DISENTANGLED REPRESENTATIONS", "text": "We train a Deep Convolutional Inverse Graphics Network (DC-IGN) (Kulkarni et al., 2015), since it provides a method for obtaining disentangled representations by training a variational autoencoder (Kingma & Welling, 2014) with batches of images where only one generative variable changes across the images. DC-IGN takes advantage of the batches to enforce the separation of the information of each generative variable into separate latent dimensions.\nThe encoder network has 3 convolutional blocks, in each of which a convolutional layer with kernel size 5 and 2 × 2 max pooling with stride 2 are used. Unlike the original implementation, we set the size of the latent space to 128. The decoder network consists of 3 convolutional blocks, each with a 2 × 2 upsampling layer followed by a convolutional layer with kernel size 7. The training is performed for a total of 100 epochs.\nThe disentangled representations were characterized by measuring the discretized mutual information as in Locatello et al. (2018). The results are shown in Table 4.2, and latent traversals are presented in Figure 3.
Since the latent variables are encouraged to only contain information about the generative variable that changes in each batch, we can directly associate a latent dimension with each generative variable, as can be seen in the results: the latent variable associated with the target generative variable does contain the largest value of mutual information.\nOne important note about our approach: we proposed the use of DC-IGN, a disentanglement learning algorithm which is tailored to the separation of extrinsic features. However, it is possible to use any other disentanglement approach, as long as there is a small subset of data where each extrinsic variable is available. In such a case it is possible to measure the mutual information between each generative variable and latent dimension on the small dataset.\nOne can then assign the latent dimension with the highest mutual information to each of the true generative variables, such that the gradient with respect to a latent dimension indicates a change in the corresponding generative variable." }, { "heading": "4.2 PROBABILISTIC CLASSIFIER: DISENTANGLED REPRESENTATIONS", "text": "We train a probabilistic classifier on the latent variables obtained from the DC-IGN encoder, using the corresponding car type labels. We train a simple neural network with 2 dense layers with 150 and 50 neurons respectively, and an output dense layer with one neuron per category and a softmax activation function. The model is trained with early stopping, using as hyperparameters a minimum loss decrease of 0.001 and a patience of 3 epochs. The classifier achieved 89% accuracy on both the training and the test set. We estimated the uncertainty of the predictions using the softmax output of the intrinsic classifier. The average value of the metrics across all car types is precision 0.91, recall 0.89, and F1-score 0.89.\nWe assessed the validity of the predictive uncertainty by evaluating the Expected Calibration Error (ECE) metric, as described in Guo et al. (2017). The metric measures the expected error between the estimation confidences and the true class probabilities, and indicates the discrepancies between the accuracy and the uncertainty measures; in our case this gives a value of 0.015 (lower means better calibration). Comparatively, as presented in Guo et al. (2017), ResNet 50 trained on the Stanford Cars dataset (Krause et al., 2013) has an ECE of 0.0430 without any additional calibration method. Even with additional calibration enhancements, the lowest ECE of ResNet 50 is 0.0174.\nGiven that the intrinsic classifier is calibrated, the uncertainty of the classifier is estimated with the entropy of the softmax output. In Section 3.4 we mentioned that we expect the true extrinsic generative variables to affect the uncertainty of a classifier. To justify this claim, we measured the entropy of the classifier's predictions on the images and compared its distribution across the values of each extrinsic generative variable.\nIf the extrinsic generative variables did not affect the uncertainty of the predictions, then the distribution of entropies should be the same across different values of the same extrinsic variable. We used the Mann-Whitney U (MWU) test to check whether the entropy distributions for two different values of a generative variable are different. The null hypothesis for this test states that there is not a statistically significant difference between the two distributions.
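As an illustration, such pairwise comparisons can be run with SciPy; the sketch below uses randomly generated entropies as hypothetical placeholders for the classifier's values, grouped by azimuth:

```python
import itertools
import numpy as np
from scipy.stats import mannwhitneyu

def pairwise_mwu(entropies_by_value):
    """Pairwise Mann-Whitney U tests between entropy distributions grouped by
    the values of one extrinsic variable; p > 0.05 means the null hypothesis
    (no significant difference) cannot be rejected for that pair."""
    return {
        (a, b): mannwhitneyu(ha, hb).pvalue
        for (a, ha), (b, hb) in itertools.combinations(entropies_by_value.items(), 2)
    }

# Hypothetical prediction entropies grouped by azimuth value (degrees).
rng = np.random.default_rng(0)
groups = {az: rng.random(100) for az in [0, 36, 72, 108, 144, 180]}
print(pairwise_mwu(groups))
```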
In Appendix A we show the log-entropy distributions for each extrinsic variable with violin plots, together with the p-values from the pairwise comparisons of the MWU test.\nOnly for the tests between azimuth angles 36◦ and 72◦, elevations 22.6◦ and 26.5◦, and light locations 2 and −2 can the null hypothesis not be rejected, with a p-value larger than 0.05. In other words, except for these particular combinations of extrinsic variable values, we conclude that the distributions of entropy values are statistically different from each other. This means that the uncertainty of the predictions is indeed dependent on the true extrinsic variables for the trained classifier." }, { "heading": "5 CONCLUSION", "text": "In this work we proposed the use of disentangled data representations to provide interpretability to the results of classifiers, based on the set of extrinsic variables that affect the estimation of the intrinsic properties of objects.\nWe showed that for a probabilistic classifier trained on the disentangled latent variables, the extrinsic variables affect the uncertainty of the predictions. Moreover, we provide a method to modify the latent variables in order to identify the extrinsic variable that produces the strongest uncertainty in the classifier's predictions.\nFuture work will consider the exploration of methods to react upon the sources of uncertainty in image-based object classification, by proposing actions to an agent that gathers data in order to modify the extrinsic variables that produce the uncertainty." }, { "heading": "A UNCERTAINTY DEPENDENCE ON EXTRINSIC VARIABLES", "text": "In this appendix, the plots representing the distribution of uncertainty values of the probabilistic classifier trained on the disentangled latent variables, together with the Mann-Whitney U test p-values, are presented. The violin plots show the distribution of the uncertainty with respect to the true extrinsic variables, and the p-value tables indicate whether a pair of values of an extrinsic variable produces statistically significantly different uncertainty. If p > 0.05, the uncertainty for two values of a generative variable is similar." }, { "heading": "Pairwise MWU p-values for the elevation variable (degrees)", "text": "Rows are elevation values; columns follow the order 18.4, 22.6, 26.5, 30.25, 33.7, 36.9, with '-' marking the diagonal.\n33.7: 0.000 0.000 0.000 0.000 - 0.000\n30.25: 0.000 0.000 0.000 - 0.000 0.000\n26.5: 0.000 0.198 - 0.000 0.000 0.000\n22.6: 0.000 - 0.198 0.000 0.000 0.000\n18.4: - 0.000 0.000 0.000 0.000 0.045" } ]
2020
null
SP:fa6456aa23ea8635e04081a043a07915b1c0808f
[ "This paper introduces a method for performing safe exploration in RL. It addresses the problem of ensuring that partially-trained policies do not visit unsafe regions of the state space, while still being exploratory enough to collect useful training experiences. The proposed technique is based on learning conservative estimates of the probability of a catastrophic failure occurring at different states. Based on these, the authors show that it is possible to upper bound the likelihood of reaching an unsafe state at every training step, thereby guaranteeing that all safety constraints are satisfied with high probability. Importantly, the authors also show that (at least asymptotically), the method is no worse than standard unsafe reinforcement learning algorithms." ]
Safe exploration presents a major challenge in reinforcement learning (RL): when active data collection requires deploying partially trained policies, we must ensure that these policies avoid catastrophically unsafe regions, while still enabling trial and error learning. In this paper, we target the problem of safe exploration in RL by learning a conservative safety estimate of environment states through a critic, and provably upper bound the likelihood of catastrophic failures at every training iteration. We theoretically characterize the tradeoff between safety and policy improvement, show that the safety constraints are likely to be satisfied with high probability during training, derive provable convergence bounds for our approach, which is no worse asymptotically than standard RL, and demonstrate the efficacy of the proposed approach on a representative set of benchmark navigation, manipulation, and locomotion tasks. Empirically, we show that the proposed approach can achieve competitive task performance while incurring significantly lower catastrophic failure rates during training than prior methods. Videos are at https://sites.google.com/view/conservative-safety-critics/
[ { "affiliations": [], "name": "Homanga Bharadhwaj" }, { "affiliations": [], "name": "Aviral Kumar" }, { "affiliations": [], "name": "Nicholas Rhinehart" }, { "affiliations": [], "name": "Sergey Levine" }, { "affiliations": [], "name": "Florian Shkurti" }, { "affiliations": [], "name": "Animesh Garg" } ]
[ { "authors": [ "Joshua Achiam", "David Held", "Aviv Tamar", "Pieter Abbeel" ], "title": "Constrained policy optimization", "venue": "arXiv preprint arXiv:1705.10528,", "year": 2017 }, { "authors": [ "Alekh Agarwal", "Sham M Kakade", "Jason D Lee", "Gaurav Mahajan" ], "title": "Optimality and approximation with policy gradient methods in markov decision processes", "venue": null, "year": 1908 }, { "authors": [ "Eitan Altman" ], "title": "Constrained Markov decision processes, volume 7", "venue": "CRC Press,", "year": 1999 }, { "authors": [ "Andrea Bajcsy", "Somil Bansal", "Eli Bronstein", "Varun Tolani", "Claire J Tomlin" ], "title": "An efficient reachability-based framework for provably safe autonomous navigation in unknown environments", "venue": "IEEE 58th Conference on Decision and Control (CDC),", "year": 2019 }, { "authors": [ "Somil Bansal", "Mo Chen", "Sylvia Herbert", "Claire J Tomlin" ], "title": "Hamilton-jacobi reachability: A brief overview and recent advances", "venue": "IEEE 56th Annual Conference on Decision and Control (CDC),", "year": 2017 }, { "authors": [ "Felix Berkenkamp", "Matteo Turchetta", "Angela Schoellig", "Andreas Krause" ], "title": "Safe model-based reinforcement learning with stability guarantees", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Yinlam Chow", "Ofir Nachum", "Aleksandra Faust", "Edgar Duenez-Guzman", "Mohammad Ghavamzadeh" ], "title": "Lyapunov-based safe policy optimization for continuous control", "venue": null, "year": 1901 }, { "authors": [ "Alexander I Cowen-Rivers", "Daniel Palenicek", "Vincent Moens", "Mohammed Abdullah", "Aivar Sootla", "Jun Wang", "Haitham Ammar" ], "title": "Samba: Safe model-based & active reinforcement learning", "venue": "arXiv preprint arXiv:2006.09436,", "year": 2020 }, { "authors": [ "Gal Dalal", "Krishnamurthy Dvijotham", "Matej Vecerik", "Todd Hester", "Cosmin Paduraru", "Yuval Tassa" ], "title": "Safe exploration in continuous action spaces", "venue": "arXiv preprint arXiv:1801.08757,", "year": 2018 }, { "authors": [ "Sarah Dean", "Stephen Tu", "Nikolai Matni", "Benjamin Recht" ], "title": "Safely learning to control the constrained linear quadratic regulator", "venue": "American Control Conference (ACC),", "year": 2019 }, { "authors": [ "Benjamin Eysenbach", "Shixiang Gu", "Julian Ibarz", "Sergey Levine" ], "title": "Leave no trace: Learning to reset for safe and autonomous reinforcement learning", "venue": "arXiv preprint arXiv:1711.06782,", "year": 2017 }, { "authors": [ "Jaime F Fisac", "Anayo K Akametalu", "Melanie N Zeilinger", "Shahab Kaynama", "Jeremy Gillula", "Claire J Tomlin" ], "title": "A general safety framework for learning-based control in uncertain robotic systems", "venue": "IEEE Transactions on Automatic Control,", "year": 2018 }, { "authors": [ "Jaime F Fisac", "Neil F Lugovoy", "Vicenç Rubies-Royo", "Shromona Ghosh", "Claire J Tomlin" ], "title": "Bridging hamilton-jacobi safety analysis and reinforcement learning", "venue": "In 2019 International Conference on Robotics and Automation (ICRA),", "year": 2019 }, { "authors": [ "Javier Garcıa", "Fernando Fernández" ], "title": "A comprehensive survey on safe reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "Djordje Grbic", "Sebastian Risi" ], "title": "Safe reinforcement learning through meta-learned instincts", "venue": "arXiv preprint arXiv:2005.03233,", "year": 2020 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter 
Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Offpolicy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Sylvia L Herbert", "Somil Bansal", "Shromona Ghosh", "Claire J Tomlin" ], "title": "Reachability-based safety guarantees using efficient initializations", "venue": "IEEE 58th Conference on Decision and Control (CDC),", "year": 2019 }, { "authors": [ "Thomas Jaksch", "Ronald Ortner", "Peter Auer" ], "title": "Near-optimal regret bounds for reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Sham Kakade", "John Langford" ], "title": "Approximately optimal approximate reinforcement learning", "venue": "In ICML,", "year": 2002 }, { "authors": [ "Torsten Koller", "Felix Berkenkamp", "Matteo Turchetta", "Andreas Krause" ], "title": "Learning-based model predictive control for safe exploration", "venue": "IEEE Conference on Decision and Control (CDC),", "year": 2018 }, { "authors": [ "Aviral Kumar", "Aurick Zhou", "George Tucker", "Sergey Levine" ], "title": "Conservative q-learning for offline reinforcement learning", "venue": "arXiv preprint arXiv:2006.04779,", "year": 2020 }, { "authors": [ "Sascha Lange", "Thomas Gabel", "Martin Riedmiller" ], "title": "Batch reinforcement learning", "venue": "In Reinforcement learning,", "year": 2012 }, { "authors": [ "Karen Leung", "Edward Schmerling", "Mo Chen", "John Talbot", "J Christian Gerdes", "Marco Pavone" ], "title": "On infusing reachability-based safety assurance within probabilistic planning frameworks for human-robot vehicle interactions", "venue": "In International Symposium on Experimental Robotics,", "year": 2018 }, { "authors": [ "Sergey Levine" ], "title": "Deep reinforcement learning course, 2018", "venue": "URL http://rail.eecs. 
{ "authors": [ "Sergey Levine", "Aviral Kumar", "George Tucker", "Justin Fu" ], "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems", "venue": "arXiv preprint arXiv:2005.01643,", "year": 2020 }, { "authors": [ "Santiago Paternain", "Miguel Calvo-Fullana", "Luiz FO Chamon", "Alejandro Ribeiro" ], "title": "Safe policies for reinforcement learning via primal-dual methods", "venue": "arXiv preprint arXiv:1911.09101,", "year": 2019 }, { "authors": [ "Santiago Paternain", "Luiz Chamon", "Miguel Calvo-Fullana", "Alejandro Ribeiro" ], "title": "Constrained reinforcement learning has zero duality gap", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Xue Bin Peng", "Erwin Coumans", "Tingnan Zhang", "Tsang-Wei Lee", "Jie Tan", "Sergey Levine" ], "title": "Learning agile robotic locomotion skills by imitating animals", "venue": "arXiv preprint arXiv:2004.00784,", "year": 2020 }, { "authors": [ "Alex Ray", "Joshua Achiam", "Dario Amodei" ], "title": "Benchmarking safe exploration in deep reinforcement learning", "venue": "arXiv preprint arXiv:1910.01708,", "year": 2019 }, { "authors": [ "Daniel Russo" ], "title": "Worst-case regret bounds for exploration via randomized value functions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "High-dimensional continuous control using generalized advantage estimation", "venue": "arXiv preprint arXiv:1506.02438,", "year": 2015 }, { "authors": [ "Krishnan Srinivasan", "Benjamin Eysenbach", "Sehoon Ha", "Jie Tan", "Chelsea Finn" ], "title": "Learning to be safe: Deep RL with a safety critic", "venue": "arXiv preprint arXiv:2010.14603,", "year": 2020 }, { "authors": [ "Adam Stooke", "Joshua Achiam", "Pieter Abbeel" ], "title": "Responsive safety in reinforcement learning by PID Lagrangian methods", "venue": "arXiv preprint arXiv:2007.03964,", "year": 2020 }, { "authors": [ "Rich Sutton" ], "title": "Reinforcement learning book, 2020", "venue": "URL http://incompleteideas.net/book/first/ebook/node42.html", "year": 2020 }, { "authors": [ "Chen Tessler", "Daniel J Mankowitz", "Shie Mannor" ], "title": "Reward constrained policy optimization", "venue": "arXiv preprint arXiv:1805.11074,", "year": 2018 }, { "authors": [ "Brijen Thananjeyan", "Ashwin Balakrishna", "Ugo Rosolia", "Felix Li", "Rowan McAllister", "Joseph E Gonzalez", "Sergey Levine", "Francesco Borrelli", "Ken Goldberg" ], "title": "Safety augmented value estimation from demonstrations (SAVED): Safe deep model-based RL for sparse cost robotic tasks", "venue": "IEEE Robotics and Automation Letters,", "year": 2020 }, { "authors": [ "Yuke Zhu", "Josiah Wong", "Ajay Mandlekar", "Roberto Martín-Martín" ], "title": "robosuite: A modular simulation framework and benchmark for robot learning", "venue": "arXiv preprint arXiv:2009.12293,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) is a powerful framework for learning-based control because it can enable agents to learn to make decisions automatically through trial and error. However, in the real world, the cost of those trials – and those errors – can be quite high: a quadruped learning to run as fast as possible, might fall down and crash, and then be unable to attempt further trials due to extensive physical damage. However, learning complex skills without any failures at all is likely impossible. Even humans and animals regularly experience failure, but quickly learn from their mistakes and behave cautiously in risky situations. In this paper, our goal is to develop safe exploration methods for RL that similarly exhibit conservative behavior, erring on the side of caution in particularly dangerous settings, and limiting the number of catastrophic failures.\nA number of previous approaches have tackled this problem of safe exploration, often by formulating the problem as a constrained Markov decision process (CMDP) (Garcıa & Fernández, 2015; Altman, 1999). However, most of these approaches require additional assumptions, like assuming access to a function that can be queried to check if a state is safe (Thananjeyan et al., 2020), assuming access to a default safe controller (Koller et al., 2018; Berkenkamp et al., 2017), assuming knowledge of all the unsafe states (Fisac et al., 2019), and only obtaining safe policies after training converges, while being unsafe during the training process (Tessler et al., 2018; Dalal et al., 2018).\nIn this paper, we propose a general safe RL algorithm, with bounds on the probability of failures during training. Our method only assumes access to a sparse (e.g., binary) indicator for catastrophic failure, in the standard RL setting. We train a conservative safety critic that overestimates the probability of catastrophic failure, building on tools in the recently proposed conservative Q-learning framework (Kumar et al., 2020) for offline RL. In order to bound the likelihood of catastrophic failures at every iteration, we impose a KL-divergence constraint on successive policy updates so that the stationary distribution of states induced by the old and the new policies are not arbitrarily\n∗Work done during HB’s (virtual) visit to Sergey Levine’s lab at UC Berkeley\ndifferent. Based on the safety critic’s value, we consider a chance constraint denoting probability of failure, and optimize the policy through primal-dual gradient descent.\nOur key contributions in this paper are designing an algorithm that we refer to as Conservative Safety Critics (CSC), that learns a conservative estimate of how safe a state is, using this conservative estimate for safe-exploration and policy updates, and theoretically providing upper bounds on the probability of failures throughout training. Through empirical evaluation in five separate simulated robotic control domains spanning manipulation, navigation, and locomotion, we show that CSC is able to learn effective policies while reducing the rate of catastrophic failures by up to 50% over prior safe exploration methods." }, { "heading": "2 PRELIMINARIES", "text": "We describe the problem setting of a constrained MDP (Altman, 1999) specific to our approach and the conservative Q learning (Kumar et al., 2020) framework that we build on in our algorithm.\nConstrained MDPs. 
We take a constrained RL view of safety (Garcıa & Fernández, 2015; Achiam et al., 2017), and define safe exploration as the process of ensuring the constraints of the constrained MDP (CMDP) are satisfied while exploring the environment to collect data samples. A CMDP is a tuple (S,A, P,R, γ, µ, C), where S is the state space,A is the action space, P : S ×A×S → [0, 1] is a transition kernel, R : S × A → R is a task reward function, γ ∈ (0, 1) is a discount factor, µ is a starting state distribution, and C = {(ci : S → {0, 1}, χi ∈ R)|i ∈ Z} is a set of (safety) constraints that the agent must satisfy, with constraint functions ci taking values either 0 (alive) or 1 (failure) and limits χi defining the maximal allowable amount of non-satisfaction, in terms of expected probability of failure. A stochastic policy π : S → P(A) is a mapping from states to action distributions, and the set of all stationary policies is denoted by Π. Without loss of generality, we can consider a single constraint, where C denotes the constraint satisfaction function C : S → {0, 1}, (C ≡ 1{failure}) similar to the task reward function, and an upper limit χ. Note that since we assume only a sparse binary indicator of failure from the environmentC(s), in purely online training, the agent must fail a few times during training, and hence 0 failures is impossible. However, we will discuss how we can minimize the number of failures to a small rate, for constraint satisfaction.\nWe define discounted state distribution of a policy π as dπ(s) = (1 − γ) ∑∞ t=0 γ\ntP (st = s|π), the state value function as V πR (s) = Eτ∼π [R(τ)|s0 = s], the state-action value function as QπR(s, a) = Eτ∼π [R(τ)|s0 = s, a0 = a], and the advantage function as AπR(s, a) = QπR(s, a) − V πR (s). We define similar quantities for the constraint function, as VC , QC , and AC . So, we have V πR (µ) = Eτ∼π [ ∑∞ t=0R(st, at)] and V π C (µ) denoting the average episodic failures, which\ncan also be interpreted as expected probability of failure since V πC (µ) = Eτ∼π [ ∑∞ t=0 C(st)] = Eτ∼π[1{failure}] = P(failure|µ). For policy parameterized as πφ, we denote dπ(s) as ρφ(s). Note that although C : S → {0, 1} takes on binary values in our setting, the function V πC (µ) is a continuous function of the policy π.\nConservative Q Learning. CQL (Kumar et al., 2020) is a method for offline/batch RL (Lange et al., 2012; Levine et al., 2020) that aims to learn a Q-function such that the expected value of a policy under the learned Q function lower bounds its true value, preventing over-estimation due to out-ofdistribution actions as a result. In addition to training Q-functions via standard Bellman error, CQL minimizes the expected Q-values under a particular distribution of actions, µ(a|s), and maximizes the expected Q-value under the on-policy distribution, π(a|s). CQL in and of itself might lead to unsafe exploration, whereas we will show in Section 3, how the theoretical tool introduced in CQL can be used to devise a safe RL algorithm." }, { "heading": "3 THE CONSERVATIVE SAFE-EXPLORATION FRAMEWORK", "text": "In this section we describe our safe exploration framework. The safety constraint C(s) defined in Section 2 is an indicator of catastrophic failure: C(s) = 1 when a state s is unsafe and C(s) = 0 when it is not, and we ideally desire C(s) = 0 ∀s ∈ S that the agent visits. 
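To make these definitions concrete: because C(s) ∈ {0, 1} and an episode terminates on failure, the sum of C(st) over an episode is itself a 0/1 failure indicator, so both V πR (µ) and V πC (µ) admit straightforward Monte-Carlo estimates. The sketch below is our own illustration, not code from the paper; env and policy are hypothetical stand-ins for a Gym-style environment that reports the binary failure indicator in its info dict and a policy exposing a sample method.

```python
import numpy as np

def mc_estimates(env, policy, episodes=100, gamma=0.99, max_steps=1000):
    """Monte-Carlo estimates of V_R(mu) and V_C(mu) for a fixed policy.

    Because C(s) is binary and an episode terminates on failure, the sum of
    C(s_t) over an episode is a 0/1 failure indicator, so the V_C estimate is
    just the empirical failure rate across episodes.
    """
    returns, failures = [], []
    for _ in range(episodes):
        s = env.reset()
        ep_return, failed = 0.0, 0.0
        for t in range(max_steps):
            a = policy.sample(s)                           # a ~ pi(.|s)
            s, r, done, info = env.step(a)
            ep_return += (gamma ** t) * r                  # discounted task return
            failed = max(failed, float(info["failure"]))   # C(s') in {0, 1}
            if done:
                break
        returns.append(ep_return)
        failures.append(failed)
    return np.mean(returns), np.mean(failures)             # (V_R(mu), V_C(mu))
```

Of course, such an estimator only measures failures after the fact; guaranteeing C(s) = 0 at every visited state is a much stronger requirement.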
Since we do not make any assumptions in the problem structure for RL (for example a known dynamics model), we cannot guarantee this, but can at best reduce the probability of failure in every episode. So, we formulate the constraint as V πC (µ) = Eτ∼π [ ∑∞ t=0 C(st)] ≤ χ, where χ ∈ [0, 1) denotes probability of failure. Our approach is motivated by the insight that by being “conservative” with respect to how\nsafe a state is, and hence by over-estimating this probability of failure, we can effectively ensure constrained exploration.\nFigure 1 provides an overview of the approach. The key idea of our algorithm is to train a conservative safety critic denoted as QC(s, a), that overestimates how unsafe a particular state is and modifies the exploration strategy to appropriately account for this safety under-estimate (by overestimating the probability of failure). During policy evaluation in the environment, we use the safety critic QC(s, a) to reduce the chance of catastrophic failures by checking whether taking action a in state s has QC(s, a) less than a threshold . If not, we re-sample a from the current policy π(a|s). We now discuss our algorithm more formally. We start by discussing the procedure for learning the safety critic QC , then discuss how we incorporate this in the policy gradient updates, and finally discuss how we perform safe exploration (Garcıa & Fernández, 2015) during policy execution in the environment.\nOverall objective. Our objective is to learn an optimal policy π∗ that maximizes task rewards, while respecting the constraint on expected probability of failures.\nπ∗ = arg max π∈ΠC V πR (µ) where ΠC = {π ∈ Π : V πC (µ) ≤ χ} (1)\nLearning the safety critic. The safety critic QC is used to obtain an estimate of how unsafe a particular state is, by providing an estimate of probability of failure, that will be used to guide exploration. We desire the estimates to be “conservative”, in the sense that the probability of failure should be an over-estimate of the actual probability so that the agent can err on the side of caution while exploring. To train such a critic QC , we incorporate tools from CQL to estimate QC through updates similar to those obtained by reversing the sign of α in Equation 2 of CQL(H) (Kumar et al., 2020). This gives us an upper bound on QC instead of a lower bound, as ensured by CQL. We denote the over-estimated advantage corresponding to this safety critic as ÂC .\nFormally the safety critic is trained via the following objective, where the objective inside arg min is called CQL(ζ), ζ parameterizes QC , and k denotes the kth update iteration.\nQ̂k+1C ← arg min QC\nα · ( −Es∼Denv,a∼πφ(a|s)[QC(s, a)] + E(s,a)∼Denv [QC(s, a)] ) + 1\n2 E(s,a,s′,c)∼Denv\n[( QC(s, a)− B̂πφQ̂kC(s, a) )2] (2) Here, B̂πφ is the empirical Bellman operator discussed in section 3.1 and equation 2 of Kumar et al. (2020). α is a weight that varies the importance of the first term in equation 2, and controls the magnitude of value over-estimation, as we now highlight in red above. For states sampled from the replay buffer Denv , the first term seeks to maximize the expectation of QC over actions sampled from the current policy, while the second term seeks to minimize the expectation of QC over actions sampled from the replay buffer. Denv can include off-policy data, and also offline-data (if available). 
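For illustration, a minimal PyTorch sketch of one update of equation 2 is given below. This is our own rendering, not the authors’ released code; qc, qc_optim, policy, and batch are assumed interfaces, and the Bellman target treats a failure as incurring cost 1 and terminating the episode.

```python
import torch
import torch.nn.functional as F

def safety_critic_step(qc, qc_optim, policy, batch, alpha=0.5, gamma=0.99):
    """One gradient step on the objective in equation 2 (sign-flipped CQL)."""
    s, a, s_next, c = batch  # tensors from the replay buffer D_env; c in {0, 1}

    # Bellman backup for the failure probability: a failure yields cost 1 and
    # ends the episode, so the bootstrap term is masked by (1 - c).
    with torch.no_grad():
        a_next = policy.sample(s_next)
        target = c + gamma * (1.0 - c) * qc(s_next, a_next)
    bellman = F.mse_loss(qc(s, a), target)

    # Sign-flipped CQL regularizer: minimizing -E_{a~pi}[Q_C] + E_{buffer}[Q_C]
    # pushes Q_C up on current-policy actions and down on buffer actions, so
    # Q_C over-estimates the true failure probability.
    a_pi = policy.sample(s).detach()
    regularizer = -qc(s, a_pi).mean() + qc(s, a).mean()

    loss = alpha * regularizer + 0.5 * bellman
    qc_optim.zero_grad()
    loss.backward()
    qc_optim.step()
    return loss.item()
```

The 0.5 factor on the Bellman term mirrors the 1/2 in equation 2, and the magnitude of α controls how strongly Q_C is pushed toward an over-estimate.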
We interleave the gradient descent updates for training ofQC , with gradient ascent updates for policy πφ and gradient descent updates for Lagrange multiplier λ, which we describe next.\nPolicy learning. Since we want to learn policies that obey the constraint we set in terms of the safety critic, we can solve the objective in equation 1 via:\nmax πφ\nEs∼ρφ,a∼πφ [ A πφ R (s, a) ] s.t. Es∼ρφ,a∼πφQC(s, a) ≤ χ (3)\nWe can construct a Lagrangian and solve the policy optimization problem through primal dual gradient descent\nmax πφ min λ≥0\nEs∼ρφ,a∼πφ [ A πφ R (s, a)− λ (QC(s, a)− χ) ] We can apply vanilla policy gradients or some actor-critic style Q-function approximator for optimization. Here, QC is the safety critic trained through CQL as described in equation 2. We defer specific implementation details for policy learning to the final paragraph of this section.\nAlgorithm 1 CSC: safe exploration with conservative safety critics 1: Initialize V rθ (task value fn), Q s ζ (safety critic), policy πφ, λ, Denv , thresholds , δ, χ.\n2: Set V̂ πφold C (µ)← χ. . V̂ πφold C (µ) denotes avg. failures in the previous epoch. 3: for epochs until convergence do . Execute actions in the environment. Collect on-policy samples. 4: for episode e in {1, . . . , M} do 5: Set ← (1− γ)(χ− V̂\nπφold C (µ))\n6: Sample a ∼ πφold(s). Execute a iff QC(s, a) ≤ . Else, resample a. 7: Obtain next state s′, r = R(s, a), c = C(s′). 8: Denv ← Denv ∪ {(s, a, s′, r, c)} . If available, Denv can be seeded with off-policy/offline data 9: end for\n10: Store the average episodic failures V̂ πφold C (µ)← ∑M e=1 V̂ e C 11: for step t in {1, . . . , N} do . Policy and Q function updates using Denv 12: Gradient ascent on φ and (Optionally) add Entropy regularization (Appendix A.2) 13: Gradient updates for the Q-function ζ := ζ − ηQ∇ζCQL(ζ) 14: Gradient descent step on Lagrange multiplier λ (Appendix A.2) 15: end for 16: φold ← φ 17: end for\nExecuting rollouts (i.e., safe exploration). Since we are interested in minimizing the number of constraint violations while exploring the environment, we do not simply execute the learned policy iterate in the environment for active data collection. Rather, we query the safety critic QC to obtain an estimate of how unsafe an action is and choose an action that is safe via rejection sampling. Formally, we sample an action a ∼ πφold(s), and check if QC(s, a) ≤ . We keep re-sampling actions πφold(s) until this condition is met, and once met, we execute that action in the environment. In practice, we execute this loop for 100 iterations, and choose the action a among all actions in state s for which QC(s, a) ≤ and the value of QC(s, a) is minimum. If no such action a is found that maintains QC(s, a) ≤ , we just choose a for which QC(s, a) is minimum (although above the threshold).\nHere, is a threshold that varies across iterations and is defined as = (1 − γ)(χ − V̂ πφoldC (µ)) where, V̂\nπφold C (µ) is the average episodic failures in the previous epoch, denoting a sample estimate\nof the true V πφold C (µ). This value of is theoretically obtained such that Lemma 1 holds.\nIn the replay buffer Denv , we store tuples of the form (s, a, s′, r, c), where s is the previous state, a is the action executed, s′ is the next state, r is the task reward from the environment, and c = C(s′), the constraint value. In our setting, c is binary, with 0 denoting a live agent and 1 denoting failure.\nOverall algorithm. 
Our overall algorithm, shown in Algorithm 1, executes policy rollouts in the environment by respecting the constraint QC(s, a) ≤ , stores the observed data tuples in the replay buffer Denv , and uses the collected tuples to train a safety value function QC using equation 2, update the policy and the dual variable λ following the optimization objective in equation 6.\nImplementation details. Here, we discuss the specifics of the implementation for policy optimization. We consider the surrogate policy improvement problem Sutton (2020):\nmax πφ\nEs∼ρφold ,a∼πφ [ A πφold R (s, a) ] s.t. Es∼ρφold [DKL(πφold(·|s)||πφ(·|s))] ≤ δ and V πφ C (µ) ≤ χ\n(4)\nHere, we have introduced a DKL constraint to ensure successive policies are close in order to help obtain bounds on the expected failures of the new policy in terms of the expected failures of the old policy in Section 4. We replace the DKL(πφold(·|s)||πφ(·|s)) term by its second order Taylor expansion (expressed in terms of the Fisher Information Matrix F ) and enforce the resulting constraint\nexactly (Schulman et al., 2015a). Following equation 22 (Appendix A.2) we have,\nmax πφ\nEs∼ρφold ,a∼πφ [ A πφold R (s, a) ] s.t. V πφold C (µ) + 1 1− γ Es∼ρφold ,a∼πφ [AC(s, a)] ≤ χ\ns.t. Es∼ρφold [DKL(πφold(·|s)||πφ(·|s))] ≤ δ (5) We replace the true AC by the learned over-estimated ÂC , and consider the Lagrangian dual of this constrained problem, which we can solve by alternating gradient descent as shown below.\nmax πφ min λ≥0\nEs∼ρφold ,a∼πφ [ A πφold R (s, a) ] − λ ( V πφold C (µ) + 1 1− γ Es∼ρφold ,a∼πφ [ ÂC(s, a) ] − χ ) s.t. 1\n2 (φ− φold)TF (φ− φold) ≤ δ (6)\nNote that although we use FIM for the updates, we can also apply vanilla policy gradients or some actor-critic style Q-function approximator to optimize equation 6. Detailed derivations of the gradient updates are in Appendix A.2." }, { "heading": "4 THEORETICAL ANALYSIS", "text": "In this section, we aim to theoretically analyze our approach, showing that the expected probability of failures is bounded after each policy update throughout the learning process, while ensuring that the convergence rate to the optimal solution is only mildly bottlenecked by the additional safety constraint.\nOur main result, stated in Theorem 1, bounds the expected probability of failure of the policy that results from Equation 5. To prove this, we first state a Lemma that shows that the constraints in Equation 5 are satisfied with high probability during the policy updates. Detailed proofs of all the Lemmas and Theorems are in Appendix A.1.\nNotation. Let C = maxs |Ea∼πφnewAC(s, a)| and ∆ be the amount of overestimation in the expected advantage value generated from the safety critic, Es∼ρφ\nold′ ,a∼πφold [ÂC(s, a)] as per equa-\ntion 2, such that ∆ = Es∼ρφ old′ ,a∼πφold [ÂC(s, a) − AC(s, a)]. Let ζ denote the sampling error in the estimation of V\nπφold C (µ) by its sample estimate V̂ πφold C (µ) (i.e. ζ = |V̂ πφold C (µ) − V πφold C (µ)|)\nand N be the number of samples used in the estimation of VC . Let RegC(T ) be the total cumulative failures incurred by running Algorithm 1 until T samples are collected from the environment. We first show that when using Algorithm 1, we can upper bound the expectation probability of failure for each policy iterate πφold .\nLemma 1. 
If we follow Algorithm 1, during policy updates via Equation 5, the following is satisfied with high probability ≥ 1− ω\nV πφold C (µ) +\n1\n1− γ Es∼ρφold ,a∼πφ [AC(s, a)] ≤ χ+ ζ −\n∆\n1− γ\nHere, ζ captures sampling error in the estimation of V πφold C (µ) and we have ζ ≤\nC′ √ log(1/ω)\n|N | , where C ′ is a constant independent of ω obtained from union bounds and concentration inequalities (Kumar et al., 2020) and N is the number of samples used in the estimation of VC .\nThis lemma intuitively implies that the constraint on the safety critic in equation 5 is satisfied with a high probability, when we note that the RHS can be made small as N becomes large.\nLemma 1 had a bound in terms of V πφold C (µ) for the old policy πφold , but not for the updated policy πφnew . We now show that the expected probability of failure for the policy πφnew resulting from solving equation 5, V πφnewC (µ) is bounded with a high probability.\nTheorem 1. Consider policy updates that solve the constrained optimization problem defined in Equation 5. With high probability ≥ 1− ω, we have the following upper bound on expected probability of failure V πφnewC (µ) for πφnew during every policy update iteration:\nV πφnew C (µ) ≤ χ+ ζ −\n∆\n1− γ +\n√ 2δγ C (1− γ)2 where ζ ≤\nC ′ √ log(1/ω)\n|N | (7)\nSo far we have shown that, with high probability, we can satisfy the constraint in the objective during policy updates (Lemma 1) and obtain an upper bound on the expected probability of failure of the updated policy πφnew (Theorem 1). The key insight from Theorem 1 is that if we execute policy πφnew in the environment, the probability of failing is upper-bounded by a small number depending on the specified safety threshold χ. Since the probability of failure is bounded, if we execute πφnew for multiple episodes, the total number of failures is bounded as well.\nWe now bound the task performance in terms of policy return and show that incorporating and satisfying safety constraints during learning does not severely affect the convergence rate to the optimal solution for task performance. Theorem 2 builds upon and relies on the assumptions in (Agarwal et al., 2019) and extends it to our constrained policy updates in equation 5. Theorem 2 (Convergence rate for policy gradient updates with the safety constraint). If we run the policy gradient updates through equation 5, for policy πφ, with µ as the starting state distribution, with φ(0) = 0, learning rate η > 0, and choose α as mentioned in the discussion of Theorem 1, then for all policy update iterations T > 0 we have, with probability ≥ 1− ω,\nV ∗R(µ)− V (T ) R (µ) ≤ log |A| ηT + 1 (1− γ)2T +K\n∑T−1 t=0 λ (t)\nηT where K ≤ (1− χ) + 4\n√ 2δγ\n(1− γ)2\nSince the value of the dual variables λ strictly decreases during gradient descent updates (Algorithm 1), ∑T−1 t=0 λ\n(t) is upper-bounded. So, we see that the additional term proportional to K introduced in the convergence rate (compared to (Agarwal et al., 2019)) due to the safety constraint is upper bounded, and can be made small with a high probability by choosing α appropriately. In addition, we note that the safety threshold χ helps tradeoff the convergence rate by modifying the magnitude of K (a low χ means a stricter safety threshold, and a higher value of K, implying a larger RHS and slower convergence). 
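To make the role of these bounds in data collection concrete: the threshold ε = (1 − γ)(χ − V̂ πφold C (µ)) set in line 5 of Algorithm 1 is exactly the quantity that makes Lemma 1 (and hence Theorem 1) go through. A minimal sketch of the corresponding action-selection loop follows; it is our paraphrase of the rejection-sampling procedure in Section 3, with qc and policy as assumed callables.

```python
def safe_action(qc, policy, s, chi, vc_hat_old, gamma=0.99, max_tries=100):
    """Rejection-sample an action satisfying Q_C(s, a) <= eps (Algorithm 1).

    eps = (1 - gamma) * (chi - vc_hat_old), where vc_hat_old is the average
    episodic failure rate of the previous policy iterate.
    """
    eps = (1.0 - gamma) * (chi - vc_hat_old)
    candidates = [policy.sample(s) for _ in range(max_tries)]
    scored = [(float(qc(s, a)), a) for a in candidates]
    safe = [(q, a) for q, a in scored if q <= eps]
    # Among candidates that clear the threshold, take the one with smallest
    # Q_C; if none clears it, fall back to the least-unsafe candidate, as
    # described in Section 3.
    q, a = min(safe or scored, key=lambda t: t[0])
    return a
```

The fallback branch mirrors the practical caveat in Section 3: when no sampled action clears ε, the agent executes the action with the minimum Q_C anyway, which is why the anytime guarantee can be violated early in training while Q_C is still poorly fit.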
We discuss some practical considerations of the theoretical results in Appendix A.4.\nSo far we have demonstrated that the resulting policy iterates from our algorithm all satisfy the desired safety constraint of the CMDP which allows for a maximum safety violation of χ for every intermediate policy. While this result ensures that the probability of failures is bounded, it does not elaborate on the total failures incurred by the algorithm. In our next result, we show that the cumulative failures until a certain number of samples T of the algorithm grows sublinearly when executing Algorithm 1, provided the safety threshold χ is set in the right way. Theorem 3. [Number of cumulative safety failures grows sublinearly] Let χ in Algorithm 1 be timedependent such that χt = O(1/ √ t). Then, the total number of cumulative safety violations until when T transition samples have been collected by Algorithm 1, RegC(T ), scales sub-linearly with T , i.e., RegC(T ) = O( √ |S||A|T ).\nA proof is provided in Appendix A.1. Theorem 3 is in many ways similar to a typical regret bound for exploration (Russo, 2019; Jaksch et al., 2010), though it measures the total number of safety violations. This means that training with Algorithm 1 will converge to a “safe” policy that incurs no failures at a quick, O( √ T ) rate." }, { "heading": "5 EXPERIMENTS", "text": "Through experiments on continuous control environments of varying complexity, we aim to empirically evaluate the agreement between empirical performance and theoretical guidance by understanding the following questions:\n• How safe is CSC in terms of constraint satisfaction during training? • How does learning of safe policies trade-off with task performance during training?" }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Environments. In each environment, shown in Figure 2, we define a task objective that the agent must achieve and a criteria for catastrophic failure. The goal is to solve the task without dying. In point agent/car navigation avoiding traps, the agent must navigate a maze while avoiding traps. The agent has a health counter that decreases every timestep that it spends within a trap. When the\ncounter hits 0, the agent gets trapped and dies. In Panda push without toppling, a 7-DoF Franka Emika Panda arm must push a vertically placed block across the table to a goal location without the block toppling over. Failure is defined as when the block topples. In Panda push within boundary, the Panda arm must be controlled to push a block across the table to a goal location without the block going outside a rectangular constraint region. Failure occurs when the block center of mass ((x, y) position) move outside the constraint region. In Laikago walk without falling, an 18-DoF Laikago quadruped robot must walk without falling. The agent is rewarded for walking as fast as possible (or trotting) and failure occurs when the robot falls. Since quadruped walking is an extremely challenging task, for all the baselines, we initialize the agent’s policy with a controller that has been trained to keep the agent standing, while not in motion.\nBaselines and comparisons. 
We compare CSC to three prior methods: constrained policy optimization (CPO) (Achiam et al., 2017), a standard unconstrained RL method (Schulman et al., 2015a) which we call Base (comparison with SAC (Haarnoja et al., 2018) in Appendix Figure 7), an algorithm similar to Base, called BaseShaped that modifies the reward R(s, a) as R(s, a)− PC(s) where P = 10 and C(s) is 1 when a failure occurs and is 0 otherwise. We also consider a method that extends Leave No Trace (Eysenbach et al., 2017) to our setting, which we refer to as Q ensembles. This last comparison is the most similar to our approach, in that it also implements a safety critic (adapted from LNT’s backward critic), but instead of using our conservative updates, the safety critic uses an ensemble for epistemic uncertainty estimation, as proposed by Eysenbach et al. (2017).\nThere are other safe RL approaches which we cannot compare against, as they make multiple additional assumptions, such as the availability of a function that can be queried to determine if a state is safe or not Thananjeyan et al. (2020), availability of a default safe policy for the task Koller et al. (2018); Berkenkamp et al. (2017), and prior knowledge of the location of unsafe states (Fisac et al., 2019). In addition to the baselines (Figure 3), we analyze variants of our algorithm with different safety thresholds through ablation studies (Figure 4). We also analyze CSC and the baselines by seeding with a small amount of offline data in the Appendix A.10." }, { "heading": "5.2 EMPIRICAL RESULTS", "text": "Comparable or better performance with significantly lower failures during training. In Figure 3, we observe that CSC has significantly lower average failures per episode, and hence lower cumulative failures during the entire training process. Although the failures are significantly lower for our method, task performance and convergence of average task rewards is comparable to or better than all prior methods, including the Base method, corresponding to an unconstrained RL algorithm. While the CPO and Q-ensembles baselines also achieve near 0 average failures eventually, we see that CSC achieves this very early on during training.\nCSC trades off performance with safety constraint satisfaction, based on the safety-threshold χ. In Figure 4, we plot variants of our method with different safety constraint thresholds χ. Observe that: (a) when the threshold is set to a lower value (stricter constraint), the number of avg. failures per episode decreases in all the environments, and (b) the convergence rate of the task reward is lower when the safety threshold is stricter. These observations empirically complement our theoretical guarantees in Theorems 1 and 2. We note that there are quite a few failures even in the case where χ = 0.0, which is to be expected in practice because in the initial stages of training there is high function approximation error in the learned critic QC . However, we observe that the average episodic failures quickly drop below the specified threshold after about 500 episodes of training." }, { "heading": "6 RELATED WORK", "text": "We discuss prior safe RL and safe control methods under three subheadings\nAssuming prior domain knowledge of the problem structure. Prior works have attempted to solve safe exploration in the presence of structural assumptions about the environment or safety structures. For example, Koller et al. (2018); Berkenkamp et al. 
(2017) assume access to a safe set of environment states, and a default safe policy, while in Fisac et al. (2018); Dean et al. (2019), knowledge of system dynamics is assumed and (Fisac et al., 2019) assume access to a distance metric on the state space. SAVED (Thananjeyan et al., 2020) learns a kernel density estimate over unsafe states, and assumes access to a set of user demonstrations and a user specified function that can be queried to determine whether a state is safe or not. In contrast to these approaches, our method does not assume any prior knowledge from the user, or domain knowledge of the problem setting, except a binary signal from the environment indicating when a catastrophic failure has occurred.\nAssuming a continuous safety cost function. CPO (Achiam et al., 2017), and (Chow et al., 2019) assume a cost function can be queried from the environment at every time-step and the objective is to keep the cumulative costs within a certain limit. This assumption limits the generality of the method in scenarios where only minimal feedback, such as binary reward feedback is provided (additional details in section A.3).\n(Stooke et al., 2020) devise a general modification to the Lagrangian by incorporating two additional terms in the optimization of the dual variable. SAMBA (Cowen-Rivers et al., 2020) has a learned GP dynamics model and a continuous constraint cost function that encodes safety. The objective is to minimize task cost function while maintaining the CVARα of cumulative costs below a threshold. In the work of Dalal et al. (2018); Paternain et al. (2019b;a); Grbic & Risi (2020), only the optimal policy is learned to be safe, and there are no safety constraint satisfactions during training. In contrast to these approaches, we assume only a binary signal from the environment indicating when a catastrophic failure has occurred. Instead of minimizing expected costs, our constraint formulation directly seeks to constrain the expected probability of failure.\nSafety through recoverability. Prior works have attempted to devise resetting mechanisms to recover the policy to a base configuration from (near) a potentially unsafe state. LNT (Eysenbach et al., 2017) trains both a forward policy for solving a task, and a reset goal-conditioned policy that kicks in when the agent is in an unsafe state and learns an ensemble of critics, which is substantially more complex than our approach of a learned safety critic, which can give rise to a simple but provable safe exploration algorithm. Concurrently to us, SQRL (Srinivasan et al., 2020) developed an approach also using safety critics such that during the pre-training phase, the agent explores both safe and unsafe states in the environment for training the critic.\nIn control theory, a number of prior works have focused on Hamilton-Jacobi-Isaacs (HJI) reachability analysis (Bansal et al., 2017) for providing safety constraint satisfactions and obtaining control inputs for dynamical systems (Herbert et al., 2019; Bajcsy et al., 2019; Leung et al., 2018). Our method does not require knowledge of the system dynamics or regularity conditions on the statespace, which are crucial for computing unsafe states using HJI reachability." }, { "heading": "7 DISCUSSION, LIMITATIONS, AND CONCLUSION", "text": "We introduced a safe exploration algorithm to learn a conservative safety critic that estimates the probability of failure for each candidate state-action tuple, and uses this to constrain policy evaluation and policy improvement. 
We provably demonstrated that the probability of failures is bounded throughout training and provided convergence results showing how ensuring safety does not severely bottleneck task performance. We empirically validated our theoretical results and showed that we achieve high task performance while incurring low accidents during training.\nWhile our theoretical results demonstrated that the probability of failures is bounded with a high probability, one limitation is that we still observe non-zero failures empirically even when the threshold χ is set to 0. This is primarily because of neural network function approximation error in the early stages of training the safety critic, which we cannot account for precisely in the theoretical results, and also due to the fact that we bound the probability of failures, which in practice means that the number of failures is also bounded, but non-zero. We also showed that if we set the constraint threshold in an appropriate time-varying manner, training with CSC incurs cumulative failures that scales at most sub-linearly with the number of transition samples in the environment.\nAlthough our approach bounds the probability of failure and is general in the sense that it does not assume access any user-specified constraint function, in situations where the task is difficult to solve, for example due to stability concerns of the agent, our approach will fail without additional assumptions. In such situations, some interesting future work directions would be to develop a curriculum of tasks to start with simple tasks where safety is easier to achieve, and gradually move towards more difficult tasks, such that the learned knowledge from previous tasks is not forgotten." }, { "heading": "ACKNOWLEDGEMENT", "text": "We thank Vector Institute, Toronto and the Department of Computer Science, University of Toronto for compute support. We thank Glen Berseth and Kevin Xie for helpful initial discussions about the project, Alexandra Volokhova, Arthur Allshire, Mayank Mittal, Samarth Sinha, and Irene Zhang for feedback on the paper, and other members of the UofT CS Robotics Group for insightful discussions during internal presentations and reading group sessions. Finally, we are grateful to the anonymous ICLR 2021 reviewers for their feedback in helping improve the paper." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROOFS OF ALL THEOREMS AND LEMMAS", "text": "Note. During policy updates via Equation 5, the DKL constraint is satisfied with high probability if we follow Algorithm 1.\nFollowing the steps in the Appendix A.2, we can write the gradient ascent step for φ as φ← φold + βF−1∇φold J̃(φold) β = βj √ 2δ\n∇φold J̃(φold)TF∇φold J̃(φold) (8)\nF can be estimated with samples as F = Es∼ρφold [ Ea∼πφold [ ∇φold log πφold(∇φold log πφold)T ]] (9) Here βj is the backtracking coefficient and we perform backtracking line search with exponential decay. ∇φold J̃(φold) is calculated as,\n∇φold J̃(φold) = Es∼ρφold ,a∼πφold [ ∇φold log πφold(a|s)à πφold R ] (10)\nAfter every update, we check if D̄KL(φ||φold) ≤ δ, and if not we decay βj = βj(1 − βj)j , set j ← j + 1 and repeat for L steps until D̄KL ≤ δ is satisfied. If this is not satisfied after L steps, we backtrack, and do not update φ i.e. set φ← φold. Lemma 1. 
If we follow Algorithm 1, during policy updates via equation 5, the following is satisfied with high probability ≥ 1− ω\nV πφold C (µ) +\n1\n1− γ Es∼ρφold ,a∼πφ [AC(s, a)] ≤ χ+ ζ −\n∆\n1− γ\nHere, ζ captures sampling error in the estimation of V πφold C (µ) and we have ζ ≤\nC √ log(1/ω)\n|N | , where C is a constant and N is the number of samples used in the estimation of VC .\nProof. Based on line 6 of Algorithm 1, for every rollout {(s, a)}, the following holds: QC(s, a) ≤ (1− γ)(χ− V̂ πφold C (µ))) ∀(s, a)\n=⇒ ÂC(s, a) ≤ (1− γ)(χ− V̂ πφold C (µ))) ∀(s, a)\n=⇒ V̂ πφoldC (µ) + 1\n1− γ ÂC(s, a) ≤ χ ∀(s, a)\n=⇒ V̂ πφoldC (µ) + 1\n1− γ Es∼ρφold ,a∼πφ\n[ ÂC(s, a) ] ≤ χ\n(11)\nWe note that we can only compute a sample estimate V̂ πφold C (µ) instead of the true quantity VC which can introduce sampling error in practice. In order to ensure that V̂ πφold C (µ) is not much lesser than V πφold C (µ), we can obtain a bound on their difference. Note that if V̂ πφold C (µ) ≥ V πφold C (µ), the Lemma holds directly, so we only need to consider the less than case.\nLet V̂ πφold C (µ) = V πφold C (µ) − ζ. With high probability ≥ 1 − ω, we can ensure ζ ≤\nC′ √ log(1/ω)\n|N | , where C ′ is a constant independent of ω (obtained from union bounds and concentration inequalities) and N is the number of samples used in the estimation of VC . In addition, our estimate of Es∼ρφold ,a∼πφ [ ÂC(s, a) ] is an overestimate of the true Es∼ρφold ,a∼πφ [AC(s, a)], and we denote their difference by ∆.\nSo, with high probability ≥ 1− ω, we have\nV̂ πφold C (µ) +\n1\n1− γ Es∼ρφold ,a∼πφ\n[ ÂC(s, a) ] ≤ χ\n=⇒ V πφoldC (µ) + 1\n1− γ Es∼ρφold ,a∼πφ [AC(s, a)] ≤ χ+ ζ −\n∆\n1− γ\n(12)\nTheorem 1. Consider policy updates that solve the constrained optimization problem defined in equation 5. With high probability ≥ 1− ω, we have the following upper bound on expected probability of failure V πφnewC (µ) for πφnew during every policy update iteration\nV πφnew C (µ) ≤ χ+ ζ −\n∆\n1− γ +\n√ 2δγ C (1− γ)2 where ζ ≤\nC √ log(1/ω)\n|N | (13)\nHere, C = maxs |Ea∼πφnewAC(s, a)| and ∆ is the overestimation in Es∼ρφold′ ,a∼πφold [AC(s, a)] due to CQL.\nProof. C(s) denotes the value of the constraint function from the environment in state s. This is analogous to the task reward function R(s, a). In our case C(s) is a binary indicator of whether a catastrophic failure has occurred, however the analysis we present holds even when C(s) is a shaped continuous cost function.\nC(s) = { 1, 1{failure} = 1 0, otherwise\nLet V πφR (µ) denotes the discounted task rewards obtained in expectation by executing policy πφ for one episode, and let V πφC (µ) denote the corresponding constraint values.\nmax πφ\nV πφ R (µ) s.t. V πφ C (µ) ≤ χ (14)\nFrom the TRPO (Schulman et al., 2015a) and CPO (Achiam et al., 2017) papers, following similar derivations, we obtain the following bounds\nV πφ R (µ)− V πφold R (µ) ≥\n1\n1− γ Es∼ρφold ,a∼πφ\n[ A πφold R (s, a)−\n2γ R 1− γ\nDTV (πφ||πφold)[s] ] (15)\nHere, AπφR is the advantage function corresponding to the task rewards and R = maxs |Ea∼πφA πφ R (s, a)|. DTV is the total variation distance. We also have,\nV πφ C (µ)− V πφold C (µ) ≤\n1\n1− γ Es∼ρφold ,a∼πφ\n[ A πφold C (s, a) +\n2γ C 1− γ\nDTV (πφ||πφold)[s] ] (16)\nHere, A πφold C is the advantage function corresponding to the costs and C = maxs |Ea∼πφA πφold C (s, a)|. In our case, AC is defined in terms of the safety Q function QC(s, a), and CQL can bound its expectation directly. 
To see this, note that, by definition Es∼ρφold ,a∼πφ [ A πφold C (s, a) ] = Es∼ρφold ,a∼πφ [Qζ(s, a)] − Es∼ρφold ,a∼πφold [Qζ(s, a)]. Here, the RHS is precisely the term in equation 2 of (Kumar et al., 2020) that is bounded by CQL. We get an overstimated advantage ÂC(s, a) from training the safety critic QC through updates in equation 2. . Let ∆ denote the expected magnitude of over-estimate Es∼ρφold ,a∼πφ [ ÂC(s, a) ] = Es∼ρφold ,a∼πφ [AC(s, a)] + ∆, where ∆ is positive. Note that\nreplacing AC , by its over-estimate ÂC , the inequality in 16 above still holds.\nUsing Pinsker’s inequality, we can convert the bounds in terms of DKL instead of DTV , DTV (p||q) ≤ √ DKL(p||q)/2 (17) By Jensen’s inequality, E[ √ DKL(p||q)/2] ≤ √ E[DKL(p||q)]/2 (18)\nSo, we can replace the E[DTV (p||q)] terms in the bounds by √\nE[DKL(p||q)]. Then, inequation 16 becomes,\nV πφ C (µ)− V πφold C (µ) ≤\n1\n1− γ\n[ Es∼ρφold ,a∼πφ [ A πφold C (s, a) ] + 2γ C 1− γ √ Es∼ρφold ,a∼πφ [DKL(πφ||πφold)[s]] ] (19)\nRe-visiting our objective in equation 5,\nmax πφ\nEs∼ρφold ,a∼πφ [ A πφold R (s, a) ] s.t. Es∼ρφold [DKL(πφold(·|s)||πφ(·|s))] ≤ δ s.t. V\nπφ C (µ) ≤ χ\n(20)\nFrom inequation 19 we note that instead of of constraining V πφC (µ) we can constrain an upper bound on this. Writing the constraint in terms of the current policy iterate πφold using equation 19,\nπφnew = max πφ\nEs∼ρφold ,a∼πφ [ A πφold R (s, a) ] s.t. Es∼ρφold [DKL(πφold(·|s)||πφ(·|s))] ≤ δ\ns.t. V πφold C (µ) +\n1\n1− γ Es∼ρφold ,a∼πφ\n[ A πφold C (s, a) ] + β √ Es∼ρφold [DKL(πφold(·|s)||πφ(·|s))] ≤ χ\n(21)\nAs there is already a bound on DKL(πφold(·|s)||πφ(·|s))], getting rid of the redundant term, we define the following optimization problem, which we actually optimize for\nπφnew = max πφ\nEs∼ρφold ,a∼πφ [ A πφold R (s, a) ] s.t. Es∼ρφold [DKL(πφold(·|s)||πφ(·|s))] ≤ δ\ns.t. V πφold C (µ) +\n1\n1− γ Es∼ρφold ,a∼πφ\n[ A πφold C (s, a) ] ≤ χ\n(22)\nUpper bound on expected probability of failures. If πφnew is updated using equation 5, then we have the following upper bound on V πφnewC (µ)\nV πφnew C (µ) ≤ V πφold C (µ) +\n1\n1− γ Es∼ρφold ,a∼πφ\n[ A πφold C ] +\n2γ C (1− γ)2 √ Es∼ρφold ,a∼πφ [DKL(πφ||πφold)[s]]\n(23)\nIf we ensure V πφold C (µ) + 1 1−γEs∼ρφold ,a∼πφ [ A πφold C (s, a) ] ≤ χ holds by following Algorithm 1,we have the following upper bound on V πφnewC (µ)\nV πφnew C (µ) ≤ χ+\n√ 2δγ C (1− γ)2 (24)\nHere, C = maxs |Ea∼πφnewA πφold C (s, a)|.\nNow, instead of AC(s, a), we have an over-estimated advantage estimate ÂC(s, a) obtained by training the safety critic QC through CQL as in equation 2. Let ∆ denote the expected magnitude of over-estimate Es∼ρφold ,a∼πφ [ ÂC(s, a) ] = Es∼ρφold ,a∼πφ [AC(s, a)] + ∆, where ∆ is positive.\nFrom Lemma 1, we are able to ensure the following with high probability ≥ 1− ω\nV πφold C (µ) +\n1\n1− γ Es∼ρφold ,a∼πφ [AC(s, a)] ≤ χ+ ζ −\n∆\n1− γ By combining this with the upper bound on V πφnewC (µ) from inequality 23, we obtain with probability ≥ 1− ω\nV πφnew C (µ) ≤ χ+ ζ −\n∆\n1− γ +\n√ 2δγ C (1− γ)2 where ζ ≤\nC ′ √ log(1/ω)\n|N | (25)\nSince C depends on the optimized policy πφnew , it can’t be calculated exactly prior to the update. As we cap QC(s, a) to be ≤ 1, therefore, the best bound we can construct for C is the trivial bound C ≤ 2. Now, in order to have V πφnew C (µ) < χ, we require ∆ > 2 √ 2δγ 1−γ + (1 − γ)ζ. 
To guarantee this, replacing ∆ by the exact overestimation term from CQL, we have the following condition on\nα:\nα > Gc,T 1− γ · max s∼ρφ\nold′\n( 1\n| √ Dφold′ | + 2 √ 2δγ + (1− γ)2ζ Gc,T )[ Ea∼πφold ( πφold πφold′ − 1 )]−1 (26)\nHere, Gc,T is a constant depending on the concentration properties of the safety constraint function C(s, a) and the state transition operator T (s′|s, a) (Kumar et al., 2020). φold′ denotes the parameters of the policy π in the iteration before φold. Now, with probability≥ 1−ω, we have ζ ≤ C′ √ log(1/ω)\n|N | . So, if α is chosen as follows\nα > Gc,T 1− γ · max s∼ρφ\nold′ 1|√Dφold′ | + 2 √ 2δγ + (1− γ)2C ′ √ log(1/ω) |N | Gc,T [Ea∼πφold ( πφoldπφold′ − 1 )]−1\n(27) Then with probability ≥ 1− ω, we will have,\nV πφnew C (µ) ≤ χ (28)\nIn the next theorem, we show that the convergence rate to the optimal solution is not severely affected due to the safety constraint satisfaction guarantee, and gets modified by addition of an extra bounded term.\nTheorem 2. If we run the policy gradient updates through equation 5, for policy πφ, with µ as the starting state distribution, with φ(0) = 0, and learning rate η > 0, then for all policy update iterations T > 0 we have, with probability ≥ 1− ω,\nV ∗R(µ)− V (T ) R (µ) ≤ log |A| ηT + 1 (1− γ)2T +\n( (1− χ) + ( 1− 2∆\n(1− γ)\n) + 2ζ ) ∑T−1 t=0 λ (t)\nηT\nSince the value of the dual variables λ strictly decreases during gradient descent updates (Algorithm 1), ∑T−1 t=0 λ (t) is upper-bounded. In addition, if we choose α as mentioned in the discussion of Theorem 1, we have ∆ > 2 √\n2δγ 1−γ + ζ. Hence, with probability ≥ 1− ω, we can ensure that\nV ∗R(µ)− V (T ) R (µ) ≤ log |A| ηT + 1 (1− γ)2T +K\n∑T−1 t=0 λ (t)\nηT where K ≤ (1− χ) + 4\n√ 2δγ\n(1− γ)2\nProof. Let superscript (t) denote the tth policy update iteration. We follow the derivation in Lemma 5.2 of (Agarwal et al., 2019) but replaceA(s, a) with our modified advantage estimator Â(t)(s, a) = A\n(t) R (s, a)− λ(t)AC(s, a). The quantity logZt(s) is defined in terms of A (t) R as logZt(s) = log ∑ a π(t)(a|s) exp (ηA(t)/(1− γ))\n≥ ∑ a π(t)(a|s) log exp ηA(t)(s, a)/(1− γ)) = η\n1− γ ∑ a π(t)(a|s)A(t)(s, a)\n= 0\n(29)\nWe define an equivalent alternate quantity based on Â(t) log Ẑt(s) = log ∑ a π(t)(a|s) exp (ηÂ(t)(s, a)/(1− γ))\n= log ∑ a π(t)(a|s) exp (η(A(t)R (s, a)− λ (t)AC(s, a))/(1− γ))\n≥ ∑ a π(t)(a|s) log exp (ηA(t)R (s, a)/(1− γ))− λ (t) ∑ a π(t)(a|s) log exp (ηA(t)C (s, a)/(1− γ))\n= 0− λ (t)η 1− γ ∑ a π(t)(a|s)A(t)C (s, a)\n(30)\nFor simplicity, consider softmax policy parameterization (equivalent results hold under the function approximation regime as shown in (Agarwal et al., 2019)), where we define the policy updates with the modified advantage function Â(t) to take the form:\nφ(t+1) = φ(t) + η 1− γ Â(t) and π(t+1)(a|s) = π(t)(a|s)exp(η (t)(s, a)/(1− γ)) Ẑt(s) ,\nHere, Ẑt(s) = ∑ a∈A π\n(t)(a|s) exp(ηÂ(t)(s, a)/(1− γ)). Note that our actual policy updates (with backtracking line search) are almost equivalent to this when η is small. For the sake of notational convenience, we will denote log Ẑt(s)+ λ (t)η 1−γ ∑ a π\n(t)(a|s)A(t)C (s, a) asGt(s). We haveGt(s) ≥ 0 from equation 30.\nWe consider the performance improvement lemma (Kakade & Langford, 2002) with respect to the task advantage function A(t)R (s, a) and express it in terms of the modified advantage function Â(t)(s, a) = A\n(t) R (s, a)− λ(t)AC(s, a). 
Let µ be the starting state distribution of the MDP, and d(t)\ndenote the stationary distribution of states induced by policy π in the tth iteration.\nV (t+1) R (µ)− V (t) R (µ) =\n1\n1− γ Es∼d(t+1) ∑ a π(t+1)(a|s)A(t)R (s, a)\n= 1\n1− γ Es∼d(t+1) ∑ a π(t+1)(a|s)(Â(t)(s, a) + λ(t)A(t)C (s, a))\n= 1\nη Es∼d(t+1) ∑ a π(t+1)(a|s) log π (t+1)(a|s)Ẑt(s) π(t)(a|s)\n+ 1\n1− γ Es∼d(t+1) ∑ a π(t+1)(a|s)(λ(t)A(t)C (s, a))\n= 1\nη Es∼d(t+1)DKL(π(t+1)s ||π(t)s ) +\n1 η Es∼d(t+1) log Ẑt(s)\n+ 1\n1− γ Es∼d(t+1) ∑ a π(t+1)(a|s)(λ(t)A(t)C (s, a))\n≥ 1 η Es∼d(t+1) log Ẑt(s) +\nλ(t)\n1− γ Es∼d(t+1) ∑ a π(t)(a|s)A(t)C (s, a)\n≥ 1 η Es∼d(t+1)Gt(s)\n≥ 1− γ η Es∼µGt(s)\n(31)\nWe note that Gt(s) ≥ 0 from equation 30. We now prove a result upper bounding the difference between the optimal task value for any state distribution ρ and the task value at the tth iteration for the same state distribution.\nSub-optimality gap. The difference between the optimal value function and the current value function estimate is upper bounded.\nV π ? R (ρ)− V (t) R (ρ) =\n1\n1− γ Es∼d? ∑ a π?(a|s)(Â(t)(s, a) + λ(t)A(t)C (s, a))\n= 1\nη Es∼d? ∑ a π?(a|s) log π (t+1)(a|s)Ẑt(s) π(t)(a|s) + 1 1− γ Es∼d? ∑ a π?(a|s)λ(t)A(t)C (s, a)\n= 1\nη Es∼d?\n( DKL(π ? s ||π(t)s )−DKL(π?s ||π(t+1)s ) + ∑ a π∗(a|s) log Ẑt(s) )\n+ 1\n1− γ Es∼d? ∑ a π?(a|s)λ(t)A(t)C (s, a)\n= 1\nη Es∼d?\n( DKL(π ? s ||π(t)s )−DKL(π?s ||π(t+1)s ) + log Ẑt(s) ) + 1\n1− γ Es∼d? ∑ a π?(a|s)λ(t)A(t)C (s, a)\n= 1\nη Es∼d?\n( DKL(π ? s ||π(t)s )−DKL(π?s ||π(t+1)s ) ) + 1\nη Es∼d?\n( log Ẑt(s) + λ(t)\n1− γ ∑ a π?(a|s)A(t)C (s, a)\n)\n= 1\nη Es∼d?\n( DKL(π ? s ||π(t)s )−DKL(π?s ||π(t+1)s ) ) + 1\nη Es∼d?\n( Gt(s) + λ(t)\n1− γ ∑ a π?(a|s)A(t)C (s, a)− λ(t) 1− γ ∑ a π(t)(a|s)A(t)C (s, a) ) (32)\nUsing equation 31 with d? as the starting state distribution µ, we have: 1\nη Es∼d? logGt(s) ≤\n1\n1− γ\n( V (t+1)(d?)− V (t)(d?) ) which gives us a bound on Es∼d? logGt(s).\nUsing the above equation and that V (t+1)(ρ) ≥ V (t)(ρ) (as V (t+1)(s) ≥ V (t)(s) for all states s), we have:\nV π ? R (ρ)− V (T−1) R (ρ) ≤\n1\nT T−1∑ t=0 (V π ? R (ρ)− V (t) R (ρ))\n≤ 1 ηT T−1∑ t=0 Es∼d?(DKL(π?s ||π(t)s )−DKL(π?s ||π(t+1)s )) + 1 ηT T−1∑ t=0 Es∼d? logGt(s)\n+ 1\nηT T−1∑ t=0\nEs∼d? ( λ(t)\n1− γ ∑ a π?(a|s)A(t)C (s, a)− λ(t) 1− γ ∑ a π(t)(a|s)A(t)C (s, a)\n)\n≤ Es∼d ?DKL(π ? s ||π(0))\nηT +\n1\n(1− γ)T T−1∑ t=0 ( V (t+1) R (d ?)− V (t)R (d ?) )\n+ 1\nηT T−1∑ t=0 λ(t)\n( 1\n1− γ Es∼d? ∑ a π?(a|s)A(t)C (s, a)− 1 1− γ Es∼d? ∑ a π(t)(a|s)A(t)C (s, a)\n)\n≤ Es∼d ?DKL(π ? s ||π(0)) ηT + V (T ) R (d ?)− V (0)R (d?) (1− γ)T\n+ 2((1− γ)(χ+ ζ)−∆) ∑T−1 t=0 λ (t)\n(1− γ)ηT\n≤ log |A| ηT + 1 (1− γ)2T + 2((1− γ)(χ+ ζ)−∆)\n∑T−1 t=0 λ (t) (1− γ)ηT .\nHere, ∆ denotes the CQL overestimation penalty, and we have used the fact that each term of( 1 1−γ ∑ a π ?(a|s)A(t)C (s, a)− 1 1−γ ∑ a π (t)(a|s)A(t)C (s, a) )\nis upper bounded by (χ + ζ − ∆(1−γ) ) from Lemma 1, so the difference is upper-bounded by 2(χ+ ζ − ∆(1−γ) ).\nBy choosing α as in equation 26, we have ∆ > 2 √\n2δγ 1−γ + (1− γ)ζ. So, −∆ < −\n2 √\n2δγ 1−γ − (1− γ)ζ.\nHence, we obtain the relation\nWe also observe that 2(χ− ∆(1−γ) ) + 2ζ = χ+ χ− 2 ∆ (1−γ) + 2ζ ≤ 2− χ− 2 ∆ (1−γ) = (1− χ) + 2ζ + (1− 2 ∆(1−γ) ) + 2ζ\nSo, we have the following result for convergence rate\nV ∗R(µ)− V (T ) R (µ) ≤ log |A| ηT + 1 (1− γ)2T + ((1− χ) + (1− 2∆ (1− γ) ) + 2ζ)\n∑T−1 t=0 λ (t)\nηT\nAgain, with probability ≥ 1− ω, we can ensure ζ ≤ C ′ √ log(1/ω)\n|N | . Overall, choosing the value of α\nfrom equation 27, we have ∆ > 2 √\n2δγ 1−γ + (1 − γ)ζ. So, −∆ < −\n2 √\n2δγ 1−γ − (1 − γ)ζ. 
Hence, with\nprobability ≥ 1− ω, we can ensure that\nV ∗R(µ)− V (T ) R (µ) ≤ log |A| ηT + 1 (1− γ)2T +K\n∑T−1 t=0 λ (t)\nηT\nwhere,\nK ≤ (1− χ) + 4 √ 2δγ\n(1− γ)2\nSo far we have demonstrated that the resulting policy iterates from our algorithm all satisfy the desired safety constraint of the CMDP which allows for a maximum safety violation of χ for every intermediate policy. While this guarantee ensures that the probability of failures is bounded, it does not elaborate on the total failures incurred by the algorithm. In our next result, we show that the cumulative failures until a certain number of iterations T of the algorithm grows sublinearly when executing Algorithm 1, provided the safety threshold χ is set in the right way. Theorem 3. Let χ in Algorithm 1 be time-dependent such that χt = O(1/ √ t). Then, the total number of cumulative safety violations until when T transition samples have been collected by Algorithm 1, RegC(T ), scales sub-linearly with T , i.e., RegC(T ) = O( √ |S||A|T ).\nProof. From Theorem 1, with probability ≥ 1− ω\nV πφnew C (µ) ≤ χ+ ζ −\n∆\n1− γ +\n√ 2δγ C (1− γ)2 where ζ ≤\nC ′ √ log(1/ω)\n|N | (33)\nInstead of using φnew, for ease of notation in this analysis, let us write this in terms of subscript t such that πt denotes the policy at the tth iteration. We can write the bound on ζ more explicitly as, ζt ≤ Es∼dπt ,a∼πt [ C′ √\nlog(1/ω)√ |Nt(s,a)|\n] Here, Nt(s, a) denotes the total number of times (s, a) is seen,\nand can be written asNt(s, a) = ∑t j=1 nj(s, a), where nj(s, a) denotes total number of times (s, a)\nis seen in episode j. T = ∑ s,aNk(s, a) denotes the total number of samples collected so far. Let k be the number of episodes elapsed when this happens. The safety threshold χ can be varied at every iteration, such that it decreases as χt ∝ 1√t . So, we have with probability ≥ 1− ω\nV πtC (µ) ≤ χt+ζt− ∆\n1− γ +\n√ 2δγ C (1− γ)2 where ζt ≤ Es∼dπt ,a∼πt\n[ C ′ √\nlog(1/ω)√ |Nt(s, a)|\n] and χt =\nC ′′√ t\n(34)\nNow, we note that V πtC denotes the expected probability of episodic failures by executing policy πt (as described in section 2 of the main paper). Let us consider that a policy is rolled out for one episode after every training iteration to collect data (exploration).\nHence the total cumulative safety failures when T transition samples have been collected by Algorithm 1 is:\nRegC(T ) = k∑ t=1 V πtC (µ) × 1\n≤ k∑ t=1 (χt + ζt)− k∑ t=1\n( ∆\n1− γ − √ 2δγ C (1− γ)2\n)\n≤ k∑ t=1 C ′′√ t + k∑ t=1 ∑ s,a dπt(s)πt(a|s) C ′ √ log(1/ω)√ |Nt(s, a)| ≤ k∑ t=1 C ′′√ t + C ′ √ log(1/ω) ∑ s,a k∑ t=1 nt(s, a)√ |Nt(s, a)| ≤ C ′′ √ k + C ′ √ log(1/ω)\n∑ s,a √ Nk(s, a)\n≤ C ′′ √ T + C ′ √ log(1/ω) √ |S||A| √∑ s,a Nk(s, a)\n≤ C ′′′ √ |S||A|T\n= O √ |S||A|T\n(35)\nHere, we used the concavity of square root function to obtain ∑ s,a √ Nk(s, a) ≤√\n|S||A| √∑ s,aNk(s, a) and the definition T = ∑ s,aNk(s, a)" }, { "heading": "A.2 DERIVATION OF THE POLICY UPDATE EQUATIONS", "text": "Let a ∈ A denote an action, s ∈ S denote a state, πφ(a|s) denote a parameterized policy, r(s, a) denote a reward function for the task being solved, and τ denote a trajectory of actions by following policy πφ at each state. To solve the following constrained optimization problem:\nmax πφ Eτ∼πφ [ ∑ τ r(·)] s.t. Eτ∼πφ [ ∑ τ 1{failure}] = 0 (36)\nHere, τ is the trajectory corresponding to an episode. The objective is to maximize the cumulative returns while satisfying the constraint. The constraint says that the agent must never fail during every episode. 
1{failure} = 1 if there is a failure and 1{failure} = 0 if the agent does not fail. The only way expectation can be 0 for this quantity is if every element is 0, so the constraint essentially is to never fail in any episode. Let’s rewrite the objective, more generally as\nmax πφ\nV πφ R (µ) s.t. V πφ C (µ) = 0 (37)\nWe can relax the constraint slightly, by introducing a tolerance parameter χ ≈ 0. The objective below tolerates atmost χ failures in expectation. Since the agent can fail only once in an episode, V πφ C (µ) can also be interpreted as the probability of failure, and the constraint V πφ C (µ) ≤ χ says that the probability of failure in expectation must be bounded by χ. So, our objective has a very intuitive and practical interpretation.\nmax πφ\nV πφ R (µ) s.t. V πφ C (µ) ≤ χ (38)\nWe learn one state value function, VR (corresponding to the task reward), parameterized by θ and one state-action value function QC (corresponding to the sparse failure indicator), parameterized by ζ. We have a task reward function r(s, a) from the environment which is used to learn VR. For learning QC , we get a signal from the environment indicating whether the agent is dead (1) or alive (0) i.e. 1{failure}.\nThe safety critic QC is used to get an estimate of how safe a particular state is, by providing an estimate of probability of failure, that will be used to guide exploration. We desire the estimates to be conservative, in the sense that the probability of failure should be an over-estimate of the actual probability so that the agent can err in the side of caution while exploring. To train such a critic QC , we incorporate theoretical insights from CQL, and estimate QC through updates similar to those obtained by flipping the sign of α in equation 2 of the CQL paper (Kumar et al., 2020). The motivation for this is to get an upper bound on QC instead of a lower bound, as guaranteed by CQL.\nWe also note that the CQL penalty term (the first two terms of equation 2 of the CQL paper) can be expressed as an estimate for the advantage function of the policy Es∼dπφold ,a∼πφ(a|s)[A(s, a)],where, A(s, a) is the advantage function.\nEs∼dπφold ,a∼πφ(a|s)[Q(s, a)]− Es∼dπφold ,a∼πφold (a|s)[Q(s, a)]\n= Es∼dπφold ,a∼πφ(a|s)[Q(s, a)− Ea∼πφold (a|s)Q(s, a)] = Es∼dπφold ,a∼πφ(a|s)[Q(s, a)− V (s)] = Es∼dπφold ,a∼πφ(a|s)[A(s, a)]\n(39)\nHence, CQL can help provide an upper bound on the advantage function directly. Although the CQL class of algorithms have been proposed for batch RL, the basic bounds on the value function hold even for online training.\nWe denote the objective inside arg min as CQL(ζ), where ζ parameterizes QC , and k denotes the kth update iteration.\nQ̂k+1C ← arg min QC\nα ( −Es∼Denv,a∼πφ(a|s)[QC(s, a)] + E(s,a)∼Denv [QC(s, a)] ) + 1\n2 E(s,a,s′,c)∼Denv\n[( QC(s, a)− B̂πφQ̂kC(s, a) )2] (40) For states sampled from the replay buffer Denv , the first term seeks to maximize the expectation of QC over actions sampled from the current policy, while the second term seeks to minimize the expectation of QC over actions sampled from the replay buffer. Denv can include off-policy data, and also offline-data (if available). Let the over-estimated advantage, corresponding to the overestimated critic QC , so obtained from CQL, be denoted as ÂC(s, a), where the true advantage is AC(s, a).\nNow, let ρφ(s) denote the stationary distribution of states induced by policy πφ. 
For policy optimization, we have to solve a constrained optimization problem as described below:\nmax πφ\nEs∼ρφold ,a∼πφ [ A πφold R (s, a) ] s.t. Es∼ρφold [DKL(πφold(·|s)||πφ(·|s))] ≤ δ s.t. V\nπφ C (µ) ≤ χ\n(41)\nThis, as per equation 22 can be rewritten as\nπφnew = max πφ\nEs∼ρφold ,a∼πφ [ A πφold R (s, a) ] s.t. Es∼ρφold [DKL(πφold(·|s)||πφ(·|s))] ≤ δ\ns.t. V πφold C (µ) +\n1\n1− γ Es∼ρφold ,a∼πφ\n[ A πφold C (s, a) ] ≤ χ\n(42)\nSince we are learning an over-estimate of AC through the updates in equation 2, we replace AC by the learned ÂC in the constraint above. There are multiple ways to solve this constrained optimization problem, through duality. If we consider the Lagrangian dual of this, then we have the following optimization problem, which we can solve approximately by alternating gradient descent. For now, we keep the KL constraint as is, and later use its second order Taylor expansion in terms of the Fisher Information Matrix.\nmax πφ min λ≥0\nEs∼ρφold ,a∼πφ [ A πφold R (s, a) ] − λ ( V πφold C (µ) + 1 1− γ Es∼ρφold ,a∼πφ [ ÂC(s, a) ] − χ ) s.t. Es∼ρφold [DKL(πφold(·|s)||πφ(·|s))] ≤ δ\n(43)\nWe replace V πφold C (µ) by its sample estimate V̂ πφold C (µ) and denote χ− V̂ πφold C (µ) as χ ′. Note that χ′ is independent of parameter φ that is being optimized over. So, the objective becomes\nmax πφ min λ≥0\nEs∼ρφold ,a∼πφ [ Âπφold (s, a)− λ\n1− γ ÂC(s, a)\n] + λχ′\ns.t. Es∼ρφold [DKL(πφold(·|s)||πφ(·|s))] ≤ δ (44)\nFor notational convenience let λ′ denote the fraction λ1−γ . Also, in the expectation, we replace a ∼ πφ by a ∼ πφold and account for it by importance weighting of the objective. Let us consider maxπφ operation and the following gradient necessary for gradient ascent of φ\nφ←arg max φ Es∼ρφold [ Ea∼πφold [ πφ(a|s) πφold(a|s) (A πφold R (s, a)− λ ′ÂC(s, a)) ]] s.t. Es∼ρφold [DKL(πφold(·|s)||πφ(·|s))] ≤ δ\n(45)\nφ←arg max φ ∇φoldĀ(φold)T (φ− φold)\ns.t. Es∼ρφold [DKL(πφold(·|s)||πφ(·|s))] ≤ δ (46)\nHere, using slide 20 of Lecture 9 in (Levine, 2018), and the identity∇φπφ = πφ∇φ log πφ we have\n∇φĀ(φ) = Es∼ρφold [ Ea∼πφold [ πφ(a|s) πφold(a|s) ∇φ log πφ(a|s)(A πφold R (s, a)− λ ′ÂC(s, a)) ]] (47)\nUsing slide 24 of Lecture 5 in (Levine, 2018) and estimating locally at φ = φold, ∇φoldĀ(φold) = Es∼ρφold [ Ea∼πφold [ ∇φold log πφold(a|s)(A πφold R (s, a)− λ ′ÂC(s, a)) ]] (48)\nWe note that, Es∼ρφold [ Ea∼πφold [ ∇φold log πφold(a|s)Âπφold (s, a) ]] = ∇φoldJ(φold), the original policy gradient corresponding to task rewards. So, we can write equation 48 as\n∇φoldarA(φold) = ∇φoldJ(φold) + Es∼ρφold [ Ea∼πφold [ −λ′ÂC(s, a) ]] (49)\nIn practice, we estimate A πφold R through GAE (Schulman et al., 2015b;a; Levine, 2018)\nÂπφold = ∞∑ t′=t (γ)t ′−t∆t′ ∆t′ = r(st′ , at′) + γVR(st′+1)− VR(st′) (50)\nLet Âπφold (s, a) = A πφold R (s, a)−λ′AC(s, a) denote the modified advantage function corresponding to equation 48\nÂπφold = ∞∑ t′=t (γ)t ′−t∆t′ ∆t′ = r(st′ , at′) + γVR(st′+1)− VR(st′)− λ′ÂC(st′ , at′) (51)\nSo, rewriting equations 48 and 53 in terms of Ãπφold , we have ∇φoldĀ(φold) = Es∼ρφold [ Ea∼πφold [ ∇φold log πφold(a|s)Âπφold ]] (52)\n∇φoldĀ(φold) = ∇φold J̃(φold) (53) Substituting in equation 46, we have\nφ←arg max φ ∇φold J̃(φold)T (φ− φold)\ns.t. 
" }, { "heading": "A.3 RELATION TO CPO", "text": "The CPO paper (Achiam et al., 2017) considers a very similar overall objective for policy gradient updates, with one major difference. CPO approximates the V_C^{\pi_\phi}(\mu) \le \chi constraint by replacing V_C^{\pi_\phi}(\mu) with its first-order Taylor expansion, and enforces the resulting simplified constraint exactly in the dual space. On the other hand, we do not make this simplification, and instead use primal-dual optimization to optimize an upper bound on V_C through the CQL-inspired objective in equation 2. Doing this, and not making the linearization modification, allows us to handle sparse (binary) failure indicators from the environment without assuming a continuous safety cost function as done in CPO (Achiam et al., 2017)." }, { "heading": "A.4 PRACTICAL CONSIDERATIONS", "text": "Depending on the value of the KL constraint on successive policies \delta, the RHS in Theorem 2 can have either a lower or a higher rate than the corresponding problem without the safety constraint. In particular, let the sampling error \zeta = 0; then if \delta \ge \frac{(1-\gamma)^4 (2-\chi)^2}{8\gamma^2}, the third term is negative.

If we set \gamma = 0.99 and \chi = 0.05, then for any \delta > 1e-8, the third term in Theorem 3 will be negative. Also, if \alpha is chosen to be much greater than the value in equation 26, the value of \Delta can in principle be increased arbitrarily, and we would be overestimating the value of Q_C significantly. While increasing \Delta significantly leads to a decrease in the upper bound of V_R^*(\mu) - V_R^{(T)}(\mu), in practice we would no longer have a practical algorithm. This is because, when Q_C is significantly overestimated, it is difficult to guarantee that line 9 of Algorithm 1 is satisfied, and policy execution will stop, resulting in infinite wall-clock time for the algorithm.

In order to ensure that the above does not happen, in practice we loop over line 6 of Algorithm 1 for a maximum of 100 iterations. So, in practice, the anytime safety constraint satisfaction of Theorem 2 is violated during the early stages of training, when the function approximation of Q_C is incorrect.
However, as we demonstrate empirically, we are able to ensure that the guarantee holds during the majority of the training process." }, { "heading": "A.5 DETAILS ABOUT THE ENVIRONMENTS", "text": "In each environment, shown in Figure 2, we define a task objective that the agent must achieve and a criterion for catastrophic failure. The goal is to solve the task without dying. In all the environments, in addition to the task reward, the agent only receives a binary signal indicating whether it is dead, i.e. a catastrophic failure has occurred (1), or alive (0).

• Point agent navigation avoiding traps. Here, a point agent with two independent actuators for turning and moving forward/backward must be controlled in a 2D plane to reach a goal (shown in green in Figure 2) while avoiding traps, shown as violet circular regions. The agent has a health counter set to 25 for the episode, which decreases by 1 for every timestep that it resides in a trap. The agent is alive while the health counter is positive; a catastrophic failure occurs when the counter strikes 0 and the agent dies.

• Car agent navigation avoiding traps. A similar environment to the above, but the agent is a car with more complex dynamics. It has two independently controllable front wheels and a free-rolling rear wheel. We adapt this environment from (Ray et al., 2019).

• Panda push without toppling. A Franka Emika Panda arm must push a vertically placed block across the table to a goal location without the block toppling over. The workspace dimensions of the table are 20cm x 40cm and the dimensions of the block are 5cm x 5cm x 10cm. The environment is based on Robosuite (Zhu et al., 2020) and we use Operational Space Control (OSC) to control the end-effector velocities of the robot arm. A catastrophic failure is said to occur if the block topples.

• Panda push within boundary. A Franka Emika Panda arm must be controlled to push a block across the table to a goal location without the block going outside a rectangular constraint region. A catastrophic failure occurs when the block's center of mass (its (x, y) position) moves outside the constraint region on the table, which has dimensions 15cm x 35cm. The dimensions of the block are 5cm x 5cm x 10cm. The environment is based on Robosuite (Zhu et al., 2020) and we use Operational Space Control (OSC) to control the end-effector velocities of the robot arm.

• Laikago walk without falling. A Laikago quadruped robot must walk without falling. The agent is rewarded for walking as fast as possible (or trotting), and failure occurs when the robot falls. Since this is an extremely challenging task, for all the baselines we initialize the agent's policy with a controller that has been trained to keep the agent standing while not in motion. The environment is implemented in PyBullet and is based on (Peng et al., 2020)." }, { "heading": "A.6 HYPER-PARAMETER DETAILS", "text": "We chose the learning rate \eta_Q for the safety critic Q_C to be 2e-4, after experimenting with 1e-4 and 2e-4 and observing slightly better results with the latter. The value of the discount factor \gamma is set to the usual default of 0.99, the learning rate \eta_\lambda of the dual variable \lambda is set to 4e-2, the value of \delta for the D_{KL} constraint on policy updates is set to 0.01, and the value of \alpha to 0.5. We experimented with three different \alpha values (0.05, 0.5, 5) and found nearly the same performance across all three. For policy updates, the backtracking coefficient \beta^{(0)} is set to 0.7 and the maximum number of line search iterations is L = 20.
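For reference, the values above can be gathered into a single configuration; this dict layout is our own, while the values are those stated in this section.

```python
# Hyper-parameters from Appendix A.6, collected in one place (our own layout).
CSC_HYPERPARAMS = {
    "safety_critic_lr": 2e-4,      # eta_Q, after trying 1e-4 and 2e-4
    "gamma": 0.99,                 # discount factor
    "dual_lr": 4e-2,               # eta_lambda for the Lagrange multiplier
    "kl_delta": 0.01,              # delta for the D_KL constraint on policy updates
    "cql_alpha": 0.5,              # alpha; 0.05, 0.5 and 5 performed nearly the same
    "backtrack_coeff_init": 0.7,   # beta^(0)
    "max_line_search_iters": 20,   # L
}
```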
For the Q-ensembles baseline, the ensemble size is chosen to be 20 (as mentioned in the LNT paper), with the rest of the common hyper-parameter values kept consistent with CSC, for a fair comparison. All results are over four random seeds." }, { "heading": "A.7 COMPLETE RESULTS FOR TRADEOFF BETWEEN SAFETY AND TASK PERFORMANCE", "text": "" }, { "heading": "A.8 COMPLETE RESULTS FOR COMPARISON WITH BASELINES", "text": "" }, { "heading": "A.9 COMPARISON BETWEEN TWO UNCONSTRAINED RL ALGORITHMS", "text": "" }, { "heading": "A.10 SEEDING THE REPLAY BUFFER WITH VERY FEW SAMPLES", "text": "In order to investigate whether we can leverage some offline, user-specified data to lower the number of failures during training even further, we seed the replay buffer of CSC and the baselines with 1000 tuples in the Car navigation environment. The 1000 tuples are marked as safe or unsafe depending on whether the car is inside a trap location or not in those states. If our method can leverage such manually marked offline data (in small quantities, as this marking procedure is not cheap), then we have a more practical method that can be deployed in situations where the cost of visiting an unsafe state is significantly prohibitive. Note that this is different from the setting of offline/batch RL, where the entire training data is assumed to be available offline - in this experimental setting we consider very few tuples (only 1000). Figure 9 shows that our method can successfully leverage this small offline dataset to bootstrap the learning of the safety critic and significantly lower the average failures. We attribute this to training the safety critic conservatively through CQL, which is an effective method for handling offline data." }, { "heading": "A.11 CAR NAVIGATION WITH TRAPS USING CONTINUOUS SAFETY SIGNAL", "text": "In this section we consider the case of a continuous safety signal, to show that CSC can learn constraints in this setting as well and minimize failures significantly more than the baselines. The car navigation environment with traps provides a natural setting for this: every time the agent enters a trap region, it can receive a penalty (and its health counter decreases by 1), and a catastrophic failure occurs when the health counter drops to 0. By training the safety critic on this continuous failure signal instead of on binary failure signals, we can capture a notion of impending failure and hence aim to be safer. Note that this is a strictly easier evaluation setting than the one we previously considered with the binary safety signal." } ]
2,021
CONSERVATIVE SAFETY CRITICS FOR EXPLORATION
SP:6945d14d266d0ca1c931a55091166604e7984604
[ "This paper proposes a novel multi-task learning method which adjusts task weights dynamically during training, by exploiting task-specific updates of the model parameters between training epochs. Specifically, the proposed model takes the differences between the model’s parameters before and after the singletask update, after that the mixing factors of the model updates are found based on the differences to minimize the loss on the target task’s development data. Empirical studies are performed on tasks of computer vision and natural language understanding." ]
Multitask Learning is a Machine Learning paradigm that aims to train a range of (usually related) tasks with the help of a shared model. While the goal is often to improve the joint performance of all training tasks, another approach is to focus on the performance of a specific target task, while treating the remaining ones as auxiliary data from which to possibly leverage positive transfer towards the target during training. In such settings, it becomes important to estimate the positive or negative influence auxiliary tasks will have on the target. While many ways have been proposed to estimate task weights before or during training, they typically rely on heuristics or extensive search of the weighting space. We propose a novel method called α-Variable Importance Learning (αVIL) that is able to adjust task weights dynamically during model training, by making direct use of task-specific updates of the underlying model's parameters between training epochs. Experiments indicate that αVIL is able to outperform other Multitask Learning approaches in a variety of settings. To our knowledge, this is the first attempt at making direct use of model updates for task weight estimation.
[]
[ { "authors": [ "Roy Bar Haim", "Ido Dagan", "Bill Dolan", "Lisa Ferro", "Danilo Giampiccolo", "Bernardo Magnini", "Idan Szpektor" ], "title": "The second PASCAL recognising textual entailment challenge", "venue": null, "year": 2006 }, { "authors": [ "J. Barker", "R. Marxer", "E. Vincent", "S. Watanabe" ], "title": "The Third ’CHiME’ Speech Separation and Recognition Challenge: Dataset, Task and Baselines", "venue": "In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU),", "year": 2015 }, { "authors": [ "Luisa Bentivogli", "Ido Dagan", "Hoa Trang Dang", "Danilo Giampiccolo", "Bernardo Magnini" ], "title": "The fifth PASCAL recognizing textual entailment challenge", "venue": null, "year": 2009 }, { "authors": [ "Joachim Bingel", "Anders Søgaard" ], "title": "Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers", "venue": null, "year": 2017 }, { "authors": [ "Rich Caruana" ], "title": "Multitask Learning, pp. 95–133", "venue": "ISBN 978-14615-5529-2", "year": 1998 }, { "authors": [ "Richard Caruana" ], "title": "Multitask learning: A knowledge-based source of inductive bias", "venue": "In Proceedings of the Tenth International Conference on Machine Learning,", "year": 1993 }, { "authors": [ "Ronan Collobert", "Jason Weston", "Léon Bottou", "Michael Karlen", "Koray Kavukcuoglu", "Pavel Kuksa" ], "title": "Natural language processing (almost) from scratch", "venue": "Journal of machine learning research,", "year": 2011 }, { "authors": [ "Ido Dagan", "Oren Glickman", "Bernardo Magnini" ], "title": "The PASCAL recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment", "venue": null, "year": 2006 }, { "authors": [ "Marie-Catherine De Marneffe", "Mandy Simons", "Judith Tonhauser" ], "title": "The CommitmentBank: Investigating projection in naturally occurring discourse. 2019", "venue": "To appear in proceedings of Sinn und Bedeutung 23. 
Data", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Yunshu Du", "Wojciech M Czarnecki", "Siddhant M Jayakumar", "Razvan Pascanu", "Balaji Lakshminarayanan" ], "title": "Adapting auxiliary losses using gradient similarity", "venue": "arXiv preprint arXiv:1812.02224,", "year": 2018 }, { "authors": [ "Danilo Giampiccolo", "Bernardo Magnini", "Ido Dagan", "Bill Dolan" ], "title": "The third PASCAL recognizing textual entailment challenge", "venue": "In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing,", "year": 2007 }, { "authors": [ "Han Guo", "Ramakanth Pasunuru", "Mohit Bansal" ], "title": "AutoSeM: Automatic task selection and mixing in multi-task learning", "venue": "In Proceedings of the", "year": 2019 }, { "authors": [ "Max Jaderberg", "Volodymyr Mnih", "Wojciech Marian Czarnecki", "Tom Schaul", "Joel Z Leibo", "David Silver", "Koray Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "arXiv preprint arXiv:1611.05397,", "year": 2016 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Hector J Levesque", "Ernest Davis", "Leora Morgenstern" ], "title": "The Winograd schema challenge", "venue": "In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning,", "year": 2011 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "venue": null, "year": 1907 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "arXiv preprint arXiv:1711.05101,", "year": 2017 }, { "authors": [ "Chris Quirk", "Chris Brockett", "William B Dolan" ], "title": "Monolingual machine translation for paraphrase generation", "venue": "In Proceedings of the 2004 conference on empirical methods in natural language processing,", "year": 2004 }, { "authors": [ "Melissa Roemmele", "Cosmin Adrian Bejan", "Andrew S. 
Gordon" ], "title": "Choice of plausible alternatives: An evaluation of commonsense causal reasoning", "venue": "In 2011 AAAI Spring Symposium Series,", "year": 2011 }, { "authors": [ "Sebastian Ruder" ], "title": "An Overview of Multi-Task Learning in Deep Neural Networks", "venue": "arXiv preprint arXiv:1706.05098,", "year": 2017 }, { "authors": [ "Sara Sabour", "Nicholas Frosst", "Geoffrey E Hinton" ], "title": "Dynamic routing between capsules", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ozan Sener", "Vladlen Koltun" ], "title": "Multi-task learning as multi-objective optimization", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Sunit Sivasankaran", "Emmanuel Vincent", "Irina Illina" ], "title": "Discriminative Importance Weighting of Augmented Training Data for Acoustic Model Training", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2017 }, { "authors": [ "Anders Søgaard", "Yoav Goldberg" ], "title": "Deep multi-task learning with low level tasks supervised at lower layers", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),", "year": 2016 }, { "authors": [ "Ieva Staliūnaitė", "Ignacio Iacobacci" ], "title": "Compositional and lexical semantics in roberta, bert and distilbert: A case study", "venue": "coqa,", "year": 2020 }, { "authors": [ "Ian Tenney", "Dipanjan Das", "Ellie Pavlick. Bert" ], "title": "rediscovers the classical nlp pipeline", "venue": "arXiv preprint arXiv:1905.05950,", "year": 2019 }, { "authors": [ "Simon Vandenhende", "Stamatios Georgoulis", "Marc Proesmans", "Dengxin Dai", "Luc Van Gool" ], "title": "Revisiting Multi-Task Learning in the Deep Learning Era", "venue": "arXiv preprint arXiv:2004.13379,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Alex Wang", "Yada Pruksachatkun", "Nikita Nangia", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. Bowman" ], "title": "SuperGLUE: A stickier benchmark for general-purpose language understanding systems", "venue": "arXiv preprint 1905.00537,", "year": 2019 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "2019b. In the Proceedings of ICLR", "year": 2019 }, { "authors": [ "Xinyi Wang", "Hieu Pham", "Paul Michel", "Antonios Anastasopoulos", "Jaime Carbonell", "Graham Neubig" ], "title": "Optimizing data usage via differentiable rewards, 2020", "venue": null, "year": 2020 }, { "authors": [ "Sen Wu", "Hongyang R. Zhang", "Christopher R" ], "title": "Understanding and improving information transfer in multi-task learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Tianhe Yu", "Saurabh Kumar", "Abhishek Gupta", "Sergey Levine", "Karol Hausman", "Chelsea Finn" ], "title": "Gradient surgery for multi-task learning", "venue": "arXiv preprint arXiv:2001.06782,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "In Machine Learning, we often encounter tasks that are at least similar, if not even almost identical. For example, in Computer Vision, multiple datasets might require object segmentation or recognition (Deng et al., 2009; LeCun et al., 1998; Lin et al., 2014) whereas in Natural Language Processing, tasks can deal with sentence entailment (De Marneffe et al., 2019) or paraphrase recognition (Quirk et al., 2004), both of which share similarities and fall under the category of Natural Language Understanding.\nGiven that many such datasets are accessible to researchers, a naturally emerging question is whether we can leverage their commonalities in training setups. Multitask Learning (Caruana, 1993) is a Machine Learning paradigm that aims to address the above by training a group of sufficiently similar tasks together. Instead of optimizing each individual task’s objective, a shared underlying model is fit so as to maximize a global performance measure, for example a LeNet-like architecture (LeCun et al., 1998) for Computer Vision, or a Transformer-based encoder (Vaswani et al., 2017) for Natural Language Processing problems. For a broader perspective of Multitask Learning approaches, we refer the reader to the overviews of Ruder (2017); Vandenhende et al. (2020).\nIn this paper we introduce αVIL, an approach to Multitask Learning that estimates individual task weights through direct, gradient-based metaoptimization on a weighted accumulation of taskspecific model updates. To our knowledge, this is the first attempt to leverage task-specific model deltas, that is, realized differences of model parameters before and after a task’s training steps, to directly optimize task weights for target task-oriented multitask learning. We perform initial experiments on multitask setups in two domains, Computer Vision and Natural Language Understanding, and show that our method is able to successfully learn a good weighting of classification tasks." }, { "heading": "2 RELATED WORK", "text": "Multitask Learning (MTL) can be divided into techniques which aim to improve a joint performance metric for a group of tasks (Caruana, 1993), and methods which use auxiliary tasks to boost the performance of a single target task (Caruana, 1998; Bingel & Søgaard, 2017).\nSome combinations of tasks suffer when their model parameters are shared, a phenomenon that has been termed negative transfer. There have been efforts to identify the cause of negative transfer. Du et al. (2018) use negative cosine similarity between gradients as a heuristic for determining negative transfer between target and auxiliary tasks. Yu et al. (2020) suggest that these conflicting gradients are detrimental to training when the joint optimization landscape has high positive curvature and there is a large difference in gradient magnitudes between tasks. They address this by projecting task gradients onto the normal plane if they conflict with each other. Wu et al. (2020) hypothesize that the degree of transfer between tasks is influenced by the alignment of their data samples, and propose an algorithm which adaptively aligns embedded inputs. 
Sener & Koltun (2018) avoid the issue of negative transfer due to competing objectives altogether, by casting MTL as a Multiobjective Optimization problem and searching for a Pareto optimal solution.\nIn this work, we focus on the target task approach to Multitask Learning, tackling the problem of auxiliary task selection and weighting to avoid negative transfer and maximally utilize positively related tasks. Auxiliary tasks have been used to improve target task performance in Computer Vision, Reinforcement Learning (Jaderberg et al., 2016), and Natural Language Processing (Collobert et al., 2011). They are commonly selected based on knowledge about which tasks should be beneficial to each other through the insight that they utilize similar features to the target task (Caruana, 1998), or are grouped empirically (Søgaard & Goldberg, 2016). While this may often result in successful task selection, such approaches have some obvious drawbacks. Manual feature-based selection requires the researcher to have deep knowledge of the available data, an undertaking that becomes ever more difficult with the introduction of more datasets. Furthermore, this approach is prone to failure when it comes to Deep Learning, where model behaviour does not necessarily follow human intuition. Empirical task selection, e.g., through trialling various task combinations, quickly becomes computationally infeasible when the number of tasks becomes large.\nTherefore, in both approaches to Multitask Learning (optimizing either a target task using auxiliary data or a global performance metric), automatic task weighting during training can be beneficial for optimally exploiting relationships between tasks.\nTo this end, Guo et al. (2019) use a two-staged approach; first, a subset of auxiliary tasks which are most likely to improve the main task’s validation performance is selected, by utilizing a MultiArmed Bandit, the estimates of which are continuously updated during training. The second step makes use of a Gaussian Process to infer a mixing ratio for data points belonging to the selected tasks, which subsequently are used to train the model.\nA different approach by Wang et al. (2020) aims to directly differentiate at each training step the model’s validation loss with respect to the probability of selecting instances of the training data (parametrised by a scorer network). This approach is used in multilingual translation by training the scorer to output probabilities for all of the tasks’ training data. However, this method relies on noisy, per step estimates of the gradients of the scorer’s parameters as well as the analytical derivation of it depending on the optimizer used. Our method in comparison is agnostic to the optimization procedure used.\nMost similarly to our method, Sivasankaran et al. (2017) recently introduced Discriminative Importance Weighting for acoustic model training. In their work, the authors train a model on the CHiME-3 dataset (Barker et al., 2015), adding 6 artificially perturbed datasets as auxiliary tasks. Their method relies on estimating model performances on the targeted validation data when training tasks in isolation, and subsequently using those estimates as a proxy to adjust individual task weights. 
Our method differs from this approach by directly optimizing the target validation loss with respect to the weights applied to the model updates originating from training each task.

3 α-VARIABLE IMPORTANCE LEARNING

The target task-oriented approach to Multitask Learning can be defined as follows. A set of classification tasks T = {t_1, t_2, ..., t_n} is given, each associated with training and validation datasets, D^{train}_{t_i} and D^{dev}_{t_i}, as well as a target task t^* ∈ T. We want to find weights W = {ω_1, ω_2, ..., ω_n}, capturing the importance of each task, such that training the parameters θ of a Deep Neural Network on the weighted sum of losses for each task maximizes the model's performance on the target task's development set:

\theta^* = \arg\min_\theta \sum_{i=1}^{|T|} \frac{w_i}{\sum_j w_j} \cdot \mathcal{L}^{t_i}(D^{train}_{t_i}, \theta) \quad \text{s.t.} \quad \mathcal{L}^{t^*}(D^{dev}_{t^*}, \theta^*) \approx \min_\theta \mathcal{L}^{t^*}(D^{dev}_{t^*}, \theta) \qquad (1)

where \mathcal{L}^{t_i}(D_{t_i}, \theta) is defined as the average loss over all data points in the dataset D_{t_i} for network parameters \theta, computed using the appropriate loss function for task t_i (in our case the standard cross-entropy loss):

\mathcal{L}^{t_i}(D_{t_i}, \theta) = \frac{1}{|D_{t_i}|} \sum_k \mathcal{L}(x_k, y_k; \theta), \qquad (x_k, y_k) \in D_{t_i} \qquad (2)

The introduction of task weights w in Equation 1 aims at scaling the tasks' model updates in a way that exploits positive transfer towards the target and avoids negative transfer. It is therefore crucial to have an efficient and reliable way of estimating the influence of tasks on the target's performance, and of adjusting the weights accordingly during model training.

To this end, we introduce α-Variable Importance Learning (αVIL), a novel method for target task-oriented Multitask training, outlined in Algorithm 1. αVIL introduces a number of additional parameters – α-variables – into the model, which are associated with the actually realized task-specific model updates.

Algorithm 1: The α-Variable Importance Learning algorithm.
Data: Model parameters θ; a set of tasks T = {t_1, ..., t_n}; a target task t^*; training data D^{train}_{t_i} for each task t_i; development data D^{dev}_{t^*} for the target task; maximum number of training epochs E; ratio ρ of tasks' training data to sample per epoch; number of α tuning steps s
Result: updated parameters θ, optimized for performance on t^*
1:  W ← {w_{t_i} = 1 | t_i ∈ T}                                  // initialize all task weights to 1
2:  for e = 1 ... E do
3:      for t_i ∈ T do
4:          D_{t_i} ∼_ρ D^{train}_{t_i}                          // sample task-specific data
5:          θ_{t_i} ← argmin_{θ'} (w_{t_i} / Σw) L^{t_i}(D_{t_i}, θ')   // task's model update starting at θ
6:          δ_{t_i} ← θ_{t_i} − θ
7:      // task-specific weight update, optimizing w.r.t. α parameters on δ
8:      for step = 1 ... s do
9:          {α_1, α_2, ..., α_{|T|}} ← argmin_{α_1, ..., α_{|T|}} L(D^{dev}_{t^*}, θ + α_1 δ_1 + ... + α_{|T|} δ_{|T|})
10:     θ ← θ + α_1 δ_1 + ... + α_{|T|} δ_{|T|}
11:     W ← {w_{t_i} + (α_{t_i} − 1) | t_i ∈ T}

During training, our approach first performs weighted task-specific model updates on a proportion of the available training data for each individual task, starting from the current model parameters. It collects the resulting model deltas, i.e., the differences between the model's parameters before and after the single-task update, and resets the model. After this delta collection phase, the optimal mixing factors {α_1, α_2, ..., α_{|T|}} of the model updates are found, such that the parameters resulting from the interpolation of the scaled task-specific δ's minimize the loss on the target task's development data.

The α-parameters can be optimized through any type of optimization method; however, since our models are end-to-end differentiable, we can backpropagate directly and use gradient descent.

Once we have found the optimal mixing ratio of task updates, we write the new state back to the model, and update the task weights subject to the optimized α parameters.

The task weight update rule (line 11 in Algorithm 1), combined with the weighted task-specific model updates (line 5), tries to capture the intuition that if a task update was up- or down-scaled in the α-tuning stage, we likely want to update the parameters more/less for this task during the next delta collection phase." }, { "heading": "4 EXPERIMENTS", "text": "To test the efficacy of αVIL, we apply it in two domains, Computer Vision (CV) and Natural Language Processing (NLP). We perform experiments on a multitask version of the MNIST dataset (LeCun et al., 1998) and on a number of well-established Natural Language Understanding (NLU) tasks. In all scenarios, we evaluate αVIL against baselines that perform single task and standard multitask learning, as well as against a strong target task-oriented approach." }, { "heading": "4.1 COMPUTER VISION", "text": "As the first benchmarking domain for αVIL we chose Computer Vision, as Multitask Learning has a long-standing tradition in this field, with a variety of existing datasets. For our experiments, we use Sener & Koltun (2018)'s variation of MultiMNIST, itself introduced in Sabour et al. (2017) as an augmentation of the well-established MNIST dataset. In MNIST, the task is to classify a given hand-drawn digit (Figure 1, left). For MultiMNIST, two digits are overlaid, the first shifted to the top left and the second to the bottom right (Figure 1, right). The two tasks are to recognize each of the superimposed digits. The resulting MultiMNIST dataset contains a total of 10,000 test and 60,000 training instances, of which we sample 10,000 at random to use for validation.

For all Multitask experiments on MultiMNIST, we use a classification architecture similar to Sener & Koltun (2018), as depicted in Figure 2. The model comprises a shared convolutional encoder, where the first convolutional layer has 10 filters with kernel size 5 and the second uses the same kernel size but 20 filters. Both max pooling layers are of size 2x2, and the shared fully connected layer is of dimensionality 320x50. The encoded images are passed into task-specific heads of size 50x10, used to classify the top-left and bottom-right digit respectively.

We compare αVIL to a single task baseline, where we remove one of the classification heads from the model, as well as to a standard multitask baseline in which both heads are trained jointly, each of which receives the shared encoder output for a given image and updates the model to solve its specific task, averaging their updates for the image. For the standard multitask baseline, we save two snapshots of the model during training, each of which performs best on either the top-left or the bottom-right digit on the development sets. For αVIL, we predefine either one of these tasks as the target and try to optimize its performance using the other task as auxiliary data. We also compare our method to the Discriminative Importance Weighting (DIW) approach of Sivasankaran et al.
(2017), which provides a very strong target task-oriented optimization method. Their approach is relatively similar to our own; however, in contrast to αVIL, DIW collects unweighted single-task updates and evaluates each individually updated model on the specified target task. It then performs actual weighted multitask updates in an inner loop, evaluates the jointly updated model, and adjusts individual task weights with respect to the difference in evaluation results, after which the model is reset and re-trained with the new weights, until an improvement over the previous iteration's target task performance has been achieved.

For all experiments, we use a batch size of 256 and SGD as the model optimizer, with the learning rate set to 0.05 and momentum to 0.9. Dropout is not used. We train on the entire MultiMNIST training set before evaluating (single task, multitask) or weight tuning on the development data (DIW, αVIL). For αVIL, we set the number of tuning steps s = 10, and use SGD with the learning rate set to 0.005 and momentum to 0.5 as the meta-optimizer to tune the α parameters. The delta collection weights w_{t_i}, as well as the task importance weights of DIW, are clamped to [10⁻⁶, ∞).¹ Similarly, we set an early stopping criterion for the re-weighting loop of DIW to break after 10 epochs without improving over the previous performance.² We train all models for 100 episodes, keeping the best-performing snapshots with respect to the development accuracy and target task.³ We average over 20 random seeds, and report in Table 1 the minimum, maximum and mean classification accuracy on the MultiMNIST development and test sets, as well as the models' standard deviation.

We can see from Table 1 that the second task, classifying the bottom-right digit, seems to be somewhat harder for the model to learn than the first, as reflected by the worse model performance in all settings for this task. Furthermore, when training in the standard multitask setting, model performance actually decreases for both tasks. This indicates that in MultiMNIST, there exists negative transfer between the two tasks. We can also see that both target task-oriented Multitask approaches are able to rectify the negative transfer problem, and in fact even improve over the models trained on the single tasks in isolation.

Across the board, αVIL achieves the best overall mean accuracy, on both tasks' development and test sets, while keeping a comparably low standard deviation. Crucially, αVIL not only brings Multitask performance back to the single task level, but outperforms the single task baseline, as well as the DIW target task-oriented training approach.

Figure 3 shows the α-parameters over the course of training (left) and the corresponding normalized weights of the two tasks (right). The target task t^* is set to task 1. As expected, due to the existence of negative transfer between the tasks, the algorithm initially weights model updates originating from the main task and the auxiliary task with α_1 > 1 and α_2 < 1 respectively, quickly driving the auxiliary task's weight down. Finally, the α's settle at ≈ 1, as task 2 has been essentially excluded from the training.

¹ In practice, training with even a slightly negative loss weight causes parameters to massively overshoot.
² In their original paper, the authors mention no such early stopping criterion; however, we found that training can enter an infinite loop without it.
³ For standard multitask training, we save snapshots of the model, one for each best performance on the individual tasks.
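To connect Algorithm 1 (Section 3) to an implementation, the following is a minimal PyTorch-style sketch of one αVIL epoch. The helper `train_on_task` (the weighted single-task update of line 5), the dev-batch layout, and `dev_loss_fn` are our own assumptions; the α-optimizer settings (10 SGD steps, learning rate 0.005, momentum 0.5) and the weight clamp to [10⁻⁶, ∞) follow the setup described above. For simplicity, the sketch treats every state_dict entry as a float parameter.

```python
import torch
from torch.func import functional_call

def alpha_vil_epoch(model, tasks, weights, target_dev_batch, dev_loss_fn,
                    ratio=1.0, alpha_steps=10, alpha_lr=0.005):
    """One αVIL epoch (Algorithm 1, lines 3-11). `tasks` maps a task name to a
    (loss_fn, train_loader) pair; `weights` is the task weight dict W."""
    theta = {k: v.detach().clone() for k, v in model.state_dict().items()}
    names, deltas = list(tasks), []

    # Delta collection: one weighted single-task update per task, each from theta.
    total_w = sum(weights.values())
    for name in names:
        model.load_state_dict(theta)
        loss_fn, loader = tasks[name]
        train_on_task(model, loss_fn, loader,
                      weight=weights[name] / total_w, ratio=ratio)
        deltas.append({k: v.detach() - theta[k] for k, v in model.state_dict().items()})

    # Alpha tuning (lines 8-9): s gradient steps on the target task's dev loss.
    alphas = torch.ones(len(names), requires_grad=True)
    opt = torch.optim.SGD([alphas], lr=alpha_lr, momentum=0.5)
    for _ in range(alpha_steps):
        mixed = {k: theta[k] + sum(a * d[k] for a, d in zip(alphas, deltas))
                 for k in theta}
        logits = functional_call(model, mixed, (target_dev_batch["inputs"],))
        loss = dev_loss_fn(logits, target_dev_batch["labels"])
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Write the interpolated state back (line 10) and update task weights (line 11).
    with torch.no_grad():
        model.load_state_dict({k: theta[k] + sum(a * d[k] for a, d in zip(alphas, deltas))
                               for k in theta})
    for i, name in enumerate(names):
        weights[name] = max(weights[name] + (alphas[i].item() - 1.0), 1e-6)
```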
" }, { "heading": "4.2 NATURAL LANGUAGE UNDERSTANDING", "text": "We also test αVIL in the domain of Natural Language Processing. In particular, we are interested in the learning of Natural Language Understanding (NLU) tasks.

The field of NLU has recently regained traction as a research area, with the introduction of the Transformer architecture (Vaswani et al., 2017) and the subsequent advent of very large language models such as BERT (Devlin et al., 2019). These very deep models are trained on a plethora of data, and (at least seem to) incorporate a large amount of linguistic knowledge (Tenney et al., 2019; Staliūnaitė & Iacobacci, 2020), which makes them ideal for downstream tasks like NLU.

Natural Language Understanding comprises a wide variety of established datasets and tasks, each dealing with different aspects of the broader field. This provides a rich and interesting resource for Multitask Learning research, as NLU tasks are at least at first glance related in the kind of linguistic knowledge they require. They therefore lend themselves to being trained jointly, yet tasks might in reality make use of different aspects of the underlying shared model, potentially leading to negative transfer.

For a first test of αVIL in the NLU domain, we limit our experiments to 5 commonly used tasks that are also represented in the GLUE and SuperGLUE (Wang et al., 2019a;b) research benchmarks: CommitmentBank (De Marneffe et al., 2019, CB), Choice of Plausible Alternatives (Roemmele et al., 2011, CoPA), Microsoft Research Paraphrase Corpus (Quirk et al., 2004, MRPC), Recognizing Textual Entailment (Dagan et al., 2006; Bar Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009, RTE), and Winograd Natural Language Inference (WNLI), itself based on the Winograd Schema Challenge (Levesque et al., 2011). Brief descriptions and examples for each of the tasks are given below. The tasks were chosen for their relatively small data sizes, to allow for faster experimentation.

CommitmentBank: CB is a three-way classification task, where model inputs consist of a premise text of one or more sentences, e.g., "B: Oh, well that's good. A: but she really doesn't. Nobody thought she would adjust", and a hypothesis like "she would adjust", which can either be entailed by the premise, contradict it, or be neutral. CB is the smallest of our NLU tasks, and contains 250 examples of training data, as well as 56 for validation and 250 for testing.

CoPA: This task provides a premise ("My body cast a shadow over the grass."), two alternative choices ("The sun was rising." / "The grass was cut."), and a relation ("cause") as input. The model's task is to determine whether choice1 or choice2 is the more likely continuation of the premise, given the relation. The CoPA dataset provides 400 training instances, as well as 100 and 500 instances for validation and testing respectively.

MRPC: This is a paraphrase recognition task, in which the model is shown two sentences and must classify whether or not they are paraphrases.
MRPC constitutes our largest dataset for NLU experiments, with 3668 training and 408 validation examples, and 1725 for testing.

Recognizing Textual Entailment: RTE is a two-way classification task where, given a premise, for example "Things are easy when you're big in Japan.", and a hypothesis, such as "You're big in Japan.", the task is to correctly classify whether the former entails the latter. The RTE data is our second-largest dataset, and contains 2490 instances for training, as well as 277 and 3000 for validation and testing respectively.

WNLI: This task is about classifying Natural Language Inference. The input consists of a short text ("I stuck a pin through a carrot. When I pulled the pin out, it had a hole.") and a follow-up sentence ("The carrot had a hole."), and the model has to correctly classify whether the sentence can be inferred from the preceding text. WNLI comprises training, validation, and test sets containing 635, 71, and 146 instances, respectively.

To address the Multitask learning problem on these 5 NLU tasks, similar to the MultiMNIST image classification setup above, we employ an architecture of a joint encoder and task-specific heads, which is depicted in Figure 4. We use a RoBERTa model (Liu et al., 2019) as the underlying shared encoder, and prepare the data for all tasks such that it is suitable as Transformer encoder input. For our experiments, we employ the pre-trained RoBERTa_base encoder provided by Huggingface⁴, which consists of 12 encoder layers and outputs a 768-dimensional embedding vector for each input token. On top of the shared encoder we add one classification head for each of the 5 NLU tasks. Each head takes as input the final RoBERTa encoding of the start-of-sequence token <S>, and employs a linear layer of dimensionality 768x256, a ReLU, followed by another linear layer into the classification space (2 output units for CoPA, MRPC, RTE, and WNLI; 3 output units for CB).

As in the Computer Vision experiments above, we perform experiments on the 5 NLU tasks with standard single task and multitask baselines, Discriminative Importance Weighting, and αVIL. All experiments use the same model architecture, except in the single task setup, where all heads but that for the target task are disabled. We use AdamW (Loshchilov & Hutter, 2017) as the optimizer for the base model, with a learning rate of 5e−6, ε = 1e−6, and weight decay of 0.01. For DIW, we use a weight update learning rate of 0.1, and we use SGD with a learning rate of 0.001 and momentum of 0.5 for α-variable optimization. Parallel to the previous experiments, we use 10 α update steps for αVIL, and an early stopping patience of 10 for DIW. Unlike MultiMNIST, where both tasks share a common input, each NLU task comes with distinct model inputs and outputs, as well as with datasets of very different sizes. We therefore do not wait for an entire episode before model evaluation, but instead increase the evaluation frequency. This is necessary as some of the tasks tend to overshoot their optimal point during training, and we might miss the true best model performance if we wait for an entire episode to finish. To increase the evaluation frequency, we sample at each epoch 25% of the respective training sets, train the model on batches of this quarter of the total data, and evaluate. For DIW and αVIL we also use this 25% split for weight adjustment. The batch size is set to 8 for all tasks, and we train for a total of 20 epochs, keeping the best-performing snapshot per task.
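The shared-encoder architecture just described might look as follows in code (a sketch using the Huggingface `transformers` API; the class name and the task-to-head mapping are our own, the head sizes follow the text):

```python
import torch.nn as nn
from transformers import RobertaModel

TASK_CLASSES = {"CB": 3, "CoPA": 2, "MRPC": 2, "RTE": 2, "WNLI": 2}

class SharedEncoderNLU(nn.Module):
    """RoBERTa-base encoder shared across tasks, with one head per NLU task
    (Figure 4): Linear(768, 256) -> ReLU -> Linear(256, n_classes)."""

    def __init__(self, task_classes=None):
        super().__init__()
        task_classes = task_classes or TASK_CLASSES
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        self.heads = nn.ModuleDict({
            task: nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, n))
            for task, n in task_classes.items()
        })

    def forward(self, task, input_ids, attention_mask=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        start_token = out.last_hidden_state[:, 0]  # encoding of the <s> token
        return self.heads[task](start_token)
```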
The batch size is set to 8 for all tasks, and we train for a total of 20 epochs, keeping the best performing snapshot per task.\n4https://huggingface.co/roberta-base\nTable 2 summarizes the results of the NLU experiment, with model accuracies on the tasks’ development sets over 4 random seeds. To obtain test set accuracies, we submit for each method the predictions of ensembles of the trained models to the GLUE and SuperGLUE benchmarks5.\nEven though these experiments were carried out on a very limited number of tasks and random seeds, and should accordingly be interpreted with care, it is clear that Discriminative Importance Weighting constitutes a very strong Multitask Learning system for comparison, especially with respect to development set performance.\nThis should be of little surprise: First, DIW does optimize its task weights directly based on the target task’s development accuracy, re-weighting and re-training at each epoch until a new highest score is found. This is in contrast to αVIL, which performs a predetermined number of optimization steps on the average development set loss, and just one interpolation of model updates per epoch. Second, as there is no perfect correlation between a lower average loss and higher absolute accuracy – decreasing the first slightly is not guaranteed to immediately increase the second – αVIL might not find a new best development accuracy each epoch.\nHowever, the picture changes when considering actual performance on unseen test data. While DIW almost consistently performs best on the development data, αVIL actually outperforms it on the final test scores. For 3 out of the 5 tested NLU tasks, the αVIL ensembles are ranked first on test (shared with DIW on MRPC, and singletask and standard multitask on WNLI). For one more dataset (CoPA), αVIL comes second, trailing the best system by just 1 point.\nWe conjecture that DIW is more prone to overfitting than αVIL, precisely because of DIW’s heavy reliance on tuning on dev multiple times each epoch until finding a new best performance. This might be less severe in settings where training, development, and test datasets are both large and sufficiently similar, as is the case in the MultiMNIST experiment above where no such performance discrepancies seem to manifest. However, in the NLU domain with less data and potentially large difference between training, development, and test examples, overfitting constitutes a more severe problem, which αVIL seems to be able to better avoid." }, { "heading": "5 CONCLUSIONS AND FUTURE WORK", "text": "In this work we have introduced αVIL, a novel algorithm for target task-oriented multitask training. αVIL uses task-specific weights which are tuned via metaoptimization of additional model parameters, with respect to target task loss. Experiments in two different domains, Computer Vision and NLP, indicate that αVIL is able to successfully learn good task weights, and can lead to increased target-task performance over singletask and standard multitask baselines, as well as a strong target task-oriented optimization approach.\nαVIL’s formulation is very flexible and allows for many variations in its (meta)optimization approach. In the future, we would like to experiment more with different ways to optimize α parameters than standard SGD. Also, αVIL does not currently perform a joint optimization iteration after its α estimation task and re-weighting, which could lead to further performance gains.\n5https://gluebenchmark.com ; https://super.gluebenchmark.com/" } ]
2,020
null
SP:e23149389db2c50bba31eacfdef723e015e58386
[ "This paper introduces an algorithm, called deep reward learning by simulating the past (deep RLSP), that seeks to infer a reward function by looking at states in demonstration data. An example of this described in the paper is an environment with a vase: if demonstration data shows an intact vase in the presence of an embodied agent then breaking the vase is unlikely to be the intended behavior. Otherwise the vase would already be broken in the demo." ]
Since reward functions are hard to specify, recent work has focused on learning policies from human feedback. However, such approaches are impeded by the expense of acquiring such feedback. Recent work proposed that agents have access to a source of information that is effectively free: in any environment that humans have acted in, the state will already be optimized for human preferences, and thus an agent can extract information about what humans want from the state (Shah et al., 2019). Such learning is possible in principle, but requires simulating all possible past trajectories that could have led to the observed state. This is feasible in gridworlds, but how do we scale it to complex tasks? In this work, we show that by combining a learned feature encoder with learned inverse models, we can enable agents to simulate human actions backwards in time to infer what they must have done. The resulting algorithm is able to reproduce a specific skill in MuJoCo environments given a single state sampled from the optimal policy for that skill.
[ { "affiliations": [], "name": "David Lindner" }, { "affiliations": [], "name": "Rohin Shah" }, { "affiliations": [], "name": "Pieter Abbeel" } ]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of International Conference on Machine Learning (ICML),", "year": 2004 }, { "authors": [ "Dario Amodei", "Chris Olah", "Jacob Steinhardt", "Paul Christiano", "John Schulman", "Dan Mané" ], "title": "Concrete problems in AI safety", "venue": "arXiv preprint arXiv:1606.06565,", "year": 2016 }, { "authors": [ "Dzmitry Bahdanau", "Felix Hill", "Jan Leike", "Edward Hughes", "Arian Hosseini", "Pushmeet Kohli", "Edward Grefenstette" ], "title": "Learning to understand goal specifications by modelling reward", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Andrea Bajcsy", "Dylan P Losey", "Marcia K O’Malley", "Anca D Dragan" ], "title": "Learning robot objectives from physical human interaction", "venue": "In Conference on Robot Learning (CoRL),", "year": 2017 }, { "authors": [ "Christopher M Bishop" ], "title": "Mixture density networks", "venue": "Neural Computing Research Group Report, Aston University,", "year": 1994 }, { "authors": [ "Lars Buesing", "Theophane Weber", "Sébastien Racaniere", "SM Eslami", "Danilo Rezende", "David P Reichert", "Fabio Viola", "Frederic Besse", "Karol Gregor", "Demis Hassabis" ], "title": "Learning and querying fast generative models for reinforcement learning", "venue": "In FAIM workshop “Prediction and Generative Modeling in Reinforcement Learning”,", "year": 2018 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In Proceedings of International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Paul F Christiano", "Jan Leike", "Tom Brown", "Miljan Martic", "Shane Legg", "Dario Amodei" ], "title": "Deep reinforcement learning from human preferences", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jack Clark", "Dario Amodei" ], "title": "Faulty reward functions in the wild, 2016", "venue": "URL https://blog. openai.com/faulty-reward-functions", "year": 2016 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Quoc V Le", "Christopher D Manning" ], "title": "Electra: Pre-training text encoders as discriminators rather than generators", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019", "venue": null, "year": 2019 }, { "authors": [ "Andreas Doerr", "Christian Daniel", "Martin Schiegg", "Duy Nguyen-Tuong", "Stefan Schaal", "Marc Toussaint", "Sebastian Trimpe" ], "title": "Probabilistic recurrent state-space models", "venue": "In Proceedings of International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Justin Fu", "Katie Luo", "Sergey Levine" ], "title": "Learning robust rewards with adversarial inverse reinforcement learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Sunil Gandhi", "Tim Oates", "Tinoosh Mohsenin", "Nicholas Waytowich" ], "title": "Learning from observations using a single video demonstration and human feedback", "venue": "In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2019 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In Proceedings of International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "In Proceedings of International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Jimmy Ba", "Mohammad Norouzi" ], "title": "Dream to control: Learning behaviors by latent imagination", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Ashley Hill", "Antonin Raffin", "Maximilian Ernestus", "Adam Gleave", "Anssi Kanervisto", "Rene Traore", "Prafulla Dhariwal", "Christopher Hesse", "Oleg Klimov", "Alex Nichol", "Matthias Plappert", "Alec Radford", "John Schulman", "Szymon Sidor", "Yuhuai Wu" ], "title": "Stable baselines. 
https://github.com/ hill-a/stable-baselines, 2018", "venue": null, "year": 2018 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Borja Ibarz", "Jan Leike", "Tobias Pohlen", "Geoffrey Irving", "Shane Legg", "Dario Amodei" ], "title": "Reward learning from human preferences and demonstrations in Atari", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Max Jaderberg", "Valentin Dalibard", "Simon Osindero", "Wojciech M Czarnecki", "Jeff Donahue", "Ali Razavi", "Oriol Vinyals", "Tim Green", "Iain Dunning", "Karen Simonyan" ], "title": "Population based training of neural networks", "venue": "arXiv preprint arXiv:1711.09846,", "year": 2017 }, { "authors": [ "Maximilian Karl", "Maximilian Soelch", "Justin Bayer", "Patrick Van der Smagt" ], "title": "Deep variational Bayes filters: Unsupervised learning of state space models from raw data", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Victoria Krakovna" ], "title": "Specification gaming examples in AI, 2018", "venue": "URL https://vkrakovna", "year": 2018 }, { "authors": [ "Thanard Kurutach", "Aviv Tamar", "Ge Yang", "Stuart J Russell", "Pieter Abbeel" ], "title": "Learning plannable representations with causal infogan", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sergey Levine" ], "title": "Reinforcement learning and control as probabilistic inference: Tutorial and review", "venue": "arXiv preprint arXiv:1805.00909,", "year": 2018 }, { "authors": [ "Andrew Y Ng", "Stuart J Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In Proceedings of International Conference on Machine Learning (ICML),", "year": 2000 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Alec Radford", "Jeff Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners, 2019", "venue": null, "year": 2019 }, { "authors": [ "Rohin Shah", "Dmitrii Krasheninnikov", "Jordan Alexander", "Pieter Abbeel", "Anca Dragan" ], "title": "Preferences implicit in the state of the world", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Archit Sharma", "Shane Gu", "Sergey Levine", "Vikash Kumar", "Karol Hausman" ], "title": "Dynamics-aware unsupervised skill discovery", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Nisan Stiennon", "Long Ouyang", "Jeff Wu", "Daniel M Ziegler", "Ryan Lowe", "Chelsea Voss", "Alec Radford", "Dario Amodei", "Paul Christiano" ], "title": "Learning to summarize from human feedback", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "MuJoCo: A physics engine for model-based control", "venue": "In International Conference on Intelligent Robots and Systems (IROS),", "year": 2012 }, { 
"authors": [ "Faraz Torabi", "Garrett Warnell", "Peter Stone" ], "title": "Generative adversarial imitation from observation. In Imitation, Intent, and Interaction", "venue": "Workshop at ICML,", "year": 2019 }, { "authors": [ "Steven Wang", "Sam Toyer", "Adam Gleave", "Scott Emmons" ], "title": "The imitation library for imitation learning and inverse reinforcement learning", "venue": "https://github.com/ HumanCompatibleAI/imitation,", "year": 2020 }, { "authors": [ "Christian Wirth", "Riad Akrour", "Gerhard Neumann", "Johannes Fürnkranz" ], "title": "A survey of preferencebased reinforcement learning methods", "venue": "Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Brian D Ziebart", "J Andrew Bagnell", "Anind K Dey" ], "title": "Modeling interaction via the principle of maximum causal entropy", "venue": "In Proceedings of International Conference on Machine Learning (ICML),", "year": 2010 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "As deep learning has become popular, many parts of AI systems that were previously designed by hand have been replaced with learned components. Neural architecture search has automated architecture design (Zoph & Le, 2017; Elsken et al., 2019), population-based training has automated hyperparameter tuning (Jaderberg et al., 2017), and self-supervised learning has led to impressive results in language modeling (Devlin et al., 2019; Radford et al., 2019; Clark et al., 2020) and reduced the need for labels in image classification (Oord et al., 2018; He et al., 2020; Chen et al., 2020). However, in reinforcement learning, one component continues to be designed by humans: the task specification. Handcoded reward functions are notoriously difficult to specify (Clark & Amodei, 2016; Krakovna, 2018), and learning from demonstrations (Ng et al., 2000; Fu et al., 2018) or preferences (Wirth et al., 2017; Christiano et al., 2017) requires a lot of human input. Is there a way that we can automate even the specification of what must be done?\nIt turns out that we can learn part of what the user wants simply by looking at the state of the environment: after all, the user will already have optimized the state towards their own preferences (Shah et al., 2019). For example, when a robot is deployed in a room containing an intact vase, it can reason that if its user wanted the vase to be broken, it would already have been broken; thus she probably wants the vase to remain intact.\nHowever, we must ensure that the agent distinguishes between aspects of the state that the user couldn’t control from aspects that the user deliberately designed. This requires us to simulate what the user must have done to lead to the observed state: anything that the user put effort into in the past is probably something the agent should do as well. As illustrated in Figure 1, if we observe a Cheetah balancing on its front leg, we can infer how it must have launched itself into that position. Unfortunately, it is unclear how to simulate these past trajectories that lead to the observed state. So far, this has only been done in gridworlds, where all possible trajectories can be considered using dynamic programming (Shah et al., 2019).\nOur key insight is that we can sample such trajectories by starting at the observed state and simulating backwards in time. To enable this, we derive a gradient that is amenable to estimation through backwards simulation, and learn an inverse policy and inverse dynamics model using supervised\n∗Work done at the Center for Human-Compatible AI, UC Berkeley.\nlearning to perform the backwards simulation. Then, the only remaining challenge is finding a reward representation that can be meaningfully updated from a single state observation. To that end, rather than defining the reward directly on the raw input space, we represent it as a linear combination of features learned through self-supervised representation learning. Putting these components together, we propose the Deep Reward Learning by Simulating the Past (Deep RLSP) algorithm.\nWe evaluate Deep RLSP on MuJoCo environments and show that it can recover fairly good performance on the task reward given access to a small number of states sampled from a policy optimized for that reward. 
We also use Deep RLSP to imitate skills generated using a skill discovery algorithm (Sharma et al., 2020), in some cases given just a single state sampled from the policy for that skill.
Information from the environment state cannot completely replace reward supervision. For example, it would be hard to infer how clean Bob would ideally want his room to be, if the room is currently messy because Bob is too busy to clean it. Nonetheless, we are optimistic that information from the environment state can be used to significantly reduce the burden of human supervision required to train useful, capable agents." }, { "heading": "2 METHOD", "text": "In this section, we describe how Deep RLSP can learn a reward function for high-dimensional environments given access only to a simulator and the observed state s0.
Notation. A finite-horizon Markov Decision Process (MDP) M = 〈S, A, T, r, P, T〉 contains a set of states S and a set of actions A. The transition function T : S × A × S → [0, 1] determines the distribution over next states given a state and an action, and P is a prior distribution over initial states. The reward function r : S → R determines the agent's objective. T ∈ Z+ is a finite planning horizon. A policy π : S × A → [0, 1] specifies how to choose actions given a state. Given an initial state distribution, a policy and the transition function, we can sample a trajectory τ by sampling the first state from P, every subsequent action from π, and every subsequent state from T. We denote the probability distribution over trajectories as 〈P, π, T〉 and write τ ∼ 〈P, π, T〉 for the sampling step. We will sometimes write a single state s instead of a distribution P if the initial state is deterministic. The goal of reinforcement learning (RL) is to find a policy π∗ that maximizes the expected cumulative reward $\mathbb{E}_{\tau \sim \langle P, \pi, \mathcal{T} \rangle}\left[\sum_{t=1}^{T} r(s_t)\right]$.
We use φ : S → Rn to denote a feature function (whether handcoded or learned) that produces a feature vector of length n for every state. The reward function r is linear over φ if it can be expressed in the form r(s) = θ⊤φ(s) for some θ ∈ Rn. We assume that some past trajectory τ−T:0 = s−T a−T . . . a−1 s0 produced the observed state s0." }, { "heading": "2.1 IDEALIZED ALGORITHM", "text": "We first explain what we would ideally do, if we had a handcoded feature function φ and an enumerable (small) state space S that affords dynamic programming. This is a recap of Reward Learning by Simulating the Past (RLSP; Shah et al., 2019).
We assume the human follows a Boltzmann-rational policy πt(a | s, θ) ∝ exp(Qt(s, a; θ)), where the Q values are computed using soft value iteration. Marginalizing over past trajectories yields a distribution over the observed state: $p(s_0 \mid \theta) = \sum_{s_{-T} \dots a_{-1}} p(\tau = s_{-T} a_{-T} \dots a_{-1} s_0 \mid \theta)$. We compute the maximum likelihood estimate, argmax_θ ln p(s0 | θ), via gradient ascent, by expressing the gradient of the observed state as a weighted combination of gradients of consistent trajectories (Shah et al., 2019, Appendix B):
$$\nabla_\theta \ln p(s_0 \mid \theta) = \mathbb{E}_{\tau_{-T:-1} \sim p(\tau_{-T:-1} \mid s_0, \theta)} \left[ \nabla_\theta \ln p(\tau \mid \theta) \right] \qquad (1)$$
∇θ ln p(τ | θ) is a gradient for inverse reinforcement learning. Since we assume a Boltzmann-rational human, this is the gradient for Maximum Causal Entropy Inverse Reinforcement Learning (MCEIRL; Ziebart et al., 2010). However, we still need to compute an expectation over all trajectories that end in s0, which is in general intractable. Shah et al. (2019) use dynamic programming to compute this gradient in tabular settings."
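For concreteness, the tabular computation just described can be sketched as follows. This is a minimal illustration, not the original implementation: the dense transition tensor, array shapes, and the naive log-sum-exp are all assumptions, while the Boltzmann-rational policy and the linear reward r(s) = θ⊤φ(s) are taken from the text above.

    import numpy as np

    def soft_value_iteration(T_dyn, phi, theta, horizon):
        # T_dyn[s, a, s2] = p(s2 | s, a); phi has shape (S, n); r(s) = theta . phi(s)
        r = phi @ theta                          # per-state reward, shape (S,)
        S, A, _ = T_dyn.shape
        V = np.zeros(S)                          # soft value after the final step
        policies = []
        for _ in range(horizon):
            Q = r[:, None] + T_dyn @ V           # Q_t(s, a), shape (S, A)
            V = np.log(np.exp(Q).sum(axis=1))    # soft backup (naive log-sum-exp)
            policies.append(np.exp(Q - V[:, None]))  # pi_t(a | s) proportional to exp(Q_t)
        return policies[::-1]                    # index 0 corresponds to t = -T

Given these per-timestep policies, the expectation in Equation 1 can in principle be computed exactly by dynamic programming over the tabular MDP, which is the approach of Shah et al. (2019).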
}, { "heading": "2.2 GRADIENT AS BACKWARDS-FORWARDS CONSISTENCY", "text": "Approximating the expectation. For higher-dimensional environments, we must approximate the expectation over past trajectories p(τ−T :−1 | s0, θ). We would like to sample from the distribution, but it is not clear how to sample the past conditioned on the present. Our key idea is that just as we can sample the future by rolling out forwards in time, we should be able to sample the past by rolling out backwards in time. Note that by the Markov property we have:\np(τ−T :−1 | s0, θ) = −1∏\nt=−T p(st | at, st+1, . . . s0, θ)p(at | st+1, at+1, . . . s0, θ)\n= −1∏ t=−T p(st | at, st+1, θ)p(at | st+1, θ)\nThus, given the inverse policy π−1t (at | st+1, θ), the inverse dynamics T −1t (st | at, st+1, θ), and the observed state s0, we can sample a past trajectory τ−T :−1 ∼ p(τ−T :−1 | s0, θ) by iteratively applying π−1 and T −1, starting from s0. Analogous to forward trajectories, we express the sampling as τ−T :−1 ∼ 〈s0, π−1, T −1〉. Thus, we can write the gradient in Equation 1 as Eτ−T :−1 ∼ 〈s0,π−1,T −1〉 [∇θ ln p(τ | θ)].\nLearning π, π−1 and T −1. In order to learn π−1, we must first know π. We assumed that the human was Boltzmann-rational, which corresponds to the maximum entropy reinforcement learning objective (Levine, 2018). We use the Soft Actor-Critic algorithm (SAC; Haarnoja et al., 2018) to estimate the policy π(a | s, θ), since it explicitly optimizes the maximum entropy RL objective. Given the forward policy π(a | s, θ) and simulator T , we can construct a dataset of sampled forward trajectories, and learn the inverse policy π−1 and the inverse dynamics T −1 using supervised learning. Given these, we can then sample τ−T :−1, allowing us to approximate the expectation in the gradient. In general, both π−1 and T −1 could be stochastic and time-dependent. Estimating the gradient for a trajectory. We now turn to the term within the expectation, which is the inverse reinforcement learning gradient given a demonstration trajectory τ = s−Ta−T . . . s0. Assuming that the user is Boltzmann-rational, this is the MCEIRL gradient (Ziebart et al., 2010), which can be written as (Shah et al., 2019, Appendix A):\n∇θ ln p(τ | θ) =\n( 0∑\nt=−T φ(st)\n) −F−T (s−T )+\n−1∑ t=−T\n( E\ns′t+1∼T (·|st,at)\n[ Ft+1(s′t+1) ] −Ft+1(st+1) ) (2)\nF is the expected feature count under π, that is, F−t(s−t) , Eτ−t:0 ∼ 〈s−t,π,T 〉 [∑0 t′=−t φ(st′) ] .\nThe first term computes the feature counts of the demonstrated trajectory τ , while the second term computes the feature counts obtained by the policy for the current reward function θ (starting from\nthe initial state s−T ). Since r(s) = θTφ(s), these terms increase the reward of features present in the demonstration τ and decrease the reward of features under the current policy. Thus, the gradient incentivizes consistency between the demonstration and rollouts from the learned policy.\nThe last term is essentially a correction for the observed dynamics: if we see that st, at led to st+1, it corrects for the fact that we “could have” seen some other state s′t+1. Since this correction is zero in expectation (and expensive to compute), we drop it for our estimator.\nGradient estimator. 
After dropping the last term in Equation 2, expanding the definition of F, and substituting into Equation 1, our final gradient estimator is:
$$\nabla_\theta \ln p(s_0 \mid \theta) = \mathbb{E}_{\tau_{-T:-1} \sim \langle s_0, \pi^{-1}, \mathcal{T}^{-1} \rangle} \left[ \left( \sum_{t=-T}^{0} \phi(s_t) \right) - \mathbb{E}_{\tau' \sim \langle s_{-T}, \pi, \mathcal{T} \rangle} \left[ \sum_{t=-T}^{0} \phi(s'_t) \right] \right] \qquad (3)$$
Thus, given s0, θ, π, T, π−1, and T −1, computing the gradient consists of three steps:
1. Simulate backwards from s0, and compute the feature counts of the resulting trajectories.
2. Simulate forwards from s−T of these trajectories, and compute their feature counts.
3. Take the difference between these two quantities.
This again incentivizes consistency, this time between the backwards and forwards trajectories: the gradient leads to movement towards “what the human must have done” and away from “what the human would do if they had this reward”. The gradient becomes zero when they are identical.
It may seem like the backwards and forwards trajectories should always be consistent with each other, since π−1 and T −1 are inverses of π and T. The key difference is that s0 imposes constraints on the backwards trajectories, but not on the forward trajectories. For example, suppose we observe s0 in which a vase is unbroken, and our current hypothesis is that the user wants to break the vase. When we simulate backwards, our trajectory will contain an unbroken vase, but when we simulate forwards from s−T, π will break the vase. The gradient would then reduce the reward for a broken vase and increase the reward for an unbroken vase." }, { "heading": "2.3 LEARNING A LATENT MDP", "text": "Our gradient still relies on a feature function φ, with the reward parameterized as r(s) = θ⊤φ(s). A natural way to remove this assumption would be to instead allow θ to parameterize a neural network, which can then learn whatever features are relevant to the reward from the RLSP gradient.
However, this approach will not work because the information contained in the RLSP gradient is insufficient to identify the appropriate features to construct: after all, it is derived from a single state. If we were to learn a single unified reward using the same gradient, the resulting reward would likely be degenerate: for example, it may simply identify the observed state, that is, R(s) = 1[s = s0].
Thus, we continue to assume that the reward is linear in features, and instead learn the feature function using self-supervised learning (Oord et al., 2018; He et al., 2020). In our experiments, we use a variational autoencoder (VAE; Kingma & Welling, 2014) to learn the feature function. The VAE encodes the states into a latent feature representation, which we can use to learn a reward function if the environment is fully observable, i.e., the states contain all relevant information.
For partially observable environments, recurrent state space models (RSSMs; Karl et al., 2017; Doerr et al., 2018; Buesing et al., 2018; Kurutach et al., 2018; Hafner et al., 2019; 2020) could be used instead. These methods aim to learn a latent MDP, by computing the states using a recurrent model over the observations, thus allowing the states to encode the history. For such a model, we can imagine that the underlying POMDP has been converted into a latent MDP whose feature function φ is the identity. We can then compute gradients directly in this latent MDP." }, { "heading": "2.4 DEEP RLSP", "text": "Putting these components together gives us the Deep RLSP algorithm (Algorithm 1). 
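Before walking through the full algorithm, here is a rough Monte Carlo sketch of the Equation 3 gradient (the three-step procedure from Section 2.2). This is our illustration rather than the paper's code, and the sampler interfaces (phi, pi, dyn, inv_pi, inv_dyn as callables) are assumptions.

    import numpy as np

    def estimate_grad(s0, horizon, phi, pi, dyn, inv_pi, inv_dyn, n_samples=8):
        # phi: state -> feature vector; pi/dyn sample forwards; inv_pi/inv_dyn backwards
        grad = np.zeros_like(phi(s0))
        for _ in range(n_samples):
            # Step 1: simulate backwards from s0, accumulating feature counts.
            s = s0
            feats_backward = phi(s0).copy()
            for _ in range(horizon):
                a = inv_pi(s)
                s = inv_dyn(s, a)
                feats_backward += phi(s)
            # Step 2: simulate forwards from the sampled s_{-T}.
            s_fwd = s
            feats_forward = phi(s).copy()
            for _ in range(horizon):
                a = pi(s_fwd)
                s_fwd = dyn(s_fwd, a)
                feats_forward += phi(s_fwd)
            # Step 3: the gradient is the backward/forward feature-count difference.
            grad += (feats_backward - feats_forward) / n_samples
        return grad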
We first learn a feature function φ using self-supervised learning, and then train an inverse dynamics model T −1, all using a dataset of environment interactions (such as random rollouts). Then, we update θ using Equation 3, and continually train π and π−1 alongside θ to keep them up to date. The full algorithm also adds a few bells and whistles that we describe next.
Algorithm 1 The DEEP RLSP algorithm. The initial dataset of environment interactions D can be constructed in many different ways: random rollouts, human play data, curiosity-driven exploration, etc. The specific method will determine the quality of the learned features.
procedure DEEP RLSP({s0}, T)
    D ← dataset of environment interactions
    Initialize φe, φd, π, π−1, T −1, θ randomly.
    φe, φd ← SelfSupervisedLearning(D) ▷ Train encoder and decoder for latent MDP
    Initialize experience replay E with data in D.
    T −1 ← SupervisedLearning(D) ▷ Train inverse dynamics
    T ← 1 ▷ Start horizon at 1
    for i in [1..num_epochs] do
        π ← SAC(θ) ▷ Train policy
        π−1 ← SupervisedLearning(φe, E) ▷ Train inverse policy
        θ ← θ + α × COMPUTEGRAD({s0}, π, T, π−1, T −1, T, φe) ▷ Update θ
        if gradient magnitudes are sufficiently low then
            T ← T + 1 ▷ Advance horizon
    return θ, φe
procedure COMPUTEGRAD({s0}, π, T, π−1, T −1, T, φe)
    {τbackward} ← Rollout({s0}, π−1, T −1, T) ▷ Simulate backwards from s0
    φbackward ← AverageFeatureCounts(φe, {τbackward}) ▷ Compute backward feature counts
    {s−T} ← FinalStates({τbackward})
    {τforward} ← Rollout({s−T}, π, T, T) ▷ Simulate forwards from s−T
    φforward ← AverageFeatureCounts(φe, {τforward}) ▷ Compute forward feature counts
    Relabel {τbackward}, {τforward} and add them to E.
    return φbackward − φforward
Initial state distribution P. The attentive reader may wonder why our gradient appears to be independent of P. This is actually not the case: while π and T are independent of P, π−1 and T −1 do depend on it. For example, if we observe Alice exiting the San Francisco airport, the corresponding π−1 should hypothesize different flights if she started from New York than if she started from Tokyo.
However, in order to actually produce such explanations, we must train π−1 and T −1 solely on trajectories of length T starting from s−T ∼ P. We instead train π−1 and T −1 on a variety of trajectory data, which loses the useful information in P, but leads to several benefits. First, we can train the models on exactly the distributions that they will be used on, allowing us to avoid failures due to distribution shift. Second, the horizon T is no longer critical: previously, T encoded the separation in time between s−T and s0, and as a result misspecification of T could cause bad results. Since we now only have information about s0, it doesn't matter much what we set T to, and as a result we can use it to set a curriculum (discussed next). Finally, this allows Deep RLSP to be used in domains where an initial state distribution is not available.
Note that we are no longer able to use information about P through π−1 and T −1. However, having information about P might be crucial in some applications to prevent Deep RLSP from converging to a degenerate solution with s−T = s0 and a policy π that does nothing. While we did not find this to be a problem in our experiments, we discuss a heuristic to incorporate information about s−T into Deep RLSP in Appendix C.
Curriculum. Since the horizon T is no longer crucial, we can use it to provide a curriculum. 
We initially calculate gradients with low values of T, to prevent compounding errors in our learned models and to make it easier to enforce backwards-forwards consistency, and then slowly grow T, making the problem harder. In practice, we found this crucial for performance: intuitively, it is much easier to make short backwards and forwards trajectories consistent than longer ones; the latter would likely have much higher variance.
Multiple input states. If we get multiple independent s0 as input, we average their gradients.
Experience replay. We maintain an experience replay buffer E that persists across policy training steps. We initialize E with the same set of environment interactions that the feature function and inverse dynamics model are trained on. When computing the gradient, we collect all backward and forward trajectories and add them to E. To avoid compounding errors from the inverse dynamics model, we relabel all transitions using a simulator of the environment. Whenever we'd add a transition (s, a, s′) to E, we initialize the simulator at s and execute a to obtain s̃, and add the transition (s, a, s̃) to E instead." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 SETUP", "text": "To demonstrate that Deep RLSP can be scaled to complex, continuous, high-dimensional environments, we use the MuJoCo physics simulator (Todorov et al., 2012). We consider the Inverted Pendulum, Half-Cheetah and Hopper environments implemented in OpenAI Gym (Brockman et al., 2016). The hyperparameters of our experiments are described in detail in Appendix B. We provide code to replicate our experiments at https://github.com/HumanCompatibleAI/deep-rlsp.
Baselines. To our knowledge, this is the first work to train policies using a single state as input. Due to lack of alternatives, we compare against GAIL (Ho & Ermon, 2016) using the implementation from the imitation library (Wang et al., 2020). For each state we provide to Deep RLSP, we provide a transition (s, a, s′) to GAIL.
Ablations. In Section 2.2, we derived a gradient for Deep RLSP that enforces consistency between the backwards and forwards trajectories. However, we could also ignore the temporal information altogether. If an optimal policy led to the observed state s0, then it is probably a good bet that s0 is high reward, and that the agent should try to keep the state similar to s0. Thus, we can simply set θ = φ(s0)/‖φ(s0)‖, and not deal with π−1 and T −1 at all.
How should we handle multiple states $s_0^1, \dots, s_0^N$? Given that these are all sampled i.i.d. from rollouts of an optimal policy, a natural choice is to simply average the feature vectors of all of the states, which we call AverageFeatures. Alternatively, we could view each of the observed states as a potential waypoint of the optimal policy, and reward an agent for being near any one of them. We implement this Waypoints method as $R(s) = \max_i \frac{\phi(s_0^i)}{\lVert \phi(s_0^i) \rVert} \cdot \phi(s)$. Note that both of these ablations still require us to learn the feature function φ (a short sketch of both ablation rewards is given at the end of this subsection).
Feature learning dataset. By default, we use random rollouts to generate the initial dataset that is used to train the features φ and the inverse model T −1. (This is D in Algorithm 1.) However, in the inverted pendulum environment, the pendulum falls very quickly in random rollouts, and T −1 never learns what a balanced pendulum looks like. So, for this environment only, we combine random rollouts with rollouts from an expert policy that balances the pendulum."
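As referenced above, the two ablation rewards can be sketched as follows. This is an illustration under assumptions: in particular, the text does not pin down whether feature vectors are normalized before or after averaging in AverageFeatures.

    import numpy as np

    def average_features_reward(phi, states):
        # One plausible reading: average feature vectors, use the unit-norm mean as theta.
        theta = np.mean([phi(s) for s in states], axis=0)
        theta = theta / np.linalg.norm(theta)
        return lambda s: float(theta @ phi(s))

    def waypoints_reward(phi, states):
        # Reward for being near (in feature space) any one of the observed states.
        units = [phi(s) / np.linalg.norm(phi(s)) for s in states]
        return lambda s: max(float(u @ phi(s)) for u in units)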
}, { "heading": "3.2 GRIDWORLD ENVIRONMENTS", "text": "As a first check, we consider the gridworld environments in Shah et al. (2019). In these stylized gridworlds, self-supervised learning should not be expected to learn the necessary features. For example, in the room with vase environment, the two door features are just particular locations, with no distinguishing features that would allow self-supervised learning to identify these locations as important. So, we run Algorithm 1 without the feature learning and instead use the pre-defined feature function of the environments. With this setup we are able to use Deep RLSP to recover the desired behavior from a single state in all environments in which the exact RLSP algorithm is able to recover it. However, AverageFeatures fails on several of the environments. Since only one state is provided, Waypoints is equivalent to AverageFeatures. It is not clear how to apply GAIL to these environments, and so we do not compare to it. Further details on all of the environments and results can be found in Appendix A." }, { "heading": "3.3 SOLVING THE ENVIRONMENTS WITHOUT ACCESS TO THE REWARD FUNCTION", "text": "First we look at the typical target behavior in each environment: balancing the inverted pendulum, and making the half-cheetah and the hopper move forwards. Additionally we consider the goal of making the cheetah run backwards (that is, the negative of its usual reward function). We aim to use Deep RLSP to learn these behaviors without having access to the reward function.\nWe train a policy using soft actor critic (SAC; Haarnoja et al., 2018) to optimize for the true reward function, and sample either 1, 10 or 50 states from rollouts of this policy to use as input. We then use Deep RLSP to infer a reward and policy. Ideally we would evaluate this learned policy rather than reoptimizing the learned reward, since learned reward models can often be gamed (Stiennon et al., 2020), but it would be too computationally expensive to run the required number of SAC steps during each policy learning step. As a result, we run SAC for many more iterations on the inferred reward function, and evaluate the resulting policy on the true reward function (which Deep RLSP does not have access to).\nResults are shown in Table 1. In Hopper, we noticed that videos of the policies learned by Deep RLSP looked okay, but the quantitative evaluation said otherwise. It turns out that the policies learned by Deep RLSP do jump, as we might want, but they often fall down, terminating the episode; in contrast GAIL policies stand still or fall over slowly, leading to later termination and explaining their better quantitative performance. We wanted to also evaluate the policies without this termination bias, and so we evaluate the same policies in an environment that does not terminate the episode, but provides a negative reward instead; in this evaluation both Deep RLSP and AverageFeatures perform much better. We also provide videos of the learned policies at https://sites.google.com/view/ deep-rlsp, which show that the policies learned by Deep RLSP do exhibit hopping behavior (though with a strong tendency to fall forward).\nGAIL is only able to learn a truly good policy for the (very simple) inverted pendulum, even though it gets states and actions as input. Deep RLSP on the other hand achieves reasonable behavior (though clearly not expert behavior) in all of the environments, using only states as input. 
Surprisingly, the AverageFeatures method also performs quite well, even beating the full algorithm on some tasks, though failing quite badly on Pendulum. It seems that the task of running forward or backward is very well specified by a single state, since it can be inferred even without any information about the dynamics (except that which is encoded in the features learned from the initial dataset)." }, { "heading": "3.4 LEARNING SKILLS FROM A SINGLE STATE", "text": "We investigate to what extent Deep RLSP can learn other skills where the reward is not clear. Evaluation on these tasks is much harder, because there is no ground truth reward. Therefore we evaluate qualitatively how similar the policies learned by Deep RLSP are to the original skill. We also attempted to quantify similarity by checking how quickly a discriminator could learn to distinguish between the learned policy and the original skill, but unfortunately this metric was not conclusive (results are reported in Appendix D.1). Unlike the previous case, we do not reoptimize the learned reward and only look at the policies learned by Deep RLSP.
We consider skills learned by running Dynamics-Aware Unsupervised Discovery of Skills (DADS; Sharma et al., 2020). Since we are not interested in navigation, we remove the “x-y prior” used to get directional skills in DADS. We run DADS on the half-cheetah environment and select all skills that are not some form of running. This resulted in two skills: one in which the cheetah is moving forward making big leaps (“jumping”) and one in which it is slowly moving forward on one leg (“balancing”). As before we roll out these policies and sample individual states from the trajectories to provide as an input for Deep RLSP. We then evaluate the policy learned by Deep RLSP. Since the best evaluation here is to simply watch what the learned policy does, we provide videos of the learned policies at https://sites.google.com/view/deep-rlsp. We also provide visualizations in Appendix D.2.
The first thing to notice is that relative to the ablations, only Deep RLSP is close to imitating the skill. None of the other policies resemble the original skills at all. While AverageFeatures could perform well on simple tasks such as running, the full algorithm is crucial to imitate more complex behavior.
Between Deep RLSP and GAIL the comparison is less clear. Deep RLSP can learn the balancing skill fairly well from a single state, which we visualize in Figure 2 (though we emphasize that the videos are much clearer). Like the original skill, the learned policy balances on one leg and slowly moves forward by jumping, though with slightly more erratic behavior. However, the learned policy sometimes drops back to its feet or falls over on its back. We suspect this is an artifact of the short horizon (T ≤ 10) used for simulating the past in our algorithm. A small horizon is necessary to avoid compounding errors in the learned inverse dynamics model, but can cause the resulting behavior to be more unstable on timescales greater than T. We see similar behavior when given 10 or 50 states. GAIL leads to a good policy given a single transition, where the cheetah balances on its front leg and head (rather than just the front leg), but does not move forward very much. However, with 10 or 50 transitions, the policies learned by GAIL do not look at all like balancing.
In contrast, the jumping behavior is harder to learn, especially from a single state. 
We speculate that here a single state is less informative than the balancing state. In the balancing state, the low joint velocities tell us that the cheetah is not performing a flip, suggesting that we had optimized for this specific balancing state. On the other hand, with the jumping behavior, we only get a single state of the cheetah in the air with high velocity, which is likely not sufficient to determine what the jump looked like exactly. In line with this hypothesis, at 1 state Deep RLSP learns to hop erratically, at 10 states it executes slightly bigger jumps, and at 50 states it matches the original skill relatively closely.
The GAIL policies for jumping are also reasonable, though in a different way that makes it hard to compare. Using 1 or 10 transitions, the policy doesn't move very much, staying in contact with the ground most of the time. However, at 50 transitions, it performs noticeable forward hops, slightly smoother than the policy learned by Deep RLSP." }, { "heading": "4 RELATED WORK", "text": "Learning from human feedback. Many algorithms aim to learn good policies from human demonstrations, including ones in imitation learning (Ho & Ermon, 2016) and inverse reinforcement learning (IRL; Ng et al., 2000; Abbeel & Ng, 2004; Fu et al., 2018). Useful policies can also be learned from other types of feedback, such as preferences (Christiano et al., 2017), corrections (Bajcsy et al., 2017), instructions (Bahdanau et al., 2019), or combinations of feedback modalities (Ibarz et al., 2018).
While these methods require expensive human feedback, Deep RLSP instead simulates the trajectories that must have happened. This is reflected in the algorithm: in Equation 1, the inner gradient corresponds to an inverse reinforcement learning problem. While we used the MCEIRL formulation (Ziebart et al., 2010), other IRL algorithms could be used instead (Fu et al., 2018).
Learning from observations. For many tasks, we have demonstrations without action labels, e.g., YouTube videos. Learning from Observations (LfO; Torabi et al., 2019; Gandhi et al., 2019) aims to recover a policy from such demonstrations. Similarly to LfO, we do not have access to action labels, but our setting is further restricted to observing only a small number of states." }, { "heading": "5 LIMITATIONS AND FUTURE WORK", "text": "Summary. Learning useful policies with neural networks requires significant human effort, whether it is done by writing down a reward function by hand, or by learning from explicit human feedback such as preferences or demonstrations. We showed that it is possible to reduce this burden by extracting “free” information present in the current state of the environment. This enables us to imitate policies in MuJoCo environments with access to just a few states sampled from those policies. We hope that Deep RLSP will help us train agents that are better aligned with human preferences.
Learned models. The Deep RLSP gradient depends on having access to a good model of π, T, π−1, and T −1. In practice, it was quite hard to train sufficiently good versions of the inverse models. This could be a significant barrier to practical implementations of Deep RLSP. It can also be taken as a sign for optimism: self-supervised representation learning through deep learning is fairly recent and is advancing rapidly; such advances will likely translate directly into improvements in Deep RLSP.
Computational cost. Imitation learning with full demonstrations can already be quite computationally expensive. 
Deep RLSP learns several distinct neural network models, and then simulates potential demonstrations, and finally imitates them. Unsurprisingly, this leads to increased computational cost.\nSafe RL. Shah et al. (2019) discuss how the exact RLSP algorithm can be used to avoid negative side-effects in RL by combining preferences learned from the initial state with a reward function. While we focused on learning hard to specify behavior, Deep RLSP can also be used to learn to avoid negative side-effects, which is crucial for safely deploying RL systems in the real world (Amodei et al., 2016).\nMultiagent settings. In any realistic environment, there is not just a single “user” who is influencing the environment: many people act simultaneously, and the state is a result of joint optimization by all of them. However, our model assumes that the environment state resulted from optimization by a single agent, which will not take into account the fact that each agent will have constraints imposed upon them by other agents. We will likely require new algorithms for such a setting." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was partially supported by Open Philanthropy, AFOSR, ONR YIP, NSF CAREER, NSF NRI, and Microsoft Swiss JRC. We thank researchers at the Center for Human-Compatible AI and the InterACT lab for helpful discussion and feedback." }, { "heading": "A GRIDWORLD ENVIRONMENTS", "text": "Here we go into more detail on the experiments in Section 3.2, in which we ran Deep RLSP on the environment suite constructed in Shah et al. (2019).\nIn this test suite, each environment comes equipped with an observed state s0, an initial state s−T , a specified reward Rspec, and a true reward Rtrue. A given algorithm should be run on s0 and optionally also s−T and produce an inferred reward Rinferred. This is then added to the specified reward to produce Rfinal = Rspec + λRinferred, where λ is a hyperparameter that determines the weighting between the two. An optimal policy for Rfinal is then found using value iteration, and the resulting policy is evaluated according to Rtrue.\nThere is no clear way to set λ: it depends on the scales of the rewards. We leverage the fact that Rspec is deliberately chosen to incentivize bad behavior, such that we know λ = 0 will always give incorrect behavior. So, we normalize Rinferred, and then increase λ from 0 until the behavior displayed by the final policy changes.\nSince GAIL does not produce a reward function as output, we do not run it here. We do however report results with AverageFeatures (which is equivalent to Waypoints here, because there is only a single observed state).\nFrom left to right, the environments are:\n1. Room with vase: Rspec has weight 1 for the purple door feature, and 0 for all other weights. Rtrue additionally has weight -1 for the broken vases feature. Since we observe a state in which the vase is unbroken, we can infer that the human avoided breaking the vase, and so that there should be a negative weight on broken vases. Deep RLSP indeed does this and so avoids breaking the vase. AverageFeatures fails to do so, though this is due to a quirk in the feature encoding. In particular, the feature counts the number of broken vases, and so the inferred θ has a value of zero for this feature, effectively ignoring it. If we change the featurization to instead count the number of unbroken vases, then AverageFeatures would likely get the right behavior.\n2. 
Toy train: In this environment, we observe a state in which an operational train is moving around a track. Once again, Rspec just has weight 1 on the purple door feature. Rtrue additionally has weight -1 on broken vases and trains. Deep RLSP appropriately avoids breaking objects, but AverageFeatures does not.
3. Batteries: We observe a state in which the human has put a battery in the train to keep it operational (s−T has two batteries while s0 only has one). Rspec still has weight 1 on the purple door feature. Rtrue additionally has weight -1 on allowing the train to run out of power. Algorithms should infer that it is good to put batteries in the train to keep it operational, even though this irreversibly uses up the battery. Deep RLSP correctly does this, while AverageFeatures does not. In fact, AverageFeatures incorrectly infers that batteries should not be used up.
4. Apples: We observe a state in which the human has collected some apples and placed them in a basket. Rspec is always zero, while Rtrue has weight 1 on the number of apples in the basket. The environment tests whether algorithms can infer that it is good for there to be apples in the basket. Deep RLSP does this, learning a policy that continues to collect apples and place them in the basket. AverageFeatures also learns to place apples in the basket, but does not do so as effectively as Deep RLSP, because AverageFeatures also rewards the agent for staying in the original location, leading it to avoid picking apples from the tree that is furthest away.
5. Room with far away vase: This is an environment that aims to show what can't be learned: in this case, the breakable vase is so far away that it is not much evidence that the human has not broken it so far. As a result, algorithms should not learn anything significant about whether or not to break vases. This is indeed the case for Deep RLSP, as well as AverageFeatures (though once again, in the latter case, this is dependent on the specific form of the feature).
Overall, Deep RLSP has the same behavior on these environments as RLSP, while AverageFeatures does not." }, { "heading": "B ARCHITECTURE AND HYPERPARAMETER CHOICES", "text": "In this section we describe the architecture choices for the models used in our algorithm and the hyperparameter choices in our experiments. All models are implemented using the TensorFlow framework.
B.1 FEATURE FUNCTION
We use a variational autoencoder (VAE; Kingma & Welling, 2014) to learn the feature function. The encoder and decoder consist of 3 feed-forward layers of size 512. The latent space has dimension 30. The model is trained for 100 epochs on 100 rollouts of a random policy in the environment. During training we use a batch size of 500 and a learning rate of 10⁻⁵. We use the standard VAE loss function, but weight the KL-divergence term with a factor c = 0.001, which reduces the regularization and empirically improved the reconstruction of the model significantly. We hypothesize that the standard VAE regularizes too much in our setting, because the latent space has a higher dimension than the input space, which is not the case in typical dimensionality reduction settings.
B.2 INVERSE DYNAMICS MODEL
Our inverse dynamics model is a feed-forward neural network with 5 layers of size 1024 with ReLU activations. 
We train it on 1000 rollouts of a random policy in the environment for 100 epochs, with a batch size of 500 and a learning rate of 10⁻⁵.
Note that the model predicts the previous observation given the current observation and action; it does not use the feature representation. We found the model to perform better if it predicts the residual o_{t−1} − o_t given o_t and a_t instead of directly predicting o_{t−1}. We normalize all inputs to the model to have zero mean and unit variance. To increase robustness, we also add zero-mean Gaussian noise with standard deviation 0.001 to the inputs and labels during training, and clip the outputs of the model to the range of values observed during training.
B.3 POLICY
For learning the policy we use the stable-baselines implementation of Soft Actor-Critic (SAC) with its default parameters for the MuJoCo environments (Haarnoja et al., 2018; Hill et al., 2018). Each policy update during Deep RLSP uses 10⁴ total timesteps for the cheetah and 2×10⁴ for the hopper. We perform the policy updates usually starting from the last iteration's policy, except in the pendulum environment, where we randomly initialize the policy in each iteration and train it using 5×10⁴ iterations of SAC. We evaluate the final reward function generally using 2×10⁶ timesteps, except for the pendulum, where we use 6×10⁴.
B.4 INVERSE POLICY
Because the inverse policy is not deterministic, we represent it with a mixture density network, a feed-forward neural network that outputs a mixture of Gaussian distributions (Bishop, 1994).
The network has 3 layers of size 512 with ReLU activations and outputs a mixture of 5 Gaussians with a fixed variance of 0.05.
To update the inverse policy we sample batches with batch size 500 from the experience replay, and apply the forward policy and the forward transition model on the states to label the data. We then train the model with a learning rate of 10⁻⁴.
B.5 DEEP RLSP HYPERPARAMETERS
We run Deep RLSP with a learning rate of 0.01, and use 200 forward and backward trajectories to estimate the gradients. Starting with T = 1, we increment the horizon when the gradient norm drops below 2.0 or after 10 steps, whichever comes first. We run the algorithm until T = 10." }, { "heading": "C HEURISTIC FOR INCORPORATING INFORMATION ABOUT THE INITIAL STATE", "text": "In Section 2.4 we discussed that it might be necessary for Deep RLSP to have information about the distribution P of the initial state s−T. Since in our setup Deep RLSP cannot obtain any information about P through π−1 and T −1, here we present a heuristic to incorporate the information elsewhere. Specifically, we weight every backwards trajectory by the cosine similarity between the final state s−T and a sample ŝ−T ∼ P (a sketch of this weighting is given below). This upweights gradient terms corresponding to trajectories that are more likely given our knowledge about P, and downweights trajectories that end in a state s−T that has low probability under P. To test whether this modification improves the performance of Deep RLSP, we compared Deep RLSP with this gradient weighting heuristic to Deep RLSP without it, as presented in the main paper.
First, we ran Deep RLSP with the gradient weighting on the gridworld environments from Shah et al. (2019), described in Section 3.2 and Appendix A. The results are identical to the case without the heuristic.
Next, we tested on the tasks in the MuJoCo environments described in Section 3.3. 
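As referenced above, the weighting heuristic might be sketched as follows. Treating states as raw vectors for the cosine similarity and normalizing by the sum of weights are our assumptions, not details given in the text.

    import numpy as np

    def cosine_weight(s_final, s_hat):
        # Cosine similarity between a backward trajectory's final state s_{-T}
        # and a sample s_hat ~ P (raw state vectors assumed).
        x, y = np.asarray(s_final, float), np.asarray(s_hat, float)
        return float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-8)

    def weighted_grad(per_traj_grads, final_states, s_hat):
        # Reweight each trajectory's gradient term by its cosine weight.
        w = np.array([cosine_weight(s, s_hat) for s in final_states])
        return (w[:, None] * np.stack(per_traj_grads)).sum(0) / (w.sum() + 1e-8)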
We report the results in Table 2, alongside the previously reported results without the gradient weighting. The results are quite similar, suggesting that the gradient weighting does not make much of a difference in these environments." }, { "heading": "D ANALYSIS OF THE LEARNED SKILLS", "text": "D.1 TRAINING A DISCRIMINATOR
In the main text, we focused on visual evaluation of the learned skills, because it is difficult to define a metric that properly measures the similarity between an original skill and one learned by Deep RLSP. In this section, we attempt to quantify the similarity between policies by training a discriminator to distinguish trajectories from the policies. Conceptually, the easier it is to train this discriminator, the more different the two policies are. We could thus use this to check how similar our learned policies are to the original skills.
We train a neural network with a single hidden layer of size 10 with ReLU activation functions. We sample trajectories from both policies and randomly sample trajectory pieces consisting of 5 observations to train the model on. We label the trajectory pieces with a binary label depending on which policy they come from, and then use a cross-entropy loss to train the model. To ensure comparable results, we keep this setup the same for all policies and average the resulting learning curves over 10 different random seeds.
The resulting learning curves are shown in Figure 4. The differences between the learning curves are relatively small overall, suggesting that we cannot draw strong conclusions from this experiment. In addition, while the AverageFeatures and Waypoints ablations can be seen to be extremely bad visually relative to GAIL and Deep RLSP, this is not apparent from the learning curves. As a result, we conclude that this is not actually a good metric to judge performance. (Note that if we were to use the metric, it would suggest that Deep RLSP is best for the balancing skill, while for the jumping skill GAIL is better for 1 and 50 states and Deep RLSP is better for 10 states.)
D.2 VISUALIZATION OF LEARNED SKILLS
Here we provide larger visualizations of the skills learned in the experiments discussed in Section 3.4 of the main paper. For each experiment we show the original policy, the states sampled from this policy and given as an input to Deep RLSP, the policy learned by the AverageFeatures ablation, and the policy learned by Deep RLSP in Figures 5 to 10 (on the following pages). Again, we emphasize that the visual comparison is easier with videos of the policies, which we provide at https://sites.google.com/view/deep-rlsp (including Waypoints and AverageFeatures ablations)." }, { "heading": "E THINGS WE TRIED THAT DID NOT WORK", "text": "Here we list a few variations of the Deep RLSP algorithm that we tested on the MuJoCo environments that failed to provide good results.
• We tried to learn a latent state-space jointly with a latent dynamics model using a recurrent state-space model (RSSM). However, we found existing models too brittle to reliably learn a good dynamics model. The reward function and policy learned by Deep RLSP worked in the RSSM but did not generalize to the actual environment.
• We also tried learning a forward dynamics model from the initial set of rollouts, similarly to how we learn an inverse dynamics model, rather than relying on the simulator T. 
However, we found this to cause a similar issue as the RSSM: the reward function and policy learned by Deep RLSP did not generalize to the actual environment. However, we hope that progress in model-based RL will allow us to implement Deep RLSP using only learned dynamics models in the future.
• Using a mixture density network instead of an MLP to model the inverse dynamics did not improve the performance of the algorithm. We suspect this to be because in the MuJoCo simulator the dynamics and the inverse dynamics are “almost deterministic”.
• Updating the inverse dynamics model and the feature function during Deep RLSP by training them on data from the experience replay did not improve performance and in some cases significantly decreased performance. The decrease in performance seems to have been caused by the feature function changing too much, and the training of the other models suffering from catastrophic forgetting as a result.
• In the main paper we evaluated the policies learned by Deep RLSP from jumping and balancing skills. However, we also looked at policies obtained by optimizing for the learned reward. These also showed similarities to the original skills, but they were significantly worse than the policies directly learned by Deep RLSP. For the jumping skill the optimized policies jump very erratically, and for the balancing skill they tend to fall over or perform forward flips. This discrepancy is a result of the policy updates during Deep RLSP only using a limited number of iterations. It seems that in these experiments the learned reward functions lead to good policies when optimized weakly, but do not produce good policies when optimized strongly. We saw in preliminary experiments that increasing the number of iterations for updating the policies during Deep RLSP reduces this discrepancy. However, the resulting algorithm was computationally too expensive to evaluate with our resources.
• We tried running Deep RLSP for longer horizons up to T = 30, but found the results to be worse than for T = 10, which we reported in the main paper. We hypothesize that this is caused by compounding errors in the inverse transition model. This hypothesis is supported by manually looking at trajectories generated by the inverse transition model. 
While they look reasonable for short horizons T ≤ 10, compounding errors become significantly bigger for horizons 10 ≤ T ≤ 30.
[Figures 5 to 10 show frame sequences; in each figure the rows are labeled Original Policy, Sampled States, GAIL, and Deep RLSP. Only the captions are recoverable here.]
Figure 5: Deep RLSP learning the balancing skill from a single state. The first row shows the original policy from DADS, the second row shows the sampled state from this policy, the third row is the GAIL algorithm, and the last row shows the policy learned by Deep RLSP.
Figure 6: Deep RLSP learning the balancing skill from 10 states. The first row shows the original policy from DADS, the second row shows the sampled states from this policy, the third row is the GAIL algorithm, and the final row shows the policy learned by Deep RLSP.
Figure 7: Deep RLSP learning the balancing skill from 50 states. The first row shows the original policy from DADS, the next five rows show the sampled states from this policy, the second to last row is the GAIL algorithm, and the last row shows the policy learned by Deep RLSP.
Figure 8: Deep RLSP learning the jumping skill from a single state. The first row shows the original policy from DADS, the second row shows the sampled state from this policy, the third row is the GAIL algorithm, and the last row shows the policy learned by Deep RLSP.
Figure 9: Deep RLSP learning the jumping skill from 10 states. The first row shows the original policy from DADS, the second row shows the sampled states from this policy, the third row is the GAIL algorithm, and the final row shows the policy learned by Deep RLSP.
Figure 10: Deep RLSP learning the jumping skill from 50 states. The first row shows the original policy from DADS, the next five rows show the sampled states from this policy, the second to last row is the GAIL algorithm, and the last row shows the policy learned by Deep RLSP." } ]
2021
null
SP:134b968f05fc55567e46428a36359228efa15c85
[ "The paper presents an approach based on conditional normalized maximum likelihood (CNML) for uncertainty estimation, calibration, and out-of-distribution robustness with deep networks. CNML is intractable to compute in general and therefore the authors propose a tractable approximation which uses approximate Bayesian inference techniques. Experimentally, the authors show that their new approach is competitive and sometimes better than existing approaches for uncertainty estimation and calibration on out-of-distribution test data points." ]
While deep neural networks provide good performance for a range of challenging tasks, calibration and uncertainty estimation remain major challenges. In this paper, we propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation, calibration, and out-of-distribution robustness with deep networks. Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle, but is computationally intractable to evaluate exactly for all but the simplest of model classes. We propose to use approximate Bayesian inference techniques to produce a tractable approximation to the CNML distribution. Our approach can be combined with any approximate inference algorithm that provides tractable posterior densities over model parameters. We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
[]
[ { "authors": [ "K. Bibas", "Y. Fogel", "M. Feder" ], "title": "A New Look at an Old Problem: A Universal Learning Approach to Linear Regression", "venue": "IEEE International Symposium on Information Theory - Proceedings, 2019July:2304–2308,", "year": 2019 }, { "authors": [ "K. Bibas", "Y. Fogel", "M. Feder" ], "title": "Deep pnml: Predictive normalized maximum likelihood for deep neural networks", "venue": "arXiv preprint arXiv:1904.12286,", "year": 2019 }, { "authors": [ "C. Blundell", "J. Cornebise", "K. Kavukcuoglu", "D. Wierstra" ], "title": "Weight Uncertainty in Neural Networks", "venue": "32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "R.D. Cook", "S. Weisberg" ], "title": "Residuals and influence in regression", "venue": null, "year": 1982 }, { "authors": [ "T.M. Cover", "J.A. Thomas" ], "title": "Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing)", "venue": null, "year": 2006 }, { "authors": [ "M.W. Dusenberry", "G. Jerfel", "Y. Wen", "Y.-a. Ma", "J. Snoek", "K. Heller", "B. Lakshminarayanan", "D. Tran" ], "title": "Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors", "venue": null, "year": 2005 }, { "authors": [ "Y. Fogel", "M. Feder" ], "title": "Universal Supervised Learning for Individual Data. 12 2018a", "venue": "URL http://arxiv.org/abs/1812.09520", "year": 2018 }, { "authors": [ "Y. Fogel", "M. Feder" ], "title": "Universal batch learning with log-loss", "venue": "IEEE International Symposium on Information Theory (ISIT),", "year": 2018 }, { "authors": [ "Y. Gal", "Z. Ghahramani" ], "title": "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning", "venue": "33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "R. Giordano", "W. Stephenson", "R. Liu", "M. Jordan", "T. Broderick" ], "title": "A swiss army infinitesimal jackknife", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "P. Grunwald" ], "title": "A tutorial introduction to the minimum description length principle", "venue": null, "year": 2004 }, { "authors": [ "P. Grünwald", "T. Van Ommen", "others" ], "title": "Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it", "venue": "Bayesian Analysis,", "year": 2017 }, { "authors": [ "P.D. Grünwald" ], "title": "The Minimum Description Length Principle (Adaptive Computation and Machine Learning)", "venue": null, "year": 2007 }, { "authors": [ "C. Guo", "G. Pleiss", "Y. Sun", "K.Q. Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "D. Hendrycks", "T. Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "arXiv preprint arXiv:1903.12261,", "year": 2019 }, { "authors": [ "G.E. Hinton", "D. van Camp" ], "title": "Keeping neural networks simple by minimizing the description length of the weights", "venue": "In Proceedings of the Sixth Annual Conference on Computational Learning Theory,", "year": 1993 }, { "authors": [ "M.D. Hoffman", "D.M. Blei", "C. Wang", "J. Paisley" ], "title": "Stochastic variational inference", "venue": "The Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "P. Izmailov", "D. Podoprikhin", "T. Garipov", "D.P. Vetrov", "A.G. 
Wilson" ], "title": "Averaging Weights Leads to Wider Optima and Better Generalization", "venue": null, "year": 2018 }, { "authors": [ "S.M. Kakade", "M.W. Seeger", "D.P. Foster" ], "title": "Worst-case bounds for Gaussian process models", "venue": "In Advances in neural information processing systems,", "year": 2006 }, { "authors": [ "A. Krizhevsky" ], "title": "Learning Multiple Layers of Features from Tiny Images", "venue": "University of Toronto,", "year": 2012 }, { "authors": [ "B. Lakshminarayanan", "A. Pritzel", "C. Blundell" ], "title": "Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "W.J. Maddox", "P. Izmailov", "T. Garipov", "D.P. Vetrov", "A.G. Wilson" ], "title": "A simple baseline for bayesian uncertainty in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "J. Martens", "R. Grosse" ], "title": "Optimizing neural networks with kronecker-factored approximate curvature", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "M.P. Naeini", "G. Cooper", "M. Hauskrecht" ], "title": "Obtaining well calibrated probabilities using bayesian binning", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Y. Ovadia", "E. Fertig", "J. Ren", "Z. Nado", "D. Sculley", "S. Nowozin", "J.V. Dillon", "B. Lakshminarayanan", "J. Snoek" ], "title": "Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift, 2019", "venue": null, "year": 2019 }, { "authors": [ "J. Rissanen" ], "title": "Stochastic Complexity in Statistical Inquiry Theory", "venue": "World Scientific Publishing Co., Inc.,", "year": 1989 }, { "authors": [ "J. Rissanen", "T. Roos" ], "title": "Conditional NML universal models", "venue": "Information Theory and Applications Workshop,", "year": 2007 }, { "authors": [ "J.J. Rissanen" ], "title": "Fisher information and stochastic complexity", "venue": "IEEE Transactions on Information Theory,", "year": 1996 }, { "authors": [ "H. Ritter", "A. Botev", "D. Barber" ], "title": "A scalable laplace approximation for neural networks", "venue": "In 6th International Conference on Learning Representations, ICLR 2018-Conference Track Proceedings,", "year": 2018 }, { "authors": [ "T. Roos", "T. Silander", "P. Kontkanen", "P. Myllymaki" ], "title": "Bayesian network structure learning using factorized NML universal models", "venue": "Information Theory and Applications Workshop,", "year": 2008 }, { "authors": [ "Y. Shtarkov" ], "title": "Universal sequential coding of single messages", "venue": "Problems of Information Transmission,", "year": 1987 }, { "authors": [ "K. Simonyan", "A. Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "N. Srivastava", "G. Hinton", "A. Krizhevsky", "I. Sutskever", "R. Salakhutdinov" ], "title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "V.G. Vovk" ], "title": "Aggregating Strategies", "venue": "In Proceedings of the Third Annual Workshop on Computational Learning Theory,", "year": 1990 }, { "authors": [ "Blundell" ], "title": "2015)) would not require this step. 
To select α for each model class, we swept over values [0.25, 0.5, 1, 1.5, 2] and selected the highest value such that accuracy and NLL on the validation set did not degrade significantly compared to SWA. For VGG16, we use α = 0.5 and for WideResNet28x10, we used α = 1.5", "venue": null, "year": 2015 }, { "authors": [ "Maddox" ], "title": "For the reliability diagrams", "venue": null, "year": 2019 }, { "authors": [ "Maddox" ], "title": "B FURTHER EXPERIMENTAL RESULTS AND COMPARISONS ON CIFAR10 In addition to the comparisons in the main paper, we additionally compare to SWA-Gaussian (SWAG), which uses a more expressive posterior than SWAG-D, and SWA with Monte Carlo Dropout (Gal and Ghahramani, 2015) (SWA-Drop)", "venue": "SWA-Temp and SGD", "year": 2019 }, { "authors": [ "E regularizer" ], "title": "DETAILS OF ANALYSIS IN SECTION 3.2 E.1 BOUNDING ERROR IN PARAMETER ESTIMATION Here we state the primary theorem of Giordano et al. (2019) along with the necessary definitions and assumptions", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Current machine learning methods provide unprecedented accuracy across a range of domains, from computer vision to natural language processing. However, in many high-stakes applications, such as medical diagnosis or autonomous driving, rare mistakes can be extremely costly, and thus effective deployment of learned models requires not only high expected accuracy, but also a way to measure the certainty in a model’s predictions in order to assess risk and allow the model to abstain from making decisions when there is low confidence in the prediction. While deep networks offer excellent prediction accuracy, they generally do not provide the means to accurately quantify their uncertainty. This is especially true on out-of-distribution inputs, where deep networks tend to make overconfident incorrect predictions (Ovadia et al., 2019). In this paper, we tackle the problem of obtaining reliable uncertainty estimates under distribution shift.\nMost prior work approaches the problem of uncertainty estimation from the standpoint of Bayesian inference. By treating parameters as random variables with some prior distribution, Bayesian inference can compute posterior distributions that capture a notion of epistemic uncertainty and allow us to quantitatively reason about uncertainty in model predictions. However, computing accurate posterior distributions becomes intractable as we use very complex models like deep neural nets, and current approaches require highly approximate inference methods that fall short of the promise of full Bayesian modeling in practice.\nBayesian methods also have a deep connection with the minimum description length (MDL) principle, a formalization of Occam’s razor that recasts learning as performing efficient lossless data compression and has been widely used as a motivation for model selection techniques. Codes corresponding to maximum-a-posteriori estimators and Bayes marginal likelihoods have been commonly used within the MDL framework. However, other coding schemes have been proposed in MDL centered around achieving different notions of minimax optimality. Interpreting coding schemes as predictive distributions, such methods can directly inspire prediction strategies that give conservative predictions and do not suffer from excessive overconfidence due to their minimax formulation.\nOne such predictive distribution is the conditional normalized maximum likelihood (CNML) (Grünwald, 2007; Rissanen and Roos, 2007; Roos et al., 2008) model, also known as sequential NML or predictive NML (Fogel and Feder, 2018b). To make a prediction on a new input, CNML considers\nevery possible label and tries to find the model that best explains that label for the query point together with the training set. It then uses that corresponding model to assign probabilities for each input and normalizes to obtain a valid probability distribution. Intuitively, instead of relying on a learned model to extrapolate from the training set to the new (potentially out-of-distribution) input, CNML can obtain more reasonable predictive distributions by asking “given the training data, which labels would make sense for this input?”\nWhile CNML provides compelling minimax regret guarantees, practical instantiations have been exceptionally difficult, because computing predictions for a test point requires retraining the model on the test point concatenated with the entire training set. 
With large models like deep neural networks, this can potentially require hours of training for every prediction.\nIn this paper, we proposed amortized CNML (ACNML), a tractable and practical algorithm for approximating CNML utilizing approximate Bayesian inference. ACNML avoids the need to optimize over large datasets during inference by using an approximate posterior in place of the training set. We demonstrate that our proposed approach is substantially more feasible and computationally efficient than prior techniques for using CNML predictions with deep neural networks and compares favorably to a number of prior techniques for uncertainty estimation on out-of-distribution inputs." }, { "heading": "2 MINIMUM DESCRIPTION LENGTH: BACKGROUND AND PRELIMINARIES", "text": "ACNML is motivated from the minimum description length (MDL) principle, which can be used to derive a connection between optimal codes and prediction. We begin with a review of the MDL principle and discuss the challenges in implementing minimax codes that motivate our method. For more comprehensive treatments of MDL, we refer the readers to (Grünwald, 2007; Rissanen, 1989).\nMinimum description length. The MDL principle states that any regularities in a dataset can be exploited to compress it, and hence learning is reformulated as losslessly transmitting the data with the fewest number of bits (Rissanen, 1989; Grünwald, 2007). Simplicity is thus formalized as the length of the resulting description. MDL was originally formulated in a generative setting where the goal is to code arbitrary data, and we will present a brief overview in this setting. We can translate the results to a supervised learning setting, which corresponds to transmitting the labels after assuming either a fixed coding scheme for the inputs or that the inputs are known beforehand. While MDL is typically described in terms of code lengths, in general, we can associate codes with probability distributions, with the code length of an object corresponding to the negative log-likelihood under that probability distribution (Cover and Thomas, 2006).\nNormalized Maximum Likelihood. Let θ̂(x1:n) denote the maximum likelihood estimator for a sequence of data x1:n over all θ ∈ Θ. For any x1:n ∈ Xn and distribution q over Xn, we can define a regret relative to the model class Θ as\nR(q,Θ, x1:n) def = log pθ̂(x1:n)(x1:n)− log q(x1:n). (1)\nThis regret corresponds to the excess number of bits q uses to encode x1:n compared to the best distribution in Θ, denoted θ̂(x1:n). We can then define the normalized maximum likelihood distribution (NML) with respect to Θ as\npNML(x1:n) = pθ̂(x1:n)(x1:n)∑\nx̃1:n∈Xn pθ̂(x̃1:n)(x̃1:n) (2)\nwhen the denominator is finite. The NML distribution can be shown to achieve minimax regret (Shtarkov, 1987; Rissanen, 1996)\npNML = argmin q max x1:n∈Xn R(q,Θ, x1:n). (3)\nThis corresponds, in a sense, to an optimal coding scheme for sequences of known fixed length.\nConditional NML. Instead of making predictions across entire sequences at once, we can adapt NML to the setting where we make predictions about the next data point based on the previously seen data, resulting in conditional NML (CNML) (Rissanen and Roos, 2007; Grünwald, 2007; Fogel and Feder, 2018a). While several variations on CNML exist, we consider the following:\npCNML(xn|x1:n−1) ∝ pθ̂(x1:n)(xn). 
(4)\nFor any fixed sequence x1:n−1, pCNML solves the minimax regret problem\npCNML = argmin q max xn log pθ̂(x1:n)(xn)− log q(xn), (5)\nwhere the inner maximization is only over the last data point xn.\nWe can extend this approach to the supervised classification setting, where our models represent conditional distributions pθ(y|x). The CNML distribution, given a sequence of already seen datapoints (x1:n−1, y1:n−1) and the next input xn, then takes the form\npCNML(yn|xn;x1:n−1, y1:n−1) ∝ pθ̂(y1:n|x1:n)(yn|xn), (6)\nand solves the minimax problem\npCNML = argmin q max yn log pθ̂(y1:n|x1:n)(yn|xn)− log q(yn). (7)\nWe see that this conditional distribution is amenable to our usual inductive learning procedure, where (x1:n−1, y1:n−1) is our training set, and we want to output a predictive distribution over labels yn for a new test input xn.\nCNML provides conservative predictions. For each query point, CNML considers each potential label and finds the model that would be most consistent with that label and with the training set. If that model assigns high probability to the label, then minimizing the worst-case regret forces CNML to assign relatively high probability to it. In particular, compared to simply letting a model trained only on the training set extrapolate blindly, we expect CNML to give more conservative predictions on out-of-distribution inputs, since it explicitly considers what would have happened if the new data point had been included in the training dataset with each particular label.\nWe use a 2D logistic regression example to illustrate CNML’s conservative predictions, showing a heatmap of CNML probabilities in Figure 1. CNML provides uniform predictions on most of the input space away from the training samples. In Figure 2, we illustrate how CNML arrives at these predictions, showing the predictions for the parameters θ̂0 and θ̂1, corresponding to labeling the test point (shown in pink in Figure 2, left) with either the label 0 or 1.\nHowever, CNML may be too conservative when the model class Θ is very expressive. Naïvely applying CNML with large model classes can result in the per-label models fitting their labels for the\nquery point arbitrarily well, such that CNML gives unhelpful uniform predictions even on inputs we would hope to reasonably extrapolate on. We see this in the 2D logistic regression example in Figure 1. Thus, the model class Θ would need to be restricted in some form, for example by only considering only parameters within a certain distance from the training set solution as a hard constraint.\nAnother approach for controlling the expressivity of the model class is to generalize CNML to use regularized estimators instead of maximum likelihood, resulting in normalized maximum a posteriori (NMAP) (Kakade et al., 2006) codes. Instead of using maximum likelihood parameters, NMAP selects θ̂s to be the parameter that maximizes both data likelihood and a regularization term, or prior, over parameters, and we can define slightly altered notions of regret using these MAP estimators in all the previous equations to get a conditional normalized maximum a posteriori distribution instead. See Appendix D for completeness.\nGoing back to the logistic regression example, we plot heatmaps of CNMAP predictions in Figure 3, adding different amounts of L2 regularization to the logistic regression weights. As we add more regularization, the model class becomes effectively less expressive, and the CNMAP predictions become less conservative.\nComputational Costs of CNML. 
A major practical issue with actually utilizing CNML or CNMAP with neural networks is the prohibitive computational costs of computing the maximum likelihood estimators for each new input and label combination. To evaluate the distribution on a new test point, one must solve a nonconvex optimization problem for each possible label, with each problem involving the entire training dataset along with the new test point. This direct evaluation of CNML therefore becomes computationally infeasible with large datasets and high-capacity models, and further requires that the model carry around the entire training set even when it is deployed. In settings where critical decisions must be made in real time, even running a single epoch of additional training would be infeasible. For this reason, NML-based methods have not gained much traction as a practical tool for improving the predictive performance of high-capacity models." }, { "heading": "3 AMORTIZED CNML", "text": "In this section, we derive our method, amortized conditional normalized maximum likelihood (ACNML). ACNML provides a tractable approximation for CNML and CNMAP via approximate Bayesian inference. Instead of directly computing maximum likelihood parameters over the query point and training set, our method uses an approximate posterior distribution over parameters to capture the necessary information about the training set, and thus reduces the maximization to only the single new point. The computational cost at test-time therefore does not increase with training set size. We specialize our notation to the supervised learning setting, where our aim is to obtain a predictive distribution p(yn|xn) after observing a training set (x1:n−1, y1:n−1) and a test input xn." }, { "heading": "3.1 ALGORITHM DERIVATION", "text": "Incorporating an exact posterior into CNML. Given a prior distribution p(θ), the Bayesian posterior likelihood conditioned on the training data is given by\np(θ|x1:n−1, y1:n−1) ∝ p(θ)pθ(y1:n−1|x1:n−1). (8) We can write the MAP estimators in the CNMAP distribution for a fixed query input xn as\nθ̂y = argmax θ∈Θ log pθ(y|xn) + log pθ(y1:n−1|x1:n−1) + log p(θ)︸ ︷︷ ︸ log p(θ|x1:n−1,y1:n−1)\n(9)\nAlgorithm 1 Amortized CNML (ACNML) Input: Model class Θ, Training Data (x1:n−1, y1:n−1), Test Point: xn, Classes (1, . . . , k) Output: Predictive distribution p(y|xn) Training: Run approximate inference algorithm on training data (x1:n−1, y1:n−1) to get posterior density q(θ) for all possible labels i ∈ (1, . . . , k) do\nCompute θ̂i = argmaxθ log pθ(i|xn) + log q(θ) end for Return p(y|xn) =\npθ̂y (y|xn)∑k i=1 pθ̂i (i|xn)\nWe can thus replace the training data log-likelihood pθ(y1:n−1|x1:n−1) with the Bayesian posterior density log p(θ|x1:n−1, y1:n−1) when computing θ̂y . We can also recover CNML as a special case of CNMAP by using a uniform prior, but as discussed previously, CNML with highly expressive model classes can lead to overly conservative predictions, so we will opt to use non-uniform priors that help control model complexity instead. For example, with deep neural networks, we may elect to use a zero-mean Gaussian prior p(θ) on the network weights, corresponding to L2 regularization.\nACNML with an approximate posterior. Of course, the exact Bayesian likelihood is no easier to compute than the original training log likelihood. However, we can derive a tractable approximation by replacing the exact posterior p(θ|x1:n−1, y1:n−1) with an approximate posterior q(θ) instead. 
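To make the amortized inference step concrete, the following is a minimal sketch for a linear softmax model with a diagonal Gaussian posterior q(θ) = N(μ, σ²), the case focused on below; the covariance-preconditioned ascent mirrors Equation 19 of Appendix A with the temperature α set to 1, and all names are illustrative rather than taken from the reference implementation.

```python
# A minimal sketch (illustrative names, not the reference code) of the ACNML
# inference step in Algorithm 1 for a linear softmax model, with a diagonal
# Gaussian posterior q(theta) = N(mu, var).
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def acnml_predict(mu, var, x, n_steps=5, lr=0.5):
    """mu, var: posterior mean/variance of a (k, d) weight matrix; x: (d,) input."""
    k = mu.shape[0]
    unnormalized = np.zeros(k)
    for label in range(k):
        theta = mu.copy()  # initialize at the posterior mean (the MAP solution)
        for _ in range(n_steps):
            p = softmax(theta @ x)
            grad_loglik = np.outer(np.eye(k)[label] - p, x)  # grad of log p(label|x; theta)
            grad_logq = -(theta - mu) / var                  # grad of log q(theta)
            theta = theta + lr * var * (grad_loglik + grad_logq)  # preconditioned step (Eq. 19)
        unnormalized[label] = softmax(theta @ x)[label]
    return unnormalized / unnormalized.sum()
```

Only the posterior parameters (μ, σ²) are needed at test time; the inner loop never touches the training set, so the per-query cost is independent of the dataset size and scales linearly with the number of classes.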
We can obtain an approximate posteriors via standard approximate Bayesian techniques such as variational inference or Laplace approximations. We focus on Gaussian posterior approximations for computational efficiency, and discuss in Section 3.2 why this class of distributions provides a reasonable approximation.\nFor practical purposes, we expect the approximate posterior log-likelihood to ensure the optimal θ̂y selected for each label retains good performance on the training set. By replacing the likelihood over the training data with the probability under an approximate posterior, it becomes unnecessary to retain the training data at test time, only the parameters of the approximate distribution. Optimization also becomes much simpler, as it no longer requires stochastic gradients, and the Gaussian posterior log density log q(θ) can serve as a strong convex regularizer.\nACNML algorithm summary A summary of the ACNML algorithm is presented in Algorithm 1. The training process for obtaining q(θ) only needs to be performed once on the training set, whereas the inference step is performed for each test point. However, this inference step only requires optimizing the model parameters on a single data point, with the regularizer provided by log q(θ)." }, { "heading": "3.2 ANALYSIS OF GAUSSIAN APPROXIMATIONS IN ACNML", "text": "In this section, we argue that using a Gaussian approximate posteriors in ACNML, which correspond to second-order approximations to the training set log-likelihood, suffice for accurately computing the CNML distributions when the training set is large. The intuition is that for large training sets, the combined likelihoods of all the training points dominate over the single new test point, so the perturbed MLEs θ̂y remains close to the original training set MLE θ̂, letting us rely on local approximations to the training loss.\nUnder some simplifying assumptions, we can formalize this argument using the concept of influence functions, which measure how maximum likelihood parameters (and more general M -estimators) for a dataset would change if the dataset were perturbed by reweighting inputs an infinitesimal amount.\nWe recall that maximum likelihood estimators for a dataset with n datapoints (x1:n, y1:n) is given by\nθ̂ = argmax θ\n1\nn n∑ i=1 log pθ(yi|xi). (10)\nInfluence functions analyze how θ̂ relates to the MLE of a perturbed dataset\nθ̂x,y, = argmax θ\n( log pθ(y|x) + 1\nn n∑ i=1 log pθ(yi|xi)\n) , (11)\nwhere θ̂x,y, is the new MLE if we perturb the training set by adding a datapoint (x, y) with a weight . A classical result (Cook and Weisberg, 1982) shows that θ̂x,y, is differentiable (under appropriate regularity conditions) with respect to with derivative given by the influence function\ndθ̂x,y, d | =0= −H−1θ̂ ∇θ log pθ̂(y|x), (12)\nwhere θ̂ is the MLE for the original dataset and Hθ̂ the Hessian of the mean training set log-likelihood evaluated at θ̂. CNML computes the MLE after adding datapoint (x, y) with equal weight to points in the training set, which is precisely given by θ̂x,y, evaluated at = 1/n. Thus, for sufficiently large n, a first order Taylor expansion around θ̂ should be accurate and the new parameter can be estimated by\nθ̃x,y = θ̂ − 1\nn H−1 θ̂ ∇θ log pθ̂(y|x), (13)\nwhich is equivalent to solving\nθ̃x,y = argmax θ\n1 n (θ − θ̂)T∇θ log pθ̂(y|x) + 1 2 (θ − θ̂)THθ̂(θ − θ̂). 
(14)\nThis suggests that, with large training datasets, the perturbed MLE parameters θ̂y in Equation 9 can be approximated accurately using a quadratic approximation to the training log-likelihood, corresponding to a Gaussian posterior obtained via a Laplace approximation. We can explicitly quantify the accuracy of this approximation in the theorem below, which is based on Theorem 1 from Giordano et al. (2019), with full details and proof in Appendix E. Theorem 3.1. (Adapted from Giordano et al. (2019)) Consider a training set with n datapoints and an additional datapoint (x, y). Assume assumptions 1-5 hold with constants Cop, CIJ,∆δ as defined in Appendix E. Let θ̂x,y denote the exact MLE if we had appended (x, y) to the training set, and θ̃x,y the parameter obtained via the approximation in Equation 13.\nLet\nδ = 1\nn+ 2 max{sup θ∈Θ ‖∇θ log pθ(y|x)‖1 , sup θ∈Θ ww∇2θ log pθ(y|x)ww1}. (15) If δ ≤ ∆δ , then wwwθ̂x,y − θ̃x,ywww\n2 ≤ 2C2opCIJδ2, (16)\nGiven a bound on how accurately we estimate the new parameters for CNML, we can also explicitly quantify the accuracy of the resulting normalized distributions, with proof in Appendix E. Proposition 3.2. Suppose y ∈ Y with |Y| = k (classification with k classes). Let θx,y be the exact MLE after appending the datapoint (x, y) to the training set, and let θ̃x,y be an approximate MLE\nwith wwwθ̂x,y − θ̃x,ywww ≤ δ for each y. Further suppose log pθ(y|x) is L-Lipschitz in θ.\nDenote the exact CNML distribution for the fixed input x to be pCNML(y) ∝ pθ̂x,y (y|x) and an approximate CNML distribution pACNML(y) ∝ pθ̃x,y (y|x). We then have\nsup y |log pCNML(y)− log pACNML(y)| ≤ 2Lδ. (17)\nTheorem 3.1 and Proposition 3.2 together suggest that the approximation produced by ACNML will be increasingly close to the exact CNML distribution as the training set size n grows. However, this formal theoretical result only holds for sufficiently large datasets and under strong simplifying assumptions including smoothness and strong convexity of the training loss, so does not necessarily hold in practical settings with deep neural networks.\nIn the context of interpreting how different data points influence the predictions of neural networks, Koh and Liang showed that influence function approximations were able to provide useful predictions for estimating leave-one-out retraining with deep convolutional neural networks. This closely resembles the conditions we encounter when computing parameters for each label of the query point with ACNML, with the key difference being that ACNML adds a datapoint while leave-one-out retraining removes one. This suggests second-order approximations to the training loss, corresponding to Gaussian approximations in ACNML, may suffice to yield useful predictions about how parameters change when the query point is added, despite lacking formal guarantees with deep neural networks." }, { "heading": "4 RELATED WORK", "text": "Minimum description length has been used to motivate neural net methods dating back to Hinton and van Camp (1993), who treat description length as a regularizer to mitigate overfitting. 
The idea of preferring flat minima (Hochreiter and Schmidhuber, 1997) also has its origins in the MDL framework, as it allows a coarser discretization of the weights (and thus fewer bits needed).\nBayesian methods typically serve as the starting point for uncertainty estimation in deep networks, and a commonly used approach is to use simple tractable distributions to approximate the true posterior (Hoffman et al., 2013; Blundell et al., 2015; Ritter et al., 2018). Recent work (Maddox et al., 2019; Dusenberry et al., 2020) has shown fairly simple posterior approximations are able to achieve well-calibrated predictions with marginalization. Our method builds on top of these approximate posterior methods, but in contrast to the Bayesian methods, where the posterior is typically used to efficiently sample models for Bayesian model averaging, our method uses the posterior density to enable efficient optimization for computing the CNML, without needing to retain the training data.\nOvadia et al. (2019) evaluate various proposed methods for uncertainty estimates in deep learning under different types of distribution shift. They found that good calibration on in-distribution points did not necessarily indicate good calibration under distribution shift, and that methods relying on marginalizing predictions over multiple models (Lakshminarayanan et al., 2016; Srivastava et al., 2014) gave better uncertainty estimates under distribution shift than other techniques. We show that our method ACNML maintains much better calibration under distribution shift than prior methods.\nPerhaps most closely related to our work, Fogel and Feder (2018b) advocate for the use of the CNML distribution in the context of supervised learning (under the name predictive NML), citing its minimax properties. Bibas et al. (2019b) estimate the CNML distribution with deep networks by finetuning the last layers of the network on every test input and label combination appended to the training set. Since this finetuning procedure trains for several epochs, it is very computationally intensive at test-time and requires continued access to the entire training set when evaluating. In contrast, our method amortizes this procedure by condensing the information in the training data into a distribution over parameters, allowing for much faster test-time inference without needing the training data.\nIn the analysis for our approximation, we use influence functions (Cook and Weisberg, 1982), which have been studied as asymptotic approximations to how M -estimators change when perturbing a dataset. In deep learning, Koh and Liang advocated for using influence functions to interpret neural nets, generate adversarial examples, and diagnose errors in datasets. We use a theorem from Giordano et al. (2019), which broadened the necessary assumptions for these infinitisemal approximations to be accurate and provides explicit guarantees for fixed datasets rather than asymptotic results." }, { "heading": "5 EXPERIMENTS", "text": "To instantiate ACNML, we must select a method for obtaining the approximate posterior. In principle, any technique for computing a tractable posterior over parameters can be used, and we demonstrate this flexibility by implementing ACNML on top of Stochastic Weight Averaging - Gaussian (SWAG) (Maddox et al., 2019), KFAC-Laplace (Ritter et al., 2018), and Bayes-by-backprop (Blundell et al., 2015). SWAG computes a posterior by fitting a Gaussian distribution to the trajectory of SGD iterates. 
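As a point of reference, the following is a minimal sketch of this moment tracking; it fits a diagonal Gaussian to weight snapshots collected along the SGD trajectory, with illustrative names (the released SWAG code additionally maintains a low-rank covariance term and handles details such as batch-norm statistics).

```python
# A minimal sketch (illustrative, not the released SWAG code) of fitting a
# diagonal Gaussian posterior to SGD iterates: keep running first and second
# moments of the flattened weights and set var = E[w^2] - E[w]^2.
import numpy as np

class DiagonalSWAG:
    def __init__(self, n_params):
        self.mean = np.zeros(n_params)
        self.sq_mean = np.zeros(n_params)
        self.n = 0

    def collect(self, weights):
        """Call with the current flattened weight vector, e.g. once per epoch."""
        self.n += 1
        self.mean += (weights - self.mean) / self.n
        self.sq_mean += (weights ** 2 - self.sq_mean) / self.n

    def posterior(self, eps=1e-8):
        var = np.clip(self.sq_mean - self.mean ** 2, eps, None)
        return self.mean, var  # q(theta) = N(mean, diag(var))
```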
For simplicity and computational efficiency, we instantiate ACNML with the SWAG-D variant, which uses a Gaussian posterior with only a diagonal covariance. KFAC-Laplace uses a Gaussian posterior approximation with the MAP solution as the mean and the inverse Hessian of the negative log likelihood as covariance, approximating the Hessian using KFAC (Martens and Grosse, 2015) to allow for tractable inversion and storage. Bayes-by-backprop (Blundell et al., 2015) uses the reparameterization trick to learn a diagonal Gaussian posterior via the variational lower bound.\nFor each model, we report results across 3 seeds. We compare negative log likelihood (NLL), accuracy, and expected calibration error (ECE) (Naeini et al., 2015) as well as showing reliability diagrams (Guo et al., 2017) to further assess calibration. For reliability diagrams, we sort data points by confidence and divide them into twenty equal sized buckets, plotting the mean accuracy against the mean confidence for each bucket. This allows to see qualitatively see how well the confidence of the prediction relates to the actual accuracy, as well as showing how the confidences are distributed for each method.\nMNIST. We start with a simple illustrative task based on the MNIST dataset, where we construct out-ofdistribution inputs by randomly rotating the images in the MNIST test set. Here, ACNML is implemented on top of Bayes-by-backprop (Blundell et al.,\n2015), and we compare to the MAP estimate and the marginal over models obtained from the same Bayes-by-backprop posterior. The results in Table 1 show that all methods perform well on the in-distribution MNIST test set, though ACNML likelihoods are somewhat worse due to the more conservative CNML distribution. On OOD rotated digits, we see that ACNML exhibits substantial improvements in calibration as measured by the ECE metric, as well as slightly better NLL value. In general, this agrees with what we expect from ACNML: the predictions are more conservative across the board, which does not necessarily improve results in-distribution, particularly for easy domains like MNIST, but offer considerable improvements in calibration for out-of-distribution inputs where errors are prevalent. We additionally compared to a much more computationally expensive instantiation of CNML used by Bibas et al. (2019a) (denoted naive CNML in Table 1), which directly finetunes for several epochs using the training set to obtain the optimal parameters for each query point and label, rather than using the approximate posterior like ACNML does. This direct instantiation of CNML performs the best in terms of accuracy and NLL on the in-distribution test set, while also improving over the MAP solution in terms of NLL and calibration on the OOD inputs. However, we find that ACNML is overall more conservative when using this particular posterior approximation, resulting in better NLL and calibration on the OOD inputs (see Appendix C for more detailed comparisons between ACNML and naive CNML).\nCIFAR and Corruptions We evaluate all methods using the VGG16 (Simonyan and Zisserman, 2014) network architecture. Focusing on the most direct comparisons, we compare against the MAP solution for the given posterior, which is equivalent to Stochastic Weight Averaging (SWA) (Izmailov et al., 2018), and Bayes model averaging with SWAGD and KFAC-Laplace, which provide an apples-to-apples comparison to the two versions of our method that directly utilize the posteriors from these prior approaches. 
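For concreteness, the following is a minimal sketch (with illustrative names) of the equal-mass-bin ECE computation described above; it matches the formula given as Equation 20 in Appendix A, where P(i), oi, and ei are the bin mass, bin accuracy, and bin confidence. The same per-bin (confidence, accuracy) pairs are what the reliability diagrams plot.

```python
# A minimal sketch (illustrative names) of expected calibration error with
# twenty equal-mass bins: sort by confidence, split into equal-sized bins,
# and compute ECE = sum_i P(i) * |o_i - e_i| (Eq. 20 in Appendix A).
import numpy as np

def expected_calibration_error(probs, labels, n_bins=20):
    """probs: (N, k) predictive distributions; labels: (N,) true labels."""
    confidences = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    order = np.argsort(confidences)
    bins = np.array_split(order, n_bins)  # equal-mass bins over sorted confidences
    ece = 0.0
    for idx in bins:
        p_bin = len(idx) / len(labels)   # P(i): fraction of points in bin i
        acc = correct[idx].mean()        # o_i: accuracy within bin i
        conf = confidences[idx].mean()   # e_i: mean confidence within bin i
        ece += p_bin * abs(acc - conf)
    return ece
```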
We use CIFAR10 (Krizhevsky, 2012) for training and in-distribution testing. Following (Ovadia et al., 2019), we evaluate predictive uncertainty in out-of-distribution settings using the CIFAR10-Corrupted (Hendrycks and Dietterich, 2019) datasets, which apply different severities of 15 common corruptions to the test set images. With this, we can assess performance over a wide range of distribution shifts, as well as how performance degrades as shifts become more extreme. We include additional comparisons across other methods and architectures in Appendix B.\nExamining the reliability diagrams in Figure 4, we see that ACNML provides more conservative (less confident) predictions than other methods, to the point of being underconfident on the in-distribution CIFAR10 test set, while other methods tend toward being overconfident. On out-of-distribution datasets, where accuracy degrades, we see that ACNML’s conservative predictions lead to many better calibrated low-confidence predictions, while other methods drastically overestimate confidence.\nAll methods perform similarly in terms of accuracy in all domains, and we find that ACNML’s more conservative estimates perform competitively with Bayesian methods in NLL and calibration on in-distribution datasets, with all evaluated methods performing reasonably well in-distribution (see Table 3 in Appendix B). However, differences in calibration are much more pronounced for the OOD results in Figure 5. We see that as the corruption strength increases, ACNML variants provide much better calibration while performing similarly to or slightly better than other methods in terms of NLL.\nTiming Comparison vs. standard CNML: In Table 2, we examine the computational costs of our method. We compare against a naïve implementation of CNML that fine-tunes forN epochs on each test point and label, similarly to the method proposed\nby Bibas et al. (2019b). In total, predicting a single input with k possible labels involves running kN epochs of training. While ACNML is over two orders of magnitude faster than naïve CNML even with just a single epoch of training (our experiments with naive CNML on MNIST used 5 epochs), it is still slower than standard inference. The computational requirements of our method scale linearly with the number of classes, but are constant with respect to dataset size. It is also not easily amenable to data batching, as new copies of the model parameters are needed for each data point. Timing experiments are run using a single NVIDIA 1080Ti, using MNIST for the MNIST MLP timing reselts and using CIFAR10 for VGG16 and WideResNet28x10, with no parallelization over data points." }, { "heading": "6 DISCUSSION", "text": "In this paper, we present amortized CNML (ACNML) as an alternative to Bayesian marginalization for obtaining uncertainty estimates and calibrated predictions with high-capacity models, such as deep neural networks. The CNML distribution is a theoretically well-motivated strategy derived from the MDL principle with strong minimax optimality properties, but actually evaluating this distribution is computationally daunting. ACNML utilizes approximate Bayesian posteriors to tractably approximate it, and can be instantiated on top of a wide range of approximate Bayesian methods. We view ACNML as a step towards practical uncertainty aware predictions that would be essential for real-world decision making. 
Future work could further improve on our proposed method, for example by combining ACNML with more complex and expressive posterior approximations. In particular, training losses are highly non-convex and have many local minima, so incorporating local approximations around multiple diverse minima could allow for even more reliable uncertainty estimation. More broadly, tractable algorithms inspired by ACNML could in the future provide for substantial improvement in our ability to produce accurate and reliable confidence estimates on out-of-distribution inputs, improving the reliability and safety of learning-enabled systems." }, { "heading": "P. D. Grünwald. The Minimum Description Length Principle (Adaptive Computation and Machine", "text": "Learning). The MIT Press, 2007. ISBN 0262072815.\nC. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1321–1330, 2017.\nD. Hendrycks and T. Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261, 2019.\nG. E. Hinton and D. van Camp. Keeping neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pages 5–13, 1993.\nS. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.\nM. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.\nP. Izmailov, D. Podoprikhin, T. Garipov, D. P. Vetrov, and A. G. Wilson. Averaging Weights Leads to Wider Optima and Better Generalization. In UAI, 2018.\nS. M. Kakade, M. W. Seeger, and D. P. Foster. Worst-case bounds for Gaussian process models. In Advances in neural information processing systems, pages 619–626, 2006.\nP. W. Koh and P. Liang. Understanding black-box predictions via influence functions.\nA. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. University of Toronto, 6 2012.\nB. Lakshminarayanan, A. Pritzel, and C. Blundell. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. Advances in Neural Information Processing Systems, 2016.\nW. J. Maddox, P. Izmailov, T. Garipov, D. P. Vetrov, and A. G. Wilson. A simple baseline for bayesian uncertainty in deep learning. In Advances in Neural Information Processing Systems, 2019.\nJ. Martens and R. Grosse. Optimizing neural networks with kronecker-factored approximate curvature. In International conference on machine learning, pages 2408–2417, 2015.\nM. P. Naeini, G. Cooper, and M. Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.\nY. Ovadia, E. Fertig, J. Ren, Z. Nado, D. Sculley, S. Nowozin, J. V. Dillon, B. Lakshminarayanan, and J. Snoek. Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift, 2019.\nJ. Rissanen. Stochastic Complexity in Statistical Inquiry Theory. World Scientific Publishing Co., Inc., USA, 1989. ISBN 9971508591.\nJ. Rissanen and T. Roos. Conditional NML universal models. In 2007 Information Theory and Applications Workshop, pages 337–341, 2007.\nJ. J. Rissanen. Fisher information and stochastic complexity. IEEE Transactions on Information Theory, 42(1):40–47, 1996. ISSN 00189448. doi: 10.1109/18.481776.\nH. Ritter, A. Botev, and D. Barber. 
A scalable laplace approximation for neural networks. In 6th International Conference on Learning Representations, ICLR 2018-Conference Track Proceedings, volume 6. International Conference on Representation Learning, 2018.\nT. Roos, T. Silander, P. Kontkanen, and P. Myllymaki. Bayesian network structure learning using factorized NML universal models. In 2008 Information Theory and Applications Workshop, pages 272–276, 2008.\nY. Shtarkov. Universal sequential coding of single messages. Problems of Information Transmission, 23(3):186, 1987.\nK. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.\nN. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 2014.\nV. G. Vovk. Aggregating Strategies. In Proceedings of the Third Annual Workshop on Computational Learning Theory, 1990." }, { "heading": "A EXPERIMENTAL DETAILS", "text": "For obtaining approximate posteriors with SWAG and KFAC-Laplace, we follow the exact training procedures given in Maddox et al. (2019). We then implement ACNML on top of the diagonal SWAG posterior and the KFAC-Laplace posterior.\nThe variance of the SWAG posterior depends in a complex way on the learning rate and gradient covariances. To account for this, we introduce an additional temperature hyperparameter α and solve for the ACNML approximation using\nθ∗ = argmax θ∈Θ\nlog pθ(yn|xn) + 1\nα log q(θ). (18)\nTo calibrate α, we can calculate the CNML distribution using a validation set, by training on the entire training set and the validation point, and then selecting α such that our ACNML procedure produces similar likelihoods. We can also treat α as a tunable hyperparameter and select it using a validation set, similarly to how temperature scaling (Guo et al., 2017) is used to achieve better calibration for prediction, or how the relative weighting of priors and likelihoods are used in generalized Bayesian inference (Vovk, 1990) or safe Bayesian inference (Grünwald et al., 2017) as a way to deal with model misspecification. For our experiments using the SWAGD posterior, we heuristically tune α to be as large as possible without degrading the accuracy compared to the MAP solution. Note, however, that this procedure is specific to the particular way in which SWAG estimates the parameter distribution, and any posterior inference procedure that explicitly approximates the posterior likelihood (e.g., Blundell et al. (2015)) would not require this step. To select α for each model class, we swept over values [0.25, 0.5, 1, 1.5, 2] and selected the highest value such that accuracy and NLL on the validation set did not degrade significantly compared to SWA. For VGG16, we use α = 0.5 and for WideResNet28x10, we used α = 1.5.\nWith our posterior q(θ) being a Gaussian with covariance Σ, we approximately compute the MAP solution for each label y as per Algorithm 1 by initializing θ0 to be the posterior mean and iterating\nθt+1 = θt + tΣ(α∇ log pθt(y|xn) +∇ log q(θt)), (19)\nusing the covariance as a preconditioner. For our experiments, we run 5 steps of gradient ascent on this objective, with a constant step size = 0.5. We empirically find that 5 steps was often enough to find an approximate stationary point with the SWAG-D posterior, and 10 steps for the KFAC-Laplace posterior.\nFor the reliability diagrams in Figure 4, we again follow the procedure used by Maddox et al. 
(2019). We first divide the points into twenty bins uniformly based on confidence (each bin has the same number of points), then plot the mean accuracy vs mean confidence within each bin. This differs from the reliability diagrams used by Guo et al. (2017), where they divide the range of confidence values into bins uniformly, resulting in unevenly filled bins.\nFor our expected calibration error (ECE) numbers, we use the same bins as computed for our reliability diagrams, and compute\nECE = K∑ i=1 P (i) · |oi − ei| , (20)\nwhere P (i) is the empirical probability a randomly chosen point lies in bin i, oi is the accuracy within bin i, and ei is the average confidence in bin i.\nWe adapted the SWAG authors’ implementation at https://github.com/wjmaddox/swa_gaussian to include the ACNML procedure for test time evaluation, and include a copy of the modified codebase in the supplementary materials with instructions on how to reproduce our experiments. We additionally include pretrained models that were used for our experiments. Experiments were conducted using a mix of local GPU servers and Google Cloud Program compute resources.\nFor the MNIST experiments, we used a feedforward network with 2 hidden layers of size 1200, with no data augmentation. The posterior is factored as independent Gaussians for each parameter, with the prior for each parameter being a zero-mean Gaussian with standard deviation 0.1." }, { "heading": "B FURTHER EXPERIMENTAL RESULTS AND COMPARISONS ON CIFAR10", "text": "In addition to the comparisons in the main paper, we additionally compare to SWA-Gaussian (SWAG), which uses a more expressive posterior than SWAG-D, and SWA with Monte Carlo Dropout (Gal and Ghahramani, 2015) (SWA-Drop). For reference, we show in-distribution performance of all methods in Table 3. Overall, performance differences between all methods are quite small, and ACNML’s conservative predictions do not improve on NLL or ECE over some baselines on in-distribution performance, which is to be expected, since the main aim of our method is produce more calibrated predictions on out-of-distribution tasks.\nFor completeness, we show expanded results on CIFAR10-Corrupted in Figures 6, 7, and 8. With the same architecture, all methods generally have very similar accuracy. ACNML consistently achieves significantly better ECE on the more severe corruptions, and generally comparable or slightly better NLL.\nWhile evaluating MC-Dropout, we found that adding dropout before each layer in VGG16 (labelled VGG16Drop in 7) significantly improved performance on CIFAR10-C. For fair comparisons, we reran all methods with the VGG16Drop architecture as well." }, { "heading": "C COMPARISONS BETWEEN ACNML AND NAIVE CNML ON MNIST", "text": "In this section, we include expanded comparisons between ACNML and a naive implementation of CNML from Bibas et al. (2019b) that computes the MLE/MAP θ̂y for each label by appending the query point and label to the dataset and finetuning for N epochs. Both ACNML and naive CNML are initialized from the same MAP solution, with ACNML taking 5 gradient steps on the query point and posterior and naive CNML finetuning with the query point and training set for 5 epochs.\nThis naive implementation differs slightly from Bibas et al. (2019b) in that we finetune the entire network, while Bibas et al. (2019b) proposed only tuning the last few layers. 
During the finetuning, we also append the query point and label to every batch in optimization, and downweighting that portion of the loss accordingly to get unbiased gradient estimates. We found this led to more efficient optimization than randomly sampling\nWe first examine how closely ACNML and naive CNML’s predictions match on the same datapoint. To assess this, we compare the CNML normalization terms ∑ y pθ̂y (y|x), NLLs, and the confidences of the two methods. The CNML normalization term captures how much each procedure was able to adapt to different labels for that input. A higher normalization term for an input means that we were flexible enough to fit multiple different labels well together with the training set (or approximate posterior in the case of ACNML), and typically means a less confident prediction on that input.\nIn Figures 9 and 10, We show scatter plots over 1000 randomly selected test points (from the indistribution test set and the rotated OOD images respectively) comparing the CNML normalizers, NLLs, and confidences of ACNML and naive CNML. In each scatter plot, we include a diagonal red line to illustrate where points would lie if predictions of ACNML and naive CNML matched exactly.\nWe additionally plot reliability diagrams for MNIST experiments in Figure 11.\nFor the in-distribution test set, we see from the CNML normalizer plot that the ACNML adaptation procedure using the approximate posterior is much less constraining than using the training set, resulting in the normalizers being higher for ACNML than naive CNML for almost all inputs. This leads to excess conservatism, with ACNML almost always having lower confidence its predictions. As a result, we see that on many points where naive CNML outputted confident correct answers and achieved close to 0 NLL loss, ACNML still incurs some higher losses due to its less confident predictions.\nOn the OOD rotated images, we again see that ACNML typically adapts more than CNML as measured by the CNML normalizers, though the difference is much less extreme compared to the in-distribution dataset. In the confidence scatter plot, we again see that ACNML tends to make lower confidence predictions than naive CNML (especially when naive CNML’s predictions are confident), and as seen in Table 1 and Figure 11, result in ACNML having better NLL and calibration on the OOD inputs.\nHandling multiple MLEs in CNML: Strictly speaking, the CNML distribution is not well defined when there exist multiple potential MLEs θ̂y that can output different predictions (prior references to CNML typically assume such MLEs are unique). However, the non-convexity of the objective for deep neural networks means multiple MLEs can exist, and to properly define CNML in this case, we would need to select a particular MLE to use when assigning probabilities in CNML. In line with the min-max formulation of CNML, we propose to select the MLE θ̂y that maximizes the likelihood pθ̂y (y|x) of the query point and proposed label, as this is the choice that maximizes the regret for that particular label over all MLEs.\nWith our naive CNML instantiation, we observe that during the finetuning for each query point x and label y, the predicted probability of that label pθ(y|x) does not monotonically increase over iterations as we might hope (since we initialize θ to be the MLE of the training set, then update it to maximize likelihood of the training set with the query point and label), but can potentially oscillate substantially throughout the finetuning process. 
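One way to implement the downweighted batch-append described above is sketched below (illustrative PyTorch-style code, not the exact implementation used for the experiments): every minibatch loss includes the query point and proposed label, scaled so that in expectation the gradient is proportional to that of the training set with the query point appended once.

```python
# A minimal sketch (illustrative, not the exact code used in the experiments)
# of one finetuning step with the query point appended to every minibatch.
# The query term is downweighted by 1/n so that the expected gradient is
# proportional to the gradient of sum_i nll_i + nll_query, i.e. the training
# set with the query point counted once.
import torch
import torch.nn.functional as F

def finetune_step(model, optimizer, batch_x, batch_y, x_query, y_proposed, n_train):
    optimizer.zero_grad()
    batch_loss = F.cross_entropy(model(batch_x), batch_y)  # mean NLL over the minibatch
    query_loss = F.cross_entropy(model(x_query.unsqueeze(0)),
                                 torch.tensor([y_proposed]))
    loss = batch_loss + query_loss / n_train
    loss.backward()
    optimizer.step()
    return loss.item()
```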
We suspect this is due to the stochasticity in the optimization procedure from to sampling minibatches of the training data causing the trajectory of parameters can potentially visit several different (approximate) local optima that output different predictions on the query point. While our instantiation of naive CNML simply used the parameter found at the end of 5 epochs, we additionally compare against a variant that explicitly tries to select the MLE that maximizes the likelihood of the proposed label. This variant heuristically uses the bset value of pθ(y|x) over all θ encountered in the last epoch of finetuning. We see in Table 4 that this variant,\ndenoted naive CNML (max over itrs), gives more conservative predictions than naive CNML and improves in NLL and calibration on the OOD dataset." }, { "heading": "D NMAP AND ACNML", "text": "NML type methods can be extended with a prior-like regularization term on the selected parameter, resulting in Normalized Maximum a Posteriori (NMAP)(Kakade et al., 2006), also referred to as Luckiness NML (Grunwald, 2004). For a regularizer given by log p(θ), NMAP assigns probabilities according to\npNMAP(xn) ∝ pθ̂(xn)(x n) θ̂(xn) = argmax\nθ log pθ(x\nn) + log p(θ).\nSimilarly to CNML, there are several variations on NMAP or LNML that predict slightly different distributions, but we adopt the one of the same form as our CNML. Similarly to how NML was extended to CNML, NMAP can be extended to a conditional version, again with the θ̂’s being chosen via MAP rather than MLE. As mentioned in Section 3.1, with a non-uniform prior, ACNML actually approximates a version of conditional NMAP, with the Bayesian prior term on the parameters corresponding to the additional regularizer.\nWe also note that with the calculations in section 3.1, we see that CNML can be viewed as performing NMAP on the new test point, with a regularizer corresponding to the likelihoods on the training data. In this perspective, ACNML approximates CNML by using an approximation to that training loss regularizer." }, { "heading": "E DETAILS OF ANALYSIS IN SECTION 3.2", "text": "E.1 BOUNDING ERROR IN PARAMETER ESTIMATION\nHere we state the primary theorem of Giordano et al. (2019) along with the necessary definitions and assumptions.\nHere, we attempt to estimate an unknown parameter θ ∈ Ωθ ⊆ RD where Ωθ is compact. Suppose we have a dataset N datapoints and a weight vector w1, . . . , wN . Let gi(θ) denote the gradient of the loss at datapoint i evaluated at θ, and hi(θ) the Hessian. We can then define\nG(θ, w) = 1\nN N∑ i=1 wigi(θ) (21)\nH(θ, w) = 1\nN N∑ i=1 wihi(θ). (22)\nThe MLE θ̂(w) for the dataset weighted by w is given by solving for G(θ̂(w), w) = 0. Let 1w denote the vector of weights consisting of all 1s. We define θ̂1 to be the MLE for the whole unweighted dataset, which is equivalent to evaluating θ̂(1w) and also define the corresponding Hessian H1 = H(θ̂1, 1w). We now wish to estimate θ̂(w) using a first order approximation around θ̂1 given by\nθ̂IJ(w) = θ̂1 −H−11 G(θ̂1,∆w), (23)\nwhere we define ∆w = w − 1w. The theorem will proceed to bound wwwθ̂(w)− θ̂IJwww\n2 for suitable\nweights w.\nNow we further define g(θ) ∈ RN×D to be the concatenation of all gi(θ)s and similarly for h(θ) ∈ RN×D×D. We let ‖g(θ)‖p and ‖h(θ)‖p to refer to the p-norms when treating those as vector quantities.\nAssumption 1 (Smoothness): For all θ ∈ Ωθ each gn(θ) is continuously differentiable. 
Assumption 2 (Non-degeneracy): For all θ ∈ Ωθ, H(θ, 1w) is nonsingular and\nsup θ∈Ωθ wwH(θ, 1w)−1wwop ≤ Cop ≤ ∞. (24) Assumption 3 (Bounded averages): There exist finite constants Cg and Ch such that supθ∈Ωθ 1√ N ‖g(θ)‖2 ≤ Cg and supθ∈Ωθ 1√ N ‖h(θ)‖2 ≤ Ch. Assumption 4 (Local Smoothness): There exists a ∆θ > 0 and a finite constant Lh such thatwwwθ − θ̂1www 2 ≤ ∆θ implies ‖h(θ)−h(θ̂1)‖ 2√ N ≤ Lh wwwθ − θ̂1www 2 .\nAssumption 5 (Bounded weight averages). 1√ N ‖w‖2 is uniformly bounded for all w ∈ W by a finite constant Cw.\nWe note that assumption 2 is equivalent to H1 being strongly positive definite. Assumption 5 is not relevant for our use cases, but is stated for completeness.\nCondition 1 (Set Complexity): There exists a δ ≥ 0 and corresponding set Wδ ⊆W such that\nmax w∈Wδ sup θ∈Ωθ wwwww 1N N∑ i=1 (wi − 1)gi(θ) wwwww 1 ≤ δ. (25)\nmax w∈Wδ sup θ∈Ωθ wwwww 1N N∑ i=1 (wi − 1)hi(θ) wwwww 1 ≤ δ. (26)\nCondition 1 essentially describes the set of weight vectors for which θ̂IJ will be an accurate approximation within order δ.\nDefinition 1: Given assumptions 1-5, define\nCIJ = 1 +DCwLhCop (27)\n∆δ = min{∆θC−1op , 1 n C−1IJ C −1 op }. (28)\nWe now state the main theorem of Giordano et al. (2019).\nTheorem (Error Bound for the approximation). Under assumptions 1-5 and condition 1,\nδ ≤ ∆δ ⇒ max w∈Wδ wwwθ̂IJ(w)− θ̂(w)www 2 ≤ 2C2opCIJδ2. (29)\nWe can now apply the above theorem to provide error bounds for a setting where we have a training set of n datapoints and wish to consider the MLE after adding a new datapoint z. The issue is that the theorem as stated bounds the error of the approximation when the approximation is centered around the uniform weighting over all the datapoints, which would be appropriate for considering the impact of removing datapoints from the dataset.\nTo apply the theorem to bound the effects of adding a datapoint, we have to do some slight manipulation. We apply the previous theorem with N = n+ 2, where gi(θ) correspond to the gradients of training data point i for i in (1, . . . , n), gn+1 = −∇ log pθ(z), and gn+2 = ∇ log pθ(z), and similarly for the Hessians hi(θ). We have thus added the query point to the dataset, as well as another fake point that serves to cancel out the contribution of the query point under a uniform weighting, so G(θ, 1w) and H(θ, 1w) are the mean gradients and Hessians for just the training set. Now supposing\nassumptions 1-5 are met for this problem, then we need to check condition 1 for the particular Wδ that contains the vector w̄ of all 1s, except for a 2 in the last entry. We can then find the smallest δ that satisfies\nsup θ∈Ωθ wwww 1N + 2gn+2(θ) wwww 1 ≤ δ (30)\nsup θ∈Ωθ wwww 1N + 2hn+2(θ) wwww 1 ≤ δ, (31)\nand so long as δ ≤ ∆δ , applying the theorem bounds wwwθ̂IJ(w̄)− θ̂(w̄)www\n2 .\nCommentary: The above theorem gives explicit conditions for the accuracy of the approximation that we can verify for a particular training set and query point. Under assumptions that we have some limiting procedure for growing the training set such that the constants defined hold uniformly, we can extend this to an asymptotic statement to explicitly say that the approximation error decays as O(n−2).\nE.2 BOUNDING ERROR IN THE RESULTING CNML DISTRIBUTION\nWe now provide the proof for Proposition 3.2, which we restate here. For notational simplicity, we ignore any dependence on the input x, which we consider fixed. Proposition E.1 (3.2). Suppose z ∈ Z with |Z| = k (for example classification with k classes). 
Let θ̂z be the exact MLE after appending z to the training set, and let θ̃z be an approximate MLE withwwwθ̂z − θ̃zwww ≤ δ for all z. Further suppose log pθ(z) is L-Lipschitz in θ. Denote the exact CNML distribution pCNML(z) ∝ pθ̂z (z) and an approximate CNML distribution pACNML(z) ∝ pθ̃z (z). Then, we have the bound\nsup z |log pCNML(z)− log pACNML(z)| ≤ 2Lδ. (32)\nProof. The assumed bound wwwθ̂z − θ̃zwww\n2 ≤ δ combined with L-Lipschitzness implies a bound on\ndifferences of logits of each class ∣∣∣log pθ̂z (z)− log pθ̂z (z)∣∣∣ ≤ Lδ. (33) We note that the log probabilities of the exact CNML distribution pCNML (pACNML is given by a similar expression using θ̃z instead of θ̂z) is given by\nlog pCNML(z) = log pθ̂z (z)− log ∑ z′∈Z pθ̂z′ (z′). (34)\nFor any z ∈ Z , we can then expand, apply the triangle inequality and then Equation 33 to obtain\n|log pCNML(z)− log pACNML(z)| = ∣∣∣∣∣log pθ̂z (z)− log pθ̃z (z)− log ∑ z′∈Z pθ̂z′ (z′) + log ∑ z′∈Z pθ̃z′ (z ′) ∣∣∣∣∣ (35)\n≤ ∣∣∣log pθ̂z (z)− log pθ̃z (z)∣∣∣+ ∣∣∣∣∣log ∑ z′∈Z pθ̂z′ (z′)− log ∑ z′∈Z pθ̃z′ (z ′) ∣∣∣∣∣ (36)\n≤ Lδ + ∣∣∣∣∣log ∑ z′∈Z pθ̂z′ (z′)− log ∑ z′∈Z pθ̃z′ (z ′) ∣∣∣∣∣ . (37) We now bound the difference between the log-normalizers ∣∣∣log∑z′ pθ̂z′ (z′)− log∑z′ pθ̃z′ (z′)∣∣∣.\nWe first let pmin(z) = min{pθ̂z (z), pθ̃z (z)} and pmax(z) = max{pθ̂z (z), pθ̃z (z)}, and note that Equation 33 implies log pmax(z) ≤ log pmin(z) + Lδ for all z. We then bound the difference in log-normalizers∣∣∣∣∣log ∑\nz′∈Z pθ̂z′ (z′)− log ∑ z′∈Z pθ̃z′ (z ′) ∣∣∣∣∣ ≤ log ∑ z′∈Z pmax(z ′)− log ∑ z′∈Z pmin(z ′) (38)\n= log\n∑ z′∈Z pmax(z\n′)∑ z′∈Z pmin(z ′) (39)\n= log\n∑ z′∈Z exp(log pmax(z\n′))∑ z′∈Z pmin(z ′) (40)\n≤ log ∑ z′∈Z exp(log pmin(z\n′) + Lδ)∑ z′∈Z pmin(z ′) (41)\n= log exp(Lδ)\n∑ z′∈Z pmin(z\n′)∑ z′∈Z pmin(z ′) (42)\n= Lδ. (43)\nPlugging back into Equation 37, we have the following bound for all z ∈ Z\n|log pCNML(z)− log pACNML(z)| ≤ 2Lδ. (44)" } ]
2020
AMORTIZED CONDITIONAL NORMALIZED MAXIMUM LIKELIHOOD
SP:81573408426e479610a9d751ebed97dc74f63fb1
[ "Learning disentangled representation is often considered an important step to achieve human-like generalization. This paper studies how the degree of disentanglement affects various forms of generalization. Variational autoencoders (VAEs) is trained with different levels of disentanglement on an unsupervised task by excluding combinations of generative factors during training. At test time the models are used to reconstruct the missing combinations in order to measure generalization performance. The paper shows that the models support only weak combinatorial generalization. The paper also tests the models in a more complex task which explicitly required independent generative factors to be controlled. The paper concludes that learning disentanglement representation is not sufficient for supporting more difficult forms of generalization." ]
Combinatorial generalisation — the ability to understand and produce novel combinations of familiar elements — is a core capacity of human intelligence that current AI systems struggle with. Recently, it has been suggested that learning disentangled representations may help address this problem. It is claimed that such representations should be able to capture the compositional structure of the world which can then be combined to support combinatorial generalisation. In this study, we systematically tested how the degree of disentanglement affects various forms of generalisation, including two forms of combinatorial generalisation that varied in difficulty. We trained three classes of variational autoencoders (VAEs) on two datasets on an unsupervised task by excluding combinations of generative factors during training. At test time we ask the models to reconstruct the missing combinations in order to measure generalisation performance. Irrespective of the degree of disentanglement, we found that the models supported only weak combinatorial generalisation. We obtained the same outcome when we directly input perfectly disentangled representations as the latents, and when we tested a model on a more complex task that explicitly required independent generative factors to be controlled. While learning disentangled representations does improve interpretability and sample efficiency in some downstream tasks, our results suggest that they are not sufficient for supporting more difficult forms of generalisation.
[ { "affiliations": [], "name": "Milton L. Montero" }, { "affiliations": [], "name": "Casimir J.H. Ludwig" }, { "affiliations": [], "name": "Rui Ponte Costa" }, { "affiliations": [], "name": "Gaurav Malhotra" }, { "affiliations": [], "name": "Jeffrey S. Bowers" } ]
[ { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation Learning: A Review and New Perspectives", "venue": "[cs],", "year": 2014 }, { "authors": [ "Jeffrey S. Bowers", "Ivan I. Vankov", "Markus F. Damian", "Colin J. Davis" ], "title": "Why do some neurons in cortex respond to information in a selective manner? Insights from artificial neural networks. Cognition, 148:47–63", "venue": "doi: 10.1016/j.cognition.2015.12.009. URL http://www.sciencedirect.com/science/article/pii/S0010027715301232", "year": 2016 }, { "authors": [ "Christopher P. Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in $\\beta$-VAE", "venue": "URL http://arxiv.org/abs/1804.03599", "year": 2018 }, { "authors": [ "Rahma Chaabouni", "Eugene Kharitonov", "Diane Bouchacourt", "Emmanuel Dupoux", "Marco Baroni" ], "title": "Compositionality and generalization in emergent languages", "venue": "arXiv preprint arXiv:2004.09124,", "year": 2020 }, { "authors": [ "Sunny Duan", "Loic Matthey", "Andre Saraiva", "Nicholas Watters", "Christopher P. Burgess", "Alexander Lerchner", "Irina Higgins" ], "title": "Unsupervised Model Selection for Variational Disentangled Representation Learning. arXiv:1905.12614 [cs, stat", "venue": "URL http://arxiv.org/ abs/1905.12614", "year": 2020 }, { "authors": [ "Cian Eastwood", "Christopher KI Williams" ], "title": "A framework for the quantitative evaluation of disentangled representations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "S.M. Ali Eslami", "Danilo Jimenez Rezende", "Frederic Besse", "Fabio Viola", "Ari S. Morcos", "Marta Garnelo", "Avraham Ruderman", "Andrei A. Rusu", "Ivo Danihelka", "Karol Gregor", "David P. Reichert", "Lars Buesing", "Theophane Weber", "Oriol Vinyals", "Dan Rosenbaum", "Neil Rabinowitz", "Helen King", "Chloe Hillier", "Matt Botvinick", "Daan Wierstra", "Koray Kavukcuoglu", "Demis Hassabis" ], "title": "Neural scene representation and rendering", "venue": null, "year": 2018 }, { "authors": [ "Babak Esmaeili", "Hao Wu", "Sarthak Jain", "Alican Bozkurt", "N. Siddharth", "Brooks Paige", "Dana H. Brooks", "Jennifer Dy", "Jan-Willem Meent" ], "title": "Structured Disentangled Representations", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Jerry Fodor", "Brian P. McLaughlin" ], "title": "Connectionism and the problem of systematicity: Why Smolensky’s solution doesn’t", "venue": "work. Cognition,", "year": 1990 }, { "authors": [ "Jerry A. Fodor", "Zenon W" ], "title": "Pylyshyn. Connectionism and cognitive architecture: A critical analysis", "venue": null, "year": 1988 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "β-VAE: Learning basic visual concepts with a constrained variational framework", "venue": null, "year": 2017 }, { "authors": [ "Irina Higgins", "Arka Pal", "Andrei A. Rusu", "Loic Matthey", "Christopher P. Burgess", "Alexander Pritzel", "Matthew Botvinick", "Charles Blundell", "Alexander Lerchner" ], "title": "DARLA: Improving Zero-Shot Transfer in Reinforcement Learning", "venue": "[cs, stat], June 2018b. URL http: //arxiv.org/abs/1707.08475", "year": 2018 }, { "authors": [ "Irina Higgins", "Nicolas Sonnerat", "Loic Matthey", "Arka Pal", "Christopher P. 
Burgess", "Matko Bosnjak", "Murray Shanahan", "Matthew Botvinick", "Demis Hassabis", "Alexander Lerchner" ], "title": "SCAN: Learning Hierarchical Compositional Visual Concepts", "venue": "[cs, stat], June 2018c. URL http://arxiv.org/abs/1707.03389", "year": 2018 }, { "authors": [ "Felix Hill", "Andrew Lampinen", "Rosalia Schneider", "Stephen Clark", "Matthew Botvinick", "James L. McClelland", "Adam Santoro" ], "title": "Environmental drivers of systematicity and generalization in a situated agent", "venue": "[cs],", "year": 2020 }, { "authors": [ "Matthew D Hoffman", "Matthew J Johnson" ], "title": "Elbo surgery: yet another way to carve up the variational evidence lower bound", "venue": "In Workshop in Advances in Approximate Bayesian Inference, NIPS,", "year": 2016 }, { "authors": [ "J.E. Hummel" ], "title": "Localism as a first step toward symbolic representation", "venue": "Behavioral and Brain Sciences,", "year": 2000 }, { "authors": [ "John E Hummel", "Irving Biederman" ], "title": "Dynamic binding in a neural network for shape recognition", "venue": "Psychological review,", "year": 1992 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by Factorising", "venue": "[cs, stat],", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "[cs, stat],", "year": 2013 }, { "authors": [ "Klaus Greff", "Aaron Klein", "Martin Chovanec", "Frank Hutter", "Jürgen Schmidhuber" ], "title": "The Sacred Infrastructure for Computational Research", "venue": "Proceedings of the 16th Python in Science Conference,", "year": 2017 }, { "authors": [ "Brenden Lake", "Marco Baroni" ], "title": "Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew K Lampinen", "James L McClelland" ], "title": "Transforming task representations to allow deep learning models to perform novel tasks", "venue": "arXiv preprint arXiv:2005.04318,", "year": 2020 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Rätsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations", "venue": null, "year": 2019 }, { "authors": [ "Gary Marcus" ], "title": "Deep learning: A critical appraisal", "venue": "arXiv preprint arXiv:1801.00631,", "year": 2018 }, { "authors": [ "Emile Mathieu", "Tom Rainforth", "N. Siddharth", "Yee Whye Teh" ], "title": "Disentangling Disentanglement in Variational Autoencoders", "venue": "URL http://arxiv. org/abs/1812.02833", "year": 2019 }, { "authors": [ "Loic Matthey", "Irina Higgins", "Demis Hassabis", "Alexander Lerchner" ], "title": "dSprites: Disentanglement testing Sprites dataset. 2017", "venue": "URL https://github.com/deepmind/dsprites-dataset/", "year": 2017 }, { "authors": [ "James L McClelland", "David E Rumelhart", "PDP Research Group", "others" ], "title": "Parallel distributed processing", "venue": "Explorations in the Microstructure of Cognition,", "year": 1986 }, { "authors": [ "Jeff Mitchell", "Jeffrey S. Bowers" ], "title": "Harnessing the Symmetry of Convolutions for Systematic Generalisation", "venue": "In 2020 International Joint Conference on Neural Networks (IJCNN),", "year": 2020 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. 
Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic Backpropagation and Approximate Inference in Deep Generative Models. arXiv:1401.4082 [cs, stat", "venue": "URL http://arxiv.org/abs/1401.4082", "year": 2014 }, { "authors": [ "Adam Santoro", "David Raposo", "David G Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter Battaglia", "Timothy Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Paul Smolensky" ], "title": "The constituent structure of connectionist mental states: A reply to Fodor and Pylyshyn", "venue": "Southern Journal of Philosophy,", "year": 1987 }, { "authors": [ "Paul Smolensky" ], "title": "Connectionism, constituency, and the language of thought", "venue": "University of Colorado at Boulder,", "year": 1988 }, { "authors": [ "S. Desroziers J. Kriss V. Fomin", "J. Anmol", "A. Tejani" ], "title": "High-level library to help with training neural networks in pytorch", "venue": "https://github.com/pytorch/ignite,", "year": 2020 }, { "authors": [ "Sjoerd van Steenkiste", "Jürgen Schmidhuber", "Francesco Locatello", "Olivier Bachem" ], "title": "Are Disentangled Representations Helpful for Abstract Visual Reasoning", "venue": null, "year": 2019 }, { "authors": [ "Ivan I. Vankov", "Jeffrey S. Bowers" ], "title": "Training neural networks to encode symbols enables combinatorial generalization", "venue": "Philosophical Transactions of the Royal Society B: Biological Sciences,", "year": 2020 }, { "authors": [ "Nicholas Watters", "Loic Matthey", "Christopher P. Burgess", "Alexander Lerchner" ], "title": "Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs", "venue": "URL http://arxiv.org/abs/1901.07017", "year": 2019 }, { "authors": [ "Shengjia Zhao", "Hongyu Ren", "Arianna Yuan", "Jiaming Song", "Noah Goodman", "Stefano Ermon" ], "title": "Bias and Generalization in Deep Generative Models: An Empirical Study", "venue": "URL http://arxiv.org/abs/1811.03259", "year": 2018 }, { "authors": [ "Tanh non-linearity" ], "title": "The second architecture is the one found in Burgess et al", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Generalisation to unseen data has been a key challenge for neural networks since the early days of connectionism, with considerable debate about whether these models can emulate the kinds of behaviours that are present in humans (McClelland et al., 1986; Fodor & Pylyshyn, 1988; Smolensky, 1987; 1988; Fodor & McLaughlin, 1990). While the modern successes of Deep Learning do indeed point to impressive gains in this regard, human level generalisation still remains elusive (Lake & Baroni, 2018; Marcus, 2018). One explanation for this is that humans encode stimuli in a compositional manner, with a small set of independent and more primitive features (e.g., separate representations of size, position, line orientation, etc.) being used to build more complex representation (e.g., a square of a given size and position). The meaning of the more complex representation comes from the meaning of it’s parts. Critically, compositional representations afford the ability to recombine primitives in novel ways: if a person has learnt to recognize squares and circles in context where all squares are blue and all circles are red, they can nevertheless also recognise red squares, even though they have never seen these in the training data. This ability to perform combinatorial generalisation based on compositional representations is thought to be a hallmark of human level intelligence (Fodor & Pylyshyn, 1988) (See McClelland et al. (1986) for a diverging opinion).\nRecently it has been proposed that generalisation in neural networks can be improved by extracting disentangled representations (Higgins et al., 2017) from data using (variational) generative models (Kingma & Welling, 2013; Rezende et al., 2014). In this view, disentangled representations capture the compositional structure of the world (Higgins et al., 2018a; Duan et al., 2020), separating the generative factors present in the stimuli into separate components of the internal representation (Higgins et al., 2017; Burgess et al., 2018). It has been argued that these representations allow\ndownstream models to perform better due to the structured nature of the representations (Higgins et al., 2017; 2018b) and to share information across related tasks (Bengio et al., 2014). Here we are interested in the question of whether networks can support combinatorial generalisation and extrapolation by exploiting these disentangled representations.\nIn this study we systematically tested whether and how disentangled representations support three forms of generalisation: two forms of combinatorial generalisation that varied in difficulty as well as extrapolation, as detailed below. We explored this issue by assessing how well models could render images when we varied (1) the image datasets (dSprites and 3DShape), (2) the models used to reconstruct these images (β-VAEs and FactorVAEs with different disentanglement pressures, and decoder models in which we dropped the encoders and directly input perfectly disentangled latents), and (3) the tasks that varied in their combinatorial requirements (image reconstruction vs. image transformation). Across all conditions we found that models only supported the simplest versions of combinatorial generalisation and the degree of disentanglement had no impact on the degree of generalisation. 
These findings suggest that models with entangled and disentangled representations are both generalising on the basis of overall similarity of the trained and test images (interpolation), and that combinatorial generalisation requires more than learning disentangled representations." }, { "heading": "1.1 PREVIOUS WORK", "text": "Recent work on learning disentangled representations in unsupervised generative models has indeed shown some promise in improving the performance of downstream tasks (Higgins et al., 2018b; van Steenkiste et al., 2019), but this benefit is mainly related to sample efficiency rather than generalisation. Indeed, we are only aware of two studies that have considered the importance of learned disentanglement for combinatorial generalisation, and they have used different network architectures and reached opposite conclusions. Bowers et al. (2016) showed that a recurrent model of short-term memory tested on lists of words that required some degree of combinatorial generalisation (recalling a sequence of words when one or more of the words at test were novel) only succeeded when it had learned highly selective (disentangled) representations (\"grandmother cell\" units for letters). By contrast, Chaabouni et al. (2020) found that models with disentangled representations do not confer significant improvements in generalisation over entangled ones in a language modeling setting, with both entangled and disentangled representations supporting combinatorial generalisation as long as the training set was rich enough. At the same time, they found that languages generated through compositional representations were easier to learn, suggesting this as a pressure to learn disentangled representations.\nA number of recent papers have reported that VAEs can support some degree of combinatorial generalisation, but there is no clear understanding of whether and how disentangled representations played any role in supporting this performance. Esmaeili et al. (2019) showed that a model trained on the MNIST dataset could reconstruct images even when some particular combinations of factors were removed during training, such as a thick number 7 or a narrow 0. The authors also showed that the model had learned disentangled representations and concluded that the disentangled representations played a role in the successful performance. However, the authors did not vary the degree of disentanglement in their models and, accordingly, it is possible that a VAE that learned entangled representations would do just as well. Similarly, Higgins et al. (2018c) have highlighted how VAEs that learn disentangled representations can support some forms of combinatorial generalisation when generating images from text. For example, their model could render a room with white walls, pink floor and blue ceiling even though it was never shown that combination in the training set. This is an impressive form of combinatorial generalisation but, as we show below, truly compositional representations should be able to support several other forms of combinatorial generalisation that were not tested in this study. Moreover, it is not clear what role disentanglement played in this successful instance of generalisation. Finally, Zhao et al. (2018) assessed VAE performance on a range of combinatorial generalisation tasks that varied in difficulty, and found that the model performed well in the simplest settings but struggled in more difficult ones.
But again, they did not consider whether learning disentangled representations was relevant to generalisation performance.\nAnother work closely related to ours is Locatello et al. (2019), who examine how hard it is to learn disentangled representations and their relation to sampling efficiency for downstream tasks. We are interested in a related, but different question: even if a model learns a disentangled representation in an intermediate layer, does this enable models to achieve combinatorial generalisation? So while Locatello et al. (2019) train their models on complete datasets to investigate the degree of disentanglement and sampling efficiency, we systematically exclude generative factors from training in order to test for combinatorial generalisation (see Methods and Results)." }, { "heading": "2 METHODS AND RESULTS", "text": "We assessed combinatorial generalisation on two different datasets. The dSprites image dataset (Matthey et al., 2017) contains 2D images in black and white that vary along five generative factors: shape, scale, orientation, position-x and position-y, and focuses on manipulations of single objects. The 3D Shapes dataset (Burgess & Kim, 2018) contains 3D images in colour that vary along six generative factors: floor-hue, wall-hue, object-hue, object-shape, object-scale, object-orientation. In contrast to dSprites, the images are more realistic, which has been shown to aid reconstruction performance (Locatello et al., 2019). To test combinatorial generalisation, we systematically excluded some combinations of these generative factors from the training data and tested reconstruction on these unseen values. Test cases can be divided into three broad categories based on the number of combinations excluded from training.\n• Recombination-to-Element (red squares in Figure 1): The model has never been trained on one combination of all of the generative factors. In dSprites, an example of this case would be excluding the combination [shape=ellipse, scale=1, orientation < 120◦, position-x > 0.5, position-y > 0.5] from the training set – i.e. the model has never seen a large ellipse at < 120◦ in the bottom-right corner, though it has seen all other combinations.\n• Recombination-to-Range (green squares in Figure 1): The model has never been trained on all combinations of some of the factors (i.e. a subset of generative factors). For example, in the 3D Shapes dataset, all combinations with [object-hue=1, shape=sphere] have been left out of the training set – i.e. none of the training images contain a blue sphere. This condition is more complex than Recombination-to-Element, as an entire range of combinations [floor-hue=0...1, wall-hue=0...1, object-hue=1, shape=sphere, scale=0...1, orientation=0...1] has been left out (the factors written with a range 0...1 are those excluded across their entire range). When the number of generative factors is larger than three, “Recombination-to-Range” is, in fact, a set of conditions that vary in difficulty, depending upon how many generative factors have been excluded. Another example would be excluding all combinations where [floor-hue=1, wall-hue=1, object-hue=1, shape=1, scale=1]. Here a smaller range of combinations [floor-hue=1, wall-hue=1, object-hue=1, shape=1, scale=1, orientation=0...1] has been excluded.\n• Extrapolation (blue squares in Figure 1): This is the most challenging form of generalisation, where models are tested on values of generative factors that are beyond the range of values observed in the training dataset.
For example, in the dSprites dataset, all combinations where [position-x > 0.5] have never been seen.\nEach of these conditions is interesting for different reasons. A model that learns compositional representations should be able to combine observed values of shape (ellipses), translation (bottom-right) and rotation (0◦ to 120◦) to generalise to all unseen combinations of factors. The simplest case is the Recombination-to-Element condition, in which all combinations but one have been trained, but a model that learns entangled representations might also succeed based on its training on highly similar patterns (generalisation by interpolation). A more challenging case is the Recombination-to-Range condition, given that more combinations have been excluded, making generalisation by similarity (interpolation) more difficult. The final condition is not a form of combinatorial generalisation, as the model cannot combine observed values of generative factors to render images. Indeed, compositional representations may be inadequate for this form of generalisation." }, { "heading": "2.1 IMAGE RECONSTRUCTION WITH DSPRITES DATASET", "text": "In the dSprites dataset, for testing the Recombination-to-Element case, we split each range of values of a generative factor into three bins, so that we had 3 × 3 × 3 × 3 × 3 such combinations of bins for all five generative factors. We then removed one of these 243 combinations during training, namely the images that satisfied [shape=ellipse, position-x >= 0.6, position-y >= 0.6, 120◦ <= rotation <= 240◦, scale < 0.6]. In other words, ellipses in the bottom-right corner at those rotations, which is a relatively small number of combinations that are all very similar to each other.\nFor the Recombination-to-Range case, we tested three different variants. First, we excluded all combinations where [shape=square, position-x > 0.5]. The model sees other shapes at those positions during training and it sees squares on the left-hand side of the screen. Thus the model experiences both generative factor values independently and has to recombine them to produce a novel image at test time. In the second case, we excluded all combinations where [shape=square, scale > 0.5]. In the third case, we excluded all combinations where [shape=square, rotation > 90◦]. We observed very similar results for all three cases, and below we report the results for the first variant.\nFinally, for the Extrapolation case, we excluded all combinations of generative factors where [position-x > x]. We chose a set of different values for x: x ∈ {0.16, 0.25, 0.50, 0.75}, where x is normalised in the range [0, 1] (results shown in Figure 2 for x = 0.50). At test time the model needed to reconstruct images where translation along the x-axis was greater than the cutoff value.\nWe tested three classes of models on all three types of generalisation: standard Variational Autoencoders (VAEs; Kingma & Welling, 2013; Rezende et al., 2014), β-VAE (Higgins et al., 2017; Burgess et al., 2018) with β = 8 and β = 12, and FactorVAE (Kim & Mnih, 2019) with γ = 20, γ = 50 and γ = 100. The architectures are the ones found in Higgins et al. (2017), Burgess et al. (2018) and Kim & Mnih (2019) (details in the Appendix). We used a batch size of 64 and a learning rate of 5e-4 for the Adam optimizer (Kingma & Ba, 2017). In each case, we simulated three seeds and we report results for the runs with the largest disentanglement.\nAs shown by Locatello et al.
(2019), none of the models trained end-to-end in an unsupervised manner produces perfectly disentangled representations. Since we were interested in studying the effect of disentanglement on generalisation, we compared our results with a model where we removed the encoder and directly gave disentangled latents as inputs to the decoder. We call this model the ground-truth decoder (GT Decoder from here on). This decoder uses the same MLP architecture as the one used in Higgins et al. (2017). We tested deeper decoders with convolutions and batch norm as well, but found no benefit or a decrease in performance.\nWe measured the level of disentanglement using the framework introduced in Eastwood & Williams (2018). The procedure consists of using the latent representations generated for each image to predict the true generative factors using a regression model (in our case, Lasso regression; see Appendix A). The level of disentanglement is quantified by their ‘Overall disentanglement metric’, which we call D-score here.\n[Figure 2 caption; its opening words were lost in extraction: (a) ... Left) Recombination-to-Element condition where the models did not see [shape = ellipse, scale = 1, orientation < 120◦, position-x > 0.5, position-y > 0.5], Middle) Recombination-to-Range condition where models did not see [shape = square, position-x > 0.5], Right) Extrapolation condition where models did not see [position-x > 0.5]. (b) Visualisation of disentanglement. In each panel, columns show latent variables and rows show the generative factors. The size of the square represents the relative importance of the latent variable for predicting the generative factor. Sparse matrices indicate higher disentanglement (Eastwood & Williams, 2018). Each disentanglement matrix corresponds to the model on that row in (a) in the Recombination-to-Range condition. The visualisation of the entire set of models and all conditions is shown in Appendix B.]\nFigure 2 shows examples of model reconstructions for each of the conditions, which helps assess the reconstruction success qualitatively (more examples are shown in Appendix C). A more quantitative assessment of the models can be made by examining the negative log-likelihood of reconstructions for different conditions, plotted in Figure 3. The amount of disentanglement achieved by the models trained end-to-end varied over a broad range and was a function of model architecture and the hyperparameter (β and γ) values. In general, reconstruction accuracy was better for smaller values of β both during training and testing. This has been observed before and is a known issue encountered when increasing the value of the β parameter (Hoffman & Johnson, 2016). We found that models were able to perform the Recombination-to-Element generalisation but failed in the Recombination-to-Range and Extrapolation cases. In these cases, models either showed very poor reconstruction of the critical element or substituted one of the excluded combinations with a combination that had been observed during training (see reconstructions for test cases in Figure 2(a)). Moreover, the amount of generalisation did not depend on the degree of disentanglement. Indeed, the GT Decoder using perfectly disentangled representations was no better than the end-to-end models. Even though this model achieved a lower NLL score, examining the image reconstructions showed that it failed to reconstruct the essential combination excluded from the training data (see Appendix B).\nThe Recombination-to-Range condition shows another interesting qualitative difference between the entangled and disentangled models.
All models failed to generalise, but in different ways. Entangled models tended to put a blob in the correct location, which allowed them to minimise the loss in pixel space over a large set of test examples. In contrast, the models with a higher level of disentanglement fell back to the most similar shape (in pixel space) that they had seen at that location.\nFinally, the Recombination-to-Element condition was solved by all the models, regardless of disentanglement score. In fact, the entangled models tended to achieve better reconstructions, as evidenced by the disentangled models with β = 12, which had a hard time reconstructing ellipses at small scales and tended to just produce a circle instead.\nThe second panel in Figure 2 shows the coefficients computed by the disentanglement metric for the Recombination-to-Range condition. The size of each square denotes the relative importance of a latent (column) in predicting the corresponding generative factor (row). The higher the disentanglement, the sparser the matrices. An examination of these matrices revealed that different models achieved a large range of disentanglement, though none of the end-to-end models achieved perfect disentanglement." }, { "heading": "2.2 IMAGE RECONSTRUCTION WITH 3D SHAPES DATASET", "text": "The procedure for testing on the 3D Shapes dataset parallels that for the dSprites dataset above. The 3D Shapes dataset has six generative factors: floor-hue, wall-hue, object-hue, object-shape, object-scale, object-orientation. For the Recombination-to-Element condition, we excluded one combination from training: [floor-hue > 0.5, wall-hue > 0.5, object-hue > 0.5, object-shape=cylinder, object-scale=1, object-orientation=0]. For the Recombination-to-Range condition, we excluded all combinations where [object-hue >= 0.5 (cyan), object-shape = oblong] and trained on all other combinations. This means that the models saw several combinations where object-hue was >= 0.5 and where object-shape was oblong, but never the combination together. For the Extrapolation condition, we excluded all combinations where [floor-hue >= 0.5].\nWe trained the same set of six end-to-end models as above, as well as the GT Decoder. All end-to-end models were trained for 65 epochs (around 500000 iterations, as in the original articles), while the GT Decoder was trained for 1000 epochs. Reconstructions for the training set are shown in Appendix C and clearly show that the models were able to learn the task. The results for the test conditions are shown in Figure 3 (bottom row) and some examples of typical reconstructions are shown in Figure 4. As was the case with the dSprites dataset, we observed that the level of disentanglement varied across models, with the VAE showing a low D-score and the FactorVAE showing a high D-score. We also tested the perfectly disentangled model where a decoder learns to construct images from disentangled latents.\nAll models managed to reconstruct the held-out combination in the Recombination-to-Element condition. However, none of the models succeeded in correctly reconstructing the held-out combinations in the Recombination-to-Range or Extrapolation conditions. In both cases, we observed a large reconstruction error, either due to poor overall reconstruction (Extrapolation case) or because the critical combination, [object-hue, object-shape], was replaced with a combination observed during training. And again, we did not see any correlation between disentanglement and the extent of combinatorial generalisation. Even though the perfectly disentangled model had a lower NLL score (see
Even though the perfectly disentangled model had a lower NLL score (see\nFigure 3, bottom row), like other models it failed to reconstruct the critical [object-hue, object-shape] combination that was left out in the training data (see example images of reconstruction in Figure 4 and Appendix C)." }, { "heading": "2.3 IMAGE COMPOSITION EXPERIMENTS", "text": "The limited combinatorial generalisation in the experiments above could be because of the limitations of the task rather than the models or their internal representations. Even though the models learned disentangled representations to some extent, or were provided perfectly disentangled representations, it could be that the simple reconstruction task does not provide enough impetus for the decoder to learn how to combine these disentangled representations to enable generalisation. Therefore, in the final set of experiments we designed a variation of the standard unsupervised task that requires combining generative factors in order to solve the task using the dSprites dataset.\nThis new task is illustrated in Figure 5(a). The input consists of two images and an action. The goal of the task is to take the first (reference) image and modify it so that it matches the second (transform) image along the dimension specified by the action. This action is coded using a onehot vector. This design is based on the question answering task in Santoro et al. (2017) and the compositional one in Higgins et al. (2018c). We produced training and test sets for each condition by sampling reference-transform pairs along with an action uniformly from the generative factors. We ensured that this sampling respected the experiment restriction, so that the transformed image is not outside the current set.\nThe standard VAE is inadequate to solve this task. Therefore, we constructed a model with the architecture shown in Figure 5(b). This model first applies an encoder to both images, obtaining low-dimensional latent representations of each image. It then combines these latent representations with the action to obtain a transformed internal representation. There are several ways in which the input representations of the two images could be combined with the action. We tried three different\nmethods: (i) using a standard MLP, (ii) element-wise interpolation between the two representations, with the interpolation coefficients determined by the action, and (iii) concatenating each input representations with the actions and linearly combining the resultant vectors. We obtained qualitatively similar results with all three methods, but found that the method (iii) gave the best results, providing greatest accuracy of reconstruction as well as highest levels of disentanglement. Therefore, in the rest of the manuscript we describe the results obtained using this method. Once this transformed internal representation has been generated, it is decoded to obtain an output image. We use the same encoding and decoding modules as the ones used by Burgess et al. (2018). The results for this model are shown in Table 1. The model managed to solve the task. In doing so, it also comes to rely on representations with a high level of disentanglement (see Figure 6(b)) even though the β parameter is set to 1. Models with higher values could not solve the task altogether, presumably because the constraint is too strong. 
However, as was the case in the previous experiment, models failed to solve the more challenging generalisation tasks.\nIn Figure 5 we show some examples of the model’s behaviour and its internal representations. The model failed in similar ways as the disentangled models in the previous experiment, confusing shapes when presented with unseen combinations. Even the Recombination to element case showed some failures (like in the example shown in Figure 5(a)) though the models were, in general, successful in this condition, as can be inferred by comparing the negative log-likelihoods for the training and test trials for this condition in Table 1." }, { "heading": "3 DISCUSSION", "text": "It is frequently assumed that disentangled representations are implicitly compositional (Higgins et al., 2018a;c). This raises the question as to whether disentangled representations support combinatorial generalisation, a key feature of compositional representations (Fodor & Pylyshyn, 1988). However, we found no evidence for this. Indeed representations that varied from highly entangled to perfectly disentangled were equally successful at recombination-to-element generalisation, and both failed on recombination-to-range and extrapolation. This was the case even when we trained a VAE on an explicitly combinatorial task that led models to learn highly disentangled representations that were no better at generalisation.\nOur findings might seem to contradict previous reports showing success in combinatorial generalisation tasks. In Eslami et al. (2018), some success was reported when rendering novel 3D shapes\nwith colours that had been previously seen on other shapes. And in Higgins et al. (2018c) it was reported that using a disentangled representation allowed the model to recombine observed shapes and colours in novel ways. However, it is not clear what sorts of combinatorial generalisation was tested. For example, consider the SCAN model (Higgins et al., 2018c) that could render a room with [white suitcase, blue walls, magenta floor], even though it was never shown this combination during training (see Figure 4 in Higgins et al. (2018c)). But, unlike our training set, it is not clear what exactly was excluded while training this model, and they may have been testing generalisation in a condition similar to our Recombination-to-element condition. Our finding that generalisation was limited to the Recombination-to-element condition suggests that models are simply generalising on the basis of overall similarity (interpolation) rather than exploiting disentangled representations in order to support a more powerful form of compositional generalisation described by Fodor & Pylyshyn (1988).\nThis raises the question as to why disentangled representations are not more effective in supporting combinatorial generalisation. One possibility is that disentangled representations are necessary but not sufficient to support the principle of compositionality. On this view, a model must also include a mechanism for binding these representations in a way that maintains their independence. This point has previously been made in the context of connectionist representations by Hummel in Hummel (2000). 
Another possibility is that a model may be able to perform combinatorial generalisation without needing disentangled or indeed compositional representations if the training environment is rich enough (Chaabouni et al., 2020; Hill et al., 2020; Lampinen & McClelland, 2020).\nAn important goal for future research is to develop networks that support the more difficult forms of combinatorial generalisation and extrapolation. In fact, there is already an active range of research in this direction that includes networks with specialized modules (Santoro et al., 2017), mechanisms (Mitchell & Bowers, 2020; Hummel & Biederman, 1992), structured representations (Higgins et al., 2018c; Watters et al., 2019), or learning objectives (Vankov & Bowers, 2020) that may show greater success. It will be interesting to see how these and other approaches fare in the more difficult generalisation settings we have identified here, and the role of disentanglement in any solutions." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We would like to thank Chris Summerfield, Irina Higgins, Ben Evans and Jeff Mitchell for useful discussions and feedback during the development of this research.\nThis research was supported by an ERC Advanced Grant (Generalization in Mind and Machine, #741134)." }, { "heading": "A MODELS AND TRAINING", "text": "For our experiments on the standard unsupervised task we used two different VAE architectures. The first one is the same as the one found in Higgins et al. (2017) and uses a 2-layer MLP encoder with 1200 units and ReLU non-linearity. The decoder is a 3-layer MLP with the same number of units and Tanh non-linearity. The second architecture is the one found in Burgess et al. (2018) and consists of a 3-layer CNN with 32×4×2×1 convolutions and max pooling, followed by a 2-layer MLP with 256 units in each layer. The decoder is defined to be the transpose of this architecture. ReLU non-linearities were applied after each layer of the CNN and the MLP for both the encoder and the decoder. Both models used a Gaussian stochastic layer with 10 units, as in the original papers.\nWe also tested two variants of this last architecture, one found in Mathieu et al. (2019), which changes the shape of the convolution, and another with batch normalisation. Neither variant exhibited any improvements to disentanglement or reconstruction on the full dSprites data, so neither was included in the rest of the experiments.\nFor the image composition task we used the same architecture as in Burgess et al. (2018), described above. The latent transformation layer was parameterized as\n$h_{\mathrm{transformed}} = W_r \, \mathrm{cat}[z_r; \mathrm{action}] + W_t \, \mathrm{cat}[z_t; \mathrm{action}]$\nwhere $z_r$ and $z_t$ are the samples from the stochastic layer for the reference and transform images, and cat is the concatenation operation performed along the column dimension. The output is another 10-dimensional vector with the transformed latent code.\nAlternatively, we also tried a 3-layer MLP with 100 hidden units, but saw no benefit in performance and decreased disentanglement when trained on the full dataset.\nTraining on the unsupervised tasks ran for 100 epochs for dSprites and 65 epochs for 3D Shapes, even though models converged before the end. The learning rate was fixed at 1e-4 and the batch size at 64. The β values used were 1, 4, 8, 12, 16 on the full dSprites dataset. β = 4 and β = 16 were not included in the rest of the experiments since the former offered very little disentanglement and the latter a very large reconstruction error. For the FactorVAE we used γ = 20, 50, 100 throughout.
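To make the parameterization above concrete, here is a minimal PyTorch sketch of the latent transformation layer (our own illustration, not the authors' released code; the module name and the five-action default are assumptions):

```python
import torch
import torch.nn as nn

class LatentTransform(nn.Module):
    """h_transformed = W_r cat[z_r; action] + W_t cat[z_t; action]."""
    def __init__(self, latent_dim: int = 10, n_actions: int = 5):
        super().__init__()
        self.w_r = nn.Linear(latent_dim + n_actions, latent_dim, bias=False)
        self.w_t = nn.Linear(latent_dim + n_actions, latent_dim, bias=False)

    def forward(self, z_ref, z_trans, action_onehot):
        # Concatenate each latent code with the one-hot action, apply the
        # linear maps, and sum; the result is the transformed latent code.
        h_r = self.w_r(torch.cat([z_ref, action_onehot], dim=-1))
        h_t = self.w_t(torch.cat([z_trans, action_onehot], dim=-1))
        return h_r + h_t

layer = LatentTransform()
z_r, z_t = torch.randn(4, 10), torch.randn(4, 10)
actions = torch.eye(5)[torch.tensor([0, 2, 1, 4])]   # batch of one-hot actions
print(layer(z_r, z_t, actions).shape)                # torch.Size([4, 10])
```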
In the composition task the models were trained for 100 epochs with β = 1. Using values of β higher than 1 interfered with the model's ability to solve the task, so they were not used.\nFor the ground-truth decoders (GT Decoder) we used the same MLP decoder from Higgins et al. (2017) mentioned above. Using deeper decoders with convolutions, with or without batch norm after each layer, was also tested, but did not provide significant benefits and also decreased the performance on some of the conditions.\nAll the models were implemented in PyTorch (Paszke et al., 2019) and the experiments were performed using the Ignite and Sacred frameworks (V. Fomin & Tejani, 2020; Klaus Greff et al., 2017).\nTo measure disentanglement we used the framework proposed by Eastwood & Williams (2018) with a slight modification. The approach consists of predicting each generative factor value, given the latent representations of the training images, using a regression model. In our case we used the LassoCV regression found in the scikit-learn library (Pedregosa et al., 2011) with an α coefficient of 0.01 and 5 cross-validation partitions. Deviating from the original proposal, we do not normalize the inputs to the regression model, since we found that this tends to give a lot of weight to dead units (when measured by their KL divergence). This is likely due to the model “killing” these units during training after they start with a high KL value, which might not completely erase the information they carry about a given generative factor.\nWorking code for running these experiments and analyses can be downloaded at https://github.com/mmrl/disent-and-gen." }, { "heading": "B EXTRA PLOTS FOR DSPRITES DATASET", "text": "" }, { "heading": "C EXTRA PLOTS FOR THE 3D SHAPES DATASET", "text": "" } ]
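The disentanglement measurement just described is straightforward to reproduce. A rough sketch (ours; array shapes and the row normalisation are illustrative assumptions) using LassoCV with a fixed α = 0.01 and 5 cross-validation partitions, and, as in the text, without normalising the inputs:

```python
import numpy as np
from sklearn.linear_model import LassoCV

def importance_matrix(latents, factors):
    """latents: (N, n_latents); factors: (N, n_factors). Returns R with
    R[i, j] = relative importance of latent j for predicting factor i;
    a sparse R (one dominant entry per row) indicates high disentanglement."""
    n_factors, n_latents = factors.shape[1], latents.shape[1]
    R = np.zeros((n_factors, n_latents))
    for i in range(n_factors):
        reg = LassoCV(alphas=[0.01], cv=5).fit(latents, factors[:, i])
        R[i] = np.abs(reg.coef_)
    return R / np.maximum(R.sum(axis=1, keepdims=True), 1e-12)
```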
2021
null
SP:f7f0a0d566d1de41a8a8edfed70d363a57c671ef
[ "The research question of this paper is the existence of an extremely sparse network with an initial weight assignment that can be trained online to perform multiple tasks to compete with a dense network, in a lifelong continual learning configuration. Another research question of this paper is how to identify this sparse network and achieve competitive performance. To address these questions, the authors proposed to incrementally introduce new non-zero weights when learning incoming tasks (Figure 2 and Equation 1). The network considered by the authors has a common base for all models and a head for individual tasks." ]
The lottery ticket hypothesis states that a highly sparsified sub-network can be trained in isolation, given the appropriate weight initialization. This paper extends that hypothesis from one-shot task learning, and demonstrates for the first time that such extremely compact and independently trainable sub-networks can also be identified in the lifelong learning scenario, which we call lifelong tickets. We show that the resulting lifelong ticket can further be leveraged to improve the performance of learning over continual tasks. However, it is highly non-trivial to conduct network pruning in the lifelong setting. Two critical roadblocks arise: i) As many tasks now arrive sequentially, finding tickets in a greedy weight pruning fashion will inevitably suffer from the intrinsic bias that the earlier emerging tasks impact more; ii) As lifelong learning is consistently challenged by catastrophic forgetting, the compact network capacity of tickets might amplify the risk of forgetting. In view of those, we introduce two pruning options, i.e., top-down and bottom-up, for finding lifelong tickets. Compared to the top-down pruning that extends vanilla (iterative) pruning over sequential tasks, we show that the bottom-up one, which can dynamically shrink and (re-)expand model capacity, effectively avoids the undesirable excessive pruning in the early stage. We additionally introduce lottery teaching, which further overcomes forgetting via knowledge distillation aided by external unlabeled data. Unifying those ingredients, we demonstrate the existence of very competitive lifelong tickets, e.g., achieving 3–8% of the dense model size with even higher accuracy, compared to strong class-incremental learning baselines on CIFAR-10/CIFAR-100/Tiny-ImageNet datasets. Codes available at https://github.com/VITA-Group/Lifelong-Learning-LTH.
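The top-down option mentioned in this abstract builds on iterative magnitude pruning (IMP). For reference, here is a minimal sketch of one round of global magnitude pruning, the basic operation IMP repeats between retraining rounds (our own sketch, not the paper's code; the pruning fraction, mask representation, and bias-skipping heuristic are illustrative assumptions):

```python
import torch

def magnitude_prune(model, fraction=0.2, masks=None):
    """Globally prune `fraction` of the smallest-magnitude remaining weights."""
    params = [p for p in model.parameters() if p.dim() > 1]  # skip biases
    if masks is None:
        masks = [torch.ones_like(p) for p in params]
    # Magnitudes of still-alive weights; pruned weights score exactly zero.
    scores = torch.cat([(p.abs() * m).flatten() for p, m in zip(params, masks)])
    threshold = torch.quantile(scores[scores > 0], fraction)
    new_masks = [((p.abs() * m) > threshold).float() for p, m in zip(params, masks)]
    for p, m in zip(params, new_masks):
        p.data.mul_(m)  # zero out the newly pruned weights
    return new_masks
```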
[ { "affiliations": [], "name": "LONG LIVE" }, { "affiliations": [], "name": "LIFELONG LEARNING" }, { "affiliations": [], "name": "Tianlong Chen" }, { "affiliations": [], "name": "Zhenyu Zhang" }, { "affiliations": [], "name": "Sijia Liu" }, { "affiliations": [], "name": "Shiyu Chang" }, { "affiliations": [], "name": "Zhangyang Wang" } ]
[ { "authors": [ "Davide Abati", "Jakub Tomczak", "Tijmen Blankevoort", "Simone Calderara", "Rita Cucchiara", "Babak Ehteshami Bejnordi" ], "title": "Conditional channel gated networks for task-aware continual learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Rahaf Aljundi", "Francesca Babiloni", "Mohamed Elhoseiny", "Marcus Rohrbach", "Tinne Tuytelaars" ], "title": "Memory aware synapses: Learning what (not) to forget", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Rahaf Aljundi", "Marcus Rohrbach", "Tinne Tuytelaars" ], "title": "Selfless sequential learning", "venue": "arXiv preprint arXiv:1806.05421,", "year": 2018 }, { "authors": [ "Eden Belouadah", "Adrian Popescu" ], "title": "Il2m: Class incremental learning with dual memory", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Eden Belouadah", "Adrian Popescu" ], "title": "Scail: Classifier weights scaling for class incremental learning", "venue": "In The IEEE Winter Conference on Applications of Computer Vision,", "year": 2020 }, { "authors": [ "Francisco M Castro", "Manuel J Marı́n-Jiménez", "Nicolás Guil", "Cordelia Schmid", "Karteek Alahari" ], "title": "End-to-end incremental learning", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Arslan Chaudhry", "Marc’Aurelio Ranzato", "Marcus Rohrbach", "Mohamed Elhoseiny" ], "title": "Efficient lifelong learning with a-gem", "venue": "arXiv preprint arXiv:1812.00420,", "year": 2018 }, { "authors": [ "Tianlong Chen", "Jonathan Frankle", "Shiyu Chang", "Sijia Liu", "Yang Zhang", "Michael Carbin", "Zhangyang Wang" ], "title": "The lottery tickets hypothesis for supervised and self-supervised pre-training in computer vision models", "venue": "arXiv preprint arXiv:2012.06908,", "year": 2020 }, { "authors": [ "Tianlong Chen", "Jonathan Frankle", "Shiyu Chang", "Sijia Liu", "Yang Zhang", "Zhangyang Wang", "Michael Carbin" ], "title": "The lottery ticket hypothesis for pre-trained bert networks", "venue": "arXiv preprint arXiv:2007.12223,", "year": 2020 }, { "authors": [ "Tianlong Chen", "Yongduo Sui", "Xuxi Chen", "Aston Zhang", "Zhangyang Wang" ], "title": "A unified lottery ticket hypothesis for graph neural networks, 2021a", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Justin Cosentino", "Federico Zaiter", "Dan Pei", "Jun Zhu" ], "title": "The search for sparse, robust neural networks", "venue": "arXiv preprint arXiv:1912.02386,", "year": 2019 }, { "authors": [ "Shrey Desai", "Hongyuan Zhan", "Ahmed Aly" ], "title": "Evaluating lottery tickets under distributional shifts", "venue": "arXiv preprint arXiv:1910.12708,", "year": 2019 }, { "authors": [ "Sayna Ebrahimi", "Mohamed Elhoseiny", "Trevor Darrell", "Marcus Rohrbach" ], "title": "Uncertainty-guided continual learning with bayesian neural networks", "venue": "arXiv preprint arXiv:1906.02425,", "year": 2019 }, { "authors": [ "Mohamed Elhoseiny", "Francesca Babiloni", "Rahaf Aljundi", "Marcus Rohrbach", "Manohar Paluri", "Tinne Tuytelaars" ], "title": "Exploring the challenges towards lifelong fact learning", "venue": "In Asian Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Utku Evci", "Fabian Pedregosa", "Aidan Gomez", "Erich Elsen" ], "title": "The difficulty of training sparse neural 
networks", "venue": "arXiv preprint arXiv:1906.10732,", "year": 2019 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks. 2019", "venue": "URL https://openreview.net/forum?id=rJl-b3RcF7", "year": 2019 }, { "authors": [ "Jonathan Frankle", "Gintare Karolina Dziugaite", "Daniel M Roy", "Michael Carbin" ], "title": "The lottery ticket hypothesis at scale", "venue": "arXiv preprint arXiv:1903.01611,", "year": 2019 }, { "authors": [ "Jonathan Frankle", "David J. Schwab", "Ari S. Morcos" ], "title": "The early phase of neural network training", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Trevor Gale", "Erich Elsen", "Sara Hooker" ], "title": "The state of sparsity in deep neural networks", "venue": "arXiv preprint arXiv:1902.09574,", "year": 2019 }, { "authors": [ "Siavash Golkar", "Michael Kagan", "Kyunghyun Cho" ], "title": "Continual learning via neural pruning", "venue": "arXiv preprint arXiv:1903.04476,", "year": 2019 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Tyler L Hayes", "Christopher Kanan" ], "title": "Lifelong machine learning with deep streaming linear discriminant analysis", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2020 }, { "authors": [ "Chen He", "Ruiping Wang", "Shiguang Shan", "Xilin Chen" ], "title": "Exemplar-supported generative reproduction for class incremental learning", "venue": "In British Machine Vision Conference,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Yihui He", "Xiangyu Zhang", "Jian Sun" ], "title": "Channel pruning for accelerating very deep neural networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Ching-Yi Hung", "Cheng-Hao Tu", "Cheng-En Wu", "Chien-Hung Chen", "Yi-Ming Chan", "Chu-Song Chen" ], "title": "Compacting, picking and growing for unforgetting continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Khurram Javed", "Faisal Shafait" ], "title": "Revisiting distillation and incremental classifier learning", "venue": "In Asian Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Ronald Kemker", "Christopher Kanan" ], "title": "Fearnet: Brain-inspired model for incremental learning", "venue": "arXiv preprint arXiv:1711.10563,", "year": 2017 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka 
Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "A. Krizhevsky", "G. Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Master’s thesis,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Yann LeCun", "John S. Denker", "Sara A. Solla" ], "title": "Optimal brain damage", "venue": "Advances in Neural Information Processing Systems", "year": 1990 }, { "authors": [ "Yann LeCun", "John S Denker", "Sara A Solla" ], "title": "Optimal brain damage", "venue": "In Advances in neural information processing systems,", "year": 1990 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Xialei Liu", "Chenshen Wu", "Mikel Menta", "Luis Herranz", "Bogdan Raducanu", "Andrew D Bagdanov", "Shangling Jui", "Joost van de Weijer" ], "title": "Generative feature replay for class-incremental learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2020 }, { "authors": [ "Zhuang Liu", "Jianguo Li", "Zhiqiang Shen", "Gao Huang", "Shoumeng Yan", "Changshui Zhang" ], "title": "Learning efficient convolutional networks through network slimming", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Zhuang Liu", "Mingjie Sun", "Tinghui Zhou", "Gao Huang", "Trevor Darrell" ], "title": "Rethinking the value of network pruning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Haoyu Ma", "Tianlong Chen", "Ting-Kuei Hu", "Chenyu You", "Xiaohui Xie", "Zhangyang Wang" ], "title": "Good students play big lottery better", "venue": "arXiv preprint arXiv:2101.03255,", "year": 2021 }, { "authors": [ "Michael McCloskey", "Neal J Cohen" ], "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "venue": "In Psychology of learning and motivation,", "year": 1989 }, { "authors": [ "Rahul Mehta" ], "title": "Sparse transfer learning via winning lottery tickets", "venue": "arXiv preprint arXiv:1905.07785,", "year": 2019 }, { "authors": [ "Ari Morcos", "Haonan Yu", "Michela Paganini", "Yuandong Tian" ], "title": "One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "German I Parisi", "Ronald Kemker", "Jose L Part", "Christopher Kanan", "Stefan Wermter" ], "title": "Continual lifelong learning with neural networks: A review", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Jathushan Rajasegaran", "Munawar Hayat", "Salman H Khan", "Fahad Shahbaz Khan", "Ling Shao" ], "title": "Random path selection for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sylvestre-Alvise Rebuffi", 
"Alexander Kolesnikov", "Georg Sperl", "Christoph H Lampert" ], "title": "icarl: Incremental classifier and representation learning", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Alex Renda", "Jonathan Frankle", "Michael Carbin" ], "title": "Comparing rewinding and fine-tuning in neural network pruning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Amir Rosenfeld", "John K Tsotsos" ], "title": "Incremental learning through deep adaptation", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Pedro Savarese", "Hugo Silva", "Michael Maire" ], "title": "Winning the lottery with continuous sparsification, 2020", "venue": "URL https://openreview.net/forum?id=BJe4oxHYPB", "year": 2020 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehong Kim", "Jiwon Kim" ], "title": "Continual learning with deep generative replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ghada Sokar", "Decebal Constantin Mocanu", "Mykola Pechenizkiy" ], "title": "Spacenet: Make free space for continual learning", "venue": "arXiv preprint arXiv:2007.07617,", "year": 2020 }, { "authors": [ "Sebastian Thrun", "Tom M Mitchell" ], "title": "Lifelong robot learning", "venue": "Robotics and autonomous systems,", "year": 1995 }, { "authors": [ "Naftali Tishby", "Noga Zaslavsky" ], "title": "Deep learning and the information bottleneck principle", "venue": "In 2015 IEEE Information Theory Workshop (ITW),", "year": 2015 }, { "authors": [ "Antonio Torralba", "Rob Fergus", "William T Freeman" ], "title": "80 million tiny images: A large data set for nonparametric object and scene recognition", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 1958 }, { "authors": [ "Chaoqi Wang", "Guodong Zhang", "Roger Grosse" ], "title": "Picking winning tickets before training by preserving gradient flow", "venue": "arXiv preprint arXiv:2002.07376,", "year": 2020 }, { "authors": [ "Yu-Xiong Wang", "Deva Ramanan", "Martial Hebert" ], "title": "Growing a brain: Fine-tuning by increasing model capacity", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Haoran You", "Chaojian Li", "Pengfei Xu", "Yonggan Fu", "Yue Wang", "Xiaohan Chen", "Richard G. 
Baraniuk", "Zhangyang Wang", "Yingyan Lin" ], "title": "Drawing early-bird tickets: Toward more efficient training of deep networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Haonan Yu", "Sergey Edunov", "Yuandong Tian", "Ari S Morcos" ], "title": "Playing the lottery with rewards and multiple languages: lottery tickets in rl and nlp", "venue": null, "year": 1906 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": "Proceedings of machine learning research,", "year": 2017 }, { "authors": [ "Junjie Zhang", "Lingqiao Liu", "Peng Wang", "Chunhua Shen" ], "title": "To balance or not to balance: A simple-yet-effective approach for learning with long-tailed distributions", "venue": null, "year": 1912 }, { "authors": [ "Junting Zhang", "Jie Zhang", "Shalini Ghosh", "Dawei Li", "Serafettin Tasci", "Larry Heck", "Heming Zhang", "C-C Jay Kuo" ], "title": "Class-incremental learning via deep model consolidation", "venue": "In The IEEE Winter Conference on Applications of Computer Vision,", "year": 2020 }, { "authors": [ "Hao Zhou", "Jose M Alvarez", "Fatih Porikli" ], "title": "Less is more: Towards compact cnns", "venue": "In European Conference on Computer Vision,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "The lottery ticket hypothesis (LTH) (Frankle & Carbin, 2019) suggests the existence of an extremely sparse sub-network, within an overparameterized dense neural network, that can reach similar performance as the dense network when trained in isolation with proper initialization. Such a subnetwork together with the used initialization is called a winning ticket (Frankle & Carbin, 2019). The original LTH studies the sparse pattern of neural networks with a single task (classification), leaving the question of generalization across multiple tasks open. Following that, a few works (Morcos et al., 2019; Mehta, 2019) have explored LTH in transfer learning. They study the transferability of a winning ticket found in a source task to another target task. This provides insights on one-shot transferability of LTH. In parallel, lifelong learning not only suffers from notorious catastrophic forgetting over sequentially arriving tasks but also often comes at the price of increasing model capacity. With those in mind, we ask a much more ambitious question:\nDoes LTH hold in the setting of lifelong learning when different tasks arrive sequentially?\nIntuitively, a desirable “ticket” sub-network in lifelong learning (McCloskey & Cohen, 1989; Parisi et al., 2019) needs to be: 1) independently trainable, same as the original LTH; 2) trained to perform\n∗Equal Contribution.\n1\ncompetitively to the dense lifelong model, including both maintaining the performance of previous tasks, and quickly achieving good generalization at newly added tasks; 3) found online, as the tasks sequentially arrive without any pre-assumed order. We define such a sub-network with its initialization as a lifelong lottery ticket.\nThis paper seeks to locate the lifelong ticket in class-incremental learning (CIL) (Wang et al., 2017; Rosenfeld & Tsotsos, 2018; Kemker & Kanan, 2017; Li & Hoiem, 2017; Belouadah & Popescu, 2019; 2020), a popular, realistic and challenging setting of lifelong learning. A natural idea to extend the original LTH is to introduce sequential pruning: we continually prune the dense network until the desired sparsity level, as new tasks are incrementally added. However, we show that the direct application of the iterative magnitude pruning (IMP) used in LTH fails in the scenario of CIL since the pruning schedule becomes critical when tasks arrive sequentially. To circumvent this challenge, we generalize IMP to incorporate a curriculum pruning schedule. We term this technique top-down lifelong pruning. When the total number of tasks is pre-known and small, then with some “lottery” initialization (achieved by rewinding (Frankle et al., 2019) or similar), we find that the pruned sparse ticket can be re-trained to similar performance as the dense network. However, if the number of tasks keeps increasing, the above ticket will soon witness performance collapse as its limited capacity cannot afford the over-pruning.\nThe limitation of top-down lifelong pruning reminds us of two unique dilemmas that might challenge the validity of lifelong tickets. i) Greedy weight pruning v.s. all tasks’ performance: While the sequential pruning has to be performed online, its greedy nature inevitably biases against later arriving tasks, as earlier tasks apparently will contribute to shaping the ticket more (and might even use up the sparsity budget). ii) Catastrophic forgetting v.s. 
small ticket size: To overcome the notorious catastrophic forgetting (McCloskey & Cohen, 1989; Tishby & Zaslavsky, 2015), many lifelong learning models have to frequently consolidate weights to carefully re-assign the model capacity (Zhang et al., 2020) or even grow the model size as tasks come in (Wang et al., 2017). Those seem to contradict our goal of pruning by seeing more tasks.
To address the above two limitations, we propose a novel bottom-up lifelong pruning approach, which allows for re-growing the model capacity to compensate for any excessive pruning. It therefore flexibly calibrates between increasing and decreasing tickets throughout the entire learning process, alleviating the intrinsic greedy bias caused by top-down pruning. We additionally introduce lottery teaching to overcome forgetting, which regularizes previous task models’ soft logit outputs by using free unlabeled data. This is inspired by lifelong knowledge preservation techniques (Castro et al., 2018; He et al., 2018; Javed & Shafait, 2018; Rebuffi et al., 2017).
To validate our proposal, we conduct extensive experiments on the CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets for class-incremental learning (Rebuffi et al., 2017). The results demonstrate the existence and the high competitiveness of lifelong tickets. Our best lifelong tickets (found by bottom-up pruning and lottery teaching) achieve comparable or better performance across all sequential tasks, with as few as 3.64% of the parameters, compared to state-of-the-art dense models. Our contributions can be summarized as:
• The problem of lottery tickets is formulated and studied in lifelong learning (class-incremental learning) for the first time.
• Top-down pruning: a generalization of the iterative weight magnitude pruning used in the original LTH over continual learning tasks.
• Bottom-up pruning: a novel pruning method that uniquely allows for re-growing model capacity throughout the lifelong process.
• Extensive experiments and analyses demonstrating the promise of lifelong tickets in achieving superior yet extremely lightweight lifelong learners." }, { "heading": "2 RELATED WORK", "text": "Lifelong Learning A lifelong learning system aims to continually learn sequential tasks and accommodate new information while maintaining previously learned knowledge (Thrun & Mitchell, 1995). One of its major challenges is called catastrophic forgetting (McCloskey & Cohen, 1989; Kirkpatrick et al., 2017; Hayes & Kanan, 2020), i.e., the network cannot maintain expertise on tasks that it has not experienced for a long time.
And some generative lifelong learning methods (Liu et al., 2020; Shin et al., 2017) mitigate catastrophic forgetting by generating simulated data of previous tasks. There also exist a few architecture-manipulation-based lifelong learning methods (Rajasegaran et al., 2019; Aljundi et al., 2018b; Hung et al., 2019; Abati et al., 2020; Rusu et al., 2016; Kemker & Kanan, 2017), although their target is dividing a dense model into task-specific parts for lifelong learning, rather than localizing sparse networks and lottery tickets.
Pruning and Lottery Ticket Hypothesis It is well known that deep networks can be pruned of excess capacity (LeCun et al., 1990b). Pruning algorithms can be categorized into unstructured (Han et al., 2015b; LeCun et al., 1990a; Han et al., 2015a) and structured pruning (Liu et al., 2017; He et al., 2017; Zhou et al., 2016). The former sparsifies weight elements based on magnitudes, while the latter removes network sub-structures such as channels for more hardware friendliness.
LTH (Frankle & Carbin, 2019) advocates the existence of an independently trainable sparse sub-network within a dense network. In addition to image classification (Frankle & Carbin, 2019; Liu et al., 2019; Wang et al., 2020; Evci et al., 2019; Frankle et al., 2020; Savarese et al., 2020; You et al., 2020; Ma et al., 2021; Chen et al., 2020a), LTH has been explored widely in numerous contexts, such as natural language processing (Gale et al., 2019; Chen et al., 2020b), reinforcement learning (Yu et al., 2019), generative adversarial networks (Chen et al., 2021b), graph neural networks (Chen et al., 2021a), and adversarial robustness (Cosentino et al., 2019). Most of them adopt unstructured weight magnitude pruning (Han et al., 2015a; Frankle & Carbin, 2019) to obtain the ticket, which we also follow in this work. (Frankle et al., 2019) analyzes large models and datasets, and presents a rewinding technique that re-initializes ticket training from an early training stage rather than from scratch. (Renda et al., 2020) further compares different retraining techniques and endorses the effectiveness of rewinding. (Mehta, 2019; Morcos et al., 2019; Desai et al., 2019) pioneered the study of the transferability of a ticket identified on one source task to another target task, which delivers insights on the one-shot transferability of LTH.
One recent work (Golkar et al., 2019) aimed at lifelong learning in fixed-capacity models based on pruning neurons of low activity. The authors observed that a controlled way of “graceful forgetting” after training each task can regain network capacity for new tasks, meanwhile not suffering from forgetting. Sokar et al. (2020) further compress the sparse connections of each task during training, which reduces the interference between tasks and alleviates forgetting.
3 LOTTERY TICKET FROM SINGLE-TASK LEARNING TO CIL
3.1 PROBLEM SETUP
In CIL, a model continuously learns from a sequential data stream in which new tasks (namely, classification tasks with new classes) are added over time, as shown in Figure 1. At the inference stage, the model can operate without having access to task IDs. Following (Castro et al., 2018; He et al., 2018; Rebuffi et al., 2017), a handful of samples from previous classes are stored in a fixed memory buffer.
More formally, let T_1, T_2, ... represent a sequence of tasks, where the i-th task T_i contains data that fall into (k_i − k_{i−1}) classes C_i = {c_{k_{i−1}+1}, c_{k_{i−1}+2}, ..., c_{k_i}}, with k_0 = 0 by convention.
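To make this setup concrete, the following minimal Python sketch (ours, not the authors' code) builds the class splits C_i under the CIFAR-10 protocol described in Appendix A2.3 (10 classes, 2 per task, in a random order); the function name and seed are illustrative choices.

```python
import random

def make_task_splits(num_classes=10, classes_per_task=2, seed=0):
    """Partition class indices into sequential CIL tasks C_1, ..., C_n,
    so that task T_i covers classes c_{k_{i-1}+1}, ..., c_{k_i} with
    k_0 = 0, drawn in a random class order."""
    rng = random.Random(seed)
    classes = list(range(num_classes))
    rng.shuffle(classes)
    return [classes[i:i + classes_per_task]
            for i in range(0, num_classes, classes_per_task)]

# CIFAR-10 -> 5 tasks of 2 classes each (10 / 2 = 5 tasks, Appendix A2.3)
print(make_task_splits())
```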
Let Θ^(i) = {θ^(i), θ_c^(i)} denote the model of the learner used at task i, where θ^(i) corresponds to the base model shared across all tasks from T_1 to T_i, and θ_c^(i) denotes the task-specific classification head at T_i. Thus, the size of θ^(i) is fixed, but the dimension of θ_c^(i) aligns with the number of classes that have been seen at T_i. In general, the learner has access to two types of information at task i: the current training data D^(i), and certain previous information P^(i). The latter includes a small amount of data from the previous tasks {T_j}_{j=1}^{i−1} stored in the memory buffer, and the previous model Θ^(i−1) at task T_{i−1}. This is commonly used to overcome the catastrophic forgetting issue of the current task i against the previous tasks. Based on the aforementioned setting, we state the CIL problem as below.
Problem of CIL. At the current task i, we aim to learn a full model Θ^(i) = {θ^(i), θ_c^(i)} based on the information (D^(i), P^(i)) such that Θ^(i) not only (I) yields generalization ability on the newly added data at task T_i but also (II) does not lose its power on the previous tasks {T_j}_{j=1}^{i−1}. We note that the aforementioned problem statement applies to CIL with any fixed-length learning period. That is, for n time stamps (one task per time), the validity of the entire trajectory {Θ^(i)}_{i=1}^{n} is justified by each Θ^(i) satisfying the CIL criteria (I) and (II) stated in ‘Problem of CIL’." }, { "heading": "3.2 LIFELONG LOTTERY TICKETS", "text": "It was shown by LTH (Frankle & Carbin, 2019) that a standard (unstructured) pruning technique can uncover the so-called winning ticket, namely, a sparse sub-network together with proper initialization that can be trained in isolation and reach performance similar to the dense network. In this paper, we aim to prune the base model θ^(i) over time. And we ask: Do there exist winning tickets in lifelong learning? If yes, how do we obtain them? To answer these questions, a prerequisite is to define the notion of lottery tickets in lifelong learning, which we call lifelong lottery tickets.
Following LTH (Frankle & Carbin, 2019), a lottery ticket consists of two parts: 1) a binary mask m ∈ {0, 1}^{||θ^(i)||_0} obtained from a one-shot or iterative pruning algorithm, and 2) initial weights or rewinding weights θ_0. The ticket (m, θ_0) is a winning ticket if training the sub-network m ⊙ θ_0 (⊙ denotes the element-wise product), identified by the sparse pattern m with initialization θ_0, wins the initialization lottery to match the performance of the original (fully trained) network. In CIL, in the presence of sequential tasks {T^(i)}_{i=1,2,...}, we define lifelong lottery tickets (m^(i), θ_0^(i)) recursively from the perspective of a dynamical system:
m^(i) = m^(i−1) + A(D^(i), P^(i), m^(i−1)),  and  θ_0^(i) ∈ {θ^(0), θ_rw^(i)},   (1)
where A denotes a pruning algorithm used at the current task T^(i) based on the information D^(i), P^(i) and m^(i−1), θ^(0) denotes the initialization prior to training the model at T^(1), and θ_rw^(i) denotes a rewinding point at T^(i). In Eq. (1), we interpret the (non-trivial) pruning operation A as weight perturbations, with values drawn from {−1, 0, 1}, applied to the previous binary mask. Here −1 denotes the removal of a weight, 0 signifies keeping a weight intact, and 1 represents the addition of a weight. Moreover, the introduction of weight rewinding is spurred by the so-called rewinding ticket (Renda et al., 2020; Frankle et al., 2020). For example, if θ_rw^(i) = θ^(i−1), then we pick the model weights learnt at the previous task T^(i−1) to initialize the training at T^(i). We also note that θ^(0) can be regarded as the point rewound to the earliest stage of lifelong learning.
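As a concrete reading of Eq. (1), here is a minimal numpy sketch (our illustration, not the authors' code) of the perturbation semantics on the binary mask; the helper name is ours.

```python
import numpy as np

def update_mask(m_prev, delta):
    """Eq. (1): m^{(i)} = m^{(i-1)} + delta, with delta = A(D, P, m^{(i-1)}).

    m_prev is binary; delta has entries in {-1, 0, 1}, where -1 is only
    valid where m_prev == 1 (removal) and +1 only where m_prev == 0
    (addition), so the sum is again a valid binary mask."""
    m_new = m_prev + delta
    assert m_new.min() >= 0 and m_new.max() <= 1, "perturbation left the mask space"
    return m_new

m = np.array([1, 1, 0, 1, 0])
delta = np.array([0, -1, 1, 0, 0])  # TD pruning draws only {-1, 0}; BU only {0, +1}
print(update_mask(m, delta))        # -> [1 0 1 1 0]
```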
Based on Eq. (1), we then state the definition of winning tickets in CIL.
Lifelong winning tickets. Given a sequence of tasks {T_i}_{i=1}^{n}, the lifelong lottery tickets {(m^(i), θ_0^(i))}_{i=1}^{n} given by (1) are winning tickets if they can be trained in isolation to match the CIL performance (i.e., criteria I and II) of the corresponding full models {θ^(i)}_{i=1}^{n}, where n ∈ N+. In the next section, we will design the lifelong pruning algorithm A, together with the ticket initialization schemes formulated in Eq. (1)." }, { "heading": "4 PROPOSED PRUNING METHOD TO FIND LIFELONG WINNING TICKETS", "text": "" }, { "heading": "4.1 REVISITING IMP OVER SEQUENTIAL TASKS: TOP-DOWN (TD) PRUNING", "text": "In order to find the potential tickets at the current task T^(i), it is natural to specify A in Eq. (1) as the iterative magnitude pruning (IMP) algorithm (Han et al., 2015a) to prune the model from m^(i−1) ⊙ θ^(i−1). Following (Frankle & Carbin, 2019; Renda et al., 2020), IMP iteratively prunes p^{1/n^(i)} (%) of the non-zero weights of m^(i−1) ⊙ θ^(i−1) over n^(i) rounds at T^(i). Thus, the number of non-zero weights in the obtained mask m^(i) is given by (1 − p^{1/n^(i)})^{n^(i)} · ||m^(i−1)||_0. However, in the application of IMP to the sequential tasks {T^(i)}, we find that the schedule of IMP over sequential tasks, in terms of {n^(i)}, is critical to making pruning successful in lifelong learning. We refer readers to Appendix A2.1 for detailed justifications.
Curriculum schedule of TD pruning is a key to success The conventional method is to set {n^(i)} as a uniform schedule, namely, IMP prunes a fixed portion of non-zeros at each task. However, this direct application fails quickly as the number of incremental tasks increases, implying that “not all tasks are created equal” in the learning/pruning schedule. Inspired by the recent observation that training with more classes helps consolidate a more robust sparse model (Morcos et al., 2019), we propose a curriculum pruning schedule, in which IMP is conducted more aggressively for new tasks arriving later, with n^(i) ≥ n^(i−1), until reaching the desired sparsity. For example, if there are 12 pruning rounds over five sequentially arriving tasks, we arrange them in a linearly increasing way, i.e., (T1:1, T2:1, T3:2, T4:3, T5:5). Note that TD pruning relies on this heuristic curriculum schedule, and is thus inevitably greedy and suboptimal over continual learning tasks. In what follows, we propose a more advanced pruning scheme, bottom-up (BU) pruning, that obeys a different design principle.
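The count formula above admits a quick sanity check: with the per-round ratio p^{1/n^(i)} = 20% used in Appendix A2.1, the remaining fraction after k accumulated rounds is simply 0.8^k. The arithmetic below is our own, not the released code.

```python
per_round = 0.2              # p ** (1 / n_i): fraction of surviving weights pruned per IMP round
schedule = [1, 1, 2, 3, 5]   # curriculum rounds per task, 12 rounds in total

remaining = (1.0 - per_round) ** sum(schedule)
print(f"{100 * remaining:.2f}% weights left after T5")  # 6.87%

# The other sparsity levels reported in Table A4 are also powers of 0.8:
# 0.8**5 = 32.77%, 0.8**10 = 10.74%, 0.8**14 = 4.40%, 0.8**17 = 2.25%.
```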
4.2 BOTTOM-UP (BU) LIFELONG PRUNING: AN ADVANCED SCHEME
Why do we need more than top-down pruning? TD pruning is inevitably greedy and suboptimal. Earlier added tasks contribute more to shaping the final mask, due to the nested dependency between intermediate masks. In the later training stage, we often observe that the network is already too heavily pruned to learn more tasks. Inspired by the recently proposed model consolidation (Zhang et al., 2020), we propose the BU alternative of lifelong pruning, to dynamically compensate for the excessive pruning by re-growing previously reduced networks.
Full reference model & rewinding point For BU lifelong pruning, we maintain a full (unpruned) model θ_ref^(i) as a reference throughout lifelong learning. First, θ_ref^(i) provides a reference performance R_ref^(i) obtained at T^(i). Once the validation accuracy of the current sparse model is no worse than the reference performance, the sparse model is considered to still have sufficient capacity and can be further pruned. Otherwise, capacity expansion is needed. On the other hand, the reference model offers a rewinding point for the network parameters, which preserves knowledge of all previous tasks prior to T^(i). It naturally extends the rewinding concept (Frankle et al., 2019) to lifelong learning.
BU pruning method BU lifelong pruning expands the previous mask m^(i−1) to m^(i). Different from TD pruning, the model size grows along the task sequence, namely, ||m^(i)||_0 ≥ ||m^(i−1)||_0. Thus, BU pruning enforces A in Eq. (1) to draw non-negative perturbations. As illustrated in Figure 2, for each newly added T_i, we first re-train the previous sparse model m^(i−1) ⊙ θ^(i−1) under the current information (D^(i), P^(i)) and calculate the validation accuracy R^(i). If R^(i) is above the reference performance R_ref^(i), we keep the sparse mask m^(i) = m^(i−1) intact and use the re-trained θ^(i−1) as θ^(i) at T_i. Otherwise, an expansion from m^(i−1) is required to ensure sufficient learning capacity. To do so, we restart from the full reference model θ_ref^(i) and iteratively prune its weights using IMP until the performance gets just below R_ref^(i). Here the previous non-zero weights localized by m^(i−1) are excluded from the pruning scope of IMP, but the values of those non-zero weights can still be re-trained. As a result, IMP will yield the updated mask m^(i) with a larger size than m^(i−1). We repeat the aforementioned BU pruning method whenever a new task arrives.
Although never observed in our CIL experiments, a potential corner case of expansion is that the ticket size may hit the size of the full model. We consider this an artifact of limited model capacity and suggest, as future work, combining lifelong tickets with (full) model growing (Wang et al., 2017).
Ticket initialization Given the pruning mask found by the BU (or TD) pruning method, we next determine the initialization scheme of a lifelong ticket. We consider three specifications of θ_0^(i) in Eq. (1) to initialize the sparse model m^(i) for re-training the found tickets. They include: (I) θ_0^(i) = θ^(0), i.e., the original “from the same random” initialization (Frankle & Carbin, 2019); (II) a random re-initialization θ_reinit, which is independent of θ^(0); and (III) previous-task rewinding, i.e., θ_0^(i) = θ^(i−1). The initialization schemes I–III together with m^(i) yield the following tickets at T^(i): (1) BU (or TD) tickets, namely, m^(i) found by BU (or TD) pruning with initialization I; (2) random BU (or TD) tickets, namely, m^(i) with initialization II; (3) task-rewinding BU (or TD) tickets, namely, m^(i) with initialization III. In experiments, we will show that both BU (or TD) tickets and their task-rewinding (TR) counterparts are winning tickets, which outperform unpruned full CIL models. Comparing BU with TD pruning, TR-BU tickets surpass the best TD tickets.
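The per-task BU decision above can be summarized by the following schematic sketch (ours); retrain, evaluate, and imp_expand are placeholder helpers standing in for fine-tuning on (D^(i), P^(i)), measuring validation accuracy, and IMP-based expansion from the reference model, respectively.

```python
def bu_step(mask, theta, theta_ref, task_data, prev_info,
            retrain, evaluate, imp_expand):
    """One bottom-up pruning step at task T_i (schematic, helpers assumed)."""
    theta = retrain(mask, theta, task_data, prev_info)  # re-train m^{(i-1)} * theta
    r_ref = evaluate(theta_ref)                         # reference accuracy R_ref^{(i)}
    if evaluate(theta) >= r_ref:
        return mask, theta        # capacity suffices: keep m^{(i)} = m^{(i-1)}
    # Otherwise expand: restart from the full reference model and iteratively
    # prune it until accuracy falls just below R_ref^{(i)}, exempting the
    # previous non-zeros in `mask` from pruning (their values may be re-trained).
    return imp_expand(theta_ref, mask, r_ref, task_data, prev_info)
```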
" }, { "heading": "4.3 LOTTERY TEACHING: A PLUG-IN REGULARIZATION", "text": "Catastrophic forgetting poses a severe challenge to class-incremental learning, especially for compact models. (Castro et al., 2018; He et al., 2018; Javed & Shafait, 2018; Rebuffi et al., 2017) are early attempts to tackle the forgetting dilemma by introducing knowledge distillation regularization (Hinton et al., 2015), which employs a handful of stored previous data in addition to new task data. (Zhang et al., 2020) takes advantage of unlabeled data to handle the forgetting quandary.
We adapt their philosophy (Li & Hoiem, 2017; Hinton et al., 2015; Zhang et al., 2020) to present lottery teaching, which enforces previous information onto the new tickets via a knowledge distillation term on external unlabeled data. Lottery teaching consists of two steps: i) we query more similar unlabeled data “for free” from a public source, by utilizing a small number of prototype samples from previous tasks’ training data. In this way, the storage required for previous tasks can be minimal, while the queried surrogate data functions similarly for our purpose; ii) we then enforce the output soft logits of the current sub-network {m^(i) ⊙ θ^(i), θ_c^(i)} on each queried unlabeled sample x to be close to the logits from the previously trained sub-network {m^(i−1) ⊙ θ^(i−1), θ_c^(i−1)}, via knowledge distillation (KD) regularization based on the K-L divergence. For all experiments of our methods hereinafter, we append lottery teaching by default, as it is widely beneficial. An ablation study will also follow in Section 5.3.
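Step i) amounts to a nearest-neighbor retrieval in feature space; per Appendix A2.3, candidates are ranked by their ℓ2 distance to the stored prototypes. The numpy sketch below is our own rendering, and per_proto is an illustrative parameter rather than the authors' setting.

```python
import numpy as np

def query_unlabeled(proto_feats, pool_feats, per_proto=8):
    """For each stored prototype feature (rows of proto_feats, shape (P, d)),
    retrieve the indices of its nearest features in the unlabeled pool
    (pool_feats, shape (N, d)) under the l2-norm distance; duplicates
    across prototypes are merged."""
    picked = set()
    for p in proto_feats:
        dists = np.linalg.norm(pool_feats - p, axis=1)
        picked.update(np.argsort(dists)[:per_proto].tolist())
    return sorted(picked)
```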
" }, { "heading": "5 EXPERIMENTS", "text": "Experimental Setup We briefly discuss the key facts of our experiments and refer readers to Appendix A2.3 for more implementation details. We evaluate our proposed lifelong tickets on three datasets: CIFAR-10, CIFAR-100, and Tiny-ImageNet. We adopt ResNet18 (He et al., 2016) as our backbone. We evaluate model performance by standard testing accuracy (SA) averaged over three independent runs.
CIL baseline: We consider a strong baseline framework derived from (Zhang et al., 2019), a recent state-of-the-art (SOTA) method introduced for imbalanced data training (see more illustrations in Appendix A1.1). We implement (Zhang et al., 2019) for CIL, and compare with two of the latest CIL SOTAs: iCaRL (Rebuffi et al., 2017) and IL2M (Belouadah & Popescu, 2019)^1. Our results demonstrate that the adapted CIL method from (Zhang et al., 2019) outperforms the others significantly (1.65% SA better than IL2M and 4.88% SA better than iCaRL on CIFAR-10)^2, establishing a new SOTA bar. The proposed lottery teaching further improves the performance of the baseline adapted from Zhang et al. (2019), giving a 4.4% SA improvement on CIFAR-10. Thus, we use (Zhang et al., 2019), combined with/without lottery teaching, to train the original (unpruned) CIL model.
^1 Both are implemented using official codes. The comparison has been strictly controlled to be fair, including dataset splits, the same previously stored data, due diligence in hyperparameter tuning for each, etc.
^2 More comparisons with the latest CIL SOTAs are deferred to Appendix A1.1.
CIL pruning: To the best of our knowledge, there is no effective CIL pruning baseline comparable to ours. Thus, we focus on the comparison among different variants of our methods. We also compare the proposal with ordinary IMP, showing its incapability in CIL pruning. Furthermore, we demonstrate that, given our proposed pruning frameworks, standard pruning methods such as IMP and ℓ1 pruning (by imposing ℓ1 sparsity regularization) then become successful.
Results on TD tickets We begin by showing that TD pruning is non-trivial in CIL. We find that ordinary IMP (Han et al., 2015a) fails: it leads to a 10.21% SA degradation (from 72.79% to 62.58% SA) with 4.40% parameters left. By contrast, our proposed lifelong tickets yield substantially better performance, even surpassing the full dense model, with fewer parameters left than ordinary IMP (Han et al., 2015a).
In what follows, we evaluate TD lifelong pruning using different weight rewindings, namely: i) TD tickets; ii) random TD tickets; iii) task-rewinding TD tickets; iv) late-rewinding TD tickets; and v) fine-tuning. The late-rewinding ticket is a strong baseline claimed in Mehta (2019).
Figure 3 and Table A4 demonstrate the high competitiveness of our proposed TD ticket (blue lines). It matches and most of the time outperforms the full model^3 (black dash lines). Even with only 6.87% of the model parameters left, the TD ticket still surpasses the dense model by 0.49% SA. The task-rewinding tickets, in second place, exceed the dense model until reaching the extreme sparsity of 4.40%. Moreover, we see that late-rewinding TD tickets dominate over other rewinding/fine-tuning options, echoing the finding in single-task learning (Frankle et al., 2019).
^3 Full model denotes the performance of the dense CIL model (Zhang et al., 2019) with lottery teaching.
However, TD pruning cannot afford many more incremental tasks due to its greedy weight (over-)pruning. Our results show that TD tickets pruned from only tasks T1 and T2 clearly overfit the first two tasks, even after incrementally learning the remaining three tasks. Under this inappropriate pruning schedule (in contrast to the T1∼T5 scheme), the resultant ticket drops to 59.28% SA, which is 13.51% lower than the dense model, as shown in Table A3. More results can be found in the appendix. Therefore, bottom-up lifelong pruning is proposed as a remedy that relieves the laborious tuning of pruning schedules.
Results on BU lifelong tickets Bottom-up lifelong pruning allows the sparse network to regret if it cannot deal with the newly added tasks, which compensates for excessive pruning and reaches a substantially better trade-off between sparsity and generalization ability. Compared to TD pruning, it does not require any heuristic pruning schedule.
In Table 1, we first present the performance of BU tickets, random BU tickets, and task-rewinding BU (TR-BU) tickets, as described in Section 4.2. As we can see, TR-BU tickets obtain the supreme performance. A possible explanation is that task-rewinding (i.e., θ_0^(i) = θ^(i−1)) maintains full information of the learned tasks, which mitigates catastrophic forgetting, while other weight rewinding points lack sufficient task information to prevent compact models from forgetting. Next, we observe that TR-BU tickets significantly outperform the full dense model by 0.52% SA with only 3.64% parameters left, and ℓ1 BU tickets obtain performance matching the full dense model with 5.16% of parameters remaining. This suggests that IMP, ℓ1, and even other adequate pruning algorithms can be plugged into our proposed BU pruning framework to identify lifelong tickets.
In Figure 4, we present the performance comparison between TR-BU tickets (the best sub-networks in Table 1) and TD tickets. TR-BU tickets are identified through bottom-up lifelong pruning, whose sparse masks continue to subtly grow along with the incremental tasks, from 2.81% sparsity at the first task to 3.64% at the last task. As we can see, at any incremental learning stage, TR-BU tickets attain superior performance with significantly fewer parameters. Particularly, after learning all tasks, TR-BU tickets surpass TD tickets by 1.01% SA with 0.76% fewer weights on CIFAR-10, and by 3.07% with 2.46% fewer weights on CIFAR-100.
These results demonstrate that TR-BU tickets have better generalization ability and parameter efficiency than TD tickets. In addition, on Tiny-ImageNet in Table A7, TR-BU tickets outperform the full model with only 12.08% remaining weights. It is worth mentioning that both TR-BU tickets and TD tickets perform better than the full dense model. We refer readers to Tables A5 and A6 in the appendix for more detailed results.
From the above results, we further observe that TR-BU tickets achieve accuracy comparable to full models that have more than 30× the network capacity, implying that bottom-up lifelong pruning successfully discovers extremely sparse sub-networks that are nevertheless powerful enough to inherit previous knowledge and generalize well on newly added tasks. Furthermore, our proposed lifelong pruning schemes can be directly plugged into other CIL models to identify lifelong tickets, as shown in Appendix A1.
Ablation studies In what follows, we summarize our results on ablation studies and refer readers to Appendix A1.2.1 for more details. In Figure 5, we show the essential role of the curriculum schedule in TD pruning compared to the uniform pruning schedule. We notice that the curriculum pruning scheme generates stronger TD tickets than uniform pruning in terms of accuracy, which confirms our motivation that pruning harder in the late stage of lifelong learning, with more classes, is beneficial. In Table A8, we demonstrate the effectiveness of our proposals under different numbers of incremental tasks. In Figure 5, we also show that lottery teaching injects previous knowledge by applying knowledge distillation on external unlabeled data, and greatly alleviates the catastrophic forgetting issue in lifelong pruning (i.e., after learning all tasks, utilizing lottery teaching yields a 4.34% SA improvement on CIFAR-10). It is worth mentioning that we set a buffer of fixed storage capacity to store 128 unlabeled images queried from public sources at each training iteration. We find that leveraging newly queried unlabeled data offers better generalization ability than storing historical data from past tasks; the latter only reaches 70.60% SA on CIFAR-10, which is 2.19% worse than the use of unlabeled data." }, { "heading": "6 CONCLUSION", "text": "We extend the Lottery Ticket Hypothesis to lifelong learning, in which networks incrementally learn from sequential tasks. We pose top-down and bottom-up lifelong pruning algorithms to identify lifelong tickets. Systematic experiments are conducted to validate that the located tickets obtain strong(er) generalization ability across all incrementally learned tasks, compared with unpruned models. Our future work aims to explore lifelong tickets with the (full) model-growing approach." }, { "heading": "A1 MORE EXPERIMENT RESULTS", "text": "A1.1 MORE BASELINE RESULTS
Comparison with the Latest CIL SOTAs We find that (Zhang et al., 2019) can be naturally introduced to class-incremental learning, as it tackles the intrinsic training bias between a handful of previously stored data and a large amount of newly added data. It adopts random and class-balanced sampling strategies, combined with an auxiliary classifier, to alleviate the negative impact of imbalanced classes.
Extensive results, shown in Table A2, demonstrate that adopting (Zhang et al., 2019) as the simple baseline surpasses the previous SOTAs iCaRL (Rebuffi et al., 2017) and IL2M (Belouadah & Popescu, 2019) by a significant performance margin (1.65%/0.57% SA better than IL2M and 4.88%/7.60% SA better than iCaRL on CIFAR-10/CIFAR-100, respectively)^4, establishing a new SOTA bar. With the assistance of lottery teaching, (Zhang et al., 2019) obtains an extra performance boost: 4.4% SA on CIFAR-10 and 7.34% SA on CIFAR-100.
^4 To ensure fair comparison, iCaRL and IL2M are both implemented with their official codes. The comparison has been strictly controlled to be fair, including dataset splits, the same previously stored data, due diligence in hyperparameter tuning for each, etc.
It is worth mentioning that a lifelong ticket also exists in other CIL models. Take IL2M on CIFAR-10 as an example: the bottom-up (BU) ticket achieves 68.92% accuracy with 11.97% parameters vs. the dense unpruned model with an accuracy of 66.74%.
Table A2: Comparison between our dense model and two previous SOTA CIL methods on CIFAR-10 and CIFAR-100. Reported performance is the final accuracy for each task T. Simple baseline denotes the dense CIL model (Zhang et al., 2019). Full model represents our proposed framework, which combines the lottery teaching technique with the simple baseline.
CIFAR-10 — Methods | T1 (%) | T2 (%) | T3 (%) | T4 (%) | T5 (%) | Average (%)
iCaRL 76.45 79.00 75.70 50.85 35.55 63.51
IL2M 78.20 64.05 60.40 38.95 92.10 66.74
Simple Baseline 75.05 71.50 54.25 52.05 89.10 68.39
Full Model 79.70 79.12 68.23 63.45 73.43 72.79
CIFAR-100 — Methods | T1 (%) | T2 (%) | T3 (%) | T4 (%) | T5 (%) | T6 (%) | T7 (%) | T8 (%) | T9 (%) | T10 (%) | Average (%)
iCaRL 5.90 7.50 4.50 2.80 9.00 8.00 28.20 38.50 59.60 80.20 24.42
IL2M 19.90 24.10 19.80 12.90 21.30 21.70 29.90 34.80 40.30 89.80 31.45
Simple Baseline 21.20 32.10 23.00 22.70 21.70 31.70 39.60 33.80 40.20 54.30 32.02
Full Model 29.04 33.94 32.54 27.94 32.74 29.64 47.94 45.34 47.24 67.24 39.36
Pruning Schedule is Important As shown in Table A3, with an inappropriate pruning schedule across T1∼T2, the resultant ticket drops to 59.28% accuracy, which is 13.51% lower than the dense model. On the contrary, the adequate scheme across T1∼T5 in Table A3 generates a TD winning ticket with higher test accuracy (+0.49% SA) and far fewer parameters (6.87%), compared with the dense CIL model.
Table A3: Evaluation performance of TD tickets (6.87%) pruned from different task ranges, on CIFAR-10.
Pruning Schedule | T1 (%) | T2 (%) | T3 (%) | T4 (%) | T5 (%) | Average (%)
Prune across T1∼T2 74.05 88.80 78.25 28.40 26.90 59.28
Prune across T1∼T5 78.90 82.15 71.55 63.80 70.00 73.28
A1.2 MORE LIFELONG TICKETS RESULTS
Top-down Lifelong Tickets We also report several performance reference baselines: (a) Full model, denoting the achievable performance of the dense CIL model (Zhang et al., 2019) combined with lottery teaching; (b) CIL_lower, denoting a vanilla CIL model without using lottery teaching
The detailed evaluation performance of TD tickets at different sparsity levels on CIFAR-10 are collected in Table A4.\n2.2 54.46.8 7 10 .74 32 .77 40 .9651 .210 0\nRemaining Weights %\n0\n20\n40\n60\n80\n100\nSt an\nda rd\nA cc\nur ac\ny (S\nA) %\nFull Model CILlower MTupper MTLT TD Tickets random TD Tickets task-rewinding TD Tickets late-rewinding TD Tickets Fine-tuning\n2.2 54.46.8 7 10 .74 32 .77 40 .9651 .210 0\nRemaining Weights %\n62\n64\n66\n68\n70\n72\n74\nSt an\nda rd\nA cc\nur ac\ny (S\nA) %\nFull Model TD Tickets random TD Tickets task-rewinding TD Tickets late-rewinding TD Tickets Fine-tuning\nFigure A6: Evaluation performance (standard accuracy) of top-down lifelong tickets. The right figure zooms in the red dash-line box in the left figure.\nTable A4: Evaluation performance of TD tickets at different sparsity levels on CIFAR-10. Reported performance is the final accuracy for each task T . Differences (+/-) are calculated w.r.t. the full/dense model performance.\nRemaining Weights TD Tickets on CIFAR-10T1 (%) T2 (%) T3 (%) T4 (%) T5 (%) Average (%) MTupper (100.00%) 97.48 97.48 93.33 90.83 94.03 94.63 CILlower (100.00%) 0.00 0.00 0.00 0.00 93.70 18.74\n100.00% 79.70 79.12 68.23 63.45 73.43 72.79 32.77% 84.30 80.05 70.70 67.75 69.55 74.47 + 1.68 10.74% 78.90 77.75 76.30 63.55 71.05 73.51 + 0.72 6.87% 78.90 82.15 71.55 63.80 70.00 73.28 + 0.49 4.40% 82.25 78.55 65.10 65.38 70.23 72.30 - 0.49 2.25% 78.20 78.30 69.50 58.20 65.00 69.84 - 2.45\nBottom-up Lifelong Tickets As shown in Table A5 and Table A6, even compared with the best TD tickets in terms of the trade-off between sparsity and accuracy, TR-BU tickets consistently remain prominent on both CIFAR-10 (a slightly higher accuracy and 3.23% fewer weights) and CIFAR100 (2.37% higher accuracy and 4.88% fewer weights). From the results, we further observe that TR-BU tickets achieve comparable accuracy to full models which have more than 30× times in network capacity, implying that bottom-up lifelong pruning successfully discovers extremely sparse sub-networks, and yet they are powerful enough to inherit previous knowledge and generalize well on newly added tasks.\nA1.2.1 MORE ABLATION RESULTS\nUniform v.s. Curriculum Lifelong Pruning We discuss different pruning schedules of top-down lifelong pruning, which play an essential role in the performance of TD tickets. From the right figure in Figure A7, we notice that the curriculum pruning scheme generates stronger TD tickets than the uniform pruning in terms of accuracy, which confirms our motivation that pruning heavier in the late stage of lifelong learning with more classes is beneficial.\nThe Number of Incremental Tasks Here we study the influence of increment times in our lifelong learning settings. Table A8 shows the results of TR-BU20 tickets incrementally learn from 20 tasks (5\nA15\nTable A5: Evaluation performance of TR-BU/TD tickets on CIFAR-10. T1∼i, i ∈ {1, 2, 3, 4, 5} donates that models have learned from T1, · · · , Ti incrementally. 
In each of the following tables, ||m||_0/||θ||_0 denotes the current network sparsity; each task column reports the accuracy followed by this sparsity.
Table A5 (CIFAR-10) — Compact Weights | T1 | T1∼2 | T1∼3 | T1∼4 | T1∼5
MT_upper — — — — 94.63 (100.00%)
CIL_lower 97.60 (100.00%) 49.80 (100.00%) 32.57 (100.00%) 23.23 (100.00%) 18.74 (100.00%)
Full Model 97.75 (100.00%) 89.10 (100.00%) 82.83 (100.00%) 76.99 (100.00%) 72.79 (100.00%)
TD tickets (6.87%) 98.05 (80.00%) 87.55 (64.00%) 80.20 (40.96%) 73.21 (16.78%) 73.28 (6.87%)
TD tickets (4.40%) 98.10 (80.00%) 87.83 (64.00%) 79.30 (32.77%) 72.34 (13.42%) 72.30 (4.40%)
TR-BU tickets (3.64%) 96.80 (2.81%) 88.90 (3.11%) 81.37 (3.40%) 74.66 (3.64%) 73.31 (3.64%)
Table A6: Evaluation performance of TR-BU/TD tickets when training incrementally on CIFAR-100.
Compact Weights | T1 | T1∼2 | T1∼3 | T1∼4 | T1∼5
MT_upper — — — — —
CIL_lower 87.40 (100.00%) 44.65 (100.00%) 28.67 (100.00%) 20.08 (100.00%) 17.44 (100.00%)
Full Model 88.30 (100.00%) 74.90 (100.00%) 63.70 (100.00%) 53.58 (100.00%) 48.52 (100.00%)
TD Tickets (12.08%) 88.34 (80.00%) 71.60 (64.00%) 58.80 (51.20%) 48.88 (40.96%) 43.60 (32.77%)
TD Tickets (9.66%) 88.20 (80.00%) 71.50 (64.00%) 58.73 (51.20%) 48.45 (40.96%) 44.08 (32.77%)
BU Tickets (7.20%) 85.70 (5.50%) 74.75 (5.78%) 65.93 (6.07%) 53.15 (6.07%) 48.18 (6.07%)
Compact Weights | T1∼6 | T1∼7 | T1∼8 | T1∼9 | T1∼10
MT_upper — — — — 74.11 (100.00%)
CIL_lower 14.07 (100.00%) 12.64 (100.00%) 10.91 (100.00%) 9.66 (100.00%) 8.64 (100.00%)
Full Model 45.82 (100.00%) 44.07 (100.00%) 42.35 (100.00%) 40.93 (100.00%) 39.36 (100.00%)
TD Tickets (12.08%) 41.50 (26.21%) 38.19 (20.97%) 37.28 (16.78%) 37.63 (13.42%) 37.42 (12.08%)
TD Tickets (9.66%) 41.18 (26.21%) 39.37 (20.97%) 38.06 (16.78%) 37.30 (13.42%) 36.72 (9.66%)
BU Tickets (7.20%) 47.58 (6.35%) 43.59 (6.63%) 41.35 (6.63%) 39.80 (6.92%) 39.79 (7.20%)
Table A7: Evaluation performance of TR-BU/TD tickets when training incrementally on Tiny-ImageNet.
Compact Weights | T1 | T1∼2 | T1∼3 | T1∼4 | T1∼5
Full Model 73.70 (100.00%) 59.60 (100.00%) 52.07 (100.00%) 43.85 (100.00%) 41.32 (100.00%)
BU Tickets (12.08%) 75.00 (10.74%) 58.60 (11.01%) 54.93 (11.28%) 47.23 (11.28%) 43.36 (11.28%)
Compact Weights | T1∼6 | T1∼7 | T1∼8 | T1∼9 | T1∼10
Full Model 37.32 (100.00%) 34.69 (100.00%) 30.23 (100.00%) 29.94 (100.00%) 28.29 (100.00%)
BU Tickets (12.08%) 36.93 (11.28%) 36.43 (11.54%) 32.41 (11.54%) 29.16 (11.81%) 28.33 (12.08%)
[Figure A7: two line plots of standard accuracy (SA, %) against the number of learned tasks (1–5), with the remaining weight (%) overlaid on a secondary axis.]
Figure A7: Left: the results of TD tickets with/without lottery teaching. Right: the comparison of TD tickets (10.74%) obtained from the uniform and curriculum pruning schedules. Experiments are conducted on CIFAR-10.
Comparing the two tickets, TR-BU10 tickets reach 6.55% higher accuracy at the expense of 1.77% more parameters.
Possible reasons behind this are that: i) increasing the number of incremental learning steps aggravates the forgetting issue, which causes TR-BU20 tickets to suffer a worse accuracy decay; and ii) at each incremental stage, TR-BU10 tickets learn more knowledge (10 vs. 5 classes per task), which requires a larger network capacity.
Table A8: Evaluation performance of TR-BU tickets when models incrementally learn 20 tasks on CIFAR-100.
Compact Weights | T1 | T1∼2 | T1∼3 | T1∼4 | T1∼5
Full Model 89.20 (100.00%) 73.60 (100.00%) 67.33 (100.00%) 59.40 (100.00%) 52.16 (100.00%)
TR-BU Tickets (5.43%) 86.80 (2.81%) 75.20 (3.11%) 66.47 (3.40%) 60.75 (3.40%) 53.68 (3.40%)
Compact Weights | T1∼6 | T1∼7 | T1∼8 | T1∼9 | T1∼10
Full Model 50.10 (100.00%) 46.80 (100.00%) 43.35 (100.00%) 41.71 (100.00%) 38.62 (100.00%)
TR-BU Tickets (5.43%) 50.40 (3.40%) 47.11 (3.69%) 44.10 (3.98%) 43.29 (4.27%) 38.70 (4.27%)
Compact Weights | T1∼11 | T1∼12 | T1∼13 | T1∼14 | T1∼15
Full Model 36.38 (100.00%) 35.77 (100.00%) 35.14 (100.00%) 34.66 (100.00%) 35.41 (100.00%)
TR-BU Tickets (5.43%) 37.35 (4.27%) 35.88 (4.27%) 34.62 (4.27%) 33.87 (4.27%) 35.48 (4.56%)
Compact Weights | T1∼16 | T1∼17 | T1∼18 | T1∼19 | T1∼20
Full Model 34.43 (100.00%) 33.98 (100.00%) 34.14 (100.00%) 32.65 (100.00%) 33.13 (100.00%)
TR-BU Tickets (5.43%) 34.64 (4.56%) 34.13 (4.85%) 33.69 (5.14%) 31.84 (5.14%) 33.24 (5.43%)
With vs. Without Lottery Teaching Comparison results between TD tickets with lottery teaching and those without are collected in this section. As shown in Figure A7 (left panel), the performance of TD tickets without lottery teaching (black dash curves) quickly falls into a worse decay as the number of incremental learning steps increases. After learning all tasks, utilizing lottery teaching obtains a 4.34% accuracy improvement on CIFAR-10. This suggests that our proposed lottery teaching injects previous knowledge by applying knowledge distillation on external unlabeled data, and greatly alleviates the catastrophic forgetting issue." }, { "heading": "A2 MORE METHODOLOGY AND IMPLEMENTATION DETAILS", "text": "A2.1 MORE LIFELONG PRUNING DETAILS
[Figure A8: a schematic of top-down lifelong pruning, showing a chain of IMP steps unrolled over the sequential tasks, with the ticket size, random initialization, and the weights/connections carried over from the previous task annotated at each step.]
Figure A8: Framework of our proposed top-down lifelong pruning algorithm. The top-down (TD) lifelong pruning performs like iterative magnitude pruning (IMP) by unrolling the sequential tasks. Tickets located by TD pruning continue to shrink with the growth of incremental tasks.
More Technical Details of Top-down Pruning In our implementation, we set p^{1/n^(i)} = 20% as in (Frankle & Carbin, 2019; Renda et al., 2020) and adjust {n^(i)} to control the pruning schedule of IMP over sequential tasks. The aforementioned lifelong pruning method is illustrated in Figure A8, and we call it top-down lifelong pruning since the model size is sequentially reduced, namely, ||m^(i)||_0 ≤ ||m^(i−1)||_0.
Pruning Algorithms We summarize the workflows of top-down pruning and bottom-up pruning in Algorithms 1 and 2, respectively. For the pruning hyperparameters, we follow the original LTH setting (Frankle & Carbin, 2019), i.e., ∆p = 20%. If we change ∆p to 40%, the accuracy drops by 2.04% at the same sparsity level on CIFAR-10.
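For reference, a single IMP round with ratio ∆p boils down to global magnitude thresholding over the still-active weights. The numpy sketch below is ours, not the released code; the frozen argument stands in for the previous-task non-zeros that bottom-up expansion exempts from pruning.

```python
import numpy as np

def imp_round(theta, mask, dp=0.2, frozen=None):
    """Zero out the dp fraction of smallest-magnitude weights among those
    that are active in `mask` and not marked in `frozen` (ties may prune
    a few extra entries)."""
    active = (mask == 1) if frozen is None else (mask == 1) & (frozen == 0)
    k = int(dp * active.sum())
    if k == 0:
        return mask
    thresh = np.partition(np.abs(theta[active]), k - 1)[k - 1]  # k-th smallest magnitude
    new_mask = mask.copy()
    new_mask[active & (np.abs(theta) <= thresh)] = 0
    return new_mask
```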
A2.2 MORE CLASS-INCREMENTAL LEARNING DETAILS
Lottery teaching regularization In order to mitigate the catastrophic forgetting effect, we apply knowledge distillation (Hinton et al., 2015) R_KD to enforce the similarity between the previous soft logits ŷ and the current soft logits y on unlabeled data. We state R_KD as follows:
R_KD(y, ŷ) = −H(t(y), t(ŷ)) = −Σ_j t(y)_j log t(ŷ)_j,
where t(y)_i = (y_i)^{1/T} / Σ_j (y_j)^{1/T}, with T = 2 in our case, following the standard setting in (Hinton et al., 2015; Li & Hoiem, 2017).
Algorithm 1: Top-Down Pruning
Input: full dense model f(θ_0, θ_c^(0); x), a desired sparsity P_m, samples x from a storage S and sequential tasks T_{1∼n}, soft logits from the previous model on queried unlabeled data, pruning ratio ∆p
Output: an updated sparse model f(θ ⊙ m, θ_c^(n); x)
1: Set i = 1 and mask m = 1 ∈ R^{||θ||_0}
2: Train f(θ_0 ⊙ m, θ_c^(0); x) with data from S and T_1
3: while 1 − ||m||_0/||θ||_0 ≤ P_m and i ≤ n do
4:   Run iterative weight magnitude pruning (IMP) with ratio ∆p, obtaining a new mask m̃ with ||m̃||_0 < ||m||_0
5:   Rewind the weights to θ_0
6:   m = m̃
7:   Retrain f(θ_0 ⊙ m, θ_c^(i); x) on the current task T_i and S; lottery teaching is applied (a knowledge distillation constraint with soft logits)
8:   Set i = i + 1
9: end while
Algorithm 2: Bottom-Up Pruning
Input: f(θ_0, θ_c^(0); x), P_m, x, soft logits, and ∆p as defined in Algorithm 1; f(θ_i, θ_c^(i); x) has learned T_{1∼i} and has performance R*_i, i ∈ {1, ..., n}
Output: an updated sparse model f(θ ⊙ m̃, θ_c^(n); x)
1: Set i = 1 and mask m̃ = 0 ∈ R^{||θ||_0}
2: Train f(θ_0 ⊙ m̃, θ_c^(0); x) with data from S and T_1; calculate accuracy R_1
3: while i ≤ n and ||m̃||_0 < ||θ||_0 do
4:   if R_i ≥ R*_i or ||m̃||_0 = ||θ||_0 then
5:     continue
6:   else
7:     Start from f(θ_i, θ_c^(i); x) with m = 1
8:     repeat
9:       Prune ∆p of θ_i ⊙ (m − m̃), obtaining a new mask m* with ||m*||_0 ≥ ||m̃||_0 and m̃ ⊆ m*
10:      Retrain f(θ_{i−1} ⊙ m*, θ_c^(i); x) and calculate accuracy R_i
11:      m = m*
12:    until R_i ≈ R*_i, then set m̃ = m*
13:  end if
14:  Set i = i + 1
15: end while
Our Dense Full CIL Model We consider a strong baseline framework derived from (Zhang et al., 2019), with our proposed lottery teaching, as our dense full CIL model. It adopts random and class-balanced sampling strategies, an auxiliary classifier, and the knowledge distillation regularizer R_KD. For incrementally learning task T_i, the training objective is:
L_CIL(θ, θ_c^(i), θ_a^(i)) = γ_2 × L(θ, θ_c^(i)) + L(θ, θ_a^(i)),
L(θ, θ_c^(i)) = E_{(x,y)∈D_b}[ L_XE(f(θ, θ_c^(i), x), y) ] + γ_1 × E_{x∈D_u}[ R_KD(f(θ, θ_c^(i), x), ŷ_c) ],
L(θ, θ_a^(i)) = E_{(x,y)∈D_r}[ L_XE(f(θ, θ_a^(i), x), y) ] + γ_1 × E_{x∈D_u}[ R_KD(f(θ, θ_a^(i), x), ŷ_a) ],
where D_b is the class-balanced sampled dataset, D_r is the randomly sampled dataset, and D_u is the queried unlabeled dataset. θ_c^(i) and θ_a^(i) are the main and auxiliary classifiers; ŷ_c and ŷ_a are the soft logits on previous tasks of the main and auxiliary classifiers. We adopt γ_1 = 1 and γ_2 = 0.5 in our experiments, according to grid search.
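A PyTorch-style reading of the two formulas above (our sketch, not the released implementation): inputs are probability vectors, the temperature softening follows t(y)_i with T = 2, and γ1 = 1, γ2 = 0.5 match the grid-searched values.

```python
import torch

def temper(p, T=2.0):
    """t(p)_i = p_i**(1/T) / sum_j p_j**(1/T): temperature-soften a
    probability vector (T = 2 as above)."""
    q = p.clamp_min(1e-12) ** (1.0 / T)
    return q / q.sum(dim=-1, keepdim=True)

def r_kd(y, y_hat, T=2.0):
    """R_KD(y, y_hat) = -sum_j t(y)_j * log t(y_hat)_j, averaged over the
    batch; y and y_hat are the current and previous soft outputs on an
    unlabeled batch."""
    return -(temper(y, T) * temper(y_hat, T).clamp_min(1e-12).log()).sum(-1).mean()

def l_cil(ce_main, kd_main, ce_aux, kd_aux, g1=1.0, g2=0.5):
    """L_CIL = g2 * (L_XE^main + g1 * R_KD^main) + (L_XE^aux + g1 * R_KD^aux)."""
    return g2 * (ce_main + g1 * kd_main) + (ce_aux + g1 * kd_aux)
```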
A2.3 MORE OTHER IMPLEMENTATION DETAILS
Datasets and Task Splittings We evaluate our proposed lifelong tickets on the CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets, all standard benchmarks for CIL (Krizhevsky & Hinton, 2009). For all three datasets, we randomly split the original training dataset into training and validation sets with a ratio of 9:1. On CIFAR-10, we divide the 10 classes into splits of 2 classes with a random order (10/2 = 5 tasks); on CIFAR-100, we divide the 100 classes into splits of 10 classes with a random order (100/10 = 10 tasks); on Tiny-ImageNet, we divide the 200 classes into splits of 20 classes with a random order (200/20 = 10 tasks). In this way, when models learn a new incoming task, the dimension of the classifier increases by 2 for CIFAR-10, 10 for CIFAR-100, and 20 for Tiny-ImageNet. Additionally, 100, 10, and 5 images per class of learned tasks are stored for CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively.
Unlabeled Dataset All queried unlabeled data for CIFAR-10/CIFAR-100 come from the 80 Million Tiny Images dataset (Torralba et al., 2008), and those for Tiny-ImageNet from the ImageNet dataset (Krizhevsky et al., 2012). At each incremental learning stage, 4,500, 450, and 450 images per class of learned tasks are queried for CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively, based on the feature similarity with stored prototypes, using {m^(i−1) ⊙ θ^(i−1), θ_c^(i−1)} in top-down pruning and {θ^(i−1), θ_c^(i−1)} in bottom-up pruning at the i-th CIL stage. The feature similarity is defined by the ℓ2-norm distance.
Training and Evaluation Models are trained using stochastic gradient descent (SGD) with 0.9 momentum and 5×10^−4 weight decay. For the 100-epoch training, a multi-step learning rate schedule is used, starting from 0.01 and decayed by a factor of 10 at epochs 60 and 80. During the iterative pruning, we retrain the model for 30 epochs using a fixed learning rate of 10^−4. The batch size for both labeled and unlabeled data is 128. We pick the trained model with the highest validation accuracy and report its performance on the hold-out testing set.
Other Training Details (i) CIFAR-10 and CIFAR-100 can be downloaded at https://www.cs.toronto.edu/~kriz/cifar.html. (ii) The 80 Million Tiny Images dataset is available at http://horatio.cs.nyu.edu/mit/tiny/data/index.html. (iii) All of our experiments are conducted on NVIDIA GTX 1080-Ti GPUs." }, { "heading": "A3 DISCUSSION", "text": "Challenges of Theoretical Analysis and Future Work The theoretical justification of the lottery ticket hypothesis is very limited, except for very shallow networks (Anonymous, 2021). In the meantime, class-incremental learning makes the theoretical analysis more difficult. It is a challenging lifelong learning problem, and the current progress lies on the empirical side rather than the theoretical side. The theoretical analysis is out of scope for this paper, and we would like to explore it in the future." } ]
2021
null
SP:a74439e1ce3691416cb8557a7662c20855b187ee
[ "The authors introduce a pretrianing paradigm based on contrastive learning between multiple syntactic views of the same sentence. The method maximizes representations between different setence encoders when given the same sentence, and minimize the similarity to all other sentence repre sentations. The results on the infersent benchmark show competitive performance of the approach when compared to non-syntactic pretraining methods." ]
We propose a self-supervised method that builds sentence embeddings from the combination of diverse explicit syntactic structures of a sentence. We assume structure is crucial to build consistent representations as we expect sentence meaning to be a function from both syntax and semantic aspects. In this perspective, we hypothesize that some linguistic representations might be better adapted given the considered task or sentence. We, therefore, propose to jointly learn individual representation functions for different syntactic frameworks. Again, by hypothesis, all such functions should encode similar semantic information differently and consequently, be complementary for building better sentential semantic embeddings. To assess such hypothesis, we propose an original contrastive multi-view framework that induces an explicit interaction between models during the training phase. We make experiments combining various structures such as dependency, constituency, or sequential schemes. We evaluate our method on standard sentence embedding benchmarks. Our results outperform comparable methods on several tasks.
[]
[ { "authors": [ "Mahtab Ahmed", "Muhammad Rifayat Samee", "Robert E. Mercer" ], "title": "Improving tree-lstm with tree attention", "venue": "IEEE International Conference on Semantic Computing,", "year": 2019 }, { "authors": [ "Sanjeev Arora", "Yingyu Liang", "Tengyu Ma" ], "title": "A simple but tough-to-beat baseline for sentence embeddings", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Philip Bachman", "R. Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Liu Chen", "Guangping Zeng", "Qingchuan Zhang", "Xingyu Chen" ], "title": "Tree-lstm guided attention pooling of DCNN for semantic sentence modeling. In 5G for Future Wireless Networks ", "venue": "First International Conference,", "year": 2017 }, { "authors": [ "Qian Chen", "Xiaodan Zhu", "Zhen-Hua Ling", "Si Wei", "Hui Jiang", "Diana Inkpen" ], "title": "Enhanced LSTM for natural language inference", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Alexis Conneau", "Douwe Kiela" ], "title": "Senteval: An evaluation toolkit for universal sentence representations", "venue": "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation,", "year": 2018 }, { "authors": [ "Alexis Conneau", "Douwe Kiela", "Holger Schwenk", "Loı̈c Barrault", "Antoine Bordes" ], "title": "Supervised learning of universal sentence representations from natural language inference data", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Timothy Dozat", "Christopher D. Manning" ], "title": "Deep biaffine attention for neural dependency parsing", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Lushan Han", "Abhay L. Kashyap", "Tim Finin", "James Mayfield", "Jonathan Weese" ], "title": "Umbc ebiquitycore: Semantic textual similarity systems", "venue": "In Proceedings of the Second Joint Conference on Lexical and Computational Semantics,", "year": 2013 }, { "authors": [ "Felix Hill", "Kyunghyun Cho", "Anna Korhonen" ], "title": "Learning distributed representations of sentences from unlabelled data", "venue": "In NAACL HLT", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Ryan Kiros", "Yukun Zhu", "Ruslan Salakhutdinov", "Richard S. Zemel", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Skip-thought vectors", "venue": "In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Nikita Kitaev", "Dan Klein" ], "title": "Constituency parsing with a self-attentive encoder. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018", "venue": "Volume 1: Long Papers,", "year": 2018 }, { "authors": [ "Fang Kong", "Guodong Zhou" ], "title": "Combining dependency and constituent-based syntactic information for anaphoricity determination in coreference resolution", "venue": "In Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation,", "year": 2011 }, { "authors": [ "Zhouhan Lin", "Minwei Feng", "Cı́cero Nogueira dos Santos", "Mo Yu", "Bing Xiang", "Bowen Zhou", "Yoshua Bengio" ], "title": "A structured self-attentive sentence embedding", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Lajanugen Logeswaran", "Honglak Lee" ], "title": "An efficient framework for learning sentence representations", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "In 1st International Conference on Learning Representations,", "year": 2013 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Gregory S. Corrado", "Jeffrey Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Allen Nie", "Erin Bennett", "Noah Goodman" ], "title": "Dissent: Learning sentence representations from explicit discourse relations", "venue": "In Proceedings of the 57th Conference of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing,", "year": 2014 }, { "authors": [ "Nils Reimers", "Iryna Gurevych" ], "title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP),", "year": 2019 }, { "authors": [ "Nikunj Saunshi", "Orestis Plevrakis", "Sanjeev Arora", "Mikhail Khodak", "Hrishikesh Khandeparkar" ], "title": "A theoretical analysis of contrastive unsupervised representation learning", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Kai Sheng Tai", "Richard Socher", "Christopher D. Manning" ], "title": "Improved semantic representations from tree-structured long short-term memory networks", "venue": "In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing,", "year": 2015 }, { "authors": [ "Aäron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": null, "year": 2018 }, { "authors": [ "Alex Wang", "Yada Pruksachatkun", "Nikita Nangia", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. 
Bowman" ], "title": "Superglue: A stickier benchmark for general-purpose language understanding systems", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance-level discrimination", "venue": null, "year": 1805 }, { "authors": [ "Han Zhao", "Zhengdong Lu", "Pascal Poupart" ], "title": "Self-adaptive hierarchical sentence model", "venue": "In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Yao Zhou", "Cong Liu", "Yan Pan" ], "title": "Modelling sentence pairs with tree-structured attentive encoder", "venue": "COLING", "year": 2016 }, { "authors": [ "Yukun Zhu", "Ryan Kiros", "Richard S. Zemel", "Ruslan Salakhutdinov", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "venue": "In 2015 IEEE International Conference on Computer Vision,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "We propose a self-supervised method that builds sentence embeddings from the combination of diverse explicit syntactic structures. Such a method aims at improving the ability of models to perform compositional knowledge. In particular, we evaluate the embedding potential to solve downstream tasks.\nBuilding generic sentence embeddings remains an open question. Many training methods have been explored: generating past and previous sentences (Kiros et al., 2015; Hill et al., 2016), discriminating context sentences (Logeswaran & Lee, 2018), predicting specific relations between pairs of sentences (Conneau et al., 2017; Nie et al., 2019). While all these methods propose efficient training objectives, they all rely on a similar RNN as encoder architecture. Nonetheless, model architectures have been subject to extensive work as well (Tai et al., 2015; Zhao et al., 2015; Arora et al., 2017; Lin et al., 2017), and in supervised frameworks, many encoder structures outperform standard RNN networks.\nWe hypothesize structure is a crucial element to perform compositional knowledge. In particular, the heterogeneity of performances given models and tasks makes us assume that some structures may be better adapted for a given example or task. Therefore, combining diverse structures should be more robust for tasks requiring complex word composition to derive their meaning. Hence, we aim here to evaluate the potential benefit from interactions between pairs of encoders. In particular, we propose a training method for which distinct encoders are learned jointly. We conjecture this association might improve our embeddings’ power of generalization and propose an experimental setup to corroborate our hypothesis.\nWe take inspiration from multi-view learning, which is successfully applied in a variety of domains. In such a framework, the model learns representations by aligning separate observations of the same object. Traditionally, views are issued from a complementary natural perception of the data. For example, a picture and a sound recording of a dog. However, it can be extended to any pair of samples that share similar semantic content, such as the translation of the same sentence in two different languages. The definition can be extended to synthetic views, which are derived from the same unimodal data. In our case, we derived multiple views from a single sentence by pairing it with\na distinct syntactic framework. We illustrated in Figure 2, two views derived from the same input sentence by applying respectively a constituent or dependency parser.\nAs proposed in image processing (Tian et al., 2019; Bachman et al., 2019), we propose to align the different views using a contrastive learning framework. Indeed, contrastive learning is broadly used in NLP Mikolov et al. (2013b;a); Logeswaran & Lee (2018). We proposed to enhance the sentence embedding framework proposed in Logeswaran & Lee (2018) with a multi-view paradigm. As detailed in Section 2, composing multiple views has demonstrated its effectiveness in many NLP applications. However, as far as we are aware, combining distinct structured models to build standalone embeddings has not yet be explored. Nevertheless, this paradigm benefits from several structural advantages: as already mentioned, it pairs nicely with contrastive learning. It might thus be trained in a self-supervised manner that does not require data annotation. 
Moreover, contrary to the models presented in Section 2, our method is not specific to a certain kind of encoder architecture. It does not require, for example, the use of attention layers or tree-structured models. More generally, it could be extended to any notion of view, even in domains other than language processing, and our setup could therefore be instantiated with any encoding function. Finally, our training method induces an interaction between models during inference and, most importantly, during the training phase.
Our paper is organized as follows: we detail our contrastive multi-view framework in Section 3. In Section 4, we evaluate our framework on standard benchmarks and propose a qualitative analysis of our embeddings." }, { "heading": "2 RELATED WORK", "text": "Multi-view learning is used effectively in a broad variety of domains. In image processing, some methods learn representations by filling in missing parts of an image or by solving jigsaw puzzles. For video, Tian et al. (2019) propose to build image tuples using video frames and flow. For audio, van den Oord et al. (2018) maximize the mutual information between embeddings of the signal at different time steps.
Regarding NLP, combining different structural views has already proven successful. Kong & Zhou (2011) provide a heuristic to combine dependency and constituency analysis for coreference resolution. Zhou et al. (2016) and Ahmed et al. (2019) combine Tree LSTM and standard sequential LSTM with a cross-attention method and observe improvements on a semantic textual similarity task. Chen et al. (2017a) combine a CNN and a Tree LSTM using attention methods on a sentiment classification task, and the combination outperforms both the Tree LSTM and the CNN taken separately. Finally, Chen et al. (2017b) combine sequential LSTM and Tree LSTM for natural language inference tasks. However, to our knowledge, combining distinct structured models in a contrastive learning setup has not previously been attempted for building sentence embeddings." }, { "heading": "3 METHOD", "text": "Given a sentence s, the model aims at discriminating the sentences s+ in the neighborhood of s from sentences s− outside of this neighborhood. This is contrastive learning (Section 3.1). The representation of each sentence is acquired using multiple views (Section 3.2)." }, { "heading": "3.1 CONTRASTIVE LEARNING", "text": "Contrastive learning is successfully applied in a variety of domains, including audio (van den Oord et al., 2018), image (Wu et al., 2018; Tian et al., 2019), video, and natural language processing for word embeddings (Mikolov et al., 2013b) or sentence embeddings (Logeswaran & Lee, 2018). Some mathematical foundations are detailed in Saunshi et al. (2019). The idea is to build a dataset such that each sample x is paired with another sample x+ that is semantically close. For word or sentence embeddings, close samples are the words or sentences appearing in the same textual context. For image processing, close samples might be two different parts of the same image. Systems are trained to bring close samples together while dispersing negative examples.
In particular, a sentence embedding framework along these lines is proposed by Logeswaran & Lee (2018). The method takes inspiration from the distributional hypothesis, successfully applied to words, but this time used to identify context sentences. The network is trained using a contrastive method.
Given a sentence s, a corresponding context sentence s+ and a set of K negative samples s−1, . . . , s−K, the training objective is to maximize the probability of discriminating the correct sentence among the negative samples: p(s+ | s, s−1, . . . , s−K).
The architecture used to estimate p is close to word2vec (Mikolov et al., 2013b;a). Two sentence encoders f and g are defined, and the conditional probability is estimated as follows:
$$p(s^+ \mid s, s^-_1, \ldots, s^-_K) = \frac{e^{f(s)^\top g(s^+)}}{e^{f(s)^\top g(s^+)} + \sum_{i=1}^{K} e^{f(s)^\top g(s^-_i)}}$$
At inference time, the sentence representation is obtained as the concatenation of the two encoders f and g, i.e. s → [f(s); g(s)]. In Logeswaran & Lee (2018), f and g are chosen to be identical and consist of two RNNs. However, the authors observe that the encoders might learn redundant features. To limit this effect, they propose to use a distinct set of word embeddings for each encoder.
We propose to address this aspect by enhancing the method with a multi-view framework and using a distinct structured model for each of the encoders f and g. We hypothesize that some structures may be better adapted to a given example or task. For example, Figure 2 illustrates that dependency parsing uses the verb "filled" as the root node, whereas in constituency parsing the subject and the verb are the right and left children of the root node, respectively. Therefore, the combination of different structures should be more robust for tasks requiring complex word composition and less sensitive to lexical variations. Consequently, we propose a training procedure that allows the model to benefit from the interaction of various syntactic structures. The choice of encoder architectures is detailed in the following section." },
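To make the objective above concrete, here is a minimal PyTorch sketch of the contrastive scoring step. It is an illustration under the assumption that, as in Logeswaran & Lee (2018), the other sentences of a batch serve as negative samples; the names f_emb and g_emb are ours and stand for the outputs of any two sentence encoders.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(f_emb: torch.Tensor, g_emb: torch.Tensor) -> torch.Tensor:
    """Quick-Thought-style contrastive loss over a batch.

    f_emb: (B, D) anchor sentence embeddings from encoder f.
    g_emb: (B, D) context sentence embeddings from encoder g; row i is the
           positive for anchor i, and every other row acts as a negative sample.
    """
    scores = f_emb @ g_emb.t()                                   # f(s)^T g(s') for all pairs
    targets = torch.arange(f_emb.size(0), device=f_emb.device)   # positives lie on the diagonal
    # Softmax cross-entropy realizes p(s+ | s, s-_1 ... s-_K).
    return F.cross_entropy(scores, targets)
```

In the multi-view setting studied here, f and g would simply be instantiated with two different structured encoders, e.g. a dependency Tree LSTM and a sequential LSTM.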
{ "heading": "3.2 LANGUAGE VIEWS", "text": "Multi-view learning aims at learning representations from data described by multiple independent sets of features. The method is not specific to any particular kind of data and can be applied to a broad range of domains, making it an efficient framework for self-supervised representation learning. As described in Section 1, we generalize the notion of a view of a sentence as the application of a specific syntactic framework. For each view, we use an ad-hoc algorithm that maps the structured representation of the sentence into an embedding space.
For tree-based views, we consider both phrase structure trees and dependency trees. The phrase structure of a sentence is represented as nested multi-word constituents, while the dependency tree represents the relationships between individual words. Although equivalences might be derived between the two representation schemes, we hypothesize that, in our context, the corresponding sequences of operations capture rather distinct linguistic properties. The various models may therefore be complementary, and their combination allows for more fine-grained analysis. Together with the trees, we consider the following views of a sentence.
Bag Of Words (BOW) This setup does not assume any underlying structure. The sentence is modeled as an unordered set of words, and the associated encoding method is a simple commutative sum of the word embeddings. In our case, vectors are initialized with the publicly available GloVe vectors (Pennington et al., 2014; https://nlp.stanford.edu/projects/glove/). We use 300-dimensional word vectors trained on the Common Crawl dataset (840B tokens) with a vocabulary of 2.2M case-sensitive words.
Vanilla LSTM (SEQ) This view assumes a sequential structure where each word depends on the previous words in the sentence. The encoder is a bidirectional sequential LSTM (Hochreiter & Schmidhuber, 1997); the concatenation of the last forward and backward hidden states of the model is used as the sequence embedding.
Dependency tree (DEP) In the dependency tree model, words are connected through dependency edges, and a word might have an arbitrary number of dependents. As illustrated in Figure 2, the sentence can be represented as a tree where nodes correspond to words and edges indicate whether the words are connected in the dependency tree. In our case, the dependency tree is obtained using the deep biaffine parser from Dozat & Manning (2017); we use an open-source implementation of the parser (https://github.com/yzhangcs/biaffine-parser) and replace the POS-tag features with features obtained with BERT, so that no POS-tag annotations are needed to parse our corpus. For this view, we compute sentence embeddings with the Child-Sum Tree LSTM model described in Tai et al. (2015): each node is assigned an embedding given its dependents through a recursive function. The recursive node function is derived from standard LSTM formulations but adapted to tree inputs. In particular, the hidden state is computed as the sum of all children hidden states:
$$\tilde{h}_j = \sum_{k \in C(j)} h_k \quad (1)$$
with C(j) the set of children of node j. All equations are detailed in Tai et al. (2015). However, we slightly modify the computation of $\tilde{h}_j$ using Equation 2. As in Zhou et al. (2016), we propose to compute $\tilde{h}_j$ as a weighted sum of children vectors, in order to allow the model to filter out semantically less relevant children:
$$\tilde{h}_j = \sum_{k \in C(j)} \alpha_{kj} h_k \quad (2)$$
The parameters $\alpha_{kj}$ are attention weights computed using a soft attention layer. Given a node j with children hidden states $h_1, h_2, \ldots, h_n$, the soft attention layer produces a weight $\alpha_{kj}$ for each child's hidden state. We do not use any external query to compute the attention but instead use a projection of the current node embedding. The attention mechanism is detailed in the equations below (a small sketch of this attention pooling is given at the end of this section):
$$q_j = W^{(q)} x_j + b^{(q)} \quad (3)$$
$$p_k = W^{(p)} h_k + b^{(p)} \quad (4)$$
$$a_{kj} = \frac{q_j \cdot p_k^\top}{\|q_j\|_2 \cdot \|p_k\|_2} \quad (5)$$
$$\alpha_{kj} = \mathrm{softmax}_k(a_{1j} \cdots a_{nj}) \quad (6)$$
The embedding at the root of the tree is used as the sentence embedding, as the Tree LSTM computes representations bottom-up.
Constituency tree (CONST) Constituency analysis describes the sentence as a nested multi-word structure in which words are grouped recursively into constituents. In the resulting tree, only leaf nodes correspond to words, while internal nodes recursively encode word sequences. The structure is obtained using the constituency neural parser from Kitaev & Klein (2018). This view is associated with the N-ary Tree LSTM defined in Tai et al. (2015). Similarly to the original article, we binarize the trees to ensure that every node has exactly two dependents; the binarization is performed using left markovization, and unary productions are collapsed into a single node. Again, the representation is computed bottom-up, and the embedding of the tree root node is used as the sentence embedding. The equations detailed in Tai et al. (2015) distinguish between right and left children, so we do not enhance this architecture with a weighted sum as in the DEP view." },
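The attention-weighted child pooling of Equations (2)-(6) can be sketched as follows. This is an illustrative PyTorch module under our own naming, not the authors' released implementation, and it only covers the pooling step rather than the full Tree LSTM cell:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChildAttentionPooling(nn.Module):
    """Soft attention over children hidden states, per Eqs. (2)-(6)."""

    def __init__(self, dim: int):
        super().__init__()
        self.w_q = nn.Linear(dim, dim)  # Eq. (3): query from the node input x_j
        self.w_p = nn.Linear(dim, dim)  # Eq. (4): key from each child state h_k

    def forward(self, x_j: torch.Tensor, h_children: torch.Tensor) -> torch.Tensor:
        # x_j: (D,) current node embedding; h_children: (n, D) children states.
        q = self.w_q(x_j)                                   # (D,)
        p = self.w_p(h_children)                            # (n, D)
        a = F.cosine_similarity(q.unsqueeze(0), p, dim=-1)  # Eq. (5): cosine scores
        alpha = torch.softmax(a, dim=0)                     # Eq. (6)
        return (alpha.unsqueeze(-1) * h_children).sum(0)    # Eq. (2): weighted child sum
```

The resulting vector replaces the plain child sum of Equation (1) inside the Child-Sum Tree LSTM update.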
{ "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 TRAINING CONFIGURATION", "text": "We train our models on the UMBC dataset (Han et al., 2013); the BookCorpus (Zhu et al., 2015) traditionally used for sentence embedding is no longer distributed, so we prefer a freely available corpus (the UMBC dataset is available at https://ebiquity.umbc.edu/blogger/2013/05/01/umbc-webbase-corpus-of-3b-english-words/). Model hyperparameters such as the hidden size, and optimization settings such as the learning rate, are fixed following the literature on comparable work, in particular Tai et al. (2015) and Logeswaran & Lee (2018), and are detailed in the Appendix. We filter 40M sentences from the tokenized corpus and build batches from successive sentences. Given a sentence in a batch, the other sentences not in its context are considered negative samples, as presented in Section 3.1.
For the vocabulary, we follow the setup proposed in Logeswaran & Lee (2018) and train two models in each configuration. One is initialized with pre-trained embedding vectors: the vectors are not updated during training, and the vocabulary includes the top 2M cased words from the 300-dimensional GloVe embeddings (Pennington et al., 2014). The other is limited to 50K words, initialized with a Xavier distribution and updated during training; for inference, its vocabulary is expanded to 2M words using the linear projection proposed in Logeswaran & Lee (2018) and Kiros et al. (2015).
All models are trained using a batch size of 400 and the Adam optimizer with a 5e−4 learning rate. Regarding the infrastructure, we use an Nvidia GTX 1080 Ti GPU. All model weights are initialized with a Xavier distribution and biases are set to 0. We do not apply any dropout. For the SEQ view, we use GRU cells instead of LSTM cells to match the implementation of the Logeswaran & Lee (2018) setup." }, { "heading": "4.2 EVALUATION ON DOWNSTREAM TASKS", "text": "As usual for models aiming to build generic sentence embeddings (Kiros et al., 2015; Hill et al., 2016; Arora et al., 2017; Conneau et al., 2017; Logeswaran & Lee, 2018; Nie et al., 2019), we use the SentEval benchmark. SentEval is specifically designed to assess the quality of the embeddings themselves, rather than the quality of a model targeting a specific downstream task, as is the case for the GLUE and SuperGLUE benchmarks (Wang et al., 2019b;a). Indeed, the evaluation protocol prevents fine-tuning of the model, and the architecture used to tackle the downstream tasks is kept minimal. Moreover, the embeddings are kept identical for all tasks, thus assessing their generalization properties.
Therefore, classification tasks from the SentEval benchmark are commonly used for evaluating sentence representations (Conneau & Kiela, 2018); the SentEval tool and data are freely available at https://github.com/facebookresearch/SentEval. The tasks include sentiment and subjectivity analysis (MR, CR, SUBJ, MPQA), question type classification (TREC), paraphrase identification (MRPC) and semantic relatedness (SICK-R). Contrasting the results of our models on this set of tasks helps to better understand their properties.
We use either the pre-defined train/dev/test splits or perform a 10-fold cross-validation. We follow the linear evaluation protocol of Kiros et al. (2015), where a logistic regression or softmax classifier is trained on top of the frozen sentence representations. The dev set is used for choosing the regularization parameter, and results are reported on the test set."
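For illustration, a minimal version of this linear evaluation protocol might look as follows; the regularization grid is our own assumption (SentEval handles this internally), and the embeddings are treated as fixed features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe(train_x, train_y, dev_x, dev_y, test_x, test_y):
    """Fit a logistic regression on frozen sentence embeddings, choose the
    regularization strength on the dev split, and report test accuracy."""
    best_acc, best_clf = -1.0, None
    for c in [2.0 ** k for k in range(-5, 6)]:   # assumed regularization grid
        clf = LogisticRegression(C=c, max_iter=1000).fit(train_x, train_y)
        acc = clf.score(dev_x, dev_y)
        if acc > best_acc:
            best_acc, best_clf = acc, clf
    return best_clf.score(test_x, test_y)
```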
}, { "heading": "4.3 EVALUATION ON DOWNSTREAM TASKS", "text": "We compare the properties of distinct views combination on downstream tasks and report the results with respect to comparable state of the art methods in Table 1. The first set of methods (Context sentences prediction) relies on a distributional hypothesis: models are trained to reconstruct books storyline.\nThe second set of models (Sentence relations prediction) is pre-trained on a supervised task. Infersent Conneau et al. (2017) is trained on the SNLI dataset, which proposes to predict the entailment relation between two sentences. DisSent Nie et al. (2019) proposes a generalization of the method and builds a corpus of sentence pairs with more possible relations between them. Finally, we include models relying on transformer architectures (Pre-trained transformers) for comparison. In particular, BERT-base model and a BERT-model fine-tuned on the SNLI dataset Reimers & Gurevych (2019). In Table 1, we observe that our models expressing a combination of views such as (DEP, SEQ) or (DEP, CONST) give better results than the use of the same view (SEQ, SEQ) used in Quick-Thought model. It seems that the entanglement of views benefits the sentence embedding properties. In particular, we obtain state-of-the-art results for almost every metric from MRPC and SICK-R tasks, which focus on paraphrase identification. For the MRPC task, we gain a full point in accuracy and outperform BERT models. We hypothesize structure is important for achieving this task, especially as the dataset is composed of rather long sentences. The SICK-R dataset is structurally designed to discriminate models that rely on compositional operations.\nThis also explains the score improvement on this task. Tasks such as MR, CR or MPQA consist in sentiment or subjectivity analysis. We hypothesize that our models are less relevant in this case: such tasks are less sensitive to structure and depend more on individual word or lexical variation.\nWe finally observe our setup is competitive with models trained on broader datasets. Indeed, we use the publicly available UMBC corpus and limited the size to 40M sentences. In comparison, the BookCorpus used in Kiros et al. (2015); Logeswaran & Lee (2018) consists in 74M sentences." }, { "heading": "4.4 QUALTITATIVE RESULTS", "text": "We analyze the embeddings from a qualitative perspective and explore the sentences from the SICKR test set. We retrieved the closest neighbors using cosine distance. We compare the results with the Quick-thought model. We illustrated in Table 2 a panel of examples presenting interesting linguistic properties. Models seem somehow robust to adjective expansions illustrated in the first examples. Indeed, the closest expression from ”A black bird ” is ”A bird , which is black”. However the second\nretrieved sentence is semantically correct for the CONST, SEQ association only. Quick-thought and DEP, CONST present a weakness toward word scrambling for this specific example. We investigate passive forms in the second example. The CONST, SEQ and Quickthougth models seem to attach to much weight to the sentence syntax rather than the semantic. This time the association of DEP and CONST views retrieve to corresponding active sentences. Finally, we observe how models behave when facing numeric information. 
Interestingly, Quick-thought and the DEP, CONST pair are able to bring together the "crowd" and "group" notions.
From a graphical perspective, we project the sentences from the SUBJ task, for which we obtained state-of-the-art results, into two dimensions. We use the UMAP algorithm for dimensionality reduction and compare our multi-view setup with the Quick-thought model. The projection is illustrated in Figure 3. While the figure does not reveal any critical distinction between models, samples appear well separated in both cases." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "Inspired by linguistic insights and work on supervised learning, we hypothesize that structure is a central element in building sentence embeddings. In Section 3, we propose an original contrastive multi-view framework that builds embeddings from the interaction of various structured models.
In Section 4, we assess the quality of our embeddings by using them as features on the dedicated SentEval benchmark. We obtain state-of-the-art results on tasks which are expected, by hypothesis, to be more sensitive to sentence structure. Exploring our representation space from a qualitative perspective, we observe that the chosen pair of views might affect the reaction to linguistic perturbations such as passive forms or adjective expansions. Some view pairs indeed appear more robust on some specific examples." } ]
2020
null
SP:10b339c326238eeef479079dbe713af4ef5b2d92
[ "This work focuses on dynamic programming in the tabular setting. It proposes to use Hamiltonian Monte-Carlo (HMC) to sample the next states (instead of IID samples) and matrix completion to learn a low-rank Q matrix. It shows theoretical convergence. Experiments on discretized problems (CartPole and an ocean sampling problem) show that HMC and low-rank learning can behave more benignly compared to IID samples." ]
Model-free reinforcement learning (RL), in particular Q-learning, is widely used to learn optimal policies for a variety of planning and control problems. However, when the underlying state-transition dynamics are stochastic and high-dimensional, Q-learning requires a large amount of data and incurs a prohibitively high computational cost. In this paper, we introduce Hamiltonian Q-Learning, a data-efficient modification of the Q-learning approach, which adopts an importance-sampling based technique for computing the Q function. To exploit the stochastic structure of the state-transition dynamics, we employ Hamiltonian Monte Carlo to update Q function estimates by approximating the expected future rewards using Q values associated with a subset of next states. Further, to exploit the latent low-rank structure of the dynamical system, Hamiltonian Q-Learning uses a matrix completion algorithm to reconstruct the updated Q function from Q value updates over a much smaller subset of state-action pairs. By providing an efficient way to apply Q-learning in stochastic, high-dimensional problems, the proposed approach broadens the scope of RL algorithms for real-world applications, including classical control tasks and environmental monitoring.
[]
[ { "authors": [ "Zafarali Ahmed", "Nicolas Le Roux", "Mohammad Norouzi", "Dale Schuurmans" ], "title": "Understanding the impact of entropy on policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Andrew F Bennett" ], "title": "Inverse modeling of the ocean and atmosphere", "venue": null, "year": 2005 }, { "authors": [ "Dimitri P Bertsekas" ], "title": "Dynamic Programming and Optimal Control, volume 1", "venue": "Athena Scientific Belmont, MA,", "year": 1995 }, { "authors": [ "Michael Betancourt", "Simon Byrne", "Sam Livingstone", "Mark Girolami" ], "title": "The geometric foundations of Hamiltonian", "venue": "Monte Carlo. Bernoulli,", "year": 2017 }, { "authors": [ "Jacob Buckman", "Danijar Hafner", "George Tucker", "Eugene Brevdo", "Honglak Lee" ], "title": "Sampleefficient reinforcement learning with stochastic ensemble value expansion", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Emmanuel J Candes", "Yaniv Plan" ], "title": "Matrix completion with noise", "venue": "Proceedings of the IEEE,", "year": 2010 }, { "authors": [ "Yudong Chen", "Yuejie Chi" ], "title": "Harnessing structures in big data via guaranteed low-rank matrix estimation: Recent theory and fast algorithms via convex and nonconvex optimization", "venue": "IEEE Signal Processing Magazine,", "year": 2018 }, { "authors": [ "Augustin Chevallier", "Sylvain Pion", "Frédéric Cazals" ], "title": "Hamiltonian Monte Carlo with boundary reflections, and application to polytope volume", "venue": null, "year": 2018 }, { "authors": [ "Richard Dearden", "Nir Friedman", "Stuart Russell" ], "title": "Bayesian q-learning", "venue": "In Aaai/iaai, pp", "year": 1998 }, { "authors": [ "Marc Deisenroth", "Carl E Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In International Conference on Machine Learning (ICML),", "year": 2011 }, { "authors": [ "Yan Duan", "Xi Chen", "Rein Houthooft", "John Schulman", "Pieter Abbeel" ], "title": "Benchmarking deep reinforcement learning for continuous control", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Jianqing Fan", "Bai Jiang", "Qiang Sun" ], "title": "Hoeffding’s lemma for markov chains and its applications to statistical learning", "venue": "arXiv preprint arXiv:1802.00211,", "year": 2018 }, { "authors": [ "Matthieu Geist", "Olivier Pietquin" ], "title": "Algorithmic survey of parametric value function approximation", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2013 }, { "authors": [ "Susan Holmes", "Simon Rubinstein-Salzedo", "Christof Seiler" ], "title": "Curvature and concentration of Hamiltonian Monte Carlo in high dimensions", "venue": null, "year": 2014 }, { "authors": [ "Heejin Jeong", "Clark Zhang", "George J Pappas", "Daniel D Lee" ], "title": "Assumed density filtering q-learning", "venue": "arXiv preprint arXiv:1712.03333,", "year": 2017 }, { "authors": [ "Jeff Johns", "Sridhar Mahadevan" ], "title": "Constructing basis functions from directed graphs for value function approximation", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Eugenia Kalnay" ], "title": "Atmospheric modeling, data assimilation and predictability", "venue": null, "year": 2003 }, { "authors": [ "Sanket Kamthe", "Marc Deisenroth" ], "title": "Data-efficient reinforcement learning with 
probabilistic model predictive control", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2018 }, { "authors": [ "Jens Kober", "J Andrew Bagnell", "Jan Peters" ], "title": "Reinforcement learning in robotics: A survey", "venue": "The International Journal of Robotics Research,", "year": 2013 }, { "authors": [ "Alec Koppel", "Ekaterina Tolstaya", "Ethan Stump", "Alejandro Ribeiro" ], "title": "Nonparametric stochastic compositional gradient descent for q-learning in continuous markov decision problems", "venue": "arXiv preprint arXiv:1804.07323,", "year": 2018 }, { "authors": [ "Su Young Lee", "Choi Sungik", "Sae-Young Chung" ], "title": "Sample-efficient deep reinforcement learning via episodic backward update", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Naomi E. Leonard", "Derek A. Paley", "Francois Lekien", "Rodolphe Sepulchre", "David M. Fratantoni", "Russ E. Davis" ], "title": "Collective motion, sensor networks, and ocean sampling", "venue": "Proceedings of the IEEE,", "year": 2007 }, { "authors": [ "Jun S Liu" ], "title": "Metropolized independent sampling with comparisons to rejection sampling and importance sampling", "venue": "Statistics and Computing,", "year": 1996 }, { "authors": [ "TWU Madhushani", "DH Sanjeeva Maithripala", "Jordan M Berg" ], "title": "Feedback regularization and geometric pid control for trajectory tracking of mechanical systems: Hoop robots on an inclined plane", "venue": "American Control Conference (ACC),", "year": 2017 }, { "authors": [ "DHS Maithripala", "TWU Madhushani", "JM Berg" ], "title": "A Geometric PID Control Framework for Mechanical Systems", "venue": null, "year": 2016 }, { "authors": [ "Hongzi Mao", "Mohammad Alizadeh", "Ishai Menache", "Srikanth Kandula" ], "title": "Resource management with deep reinforcement learning", "venue": "In Proceedings of the 15th ACM Workshop on Hot Topics in Networks,", "year": 2016 }, { "authors": [ "Rowan McAllister", "Carl Edward Rasmussen" ], "title": "Data-efficient reinforcement learning in continuous state-action Gaussian-POMDPs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Francisco S Melo" ], "title": "Convergence of Q-learning: A simple proof", "venue": "Institute Of Systems and Robotics, Tech. Rep, pp", "year": 2001 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Radford M Neal" ], "title": "MCMC using Hamiltonian dynamics", "venue": "Handbook of Markov Chain Monte Carlo,", "year": 2011 }, { "authors": [ "Kirill Neklyudov", "Max Welling", "Evgenii Egorov", "Dmitry Vetrov" ], "title": "Involutive MCMC: A Unifying Framework", "venue": null, "year": 2006 }, { "authors": [ "Hao Yi Ong" ], "title": "Value function approximation via low-rank models", "venue": null, "year": 2015 }, { "authors": [ "Yunpeng Pan", "Evangelos Theodorou" ], "title": "Probabilistic differential dynamic programming", "venue": "In Advances in Neural Information Processing Systems, pp. 
1907–1915,", "year": 2014 }, { "authors": [ "Gilad Refael", "Amir Degani" ], "title": "A single-actuated swimming robot: Design, modelling, and experiments", "venue": "Journal of Intelligent & Robotic Systems,", "year": 2019 }, { "authors": [ "Devavrat Shah", "Dogyoon Song", "Zhi Xu", "Yuzhe Yang" ], "title": "Sample efficient reinforcement learning via low-rank matrix estimation", "venue": null, "year": 2006 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of Go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Elena Smirnova", "Elvis Dohmatob" ], "title": "On the convergence of smooth regularized approximate value iteration schemes", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Alexandre B Tsybakov" ], "title": "Introduction to nonparametric estimation", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Christopher JCH Watkins" ], "title": "Learning from delayed rewards", "venue": "PhD thesis, King’s College,", "year": 1989 }, { "authors": [ "Zaiwen Wen", "Wotao Yin", "Yin Zhang" ], "title": "Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm", "venue": "Mathematical Programming Computation,", "year": 2012 }, { "authors": [ "Yangyang Xu", "Ruru Hao", "Wotao Yin", "Zhixun Su" ], "title": "Parallel matrix factorization for low-rank tensor completion", "venue": null, "year": 2013 }, { "authors": [ "Wenhao Yang", "Xiang Li", "Zhihua Zhang" ], "title": "A regularized approach to sparse optimal policy in reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yuxiang Yang", "Ken Caluwaerts", "Atil Iscen", "Tingnan Zhang", "Jie Tan", "Vikas Sindhwani" ], "title": "Data efficient reinforcement learning for legged robots", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Yuzhe Yang", "Guo Zhang", "Zhi Xu", "Dina Katabi" ], "title": "Harnessing structures for value-based planning and reinforcement learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Zhenpeng Zhou", "Xiaocheng Li", "Richard N Zare" ], "title": "Optimizing chemical reactions with deep reinforcement learning", "venue": "ACS Central Science,", "year": 2017 } ]
[ { "heading": null, "text": "Model-free reinforcement learning (RL), in particular Q-learning is widely used to learn optimal policies for a variety of planning and control problems. However, when the underlying state-transition dynamics are stochastic and high-dimensional, Q-learning requires a large amount of data and incurs a prohibitively high computational cost. In this paper, we introduce Hamiltonian Q-Learning, a data efficient modification of the Q-learning approach, which adopts an importance-sampling based technique for computing the Q function. To exploit stochastic structure of the state-transition dynamics, we employ Hamiltonian Monte Carlo to update Q function estimates by approximating the expected future rewards using Q values associated with a subset of next states. Further, to exploit the latent low-rank structure of the dynamic system, Hamiltonian Q-Learning uses a matrix completion algorithm to reconstruct the updated Q function from Q value updates over a much smaller subset of state-action pairs. By providing an efficient way to apply Qlearning in stochastic, high-dimensional problems, the proposed approach broadens the scope of RL algorithms for real-world applications, including classical control tasks and environmental monitoring." }, { "heading": "1 INTRODUCTION", "text": "In recent years, reinforcement learning (Sutton & Barto, 2018) have achieved remarkable success with sequential decision making tasks especially in complex, uncertain environments. RL algorithms have been widely applied to a variety of real world problems, such as resource allocation (Mao et al., 2016), chemical process optimization (Zhou et al., 2017), automatic control (Duan et al., 2016), and robotics (Kober et al., 2013). Existing RL techniques often offer satisfactory performance only when it is allowed to explore the environment long enough and generating a large amount of data in the process (Mnih et al., 2015; Kamthe & Deisenroth, 2018; Yang et al., 2020a). This can be prohibitively expensive and thereby limits the use of RL for complex decision support problems.\nQ-Learning (Watkins, 1989; Watkins & Dayan, 1992) is a model-free RL framework that captures the salient features of sequential decision making, where an agent, after observing current state of the environment, chooses an action and receives a reward. The action chosen by the agent is based on a policy defined by the state-action value function, also called the Q function. Performance of such policies strongly depends on the accessibility of a sufficiently large data set covering the space spanned by the state-action pairs. In particular, for high-dimensional problems, existing model-free RL methods using random sampling techniques leads to poor performance and high computational cost. To overcome this challenge, in this paper we propose an intelligent sampling technique that exploits the inherent structures of the underlying space related to the dynamics of the system.\nIt has been observed that formulating planning and control tasks in a variety of dynamical systems such as video games (Atari games), classical control problems (simple pendulum, cart pole and double integrator) and adaptive sampling (ocean sampling, environmental monitoring) as Q-Learning problems leads to low-rank structures in the Q matrix (Ong, 2015; Yang et al., 2020b; Shah et al., 2020). 
Since these systems naturally consist of a large number of states, efficient exploitation of the low-rank structure of the Q matrix can potentially lead to a significant reduction in computational complexity and improved performance. However, when the state space is high-dimensional and, further, the state transition is probabilistic, the high computational complexity associated with calculating the expected Q values of next states renders existing Q-Learning methods impractical.
A potential solution to this problem lies in approximating the expectation of the Q values of next states with the sample mean of Q values over a subset of next states. A natural way to select a subset of next states is by drawing IID samples from the transition probability distribution. However, this straightforward approach becomes challenging when the state transition probability distribution is high-dimensional and known only up to a constant. We address this problem by using Hamiltonian Monte Carlo (HMC) to sample next states; HMC draws samples by integrating a Hamiltonian dynamics governed by the transition probability (Neal et al., 2011). We improve data efficiency further by using matrix completion methods to exploit the low-rank structure of the Q matrix.
RELATED WORK
Data efficient Reinforcement Learning: The last decade has witnessed a growing interest in improving the data efficiency of RL methods by exploiting emergent global structures of the underlying system dynamics. Deisenroth & Rasmussen (2011); Pan & Theodorou (2014); Kamthe & Deisenroth (2018); Buckman et al. (2018) have proposed model-based RL methods that improve data efficiency by explicitly incorporating prior knowledge about the state transition dynamics of the underlying system. Dearden et al. (1998); Koppel et al. (2018); Jeong et al. (2017) propose Bayesian methods to approximate the Q function. Ong (2015) and Yang et al. (2020b) consider model-free RL approaches that exploit structures of the state-action value function. The work by Ong (2015) decomposes the Q matrix into a low-rank plus sparse matrix model and uses matrix completion methods (Candes & Plan, 2010; Wen et al., 2012; Chen & Chi, 2018) to improve data efficiency. More recent work by Yang et al. (2020b) has shown that incorporating low-rank matrix completion methods for recovering the Q matrix from a small subset of Q values can improve the learning of optimal policies: at each time step the agent chooses a subset of state-action pairs and updates their Q values according to the Bellman optimality equation, which considers a discounted average between the reward and the expectation of the Q values of next states. Shah et al. (2020) extend this work by proposing a novel matrix estimation method and providing theoretical guarantees for convergence to an ε-optimal Q function. On the other hand, entropy regularization (Ahmed et al., 2019; Yang et al., 2019; Smirnova & Dohmatob, 2020), by penalizing excessive randomness in the conditional distribution of actions for a given state, provides an alternative means to implicitly exploit the underlying low-dimensional structure of the value function. Lee et al. (2019) propose an approach that samples a whole episode and then updates values in a recursive, backward manner.
CONTRIBUTION
The main contribution of this work is three-fold. First, we introduce a modified Q-learning framework, called Hamiltonian Q-Learning, which uses HMC sampling for efficient computation of Q values.
This innovation, by proposing to sample Q values from the region with the dominant contribution to the expectation of discounted reward, provides a data-efficient approach for using Q-learning in real-world problems with high-dimensional state spaces and probabilistic state transitions. Furthermore, the integration of this sampling approach with matrix completion enables us to update Q values for only a small subset of state-action pairs and thereafter reconstruct the complete Q matrix. Second, we provide theoretical guarantees that the error between the optimal Q function and the Q function obtained by updating Q values using HMC sampling can be made arbitrarily small. This result also holds when only a handful of Q values are updated using HMC and the rest are estimated using matrix completion. Further, we provide a theoretical guarantee that the sampling complexity of our algorithm matches the mini-max sampling complexity proposed by Tsybakov (2008). Finally, we demonstrate the effectiveness of Hamiltonian Q-Learning by applying it to a cart-pole stabilization problem and an adaptive ocean sampling problem. Our results also indicate that the proposed approach becomes more effective with increasing state space dimension." }, { "heading": "2 PRELIMINARY CONCEPTS", "text": "In this section, we provide a brief background on Q-Learning, HMC sampling and matrix completion, and introduce the mathematical notation. In this paper, $|Z|$ denotes the cardinality of a set $Z$; moreover, $\mathbb{R}$ represents the real line and $A^\top$ denotes the transpose of a matrix $A$.
2.1 Q-LEARNING
A Markov Decision Process (MDP) is a mathematical formulation that captures the salient features of sequential decision making (Bertsekas, 1995). In particular, a finite MDP is defined by the tuple $(\mathcal{S}, \mathcal{A}, \mathsf{P}, r, \gamma)$, where $\mathcal{S}$ is the finite set of system states, $\mathcal{A}$ is the finite set of actions, $\mathsf{P}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ is the transition probability kernel, $r: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is a bounded reward function, and $\gamma \in [0, 1)$ is a discount factor. Without loss of generality, states $s \in \mathcal{S}$ and actions $a \in \mathcal{A}$ can be assumed to be $D_s$-dimensional and $D_a$-dimensional real vectors, respectively. Moreover, by letting $s^i$ denote the $i$th element of a state vector, we define the range of the state space in terms of the intervals $[d_i^-, d_i^+]$ such that $s^i \in [d_i^-, d_i^+]$ for all $i \in \{1, \ldots, D_s\}$. At each time $t \in \{1, \ldots, T\}$ over the decision-making horizon, an agent observes the state of the environment $s_t \in \mathcal{S}$ and takes an action $a_t$ according to some policy $\pi$ which maximizes the discounted cumulative reward. Once this action has been executed, the agent receives a reward $r(s_t, a_t)$ from the environment, and the state of the environment changes to $s_{t+1}$ according to the transition probability kernel $\mathsf{P}(\cdot \mid s_t, a_t)$. The Q function, which represents the expected discounted reward for taking a specific action at the current time and following the policy thereafter, is defined as a mapping from the space of state-action pairs to the real line, i.e. $Q: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$. Then, by letting $Q^t$ represent the Q matrix at time $t$, i.e. the tabulation of the Q function over all possible state-action pairs associated with the finite MDP, we can express the Q value iteration over time steps as
$$Q^{t+1}(s_t, a_t) = \sum_{s \in \mathcal{S}} \mathsf{P}(s \mid s_t, a_t) \left( r(s_t, a_t) + \gamma \max_a Q^t(s, a) \right). \quad (1)$$
Under this update rule, the Q function converges to its unique optimal value $Q^*$ (Melo, 2001).
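The following NumPy sketch illustrates one sweep of the tabular backup in Eq. (1); the array names and shapes are our own assumptions for a finite MDP stored as dense tables:

```python
import numpy as np

def q_backup_exhaustive(Q, P, R, gamma):
    """One sweep of the Q value iteration in Eq. (1).

    Q: (num_states, num_actions) current Q matrix.
    P: (num_states, num_actions, num_states) kernel, P[s, a, s'] = P(s' | s, a).
    R: (num_states, num_actions) deterministic reward table r(s, a).
    """
    v_next = Q.max(axis=1)            # max_a Q(s', a) for every next state s'
    # Since P sums to 1 over s', Eq. (1) reduces to r(s, a) + gamma * E[v_next].
    return R + gamma * (P @ v_next)   # (S, A, S') @ (S',) -> (S, A)
```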
However, computing the sum in (1) over all possible next states is computationally expensive for certain problems; in these cases, taking the summation over a subset of the next states provides an efficient alternative for updating the Q values." }, { "heading": "2.2 HAMILTONIAN MONTE CARLO", "text": "Hamiltonian Monte Carlo is a sampling approach for drawing samples from probability distributions known up to a constant. It offers faster convergence than Markov Chain Monte Carlo (MCMC) sampling (Neal et al., 2011; Betancourt; Betancourt et al., 2017; Neklyudov et al., 2020). To draw samples from a smooth target distribution $\mathbb{P}(s)$, which is defined on a Euclidean space and assumed to be known up to a constant, HMC extends the target distribution to a joint distribution over the target variable $s$ (viewed as position within the HMC context) and an auxiliary variable $v$ (viewed as momentum within the HMC context). We define the Hamiltonian of the system as
$$H(s, v) = -\log \mathbb{P}(s, v) = -\log \mathbb{P}(s) - \log \mathbb{P}(v \mid s) = U(s) + K(v, s),$$
where $U(s) \triangleq -\log \mathbb{P}(s)$ and $K(v, s) \triangleq -\log \mathbb{P}(v \mid s) = \frac{1}{2} v^\top M^{-1} v$ represent the potential and kinetic energy, respectively, and $M$ is a suitable choice of mass matrix.
The HMC sampling method consists of the following three steps: (i) a new momentum variable $v$ is drawn from a fixed probability distribution, typically a multivariate Gaussian; (ii) a new proposal $(s', v')$ is obtained by generating a trajectory that starts from $(s, v)$ and obeys Hamiltonian dynamics, i.e. $\dot{s} = \frac{\partial H}{\partial v}$, $\dot{v} = -\frac{\partial H}{\partial s}$; and (iii) the new proposal is accepted with probability $\min\{1, \exp(H(s, v) - H(s', -v'))\}$, following the Metropolis-Hastings acceptance/rejection rule.
2.3 LOW-RANK STRUCTURE IN Q-LEARNING AND MATRIX COMPLETION
Prior work (Johns & Mahadevan, 2007; Geist & Pietquin, 2013; Ong, 2015; Shah et al., 2020) on value function approximation based approaches for RL has implicitly assumed that the state-action value functions are low-dimensional and used various basis functions to represent them, e.g. CMAC, radial basis functions, etc. This can be attributed to the fact that the underlying state transition and reward function are often endowed with some structure. More recently, Yang et al. (2020b) provide empirical evidence that the Q matrices for benchmark Atari games and classical control tasks exhibit low-rank structure.
Therefore, using matrix completion techniques (Xu et al., 2013; Chen & Chi, 2018) to recover $Q \in \mathbb{R}^{|\mathcal{S}| \times |\mathcal{A}|}$ from a small number of observed Q values constitutes a viable approach towards improving data efficiency. As low-rank matrix structures can be recovered by constraining the nuclear norm, the Q matrix can be reconstructed from its observed values $\hat{Q}$ by solving
$$Q = \arg\min_{\tilde{Q} \in \mathbb{R}^{|\mathcal{S}| \times |\mathcal{A}|}} \|\tilde{Q}\|_* \quad \text{subject to} \quad J_\Omega(\tilde{Q}) = J_\Omega(\hat{Q}), \quad (2)$$
where $\|\cdot\|_*$ denotes the nuclear norm (i.e., the sum of the singular values), $\Omega$ is the set of observed elements, and $J_\Omega$ is the observation operator, i.e. $J_\Omega(x) = x$ if $x \in \Omega$ and zero otherwise.
3 HAMILTONIAN Q-LEARNING
A large class of real-world sequential decision-making problems - for example, board/video games, control of robots' movement, and portfolio optimization - involves high-dimensional state spaces and often has a large number of distinct states along each individual dimension. As using a Q-Learning based approach to train RL agents for these problems typically requires tens to hundreds of millions of samples (Mnih et al., 2015; Silver et al., 2017), there is a strong need for data-efficient Q-Learning algorithms. In addition, state transitions in such systems are often probabilistic in nature; even when the underlying dynamics of the system is inherently deterministic, the presence of external disturbances and parameter variations/uncertainties leads to probabilistic state transitions.
Learning an optimal $Q^*$ function through value iteration methods requires updating the Q values of state-action pairs using a sum of the reward and a discounted expectation of the Q values associated with next states. In this work, we assume the reward to be a deterministic function of state-action pairs; when the reward is stochastic, these results can be extended by replacing the reward with its expectation. Subsequently, we can express (1) as
$$Q^{t+1}(s_t, a_t) = r(s_t, a_t) + \gamma \, \mathbb{E}\left( \max_a Q^t(s, a) \right), \quad (3)$$
where $\mathbb{E}$ denotes the expectation over the discrete probability measure $\mathsf{P}$. When the underlying state space is high-dimensional and has a large number of states, obtaining a more accurate estimate of this expectation is computationally very expensive: the complexity increases quadratically with the number of states and linearly with the number of actions, rendering existing algorithms impractical.
In this work, we propose a solution to this issue by introducing an importance-sampling based method that approximates the aforementioned expectation with a sample mean of Q values over a subset of next states. A natural way to sample a subset from the set of all possible next states is to draw identically and independently distributed (IID) samples from the transition probability distribution $\mathsf{P}(\cdot \mid s_t, a_t)$. However, when the transition probability distribution is high-dimensional and known only up to a constant, drawing IID samples incurs a very high computational cost." }, { "heading": "3.1 DATA EFFICIENCY THROUGH HMC SAMPLING", "text": "A number of importance-sampling methods (Liu, 1996; Betancourt) have been developed for estimating the expectation of a function by drawing samples from the region with the dominant contribution to the expectation. HMC is one such importance-sampling method: it draws samples from the typical set, i.e., the region that maximizes probability mass and thus provides the dominant contribution to the expectation. As shown in the second row of Figure 1, most of the samples in a limited pool of HMC samples indeed concentrate around the region with high probability mass. Since the decay of the Q function is significantly slower than the typical exponential or power-law decays of the transition probability function, HMC provides a good approximation for the expectation of the Q values of the next states (Yang et al., 2020b; Shah et al., 2020). Then, by letting $\mathcal{H}_t$ denote the set of HMC samples drawn at time step $t$, we update the Q values as:
$$Q^{t+1}(s_t, a_t) = r(s_t, a_t) + \frac{\gamma}{|\mathcal{H}_t|} \sum_{s \in \mathcal{H}_t} \max_a Q^t(s, a). \quad (4)$$
HMC for a smooth truncated target distribution: Recall that the region of states is a subset of a Euclidean space, given as $s \in [d_1^-, d_1^+] \times \ldots \times [d_{D_s}^-, d_{D_s}^+] \subset \mathbb{R}^{D_s}$. Thus the main challenge in using HMC sampling is to define a smooth continuous target distribution $\mathbb{P}(s \mid s_t, a_t)$, defined on $\mathbb{R}^{D_s}$, with a sharp decay at the boundary of the region of states (Yi & Doshi-Velez, 2017; Chevallier et al., 2018). In this work, we generate the target distribution by first defining a continuous counterpart of the transition probability kernel on $\mathbb{R}^{D_s}$ and then multiplying it with a smooth cut-off function.
We first consider a probability density $\bar{\mathbb{P}}(\cdot \mid s_t, a_t): \mathbb{R}^{D_s} \to \mathbb{R}$ such that the following holds:
$$\mathsf{P}(s \mid s_t, a_t) \propto \int_{s-\varepsilon}^{s+\varepsilon} \bar{\mathbb{P}}(\sigma \mid s_t, a_t) \, d\sigma \quad (5)$$
for some arbitrarily small $\varepsilon > 0$. Then the target distribution can be defined as
$$\mathbb{P}(s \mid s_t, a_t) = \bar{\mathbb{P}}(s \mid s_t, a_t) \prod_{i=1}^{D_s} \frac{1}{1 + \exp(-\kappa(d_i^+ - s^i))} \cdot \frac{1}{1 + \exp(-\kappa(s^i - d_i^-))}. \quad (6)$$
Note that there exists a large $\kappa > 0$ such that $\mathbb{P}(s \mid s_t, a_t) \propto \bar{\mathbb{P}}(s \mid s_t, a_t)$ if $s \in [d_1^-, d_1^+] \times \ldots \times [d_{D_s}^-, d_{D_s}^+]$, and $\mathbb{P}(s \mid s_t, a_t) \approx 0$ otherwise. Let $\mu(s_t, a_t), \Sigma(s_t, a_t)$ be the mean and covariance of the transition probability kernel. In this paper we consider transition probability kernels of the form
$$\mathsf{P}(s \mid s_t, a_t) \propto \exp\left( -\frac{1}{2}(s - \mu(s_t, a_t))^\top \Sigma^{-1}(s_t, a_t)(s - \mu(s_t, a_t)) \right). \quad (7)$$
Then, from (5), the corresponding density can be given as a multivariate Gaussian $\bar{\mathbb{P}}(s \mid s_t, a_t) = \mathcal{N}(\mu(s_t, a_t), \Sigma(s_t, a_t))$. Thus, from (6), it follows that the target distribution is
$$\mathbb{P}(s \mid s_t, a_t) = \mathcal{N}(\mu(s_t, a_t), \Sigma(s_t, a_t)) \prod_{i=1}^{D_s} \frac{1}{1 + \exp(-\kappa(d_i^+ - s^i))} \cdot \frac{1}{1 + \exp(-\kappa(s^i - d_i^-))}. \quad (8)$$
Choice of potential energy, kinetic energy and mass matrix: Recall that the target distribution $\mathbb{P}(s \mid s_t, a_t)$ is defined over the Euclidean space $\mathbb{R}^{D_s}$. For brevity of notation, we drop the explicit dependence on $(s_t, a_t)$ and denote the target distribution as $\mathbb{P}(s)$. As explained in Section 2.2, we choose the potential energy $U(s) = -\log \mathbb{P}(s)$. We consider a Euclidean metric $M$ that induces the distance between $\tilde{s}, \bar{s}$ as $d(\tilde{s}, \bar{s}) = (\tilde{s} - \bar{s})^\top M (\tilde{s} - \bar{s})$. Then we define $M_s \in \mathbb{R}^{D_s \times D_s}$ as a diagonal scaling matrix and $M_r \in \mathbb{R}^{D_s \times D_s}$ as a rotation matrix in dimension $D_s$, with which we can define $M = M_r M_s M_s^\top M_r^\top$. Thus, any metric $M$ that defines a Euclidean structure on the target variable space induces an inverse structure on the momentum variable space as $d(\tilde{v}, \bar{v}) = (\tilde{v} - \bar{v})^\top M^{-1} (\tilde{v} - \bar{v})$. This generates a natural family of multivariate Gaussian distributions such that $\mathbb{P}(v \mid s) = \mathcal{N}(0, M)$, leading to the kinetic energy $K(v, s) = -\log \mathbb{P}(v \mid s) = \frac{1}{2} v^\top M^{-1} v$, where $M^{-1}$ is set to the covariance of the target distribution.
3.2 Q-LEARNING WITH HMC AND MATRIX COMPLETION
In this work we consider problems with a high-dimensional state space and a large number of distinct states along individual dimensions. Although these problems admit a large Q matrix, we can exploit the low-rank structure of the Q matrix to further improve data efficiency.
At each time step $t$, we randomly sample a subset $\Omega_t$ of state-action pairs (each state-action pair is sampled independently with some probability $p$) and update the Q function for the state-action pairs in $\Omega_t$. Let $\hat{Q}^{t+1}$ be the updated Q matrix at time $t$. Then, from (4), we have
$$\hat{Q}^{t+1}(s_t, a_t) = r(s_t, a_t) + \frac{\gamma}{|\mathcal{H}_t|} \sum_{s \in \mathcal{H}_t} \max_a Q^t(s, a), \quad \forall (s_t, a_t) \in \Omega_t. \quad (9)$$
Then we recover the complete matrix $Q^{t+1}$ by using the method given in (2). Thus we have
$$Q^{t+1} = \arg\min_{\tilde{Q}^{t+1} \in \mathbb{R}^{|\mathcal{S}| \times |\mathcal{A}|}} \|\tilde{Q}^{t+1}\|_* \quad \text{subject to} \quad J_{\Omega_t}(\tilde{Q}^{t+1}) = J_{\Omega_t}(\hat{Q}^{t+1}). \quad (10)$$
Sketches of these ingredients are given below; the complete procedure is summarized in Algorithm 1.
3.2 Q-LEARNING WITH HMC AND MATRIX COMPLETION

In this work we consider problems with a high-dimensional state space and a large number of distinct states along individual dimensions. Although these problems admit a large Q matrix, we can exploit the low-rank structure of the Q matrix to further improve data efficiency.

At each time step $t$ we randomly sample a subset $\Omega_t$ of state-action pairs (each state-action pair is sampled independently with some probability $p$) and update the Q function for the state-action pairs in $\Omega_t$. Let $\hat{Q}^{t+1}$ be the updated Q matrix at time $t$. Then from (4) we have

$$\hat{Q}^{t+1}(s_t, a_t) = r(s_t, a_t) + \frac{\gamma}{|\mathcal{H}_t|} \sum_{s \in \mathcal{H}_t} \max_a Q^t(s, a), \quad \forall (s_t, a_t) \in \Omega_t. \qquad (9)$$

Then we recover the complete matrix $Q^{t+1}$ using the method given in (2). Thus we have

$$Q^{t+1} = \arg\min_{\tilde{Q}^{t+1} \in \mathbb{R}^{|S| \times |A|}} \|\tilde{Q}^{t+1}\|_* \quad \text{subject to} \quad J_{\Omega_t}\left(\tilde{Q}^{t+1}\right) = J_{\Omega_t}\left(\hat{Q}^{t+1}\right). \qquad (10)$$

Algorithm 1 Hamiltonian Q-Learning
Inputs: discount factor $\gamma$; range of state space; time horizon $T$.
Initialization: randomly initialize $Q^0$.
for $t = 1$ to $T$ do
  Step 1: Randomly sample a subset $\Omega_t$ of state-action pairs.
  Step 2 (HMC sampling phase): Sample a set of next states $\mathcal{H}_t$ according to the target distribution defined in (6).
  Step 3 (update phase): For all $(s_t, a_t) \in \Omega_t$, set $\hat{Q}^{t+1}(s_t, a_t) = r(s_t, a_t) + \frac{\gamma}{|\mathcal{H}_t|} \sum_{s \in \mathcal{H}_t} \max_a Q^t(s, a)$.
  Step 4 (matrix completion phase): $Q^{t+1} = \arg\min_{\tilde{Q}^{t+1} \in \mathbb{R}^{|S| \times |A|}} \|\tilde{Q}^{t+1}\|_*$ subject to $J_{\Omega_t}(\tilde{Q}^{t+1}) = J_{\Omega_t}(\hat{Q}^{t+1})$.
end for

Similar to the approach used by Yang et al. (2020b), we approximate the rank of the Q matrix as the minimum number of singular values needed to capture 99% of its nuclear norm (a runnable sketch of the completion step in (10) is given just before Theorem 3 below)." }, { "heading": "3.3 CONVERGENCE, BOUNDEDNESS AND SAMPLING COMPLEXITY", "text": "In this section we provide the main theoretical results of this paper. First, we formally introduce the following regularity assumptions: (A1) the state space $S \subseteq \mathbb{R}^{D_s}$ and the action space $A \subseteq \mathbb{R}^{D_a}$ are compact subsets; (A2) the reward function is bounded, i.e., $r(s, a) \in [R_{\min}, R_{\max}]$ for all $(s, a) \in S \times A$; (A3) the optimal value function $Q^*$ is $C$-Lipschitz, i.e.,

$$\left|Q^*(s, a) - Q^*(s', a')\right| \le C\left(\|s - s'\|_F + \|a - a'\|_F\right),$$

where $\|\cdot\|_F$ is the Frobenius norm (which coincides with the Euclidean norm for vectors). We provide theoretical guarantees that Hamiltonian Q-Learning converges to an $\epsilon$-optimal Q function with $\tilde{O}\left(\epsilon^{-(D_s + D_a + 2)}\right)$ samples. This matches the minimax lower bound $\Omega\left(\epsilon^{-(D_s + D_a + 2)}\right)$ proposed in Tsybakov (2008). First we define the family of $\epsilon$-optimal Q functions as follows.

Definition 1 ($\epsilon$-optimal Q functions). Let $Q^*$ be the unique fixed point of the Bellman optimality equation $(\mathcal{T}Q)(s', a') = \sum_{s \in S} P(s \mid s', a')\left(r(s', a') + \gamma \max_a Q(s, a)\right)$ for all $(s', a') \in S \times A$, where $\mathcal{T}$ denotes the Bellman operator. Then, under update rule (3), the Q function almost surely converges to the optimal $Q^*$. We define the $\epsilon$-optimal Q functions as the family of functions $\mathcal{Q}_\epsilon$ such that $\|Q' - Q^*\|_\infty \le \epsilon$ whenever $Q' \in \mathcal{Q}_\epsilon$.

As $\|Q' - Q^*\|_\infty = \max_{(s, a) \in S \times A} |Q'(s, a) - Q^*(s, a)|$, any $\epsilon$-optimal Q function is element-wise $\epsilon$-optimal. Our next result shows that, under the HMC sampling rule given in Step 3 of the Hamiltonian Q-Learning algorithm (Algorithm 1), the Q function converges to the family of $\epsilon$-optimal Q functions.

Theorem 1 (Convergence of the Q function under HMC). Let $\hat{\mathcal{T}}$ be the optimality operator under HMC, given as $(\hat{\mathcal{T}}Q)(s', a') = r(s', a') + \frac{\gamma}{|\mathcal{H}|} \sum_{s \in \mathcal{H}} \max_a Q(s, a)$ for all $(s', a') \in S \times A$, where $\mathcal{H}$ is a subset of next states sampled using HMC from the target distribution given in (6). Then, under update rule (4) and for any given $\epsilon \ge 0$, there exist $n_\mathcal{H}, t' > 0$ such that $\|Q^t - Q^*\|_\infty \le \epsilon$ for all $t \ge t'$.

Refer to Appendix A.1 for the proof of this theorem. The next theorem shows that the Q matrix estimated via a suitable matrix completion technique lies in an $\epsilon$-neighborhood of the corresponding Q function obtained via exhaustive sampling.

Theorem 2 (Bounded error under HMC with matrix completion). Let $Q_E^{t+1}(s_t, a_t) = r(s_t, a_t) + \gamma \sum_{s \in S} P(s \mid s_t, a_t) \max_a Q_E^t(s, a)$, for all $(s_t, a_t) \in S \times A$, be the update rule under exhaustive sampling, and let $Q^t$ be the Q function updated according to Hamiltonian Q-Learning (9)-(10). Then, for any given $\tilde{\epsilon} \ge 0$, there exist $n_\mathcal{H} = \min_\tau |\mathcal{H}_\tau|$ and $t' > 0$ such that $\|Q^t - Q_E^t\|_\infty \le \tilde{\epsilon}$ for all $t \ge t'$.

Please refer to Appendix A.2 for the proof of this theorem. Finally, we provide guarantees on the sampling complexity of the Hamiltonian Q-Learning algorithm.
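Returning to the matrix-completion phase: program (10) can be handed to an off-the-shelf convex solver. As a lightweight alternative, the following sketch uses a singular-value-thresholding style iteration, which is our illustrative stand-in for the "suitable matrix completion method" assumed by the theorems; the threshold `tau`, step size and iteration count are arbitrary.

```python
import numpy as np

def complete_q_matrix(Q_hat, mask, tau=5.0, lr=1.0, iters=500):
    """Approximately solve (10): find a matrix with small nuclear norm that
    agrees with Q_hat on the observed entries (mask == 1). Shrinking singular
    values promotes low rank."""
    Y = np.zeros_like(Q_hat)
    for _ in range(iters):
        U, sv, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(sv - tau, 0.0)) @ Vt   # singular-value shrinkage
        Y += lr * mask * (Q_hat - X)                      # re-enforce observed entries
    return X
```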
Theorem 3 (Sampling complexity of Hamiltonian Q-Learning). Let $D_s, D_a$ be the dimensions of the state space and action space, respectively. Consider the Hamiltonian Q-Learning algorithm presented in Algorithm 1. Then, under a suitable matrix completion method, the Q function converges to the family of $\epsilon$-optimal Q functions with $\tilde{O}\left(\epsilon^{-(D_s + D_a + 2)}\right)$ samples.

The proof of Theorem 3 is given in Appendix B." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EMPIRICAL EVALUATION FOR CART-POLE", "text": "Experimental setup: By letting $\theta, \dot\theta$ denote the angle and angular velocity of the pole and $x, \dot x$ denote the position and velocity of the cart, the 4-dimensional state vector for the cart-pole system can be defined as $s = (\theta, \dot\theta, x, \dot x)$. After defining the range of the state space as $\theta \in [-\pi/2, \pi/2]$, $\dot\theta \in [-3.0, 3.0]$, $x \in [-2.4, 2.4]$ and $\dot x \in [-3.5, 3.5]$, we define the range of the scalar action as $a \in [-10, 10]$. Each state space dimension is then discretized into 5 distinct values and the action space into 10 distinct values, leading to a Q matrix of size $625 \times 10$ (a small discretization sketch is given at the end of this subsection). To capture parameter uncertainties and external disturbances, we assume that the probabilistic state transition is governed by a multivariate Gaussian with zero mean and covariance $\Sigma = \mathrm{diag}[0.143, 0.990, 0.635, 1.346]$. To drive the pole to an upright position, we define the reward function as $r(s, a) = \cos^4(15\theta)$ (Yang et al., 2020b). After initializing the Q matrix with values chosen uniformly at random from $[0, 1]$, we sample state-action pairs independently with probability $p = 0.5$ at each iteration. Additional experimental details and results are provided in Appendix C.

Results: As it is difficult to visualize a heat map for a 4-dimensional state space, we show results for the first two dimensions $\theta, \dot\theta$ with fixed $x, \dot x$. The color of each cell in the heat maps shown in Figures 2(a), 2(b) and 2(c) indicates the optimal action associated with that state. These figures illustrate that the policy heat map for Hamiltonian Q-Learning is closer to the policy heat map for Q-Learning with exhaustive sampling than that for Q-Learning with IID sampling. The two curves in Figure 2(d), which show the Frobenius norm of the difference between the learned Q function and the optimal $Q^*$, illustrate that Hamiltonian Q-Learning achieves better convergence than Q-Learning with IID sampling. We also show that the sampling efficiency of any Q-Learning algorithm can be significantly improved by incorporating Hamiltonian Q-Learning; we illustrate this by combining Hamiltonian Q-Learning with vanilla Q-Learning, DQN, Dueling DQN and DDPG. Figure 3 shows how the Frobenius norm of the error between the Q function and the optimal $Q^*$ varies as the number of samples increases. Red solid curves correspond to the case with exhaustive sampling and black dotted curves to the case with Hamiltonian Q-Learning. These results illustrate that Hamiltonian Q-Learning converges to an optimal Q function with a significantly smaller number of samples than exhaustive sampling." },
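The discretization sketch referenced above follows; the helper names and the nearest-grid-point snapping rule are our own illustrative choices, not taken from the paper.

```python
import numpy as np
from itertools import product

# Hypothetical helpers mirroring the cart-pole setup of Section 4.1:
# 4 state dimensions, 5 bins each -> 625 discrete states; 10 actions.
bounds = [(-np.pi / 2, np.pi / 2), (-3.0, 3.0), (-2.4, 2.4), (-3.5, 3.5)]
grids = [np.linspace(lo, hi, 5) for lo, hi in bounds]
states = list(product(*grids))            # 5**4 = 625 state tuples
actions = np.linspace(-10.0, 10.0, 10)    # 10 discrete actions

def state_index(s):
    """Map a continuous state to the row index of the 625 x 10 Q matrix
    by snapping each coordinate to its nearest grid point."""
    idx = [int(np.argmin(np.abs(g - si))) for g, si in zip(grids, s)]
    return np.ravel_multi_index(idx, (5, 5, 5, 5))
```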
{ "heading": "4.2 EMPIRICAL EVALUATION FOR ACROBOT (I.E., DOUBLE PENDULUM)", "text": "Experimental setup: By letting $\theta_1, \dot\theta_1, \theta_2, \dot\theta_2$ denote the angle and angular velocity of the first pole and the angle and angular velocity of the second pole, respectively, the 4-dimensional state vector for the acrobot can be defined as $s = (\theta_1, \dot\theta_1, \theta_2, \dot\theta_2)$. After defining the range of the state space as $\theta_1 \in [-\pi, \pi]$, $\dot\theta_1 \in [-3.0, 3.0]$, $\theta_2 \in [-\pi, \pi]$ and $\dot\theta_2 \in [-3.0, 3.0]$, we define the range of the scalar action as $a \in [-10, 10]$. Each state space dimension is then discretized into 5 distinct values and the action space into 10 distinct values, leading to a Q matrix of size $625 \times 10$. Furthermore, we assume that the probabilistic state transition is governed by a multivariate Gaussian with zero mean and covariance $\Sigma = \mathrm{diag}[0.143, 0.990, 0.635, 1.346]$. Following Sutton & Barto (2018), we define an appropriate reward function for stabilizing the acrobot in the upright position. After initializing the Q matrix with values chosen uniformly at random from $[0, 1]$, we sample state-action pairs independently with probability $p = 0.5$ at each iteration.

Results: Figure 4 illustrates how the Frobenius norm of the error between the Q function and the optimal $Q^*$ varies with the number of samples. Red solid curves correspond to the case with exhaustive sampling and black dotted curves to the case with Hamiltonian Q-Learning. These results show that, for the same level of error, Hamiltonian Q-Learning requires a significantly smaller number of samples than exhaustive sampling." }, { "heading": "4.3 APPLICATION TO OCEAN SAMPLING", "text": "Ocean sampling plays a major role in a variety of science and engineering problems, ranging from modeling marine ecosystems to predicting global climate. Here, we consider the problem of using an underwater glider to obtain measurements of a scalar field (e.g., temperature, salinity or the concentration of a certain zooplankton) and illustrate how the use of Hamiltonian Q-Learning in planning the glider trajectory can lead to measurements that minimize the uncertainty associated with the field.

States, actions and state transition: By assuming that the glider's motion is restricted to a horizontal plane (Refael & Degani, 2019), we let $x, y$ and $\theta$ denote its center-of-mass position and heading angle, respectively. Then we can define the 6-dimensional state vector for this system as $s = (x, y, \dot x, \dot y, \theta, \dot\theta)$ and the action $a$ as a scalar control input to the glider. Also, to accommodate dynamic perturbations due to the ocean current, other external disturbances and parameter uncertainties, we assume that the probabilistic state transition is governed by a multivariate Gaussian.

Reward: As ocean fields often exhibit temporal and spatial correlations (Leonard et al., 2007), this work focuses on spatially correlated scalar fields. Following the approach of Leonard et al. (2007), we define the ocean statistic correlation between two positions $q = (x, y)$ and $q' = (x', y')$ as $B(q, q') = \exp(-\|q - q'\|^2 / \sigma^2)$, where $\sigma$ is the spatial decorrelation scale. The goal of the task is to take measurements that reduce the uncertainty associated with the field. Now assume that the glider takes $N$ measurements at positions $\{q_1, \ldots, q_N\}$. Then the covariance of the collected data set is given by an $N \times N$ matrix $W$ whose entry in the $i$th row and $j$th column is $W_{ij} = \eta\delta_{ij} + B(q_i, q_j)$, where $\delta_{ij}$ is the Kronecker delta and $\eta$ is the variance of the uniform, uncorrelated measurement noise. Then, using the objective analysis data assimilation scheme (Kalnay, 2003; Bennett, 2005), the total reduction in uncertainty of the field after the measurements at positions $\mathcal{Q} = \{q_1, \ldots, q_N\}$
can be expressed as

$$U = \sum_{q \in \mathcal{Q}} \sum_{i, j = 1}^{N} B(q, q_i)\, W^{-1}_{ij}\, B(q_j, q). \qquad (11)$$

By substituting the formulas from (Kalnay, 2003; Bennett, 2005) into (11), this formulation can be generalized to Gaussian measurement noise.

Recall that the task objective is to guide the glider to take measurements at multiple locations/positions which maximize the reduction in uncertainty associated with the scalar field. Therefore the reward assigned to each state-action pair $(s, a)$ is designed to reflect the amount of uncertainty that can be reduced by taking a measurement at the position corresponding to the state and at the positions that correspond to the set of maximally probable next states, i.e., $\arg\max_{s'} P(s' \mid s, a)$. Then, by letting $Z_s = \{s\} \cup \left(\cup_{a \in A}\{\arg\max_{s'} P(s' \mid s, a)\}\right)$ denote the set comprising the current state $s$ and the maximally probable next states for all possible actions, the reward function associated with reducing uncertainty can be given as

$$r_u(s, a) = \sum_{q \in \mathcal{Q}} \sum_{i, j \in Z_s} B(q, q_i)\, W^{-1}_{ij}\, B(q_j, q).$$

Without loss of generality, we assume that the glider is deployed from $q = (0, 0)$ and that retrieving the glider incurs a cost that depends on its position. To promote trajectories that do not incur a high cost for glider retrieval, we define the reward function $r_c(s, a) = -q^\top C q$ with $C = C^\top \ge 0$. Then we introduce the total reward, which aims to reduce the uncertainty of the scalar field while penalizing movements away from the origin, as

$$r(s, a) = r_u(s, a) + \lambda\, r_c(s, a) = -\lambda\, q^\top C q + \sum_{q \in \mathcal{Q}} \sum_{i, j \in Z_s} B(q, q_i)\, W^{-1}_{ij}\, B(q_j, q),$$

where $\lambda > 0$ is a trade-off parameter that maintains a balance between these two objectives (a sketch of this reward computation is given at the end of this subsection).

Experimental setup: We define the range of the state and action space as $x, y \in [-10, 10]$, $\dot x, \dot y \in [-25, 25]$, $\theta \in [-\pi, \pi]$, $\dot\theta \in [-3, 3]$ and $a \in [-1, 1]$, respectively; discretizing each state dimension into 5 distinct values and the action space into 5 distinct values then yields a Q matrix of size $15625 \times 5$. Also, we assume that the state transition kernel is given by a multivariate Gaussian with zero mean and covariance $\Sigma = \mathrm{diag}[11.111, 69.444, 11.111, 69.444, 0.143, 0.990]$. After initializing the Q matrix with values chosen uniformly at random from $[0, 1]$, we sample state-action pairs independently with probability $p = 0.5$ at each iteration. Also, we set $\sigma = 2.5$, $\lambda = 0.1$ and $C = \mathrm{diag}[1, 0]$. Additional experimental details and results are provided in Appendix D.

Results: Figures 5(a), 5(b) and 5(c) show the policy heat map over the first two dimensions $x, y$ with fixed $\dot x, \dot y, \theta$ and $\dot\theta$. The color of each cell indicates the optimal action associated with that state. These figures illustrate that the difference between the policy heat maps of Hamiltonian Q-Learning and Q-Learning with exhaustive sampling is smaller than the difference between the policy heat maps of Q-Learning with IID sampling and Q-Learning with exhaustive sampling. The two curves in Figure 5(d), which show the Frobenius norm of the difference between the learned Q function and the optimal $Q^*$, illustrate that Hamiltonian Q-Learning achieves better convergence than Q-Learning with IID sampling. A comparison between the results of the ocean sampling problem and the cart-pole stabilization problem indicates that Hamiltonian Q-Learning provides increasingly better performance as the state space dimension grows." },
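The reward sketch referenced above follows. It assumes, as the expression for $r_u$ suggests, that $W$ is formed from the positions in $Z_s$; the helper names and the default constants are illustrative.

```python
import numpy as np

def corr(q1, q2, sigma=2.5):
    """Ocean statistic correlation B(q, q') = exp(-||q - q'||^2 / sigma^2)."""
    return np.exp(-np.sum((q1 - q2) ** 2) / sigma ** 2)

def total_reward(q, Z_positions, measured, lam=0.1, C=np.diag([1.0, 0.0]),
                 eta=0.01, sigma=2.5):
    """r(s,a) = -lam * q^T C q + sum over grid positions of B W^{-1} B terms,
    with W built from Z_positions (current + maximally probable next states).
    `measured` is the set of grid positions over which uncertainty is scored."""
    n = len(Z_positions)
    W = np.array([[eta * (i == j) + corr(Z_positions[i], Z_positions[j], sigma)
                   for j in range(n)] for i in range(n)])
    W_inv = np.linalg.inv(W)
    r_u = sum(np.array([corr(p, z, sigma) for z in Z_positions]) @ W_inv @
              np.array([corr(z, p, sigma) for z in Z_positions])
              for p in measured)
    return -lam * q @ C @ q + r_u
```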
{ "heading": "5 DISCUSSION AND CONCLUSION", "text": "Here we have introduced Hamiltonian Q-Learning, which utilizes HMC sampling together with matrix completion methods to improve data efficiency. We show, both theoretically and empirically, that the proposed approach can learn very accurate estimates of the optimal Q function with far fewer data points. We also demonstrate that Hamiltonian Q-Learning performs significantly better than Q-Learning with IID sampling when the underlying state space dimension is large. Building upon this aspect, future work will investigate how importance-sampling based methods can improve data efficiency in multi-agent Q-learning with agents coupled through both action and reward." }, { "heading": "A CONVERGENCE AND BOUNDEDNESS RESULTS", "text": "We begin the proof of Theorem 1 by stating the convergence properties of HMC. In the initial sampling stage, starting from its initial position, the Markov chain converges towards the typical set. In the next stage, the Markov chain quickly traverses the typical set and improves the estimate by removing the bias. In the last stage, the Markov chain refines its exploration of the typical set and provides improved estimates. The number of samples taken during the last stage is referred to as the effective sample size.

A.1 PROOF OF THEOREM 1

Theorem 1. Let $\hat{\mathcal{T}}$ be the optimality operator under HMC, given as $(\hat{\mathcal{T}}Q)(s', a') = r(s', a') + \frac{\gamma}{|\mathcal{H}|} \sum_{s \in \mathcal{H}} \max_a Q(s, a)$ for all $(s', a') \in S \times A$, where $\mathcal{H}$ is a subset of next states sampled using HMC from the target distribution given in (6). Then, under update rule (4) and for any given $\epsilon \ge 0$, there exist $n_\mathcal{H}, t' > 0$ such that $\|Q^t - Q^*\|_\infty \le \epsilon$ for all $t \ge t'$.

Proof of Theorem 1. Let $\bar{Q}^t = \frac{1}{n_\mathcal{H}} \sum_{s \in \mathcal{H}} \max_a Q^t(s, a)$, where we consider $n_\mathcal{H}$ to be the effective number of samples. Let $\mathbb{E}_\mathcal{P} Q^t$ and $\mathrm{Var}_\mathcal{P} Q^t$ denote the expectation and variance of $\max_a Q^t$ with respect to the target distribution. From the central limit theorem for HMC we have

$$\bar{Q}^t \sim \mathcal{N}\left(\mathbb{E}_\mathcal{P} Q^t,\; \sqrt{\frac{\mathrm{Var}_\mathcal{P} Q^t}{n_\mathcal{H}}}\right).$$

Since the Q function does not decay fast, we give the proof for the case where $Q^t$ is $C$-Lipschitz. From Theorem 6.5 in (Holmes et al., 2014), there exists a $c_0 > 0$ such that

$$\|\bar{Q}^t - \mathbb{E}_\mathcal{P} Q^t\| \le c_0\epsilon. \qquad (12)$$

Recall that the Bellman optimality operator $\mathcal{T}$ is a contraction mapping. Thus, for two Q functions $Q_1, Q_2$ with HMC sample sets $\mathcal{H}_1, \mathcal{H}_2$,

$$\|\hat{\mathcal{T}}Q_1 - \hat{\mathcal{T}}Q_2\|_\infty \le \max_{s', a'}\left|\frac{\gamma}{|\mathcal{H}_1|}\sum_{s \in \mathcal{H}_1}\max_a Q_1(s, a) - \frac{\gamma}{|\mathcal{H}_2|}\sum_{s \in \mathcal{H}_2}\max_a Q_2(s, a)\right|,$$

where the rewards cancel. Letting $|\mathcal{H}_1| = |\mathcal{H}_2| = n_\mathcal{H}$ and using the triangle inequality,

$$\|\hat{\mathcal{T}}Q_1 - \hat{\mathcal{T}}Q_2\|_\infty \le \max_{s', a'}\gamma\left[\|\bar{Q}_1 - \mathbb{E}_\mathcal{P} Q_1\| + \|\bar{Q}_2 - \mathbb{E}_\mathcal{P} Q_2\|\right] + \max_{s', a'}\gamma\|\mathbb{E}_\mathcal{P} Q_1 - \mathbb{E}_\mathcal{P} Q_2\|.$$

Since the Q function almost surely converges under exhaustive sampling, we have

$$\max_{s', a'}\gamma\|\mathbb{E}_\mathcal{P} Q_1 - \mathbb{E}_\mathcal{P} Q_2\| \le \gamma\|Q_1 - Q_2\|_\infty. \qquad (13)$$

From (12) and (13) we have, after $t$ time steps,

$$\|\hat{\mathcal{T}}Q_1 - \hat{\mathcal{T}}Q_2\|_\infty \le 2c_0\epsilon + \gamma\|Q_1 - Q_2\|_\infty.$$

Let $R_{\max}$ and $R_{\min}$ be the maximum and minimum reward values; then $\|Q_1 - Q_2\|_\infty \le \frac{\gamma}{1 - \gamma}(R_{\max} - R_{\min})$. Thus, for any $\epsilon \ge 0$, there exist $n_\mathcal{H}$ and $t'$ such that $\|Q^t - Q^*\|_\infty \le \epsilon$ for all $t \ge t'$. This concludes the proof of Theorem 1.

A.2 PROOF OF THEOREM 2

Theorem 2. Let $Q_E^{t+1}(s_t, a_t) = r(s_t, a_t) + \gamma \sum_{s \in S} P(s \mid s_t, a_t) \max_a Q_E^t(s, a)$, for all $(s_t, a_t) \in S \times A$, be the update rule under exhaustive sampling, and let $Q^t$ be the Q function updated according to Hamiltonian Q-Learning, i.e., by (9)-(10). Then, for any given $\tilde\epsilon \ge 0$, there exist $n_\mathcal{H}, t' > 0$ such that $\|Q^t - Q_E^t\|_\infty \le \tilde\epsilon$ for all $t \ge t'$.

Proof of Theorem 2.
Note that at each time step we attempt to recover the matrix $Q_E^t$, i.e., the Q function at time $t$ under exhaustive sampling, through a matrix completion method starting from $\hat{Q}^t$, the Q function updated at time $t$ using Hamiltonian Q-Learning. From Theorem 4 in (Chen & Chi, 2018) we have that, for all $t \ge t'$, there exists some constant $\delta > 0$ such that, whenever the updated Q function $\hat{Q}^t$ satisfies $\|\hat{Q}^t - Q_E^t\|_\infty \le c$ for some positive constant $c$, the reconstructed (completed) matrix $Q^t$ satisfies

$$\|Q^t - Q_E^t\|_\infty \le \delta\|\hat{Q}^t - Q_E^t\|_\infty \qquad (14)$$

for some $\delta > 0$. This implies that, when the initial matrix used for matrix completion is sufficiently close to the matrix we are trying to recover, the matrix completion iterations converge to a global optimum. From the result of Theorem 1, for any given $\epsilon \ge 0$, there exist $n_\mathcal{H}, t' > 0$ such that, for all $t \ge t'$,

$$\|\hat{Q}^t - Q^*\| \le \epsilon. \qquad (15)$$

Recall that, under the update equation $Q_E^{t+1}(s_t, a_t) = r(s_t, a_t) + \gamma \sum_{s \in S} P(s \mid s_t, a_t) \max_a Q_E^t(s, a)$, for all $(s_t, a_t) \in S \times A$, $Q_E$ almost surely converges to the optimal $Q^*$. Thus there exists a $t^\dagger$ such that $\|Q_E^t - Q^*\| \le \epsilon$ for all $t \ge t^\dagger$. Let $t^\ddagger = \max\{t^\dagger, t'\}$. Then, from the triangle inequality, for all $t \ge t^\ddagger$,

$$\|\hat{Q}^t - Q_E^t\| \le \|\hat{Q}^t - Q^*\| + \|Q_E^t - Q^*\| \le 2\epsilon.$$

Thus, from (14), $\|Q^t - Q_E^t\|_\infty \le 2\delta\epsilon$. This concludes the proof of Theorem 2." }, { "heading": "B SAMPLING COMPLEXITY", "text": "In this section we provide theoretical results on the sampling complexity of Hamiltonian Q-Learning. For brevity of notation we define $M_Q(s) = \max_a Q(s, a)$. Note the following regularity conditions on the MDP studied in this paper:

1. The state space $S$ and the action space $A$ are compact subsets of $\mathbb{R}^{D_s}$ and $\mathbb{R}^{D_a}$, respectively.
2. All rewards are bounded, i.e., $r(s, a) \in [R_{\min}, R_{\max}]$ for all $(s, a) \in S \times A$.
3. The optimal $Q^*$ is $C$-Lipschitz, i.e., $\left|Q^*(s, a) - Q^*(s', a')\right| \le C\left(\|s - s'\|_F + \|a - a'\|_F\right)$.

We first prove some useful lemmas toward the sampling complexity of Hamiltonian Q-Learning.

Lemma 1. For some constant $c_1$, if

$$|\Omega_t| \ge c_1 \max\left\{|S|^2, |A|^2\right\} |S||A| \frac{D_s D_a}{\log(D_s + D_a)}$$

and $\|\hat{Q}^t(s, a) - Q^*(s, a)\|_\infty \le \epsilon$, then there exists a constant $c_2$ such that $\|Q^t(s, a) - Q^*(s, a)\|_\infty \le c_2\epsilon$.

Proof of Lemma 1. Recall that, in order to complete a low-rank matrix using matrix estimation methods, the matrix cannot be sparse. This condition can be formulated using the notion of incoherence. Let $Q$ be a matrix of rank $r_Q$ with singular value decomposition $Q = U\Sigma V^\top$, and let $T_U$ (resp. $T_V$) denote the orthogonal projection of $Q \in \mathbb{R}^{|S| \times |A|}$ onto its column (resp. row) space. The incoherence parameter $\phi(Q)$ is then

$$\phi(Q) = \max\left\{\frac{|S|}{r_Q}\max_{1 \le i \le |S|}\|T_U e_i\|_F^2,\; \frac{|A|}{r_Q}\max_{1 \le i \le |A|}\|T_V e_i\|_F^2\right\},$$

where the $e_i$ are the standard basis vectors. Recall that $Q^t$ is the matrix generated in the matrix completion phase from $\hat{Q}^t$. From Theorem 4 in Chen & Chi (2018), for some constant $C_1$, if a fraction $p$ of the entries of the matrix is observed with

$$p \ge C_1\frac{\phi_t^2\, r_Q^2\, D_s D_a}{\log(D_s + D_a)},$$

where $\phi_t$ is the incoherence parameter of $Q^t$, then with probability at least $1 - C_2(D_s + D_a)^{-1}$, for some constant $C_2$, $\|\hat{Q}^t(s, a) - Q^*(s, a)\|_\infty \le \epsilon$ implies that there exists a constant $c_2$ with $\|Q^t(s, a) - Q^*(s, a)\|_\infty \le c_2\epsilon$. Note that $p \approx |\Omega_t|/(|S||A|)$.
Further, for some constant $c_3$, we have

$$\frac{\phi_t^2\, r_Q^2\, D_s D_a}{\log(D_s + D_a)} = c_3 \max\left\{|S|^2, |A|^2\right\}\frac{D_s D_a}{\log(D_s + D_a)}.$$

Thus it follows that, for some constant $c_1$, if

$$|\Omega_t| = c_1 \max\left\{|S|^2, |A|^2\right\} |S||A|\frac{D_s D_a}{\log(D_s + D_a)}$$

and $\|\hat{Q}^t(s, a) - Q^*(s, a)\|_\infty \le \epsilon$, then there exists a constant $c_2$ such that $\|Q^t(s, a) - Q^*(s, a)\|_\infty \le c_2\epsilon$. This concludes the proof of Lemma 1.

Lemma 2. Let $1 - \xi$, with $\xi \in [0, 1]$, be the spectral gap of the Markov chain under Hamiltonian sampling, and let $\Delta R = R_{\max} - R_{\min}$ be the maximum reward gap. Then, for all $(s', a') \in S \times A$, with probability at least $1 - \delta$,

$$\left|\hat{Q}(s', a') - Q^*(s', a')\right| \le \frac{\gamma^2}{1 - \gamma}\Delta R + \sqrt{\frac{1 + \xi}{1 - \xi}\cdot\frac{2}{|\mathcal{H}|}\left(\frac{\gamma R_{\max}}{1 - \gamma}\right)^2\log\frac{2}{\delta}}.$$

Proof of Lemma 2. Let $\hat{Q}(s', a') = r(s', a') + \frac{\gamma}{|\mathcal{H}|}\sum_{s \in \mathcal{H}}\max_a Q(s, a)$. Recalling that $M_Q(s) = \max_a Q(s, a)$, we have $\hat{Q}(s', a') = r(s', a') + \frac{\gamma}{|\mathcal{H}|}\sum_{s \in \mathcal{H}} M_Q(s)$. It then follows that

$$\left|\hat{Q}(s', a') - Q^*(s', a')\right| = \left|\frac{\gamma}{|\mathcal{H}|}\sum_{i=1}^{|\mathcal{H}|}M_Q(s_i) - \gamma\,\mathbb{E}_\mathcal{P} M_{Q^*}(s)\right| \le \left|\frac{\gamma}{|\mathcal{H}|}\sum_{i=1}^{|\mathcal{H}|}M_Q(s_i) - \frac{\gamma}{|\mathcal{H}|}\sum_{i=1}^{|\mathcal{H}|}M_{Q^*}(s_i)\right| + \left|\frac{\gamma}{|\mathcal{H}|}\sum_{i=1}^{|\mathcal{H}|}M_{Q^*}(s_i) - \gamma\,\mathbb{E}_\mathcal{P} M_{Q^*}(s)\right|. \qquad (16)$$

Recall that all rewards are bounded, $r(s, a) \in [R_{\min}, R_{\max}]$ for all $(s, a) \in S \times A$; thus, for all $s$, $M_Q(s) \le \frac{\gamma}{1 - \gamma}R_{\max}$. With $\Delta R = R_{\max} - R_{\min}$,

$$\left|\frac{\gamma}{|\mathcal{H}|}\sum_{i=1}^{|\mathcal{H}|}M_Q(s_i) - \frac{\gamma}{|\mathcal{H}|}\sum_{i=1}^{|\mathcal{H}|}M_{Q^*}(s_i)\right| \le \frac{\gamma^2}{1 - \gamma}\Delta R. \qquad (17)$$

Let $\xi \in [0, 1]$ be a constant such that $1 - \xi$ is the spectral gap of the Markov chain under Hamiltonian sampling. Then from Fan et al. (2018) we have

$$P\left(\frac{1}{|\mathcal{H}|}\sum_{i=1}^{|\mathcal{H}|}M_{Q^*}(s_i) - \mathbb{E}_\mathcal{P} M_{Q^*}(s) \ge \vartheta\right) \le \exp\left(-\frac{1 - \xi}{1 + \xi}\cdot\frac{|\mathcal{H}|\vartheta^2}{2R_{\max}^2}\left(\frac{1 - \gamma}{\gamma}\right)^2\right).$$

Setting $\delta = \exp\left(-\frac{1 - \xi}{1 + \xi}\cdot\frac{|\mathcal{H}|\vartheta^2}{2R_{\max}^2}\left(\frac{1 - \gamma}{\gamma}\right)^2\right)$ yields

$$\vartheta = \sqrt{\frac{1 + \xi}{1 - \xi}\cdot\frac{2}{|\mathcal{H}|}\left(\frac{\gamma R_{\max}}{1 - \gamma}\right)^2\log\frac{2}{\delta}}.$$

Thus, with probability at least $1 - \delta$,

$$\left|\frac{1}{|\mathcal{H}|}\sum_{i=1}^{|\mathcal{H}|}M_{Q^*}(s_i) - \mathbb{E}_\mathcal{P} M_{Q^*}(s)\right| \le \sqrt{\frac{1 + \xi}{1 - \xi}\cdot\frac{2}{|\mathcal{H}|}\left(\frac{\gamma R_{\max}}{1 - \gamma}\right)^2\log\frac{2}{\delta}}. \qquad (18)$$

Combining (16), (17) and (18) gives the claimed bound with probability at least $1 - \delta$. This concludes the proof of Lemma 2.

Lemma 3. For all $(s, a) \in S \times A$, with probability at least $1 - \delta$,

$$\left|Q^t(s, a) - Q^*(s, a)\right| \le 2c_1\frac{\gamma^2 R_{\max}}{1 - \gamma}.$$

Proof of Lemma 3. From Lemma 2 and Shah et al. (2020) we have, for all $(s, a) \in \Omega_t$, with probability at least $1 - \delta/T$,

$$\left|\hat{Q}^t(s, a) - Q^*(s, a)\right| \le \frac{\gamma^2}{1 - \gamma}\Delta R + \sqrt{\frac{1 + \xi}{1 - \xi}\cdot\frac{2}{|\mathcal{H}_t|}\left(\frac{\gamma R_{\max}}{1 - \gamma}\right)^2\log\frac{2|\Omega_t|T}{\delta}}. \qquad (19)$$

Thus, with probability at least $1 - \delta/T$,

$$\left|Q^t(s, a) - Q^*(s, a)\right| \le c_1\frac{\gamma^2}{1 - \gamma}\Delta R + c_1\sqrt{\frac{1 + \xi}{1 - \xi}\cdot\frac{2}{|\mathcal{H}_t|}\left(\frac{\gamma R_{\max}}{1 - \gamma}\right)^2\log\frac{2|\Omega_t|T}{\delta}}.$$

For all $1 \le t \le T$, letting

$$|\mathcal{H}_t| = \frac{1 + \xi}{1 - \xi}\cdot\frac{2}{\gamma^2}\log\frac{2|\Omega_t|T}{\delta}$$

we obtain

$$\frac{\gamma^2}{1 - \gamma}R_{\max} \ge \sqrt{\frac{1 + \xi}{1 - \xi}\cdot\frac{2}{|\mathcal{H}_t|}\left(\frac{\gamma R_{\max}}{1 - \gamma}\right)^2\log\frac{2|\Omega_t|T}{\delta}}.$$

Thus $\left|Q^t(s, a) - Q^*(s, a)\right| \le 2c_1\frac{\gamma^2 R_{\max}}{1 - \gamma}$ with probability at least $1 - \delta$. Recall that, for all $(s, a)$, $M_Q(s, a) \le \frac{\gamma\Delta R}{1 - \gamma}$; thus this also proves

$$\left|Q^t(s, a) - Q^*(s, a)\right| \le 2c_1\gamma\left|Q^{t-1}(s, a) - Q^*(s, a)\right|.$$

This concludes the proof of Lemma 3.

We now proceed to prove the main theorem on sampling complexity.

Theorem 3. Let $D_s, D_a$ be the dimensions of the state space and action space, respectively. Consider the Hamiltonian Q-Learning algorithm presented in Algorithm 1. Under a suitable matrix completion method, the Q function converges to the family of $\epsilon$-optimal Q functions with $\tilde{O}\left(\epsilon^{-(D_s + D_a + 2)}\right)$ samples.

Proof of Theorem 3.
Note that the sample complexity of Hamiltonian Q-Learning can be bounded as $\sum_{t=1}^{T}|\Omega_t||\mathcal{H}_t| \le T|\Omega_T||\mathcal{H}_T|$. Let $\beta_t$ be the discretization parameter at time $t$ and let $T = \log\left(\frac{\gamma R_{\max}}{\epsilon(1 - \gamma)}\right)/\log\left(\frac{1}{2\gamma c_1}\right)$. Then from Lemmas 1, 2 and 3 it follows that

$$\sum_{t=1}^{T}|\Omega_t||\mathcal{H}_t| = \tilde{O}\left(\epsilon^{-(D_s + D_a + 2)}\right).$$

This concludes the proof of Theorem 3." }, { "heading": "C ADDITIONAL EXPERIMENTAL DETAILS AND RESULTS FOR CART-POLE", "text": "Let $\theta, \dot\theta$ be the angle and angular velocity of the pole, let $x, \dot x$ be the position and linear velocity of the cart, and let $a$ be the control force applied to the cart. Then, by defining $m$, $M$, $l$ and $g$ as the mass of the pole, the mass of the cart, the length of the pole and the gravitational acceleration, respectively, the dynamics of the cart-pole system can be expressed as

$$\ddot\theta = \frac{g\sin\theta - \frac{a + ml\dot\theta^2\sin\theta}{m + M}\cos\theta}{l\left(\frac{4}{3} - \frac{m\cos^2\theta}{m + M}\right)}, \qquad \ddot x = \frac{a + ml\left(\dot\theta^2\sin\theta - \ddot\theta\cos\theta\right)}{m + M}. \qquad (20)$$

The state space of the cart-pole system is 4-dimensional ($D_s = 4$) and any state $s \in S$ is given by $s = (\theta, \dot\theta, x, \dot x)$. We define the range of the state space as $\theta \in [-\pi/2, \pi/2]$, $\dot\theta \in [-3.0, 3.0]$, $x \in [-2.4, 2.4]$ and $\dot x \in [-3.5, 3.5]$, and consider a 1-dimensional ($D_a = 1$) action space with $a \in [-10, 10]$. We discretize each state space dimension into 5 values and the action space into 10 values, which forms a Q matrix of dimensions $625 \times 10$. Although the differential equations (20) governing the dynamics of the pendulum-on-a-cart system are deterministic, parameter uncertainty and external disturbances cause the cart-pole to deviate from the defined dynamics, leading to stochastic state transitions. Following the conventional approach, we model these parameter uncertainties and external disturbances using a multivariate Gaussian perturbation (Maithripala et al., 2016; Madhushani et al., 2017; McAllister & Rasmussen, 2017). Here we consider the covariance of the Gaussian perturbation to be $\Sigma = \mathrm{diag}[0.143, 0.990, 0.635, 1.346]$.

Let $s_t = (\theta_t, \dot\theta_t, x_t, \dot x_t)$ and $a_t$ be the state and the action at time $t$. Then the state transition probability kernel and the corresponding target distribution can be given by (7) and (8), respectively, with mean $\mu(s_t, a_t) = (\theta_t + \dot\theta_t\tau,\; \dot\theta_t + \ddot\theta_t\tau,\; x_t + \dot x_t\tau,\; \dot x_t + \ddot x_t\tau)$, where $\ddot\theta_t, \ddot x_t$ are obtained from (20) by substituting $\theta_t, \dot\theta_t, a_t$, and covariance $\Sigma(s_t, a_t) = \Sigma$.

Our simulation results use the following system parameter values: $m = 0.1\,$kg, $M = 1\,$kg, $l = 0.5\,$m and $g = 9.8\,$m s$^{-2}$. We take 100 HMC samples during the update phase, with trajectory length $L = 100$ and step size $\delta l = 0.02$, and randomly initialize the Q matrix using values between 0 and 1. We provide additional comparison heat maps for the first two dimensions $\theta, \dot\theta$ with fixed $x, \dot x$, and further comparison heat maps for the last two dimensions $x, \dot x$ with fixed $\theta, \dot\theta$." }, { "heading": "D ADDITIONAL DETAILS FOR OCEAN SAMPLING", "text": "Glider dynamics: As in Section 4.3, we consider the motion of the glider to be restricted to a horizontal plane (Refael & Degani, 2019), with 6-dimensional state $s = (x, y, \dot x, \dot y, \theta, \dot\theta)$, a scalar control input $a$, and probabilistic state transitions governed by a multivariate Gaussian.
Let $x, y$ and $\theta$ be the coordinates of the center of mass of the glider and its heading angle, respectively. By introducing $q = [x\;\; y\;\; \theta]^\top$, the dynamics of the glider can be expressed as

$$M\ddot q = RF_f + F_b + \tau,$$

where

$$M = \mathrm{diag}\left[m,\; m,\; I_{in} + I_{out}\right], \qquad R = \begin{bmatrix}\cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{bmatrix},$$

$$F_f = \begin{bmatrix}\alpha_f\dot\theta^2\,\mathrm{sgn}(\dot\theta)\sin(\beta + \psi)\\ \alpha_f\dot\theta^2\cos(\beta + \psi)\\ 0\end{bmatrix}, \qquad F_b = -\alpha_b\sqrt{\dot x^2 + \dot y^2}\begin{bmatrix}\dot x\\ \dot y\\ 0\end{bmatrix}, \qquad \tau = \begin{bmatrix}0\\ 0\\ -\mu_f\,\mathrm{sgn}(\dot\theta)\dot\theta^2 - I_{in}a\end{bmatrix},$$

$$\alpha_f = \frac{1}{2}\rho C_f d_f L\left(r^2 + \left(\frac{L}{2}\right)^2 + rL\cos\psi\right), \qquad \mu_f = \alpha_f\left(\frac{L}{2} + r\cos\psi\right), \qquad \alpha_b = \frac{1}{2}C_b d_b\pi r,$$

with $\alpha_b = 0.005$, $\alpha_f = 0.062$, $\mu_f = 0.0074$ and $\sigma = 2.5$.

Our simulation results use the system parameter values from Table 1. The ranges of the state and action space, their discretization (yielding a Q matrix of size $15625 \times 5$), the transition covariance $\Sigma = \mathrm{diag}[11.111, 69.444, 11.111, 69.444, 0.143, 0.990]$, the random initialization of the Q matrix from $[0, 1]$, the per-iteration sampling probability $p = 0.5$ and the constants $\sigma = 2.5$, $\lambda = 0.1$, $C = \mathrm{diag}[1, 0]$ are as specified in Section 4.3. We take 100 HMC samples during the update phase, with trajectory length $L = 100$ and step size $\delta l = 0.02$.

Additional experimental results: We provide additional comparison heat maps for the first two dimensions $x, y$ with fixed $\dot x, \dot y, \theta, \dot\theta$.

E APPLICATION OF HMC SAMPLING FOR Q-LEARNING: FURTHER DETAILS

In this section we provide a detailed explanation of drawing HMC samples from a given state transition probability distribution. Let $(s_t, a_t)$ be the current state-action pair and let $\mu(s_t, a_t), \Sigma(s_t, a_t)$ be the mean and covariance of the transition probability kernel. In order to draw HMC samples, we are required to define the corresponding potential energy and kinetic energy of the system. Let $\mathcal{P}(s \mid s_t, a_t)$ be the smooth truncated target state transition distribution.

Potential energy, kinetic energy and mass: In this work we consider $\mathcal{P}(s \mid s_t, a_t)$ to be the truncated multivariate Gaussian given in (8). Thus the potential energy can be written explicitly as

$$U(s) = -\log\mathcal{P}(s) = \frac{1}{2}(s - \mu)^\top\Sigma^{-1}(s - \mu) + \frac{1}{2}\log\left((2\pi)^{D_s}\det(\Sigma)\right) + \sum_{i=1}^{D_s}\left[\log\left(1 + e^{-\kappa(d_i^+ - s_i)}\right) + \log\left(1 + e^{-\kappa(s_i - d_i^-)}\right)\right],$$

where $\mu$ and $\Sigma$ correspond to the mean and covariance of the transition probability kernel. In the context of HMC, $s$ is referred to as the position variable. We choose the kinetic energy as

$$K(v) = -\log\mathcal{P}(v \mid s) = \frac{1}{2}v^\top M^{-1}v = \frac{1}{2}v^\top\Sigma v,$$

where $v$ is the momentum variable and $M = \Sigma^{-1}$ is the mass/inertia matrix associated with the Hamiltonian.

Hamiltonian dynamics: As the Hamiltonian is the sum of the kinetic and the potential energy, $H(s, v) = U(s) + K(v)$, the Hamiltonian dynamics can be expressed as

$$\dot s = \frac{\partial K}{\partial v} = \Sigma v \qquad \text{and} \qquad \dot v = -\frac{\partial U}{\partial s} = -\Sigma^{-1}(s - \mu) + \kappa\left[S(-\kappa(s - d^-)) - S(-\kappa(d^+ - s))\right],$$

where $S(\xi) = \left[S(\xi_1), \cdots, S(\xi_{D_s})\right]^\top$ denotes the element-wise sigmoid function of the vector $\xi$. We initialize HMC sampling by drawing a random sample $s$ from the transition probability distribution and a new momentum variable $v$ from the multivariate Gaussian $\mathcal{N}(0, \Sigma^{-1})$. We integrate the Hamiltonian dynamics for $L$ steps with step size $\Delta l$ to generate the trajectory from $(s, v)$ to $(s', v')$.
To ensure that the Hamiltonian is conserved along the trajectory, we use a volume-preserving symplectic integrator, in particular the leapfrog integrator, which uses the following update rule to go from step $l$ to $l + \Delta l$:

$$v_{l + \frac{\Delta l}{2}} = v_l - \frac{\Delta l}{2}\left.\frac{\partial U(s)}{\partial s}\right|_{l}, \qquad s_{l + \Delta l} = s_l + \Delta l\,\Sigma\, v_{l + \frac{\Delta l}{2}}, \qquad v_{l + \Delta l} = v_{l + \frac{\Delta l}{2}} - \frac{\Delta l}{2}\left.\frac{\partial U(s)}{\partial s}\right|_{l + \Delta l}.$$

Acceptance of the new proposal: Following the Metropolis–Hastings acceptance/rejection rule, the new proposal is accepted with probability

$$\min\left\{1,\; \frac{\exp\left(H(s, v)\right)}{\exp\left(H(s', -v')\right)}\right\}.$$

Updating the Q function using HMC samples: Let $\mathcal{H}_t$ be the set of next states obtained via HMC sampling, i.e., the position variables from the accepted set of proposals. Then we update $Q(s_t, a_t)$ using equation (9)." } ]
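Putting Appendix E together, a minimal sketch of one HMC proposal (leapfrog integration plus the Metropolis–Hastings test) could look as follows; `grad_U` and `U` are the potential-energy gradient and value defined above, and all function names here are our own.

```python
import numpy as np

def hmc_sample(s0, grad_U, U, Sigma, n_steps=100, dl=0.02):
    """One HMC proposal for the next-state distribution: leapfrog integration
    of the Hamiltonian dynamics followed by a Metropolis-Hastings test.
    Sigma is the transition covariance, so the mass matrix is M = Sigma^{-1}."""
    M_inv = Sigma                          # since M = Sigma^{-1}
    M = np.linalg.inv(Sigma)
    v0 = np.random.multivariate_normal(np.zeros(len(s0)), M)
    s, v = s0.copy(), v0.copy()
    v -= 0.5 * dl * grad_U(s)              # initial half step for the momentum
    for _ in range(n_steps - 1):
        s += dl * M_inv @ v                # full position step (s_dot = Sigma v)
        v -= dl * grad_U(s)                # full momentum step
    s += dl * M_inv @ v
    v -= 0.5 * dl * grad_U(s)              # final half step
    H = lambda s_, v_: U(s_) + 0.5 * v_ @ M_inv @ v_
    # Accept with probability min{1, exp(H(s, v)) / exp(H(s', -v'))}.
    accept = np.random.rand() < min(1.0, np.exp(H(s0, v0) - H(s, -v)))
    return s if accept else s0
```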
2020
null
SP:eead17fca9c9dd9c1def9e314e19235141fbe709
[ "The authors propose a \"meta-algorithm\" for approximating various graph representation learning schemes: generate batches of random trees with fixed fanout (and possibly biased probabilities of selecting different edges), and use them to accumulate information to approximate operations on the graph. The idea is beautifully simple, and generalizes running independent random walkers, an approach that is used in deriving many related algorithms. The biasing and accumulation functions are user provided, and the authors show how different choices of these functions can be used to approximate a number of graph representation learning schemes. The authors also provide a software framework, though it was inaccessible at review due to anonymization. Experiments show the approach is much more scalable than competing approaches (though, to be fair, some of the competition was not targeting scalability)." ]
Graph Representation Learning (GRL) methods have impacted fields from chemistry to social science. However, their algorithmic implementations are specialized to specific use-cases, e.g. message passing methods are run differently from node embedding ones. Despite their apparent differences, all these methods utilize the graph structure, and therefore, their learning can be approximated with stochastic graph traversals. We propose Graph Traversal via Tensor Functionals (GTTF), a unifying meta-algorithm framework for easing the implementation of diverse graph algorithms and enabling transparent and efficient scaling to large graphs. GTTF is founded upon a data structure (stored as a sparse tensor) and a stochastic graph traversal algorithm (described using tensor operations). The algorithm is a functional that accepts two functions, and can be specialized to obtain a variety of GRL models and objectives, simply by changing those two functions. We show that, for a wide class of methods, our algorithm learns in an unbiased fashion and, in expectation, approximates the learning as if the specialized implementations were run directly. With these capabilities, we scale otherwise non-scalable methods to set state-of-the-art on large graph datasets while being more efficient than existing GRL libraries – with only a handful of lines of code for each method specialization. GTTF and its various GRL implementations are on: https://github.com/isi-usc-edu/gttf
[ { "affiliations": [], "name": "Elan Markowitz" }, { "affiliations": [], "name": "Keshav Balasubramanian" }, { "affiliations": [], "name": "Mehrnoosh Mirtaheri" }, { "affiliations": [], "name": "Sami Abu-El-Haija" }, { "affiliations": [], "name": "Bryan Perozzi" }, { "affiliations": [], "name": "Greg Ver Steeg" }, { "affiliations": [], "name": "Aram Galstyan" } ]
[ { "authors": [ "Sami Abu-El-Haija", "Bryan Perozzi", "Rami Al-Rfou", "Alexander A Alemi" ], "title": "Watch your step: Learning node embeddings via graph attention", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sami Abu-El-Haija", "Bryan Perozzi", "Amol Kapoor", "Hrayr Harutyunyan", "Nazanin Alipourfard", "Kristina Lerman", "Greg Ver Steeg", "Aram Galstyan" ], "title": "Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Pierre Baldi", "Peter Sadowski" ], "title": "The dropout learning algorithm", "venue": "In Artificial Intelligence,", "year": 2014 }, { "authors": [ "Leon Bottou" ], "title": "Online algorithms and stochastic approximations", "venue": "In Online Learning and Neural Networks,", "year": 1998 }, { "authors": [ "Leon Bottou" ], "title": "Stochastic learning", "venue": "Advanced Lectures on Machine Learning, Lecture Notes in Artificial Intelligence,", "year": 2004 }, { "authors": [ "Ines Chami", "Sami Abu-El-Haija", "Bryan Perozzi", "Christopher Ré", "Kevin Murphy" ], "title": "Machine learning on graphs: A model and comprehensive taxonomy, 2021", "venue": null, "year": 2021 }, { "authors": [ "Hongming Chen", "Ola Engkvist", "Yinhai Wang", "Marcus Olivecrona", "Thomas Blaschke" ], "title": "The rise of deep learning in drug discovery", "venue": "In Drug discovery today,", "year": 2018 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: Fast learning with graph convolutional networks via importance sampling", "venue": "In International Conference on Learning Representation,", "year": 2018 }, { "authors": [ "Ming Chen", "Zhewei Wei", "Zengfeng Huang", "Bolin Ding", "Yaliang Li" ], "title": "Simple and deep graph convolutional networks", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Wei-Lin Chiang", "Xuanqing Liu", "Si Si", "Yang Li", "Samy Bengio", "Cho-Jui Hsieh" ], "title": "Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks", "venue": "In ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Matthias Fey", "Jan E. Lenssen" ], "title": "Fast graph representation learning with PyTorch Geometric", "venue": "In ICLR Workshop on Representation Learning on Graphs and Manifolds,", "year": 2019 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. Riley", "Oriol Vinyals", "George E. 
Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "William Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Weihua Hu", "Matthias Fey", "Marinka Zitnik", "Yuxiao Dong", "Hongyu Ren", "Bowen Liu", "Michele Catasta", "Jure Leskovec" ], "title": "Open graph benchmark: Datasets for machine learning on graphs", "venue": "In arXiv,", "year": 2020 }, { "authors": [ "Thomas Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Adam Lerer", "Ledell Wu", "Jiajun Shen", "Timothee Lacroix", "Luca Wehrstedt", "Abhijit Bose", "Alex Peysakhovich" ], "title": "Pytorch-biggraph: A large-scale graph embedding system", "venue": "In The Conference on Systems and Machine Learning,", "year": 2019 }, { "authors": [ "Omer Levy", "Yoav Goldberg", "Ido Dagan" ], "title": "Improving distributional similarity with lessons learned from word embeddings", "venue": "In Transactions of the Association for Computational Linguistics,", "year": 2015 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: Online learning of social representations", "venue": "In ACM SIGKDD international conference on Knowledge discovery & Data Mining,", "year": 2014 }, { "authors": [ "Jiezhong Qiu", "Yuxiao Dong", "Hao Ma", "Jian Li", "Kuansan Wang", "Jie Tang" ], "title": "Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec", "venue": "In ACM International Conference on Web Search and Data Mining,", "year": 2018 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "In Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Petar Veličković", "William Fedus", "William L. 
Hamilton", "Pietro Liò", "Yoshua Bengio", "R Devon Hjelm" ], "title": "Deep graph infomax", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Minjie Wang", "Da Zheng", "Zihao Ye", "Quan Gan", "Mufei Li", "Xiang Song", "Jinjing Zhou", "Chao Ma", "Lingfan Yu", "Yu Gai", "Tianjun Xiao", "Tong He", "George Karypis", "Jinyang Li", "Zheng Zhang" ], "title": "Deep graph library: A graph-centric, highly-performant package for graph neural networks", "venue": "arXiv preprint arXiv:1909.01315,", "year": 2019 }, { "authors": [ "Felix Wu", "Amauri Souza", "Tianyi Zhang", "Christopher Fifty", "Tao Yu", "Kilian Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Zhilin Yang", "William W Cohen", "Ruslan Salakhutdinov" ], "title": "Revisiting semi-supervised learning with graph embeddings", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Hanqing Zeng", "Hongkuan Zhou", "Ajitesh Srivastava", "Rajgopal Kannan", "Viktor Prasanna" ], "title": "GraphSAINT: Graph sampling based inductive learning method", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Difan Zou", "Ziniu Hu", "Yewen Wang", "Song Jiang", "Yizhou Sun", "Quanquan Gu" ], "title": "Few-shot representation learning for out-of-vocabulary words", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph representation learning (GRL) has become an invaluable approach for a variety of tasks, such as node classification (e.g., in biological and citation networks; Veličković et al. (2018); Kipf & Welling (2017); Hamilton et al. (2017); Xu et al. (2018)), edge classification (e.g., link prediction for social and protein networks; Perozzi et al. (2014); Grover & Leskovec (2016)), entire graph classification (e.g., for chemistry and drug discovery Gilmer et al. (2017); Chen et al. (2018a)), etc.\nIn this work, we propose an algorithmic unification of various GRL methods that allows us to re-implement existing GRL methods and introduce new ones, in merely a handful of code lines per method. Our algorithm (abbreviated GTTF, Section 3.2), receives graphs as input, traverses them using efficient tensor1 operations, and invokes specializable functions during the traversal. We show function specializations for recovering popular GRL methods (Section 3.3). Moreover, since GTTF is stochastic, these specializations automatically scale to arbitrarily large graphs, without careful derivation per method. Importantly, such specializations, in expectation, recover unbiased gradient estimates of the objective w.r.t. model parameters.\n1To disambiguate: by tensors, we refer to multi-dimensional arrays, as used in Deep Learning literature; and by operations, we refer to routines such as matrix multiplication, advanced indexing, etc\nGTTF uses a data structure  (Compact Adjacency, Section 3.1): a sparse encoding of the adjacency matrix. Node v contains its neighbors in row Â[v] , Âv, notably, in the first degree(v) columns of Â[v]. This encoding allows stochastic graph traversals using standard tensor operations. GTTF is a functional, as it accepts functions ACCUMULATEFN and BIASFN, respectively, to be provided by each GRL specialization to accumulate necessary information for computing the objective, and optionally to parametrize sampling procedure p(v’s neighbors | v). The traversal internally constructs a walk forest as part of the computation graph. Figure 1 depicts the data structure and the computation. From a generalization perspective, GTTF shares similarities with Dropout (Srivastava et al., 2014).\nOur contributions are: (i) A stochastic graph traversal algorithm (GTTF) based on tensor operations that inherits the benefits of vectorized computation and libraries such as PyTorch and Tensorflow. (ii) We list specialization functions, allowing GTTF to approximately recover the learning of a broad class of popular GRL methods. (iii) We prove that this learning is unbiased, with controllable variance. Wor this class of methods, (iv) we show that GTTF can scale previously-unscalable GRL algorithms, setting the state-of-the-art on a range of datasets. Finally, (v) we open-source GTTF along with new stochastic traversal versions of several algorithms, to aid practitioners from various fields in applying and designing state-of-the-art GRL methods for large graphs." 
}, { "heading": "2 RELATED WORK", "text": "We take a broad standpoint in summarizing related work to motivate our contribution.\nMethod Fa m\nily\nSc al\ne\nL ea\nrn in\ng\nModels GCN, GAT MP 7 exact node2vec NE 3 approx\nWYS NE 7 exact Stochastic Sampling Methods\nSAGE MP 3 approx FastGCN MP 3 approx LADIES MP 3 approx\nGraphSAINT MP 3 approx CluterGCN MP 3 heuristic\nSoftware Frameworks PyG Both inherits / reDGL Both implements\nAlgorithmic Abstraction (ours) GTTF Both 3 approx\nModels for GRL have been proposed, including message passing (MP) algorithms, such as Graph Convolutional Network (GCN) (Kipf & Welling, 2017), Graph Attention (GAT) (Veličković et al., 2018); as well as node embedding (NE) algorithms, including node2vec (Grover & Leskovec, 2016), WYS (Abu-El-Haija et al., 2018); among many others (Xu et al., 2018; Wu et al., 2019; Perozzi et al., 2014). The full-batch GCN of Kipf & Welling (2017), which drew recent attention and has motivated many MP algorithms, was not initially scalable to large graphs, as it processes all graph nodes at every training step. To scale MP methods to large graphs, researchers proposed Stochastic Sampling Methods that, at each training step, assemble a batch constituting subgraph(s) of the (large) input graph. Some of these sampling methods yield unbiased gradient estimates (with some variance) including SAGE (Hamilton et al., 2017), FastGCN (Chen et al., 2018b), LADIES (Zou et al., 2019), and GraphSAINT (Zeng et al., 2020). On the other hand, ClusterGCN (Chiang et al., 2019) is a heuristic in the sense that, despite its good performance, it provides no guarantee of unbiased gradient estimates of the full-batch learning. Gilmer et al. (2017) and Chami et al. (2021) generalized many GRL models into Message Passing and Auto-Encoder frameworks.\nThese frameworks prompt bundling of GRL methods under Software Libraries, like PyG (Fey & Lenssen, 2019) and DGL (Wang et al., 2019), offering consistent interfaces on data formats.\nWe now position our contribution relative to the above. Unlike generalized message passing (Gilmer et al., 2017), rather than abstracting the model computation, we abstract the learning algorithm. As a result, GTTF can be specialized to recover the learning of MP as well as NE methods. Morever, unlike Software Frameworks, which are re-implementations of many algorithms and therefore inherit the scale and learning of the copied algorithms, we re-write the algorithms themselves, giving them new properties (memory and computation complexity), while maintaining (in expectation) the original algorithm outcomes. Further, while the listed Stochastic Sampling Methods target MP algorithms (such as GCN, GAT, alike), as their initial construction could not scale to large graphs, our learning algorithm applies to a wider class of GRL methods, additionally encapsulating NE methods. Finally, while some NE methods such as node2vec (Grover & Leskovec, 2016) and DeepWalk (Perozzi et al., 2014) are scalable in their original form, their scalability stems from their multi-step process: sample many (short) random walks, save them to desk, and then learn node embeddings using positional embedding methods (e.g., word2vec, Mikolov et al. (2013)) – they are sub-optimal in the sense that their first step (walk sampling) takes considerable time (before training even starts) and also places an artificial limit on the number of training samples (number of simulated walks), whereas our algorithm conducts walks on-the-fly whilst training." 
}, { "heading": "3 GRAPH TRAVERSAL VIA TENSOR FUNCTIONALS (GTTF)", "text": "At its core, GTTF is a stochastic algorithm that recursively conducts graph traversals to build representations of the graph. We describe the data structure and traversal algorithm below, using the following notation. G = (V,E) is an unweighted graph with n = |V | nodes and m = |E| edges, described as a sparse adjacency matrix A ∈ {0, 1}n×n. Without loss of generality, let the nodes be zero-based numbered i.e. V = {0, . . . , n− 1}. We denote the out-degree vector δ ∈ Zn – it can be calculated by summing over rows of A as δu = ∑ v∈V A[u, v]. We assume δu > 0 for all u ∈ V : pre-processing can add self-connections to orphan nodes. B denotes a batch of nodes." }, { "heading": "3.1 DATA STRUCTURE", "text": "Internally, GTTF relies on a reformulation of the adjacency matrix, which we term CompactAdj (for \"Compact Adjacency\", Figure 1c). It consists of two tensors:\n1. δ ∈ Zn, a dense out-degree vector (figure 1c, right) 2.  ∈ Zn×n, a sparse edge-list matrix in which the row u contains left-aligned δu non-zero\nvalues. The consecutive entries {Â[u, 0], Â[u, 1], . . . , Â[u, δu − 1]} contain IDs of nodes receiving an edge from node u. The remaining |V | − δu are left unset, therefore,  only occupies O(m) memory when stored as a sparse matrix (Figure 1c, left).\nCompactAdj allows us to concisely describe stochastic traversals using standard tensor operations. To uniformly sample a neighbor to node u ∈ V , one can draw r ∼ U [0..(δu − 1)], then get the neighbor ID with Â[u, r]. In vectorized form, given node batch B and access to continuous U [0, 1), we sample neighbors for each node in B as: R ∼ U [0, 1)b, where b = |B|, then B′ = Â[B, bR ◦ δ[B]c] is a b-sized vector, with B′u containing a neighbor of Bu, floor operation b.c is applied element-wise, and ◦ is Hadamard product." }, { "heading": "3.2 STOCHASTIC TRAVERSAL FUNCTIONAL ALGORITHM", "text": "Our traversal algorithm starts from a batch of nodes. It expands from each into a tree, resulting in a walk forest rooted at the nodes in the batch, as depicted in Figure 1d. In particular, given a node batch B, the algorithm instantiates |B| seed walkers, placing one at every node in B. Iteratively, each walker first replicates itself a fanout (f ) number of times. Each replica then samples and transitions to a neighbor. This process repeats a depth (h) number of times. Therefore, each seed walker becomes the ancestor of a f -ary tree with height h. Setting f = 1 recovers traditional random walk. In practice, we provide flexibility by allowing a custom fanout value per depth.\nAlgorithm 1: Stochastic Traverse Functional, parametrized by ACCUMULATEFN and BIASFN. input: u (current node); T ← [] (path leading to u, starts empty); F (list of fanouts);\nACCUMULATEFN (function: with side-effects and no return. It is model-specific and records information for computing model and/or objective, see text); BIASFN ← U (function mapping u to distribution on u’s neighbors, defaults to uniform)\n1 def Traverse(T , u, F , ACCUMULATEFN, BIASFN): 2 if F.size() = 0 then return # Base case. Traversed up-to requested depth 3 f ← F.pop() # fanout duplication factor (i.e. breadth) at this depth. 4 sample_bias ← BIASFN(T, u) 5 if sample_bias.sum() = 0 then return # Special case. 
5      if sample_bias.sum() = 0 then return      # Special case: no sampling from zero mass.
6      sample_bias ← sample_bias / sample_bias.sum()   # Normalize to a valid distribution.
7      K ← Sample(Â[u, :δ_u], sample_bias, f)    # Sample f nodes from u's neighbors.
8      for k ← 0 to f − 1 do
9          T_next ← concatenate(T, [u])
10         ACCUMULATEFN(T_next, K[k], f)
11         Traverse(T_next, K[k], F, ACCUMULATEFN, BIASFN)   # Recursion with the remaining fanouts F.
12
13 def Sample(N, W, f):
14     C ← tf.cumsum(W)                          # Cumulative sum; the last entry must equal 1.
15     coin_flips ← tf.random.uniform((f,), 0, 1)
16     indices ← tf.searchsorted(C, coin_flips)
17     return N[indices]

Functional Traverse is listed in Algorithm 1 (a minimal runnable sketch is given at the end of this subsection). It accepts: a batch of nodes²; a list of fanout values F (e.g. F = [3, 5] samples 3 neighbors per u ∈ B, then 5 neighbors for each of those); and, more notably, two functions, ACCUMULATEFN and BIASFN. These functions will be called by the functional on every node visited along the traversal and will be passed relevant information (e.g. the path taken from the root seed node). Custom settings of these functions allow recovering wide classes of graph learning methods. At a high level, our functional can be used in the following manner:

1. Construct the model & initialize parameters (e.g. randomly). Define ACCUMULATEFN and BIASFN.
2. Repeat (many rounds):
   i. Reset the accumulation information (from the previous round) and then sample a batch B ⊂ V.
   ii. Invoke Traverse on (B, ACCUMULATEFN, BIASFN), which invokes the FNs, allowing the first to accumulate information sufficient for running the model and estimating an objective.
   iii. Use the accumulated information to: run the model, estimate the objective, and apply a learning rule (e.g. SGD).

ACCUMULATEFN is a function that is used to track the information necessary for computing the model and the objective function. For instance, an implementation of DeepWalk (Perozzi et al., 2014) on top of GTTF specializes ACCUMULATEFN to measure an estimate of the sampled softmax likelihood of nodes' positional distribution, modeled as a dot product of node embeddings. On the other hand, GCN (Kipf & Welling, 2017) on top of GTTF uses it to accumulate a sampled adjacency matrix, which it passes to the underlying model (e.g. a 2-layer GCN) as if it were the full adjacency matrix.

BIASFN is a function that customizes the sampling procedure for the stochastic transitions. If provided, it must yield a probability distribution over nodes, given the current node and the path that led to it. If not provided, it defaults to U, transitioning to any neighbor with equal probability. It can be defined to read edge weights, if they denote importance, or, more intricately, to parameterize a second-order Markov chain (Grover & Leskovec, 2016), or to use neighborhood attention to guide sampling (Veličković et al., 2018), as discussed in the Appendix.

²Our pseudo-code displays the traversal starting from one node rather than a batch only for clarity, as our actual implementation is vectorized: e.g. u would be a vector of nodes, T would be a 2D matrix with each row containing the transition path preceding the corresponding entry in u, etc. Refer to the Appendix and code.
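The minimal runnable sketch promised above: a single-node, numpy version of Algorithm 1 (the released implementation is vectorized over batches and uses TensorFlow ops; this sketch is illustrative only).

```python
import numpy as np

def traverse(A_hat, delta, T, u, F, accumulate_fn, bias_fn=None):
    """A minimal single-node sketch of Algorithm 1.
    bias_fn(T, u) must return a length-delta[u] weight vector over u's
    neighbors; it defaults to the uniform distribution."""
    if not F:
        return
    f, rest = F[0], F[1:]
    neighbors = A_hat[u, :delta[u]]
    w = np.ones(delta[u]) if bias_fn is None else bias_fn(T, u)
    if w.sum() == 0:
        return
    # Inverse-CDF sampling of f neighbors; the min() guards float round-off.
    idx = np.minimum(np.searchsorted(np.cumsum(w / w.sum()),
                                     np.random.uniform(size=f)),
                     delta[u] - 1)
    for k in neighbors[idx]:
        T_next = T + [u]
        accumulate_fn(T_next, k, f)
        traverse(A_hat, delta, T_next, k, rest, accumulate_fn, bias_fn)
```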
3.3 SOME SPECIALIZATIONS OF ACCUMULATEFN & BIASFN" }, { "heading": "3.3.1 MESSAGE PASSING: GRAPH CONVOLUTIONAL VARIANTS", "text": "These methods, including (Kipf & Welling, 2017; Hamilton et al., 2017; Wu et al., 2019; Abu-El-Haija et al., 2019; Xu et al., 2018), can be approximated by initializing $\tilde{A}$ to an empty sparse $n \times n$ matrix, then invoking Traverse (Algorithm 1) with $u = B$ and $F$ set to a list of fanouts of size $h$. Thus ACCUMULATEFN and BIASFN become:

def ROOTEDADJACC(T, u, f):    $\tilde{A}[u, T_{-1}] \leftarrow 1$;    (1)

def NOREVISITBIAS(T, u):    return $\mathbb{1}\left[\tilde{A}[u].\mathrm{sum}() = 0\right]\,\frac{\vec{1}_{\delta_u}}{\delta_u}$;    (2)

where $\vec{1}_n$ is an $n$-dimensional all-ones vector and the negative index $T_{-k}$ denotes the $k$th last entry of $T$. If a node has already been visited through the stochastic traversal, then it already has a fanout number of neighbors, and NOREVISITBIAS ensures it does not get revisited, for efficiency, per line 5 of Algorithm 1. Afterwards, the accumulated stochastic $\tilde{A}$ is fed³ into the underlying model, e.g. for the 2-layer GCN of Kipf & Welling (2017):

$$\mathrm{GCN}(\tilde{A}, X; W_1, W_2) = \mathrm{softmax}\left(\mathring{A}\,\mathrm{ReLU}\left(\mathring{A} X W_1\right) W_2\right); \qquad (3)$$

with $\mathring{A} = D'^{1/2}\tilde{D}'^{-1}\tilde{A}' D'^{-1/2}$, where (renorm trick) $\tilde{A}' = I_{n \times n} + \tilde{A}$ and $A' = I_{n \times n} + A$, $\delta' = \vec{1}_n^\top A'$, $D' = \mathrm{diag}(\delta')$, and $\tilde{D}' = \mathrm{diag}(\vec{1}_n^\top\tilde{A}')$.

Lastly, $h$ should be set to the receptive field required by the model for obtaining output $d_L$-dimensional features at the labeled node batch: specifically, to the number of GC layers multiplied by the number of hops each layer accesses (e.g. hops = 1 for GCN, but customizable for MixHop and SimpleGCN). A Python sketch of this specialization is given below, following this section's footnotes." }, { "heading": "3.3.2 NODE EMBEDDINGS", "text": "Given a batch of nodes $B \subseteq V$, DeepWalk⁴ can be implemented in GTTF by first initializing the loss $\mathcal{L}$ to the contrastive term estimating the partition function of the log-softmax:

$$\mathcal{L} \leftarrow \sum_{u \in B}\log\,\mathbb{E}_{v \sim P_n(V)}\left[\exp(\langle Z_u, Z_v\rangle)\right], \qquad (4)$$

where $\langle\cdot,\cdot\rangle$ is dot-product notation and $Z \in \mathbb{R}^{n \times d}$ is the trainable embedding matrix, with $Z_u \in \mathbb{R}^d$ the $d$-dimensional embedding of node $u \in V$. In our experiments, we estimate the expectation by taking 5 samples and set the negative distribution to $P_n(V = v) \propto \delta_v^{3/4}$, following Mikolov et al. (2013). The functional is invoked with no BIASFN and with ACCUMULATEFN =

def DEEPWALKACC(T, u, f):    $\mathcal{L} \leftarrow \mathcal{L} - \left\langle Z_u,\; \sum_{k=1}^{C_T}\eta_{[T_{-k}]}\left(\frac{C - k + 1}{C}\right) Z_{T_{-k}}\right\rangle$;    $\eta_{[u]} \leftarrow \frac{\eta_{[T_{-1}]}}{f}$;    (5)

where the hyperparameter $C$ indicates the maximum window size (inherited from word2vec, Mikolov et al., 2013); in the summation, $k$ does not access invalid entries of $T$ since $C_T \triangleq \min(C, T.\mathrm{size})$; the scalar fraction $\left(\frac{C - k + 1}{C}\right)$ is inherited from the context sampling of word2vec (Section 3.1 in Levy et al., 2015) and was re-derived for graph contexts by Abu-El-Haija et al. (2018); and $\eta_{[u]}$ stores a scalar per node of the traversal walk forest, defaulting to 1 for non-initialized entries and used as a correction term. DeepWalk conducts random walks (visualized as straight lines), whereas our walk tree has a branching factor of $f$. Setting fanout $f = 1$ recovers DeepWalk's simulation, though we found that $f > 1$ outperforms it within fewer iterations: e.g. $f = 5$, within 1 epoch, outperforms DeepWalk's published implementation. Learning can be performed using the accumulated $\mathcal{L}$ as $Z \leftarrow Z - \epsilon\nabla_Z\mathcal{L}$, for a learning rate $\epsilon$." }, { "heading": "4 THEORETICAL ANALYSIS", "text": "Due to space limitations, we include the full proofs of all propositions in the Appendix.

³Before feeding the batch to the model, in practice, we find the nodes not reached by the traversal and remove their corresponding rows (and columns) from X (and A).
⁴We present more methods in the Appendix.
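The sketch of the message-passing specialization referenced above: ROOTEDADJACC and NOREVISITBIAS from (1)-(2) written as plain Python closures, compatible with the traverse sketch of Section 3.2. The dense Ã is an illustrative simplification of the sparse matrix the paper describes.

```python
import numpy as np

def make_gcn_fns(n, delta):
    """Build the ACCUMULATEFN / BIASFN pair of (1)-(2): accumulate a rooted
    adjacency A_tilde during traversal, and avoid re-expanding visited nodes."""
    A_tilde = np.zeros((n, n))            # sparse in a real implementation

    def rooted_adj_acc(T, u, f):
        # (1): record an edge from the visited node u back to its parent T[-1].
        A_tilde[u, T[-1]] = 1.0

    def no_revisit_bias(T, u):
        # (2): uniform over u's neighbors if u has not been expanded yet,
        # otherwise zero mass (caught by line 5 of Algorithm 1).
        indicator = float(A_tilde[u].sum() == 0)
        return indicator * np.ones(delta[u]) / delta[u]

    return A_tilde, rooted_adj_acc, no_revisit_bias
```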
4We present more methods in the Appendix.\n4.1 ESTIMATING kTH POWER OF TRANSITION MATRIX\nWe show that it is possible with GTTF to accumulate an estimate of the transition matrix T raised to the power k. Let Ω denote the walk forest generated by GTTF, Ω(u, k, i) the ith node in the vector of nodes at depth k of the walk tree rooted at u ∈ B, and t^{u,v,k}_i the indicator random variable 1[Ω(u, k, i) = v]. Let the estimate of the kth power of the transition matrix be denoted T̂^k. Entry T̂^k_{u,v} should be an unbiased estimate of T^k_{u,v} for u ∈ B, with controllable variance. We define:\nT̂^k_{u,v} = (1/f^k) ∑_{i=1}^{f^k} t^{u,v,k}_i (6)\nThe fraction in Equation 6 counts the number of times the walker starting at u visits v in Ω, divided by the total number of nodes visited at the kth step from u.\nProposition 1. (UNBIASEDTK) T̂^k_{u,v}, as defined in Equation 6, is an unbiased estimator of T^k_{u,v}.\nProposition 2. (VARIANCETK) The variance of our estimate is upper-bounded: Var[T̂^k_{u,v}] ≤ 1/(4f^k).\nExact computation of kth powers of the transition matrix can be done via repeated sparse matrix–vector multiplication. Specifically, each column of T^k can be computed in O(mk), where m is the number of edges in the graph. Thus, computing T^k in its entirety can be accomplished in O(nmk). However, this can still become prohibitively expensive if the graph grows beyond a certain size. GTTF, on the other hand, can estimate T^k in time complexity independent of the size of the graph (Prop. 8), with low variance. Transition matrix powers are useful for many GRL methods (Qiu et al., 2018)." }, { "heading": "4.2 UNBIASED LEARNING", "text": "As a consequence of Propositions 1 and 2, GTTF enables unbiased learning with variance control for classes of node embedding methods, and provides a convergence guarantee for graph convolution models under certain simplifying assumptions.\nWe start by analyzing node embedding methods. Specifically, we cover two general types: the first is based on matrix factorization of the power-series of the transition matrix, and the second is based on cross-entropy objectives, e.g., DeepWalk (Perozzi et al., 2014) and node2vec (Grover & Leskovec, 2016). These two are covered in Propositions 3 and 4.\nProposition 3. (UNBIASEDTFACTORIZATION) Suppose L = ½ ||LR − ∑_k c_k T^k||²_F, i.e. a factorization objective that can be optimized by gradient descent by calculating ∇_{L,R} L, where the c_k's are scalar coefficients. Let its estimate be L̂ = ½ ||LR − ∑_k c_k T̂^k||²_F, where T̂ is obtained by GTTF according to Equation 6. Then E[∇_{L,R} L̂] = ∇_{L,R} L.\nProposition 4. (UNBIASEDLEARNNE) When learning node embeddings Z ∈ R^{n×d} with an objective function L decomposable as L(Z) = ∑_{u∈V} L1(Z, u) − ∑_{u,v∈V} ∑_k L2(T^k, u, v) L3(Z, u, v), where L2 is linear over T^k, using T̂^k yields an unbiased estimate of ∇_Z L.\nGenerally, L1 (and L3) score the similarity between disconnected (and connected) nodes u and v. The above form of L covers a family of contrastive learning objectives that use a cross-entropy loss and assume logistic or (sampled-)softmax distributions. We provide, in the Appendix, the decompositions for the objectives of DeepWalk (Perozzi et al., 2014), node2vec (Grover & Leskovec, 2016) and WYS (Abu-El-Haija et al., 2018).\nProposition 5. (UNBIASEDMP) Given input activations H^{(l−1)}, graph conv layer (l) can use the rooted adjacency Ã accumulated by ROOTEDADJACC (1) to provide unbiased pre-activation output, i.e. E[◦A^k H^{(l−1)} W^{(l)}] = (D′^{−1/2} A′ D′^{−1/2})^k H^{(l−1)} W^{(l)}, with A′ and D′ defined in (3).\nProposition 6. (UNBIASEDLEARNMP) If the objective of a graph convolution model is convex and Lipschitz continuous, with minimizer θ∗, then utilizing GTTF for graph convolution converges to θ∗." }, { "heading": "4.3 COMPLEXITY ANALYSIS", "text": "Proposition 7. The STORAGE complexity of GTTF is O(m + n). Proposition 8. The TIME complexity of GTTF is O(b f^h) for batch size b, fanout f, and depth h.\nProposition 8 implies that the speed of computation is irrespective of graph size. Methods implemented in GTTF inherit this advantage. For instance, the node embedding algorithm WYS (Abu-El-Haija et al., 2018) is O(n³); however, we apply its GTTF implementation to large graphs." }, { "heading": "5 EXPERIMENTS", "text": "We conduct experiments on 10 different graph datasets, listed in Table 1. We experimentally demonstrate the following. (1) Re-implementing a baseline method using GTTF maintains its performance. (2) Previously-unscalable methods can be made scalable when implemented in GTTF. (3) GTTF achieves good empirical performance when compared to other sampling-based approaches hand-designed for Message Passing. (4) GTTF consumes less memory and trains faster than other popular Software Frameworks for GRL. To replicate our experimental results, our code repository provides one shell script per table cell to produce the corresponding metric, except when we indicate that the metric is copied from another paper. Unless otherwise stated, we used a fanout factor of 3 for GTTF implementations. Learning rates and model hyperparameters are included in the Appendix." }, { "heading": "5.1 NODE EMBEDDINGS FOR LINK PREDICTION", "text": "In link prediction tasks, a graph is partially obstructed by hiding a portion of its edges. The task is to recover the hidden edges. We follow a popular approach to tackle this task: first learn node embeddings Z ∈ R^{n×d} from the observed graph, then predict the link between nodes u and v with score ∝ Zᵀu Zv. We use two ranking metrics for evaluation: ROC-AUC, which measures how well methods rank the hidden edges above randomly-sampled negative edges, and Mean Rank.\nWe re-implement the Node Embedding methods DeepWalk (Perozzi et al., 2014) and WYS (Abu-El-Haija et al., 2018) in GTTF (abbreviated F). Table 2 summarizes link prediction test performance. LiveJournal and Reddit are large datasets, which the original implementation of WYS is unable to scale to. However, the scalable F(WYS) sets a new state-of-the-art on these datasets. For the PPI and HepTh datasets, we copy accuracy numbers for DeepWalk and WYS from (Abu-El-Haija et al., 2018). For LiveJournal, we copy accuracy numbers for DeepWalk and PBG from (Lerer et al., 2019) – note that a well-engineered approach (PBG, (Lerer et al., 2019)), using a mapreduce-like framework, underperforms compared to F(WYS), which is a few-lines specialization of GTTF." }, { "heading": "5.2 MESSAGE PASSING FOR NODE CLASSIFICATION", "text": "We implement in GTTF the message passing models GCN (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), MixHop (Abu-El-Haija et al., 2019), and SimpleGCN (Wu et al., 2019), as their computation is straightforward. For GAT (Veličković et al., 2018) and GCNII (Chen et al., 2020), as they are more intricate, we download the authors’ codes and wrap them as-is with our functional.\nWe show that we are able to run these models in Table 3 (left and middle), and that the GTTF implementations match the baselines’ performance. For the left table, we copy numbers from the published papers.
However, we update GAT to work with TensorFlow 2.0 and we use our updated code (GAT*)." }, { "heading": "5.3 EXPERIMENTS COMPARING AGAINST SAMPLING METHODS FOR MESSAGE PASSING", "text": "We now compare models trained with GTTF (where samples are walk forests) against sampling methods specially designed for Message Passing algorithms (GraphSAINT and ClusterGCN), whose sampling strategies do not match ours.\nTable 3 (right) shows test node classification accuracy on a large dataset: Products. We calculate the accuracy for F(SAGE), but copy from (Hu et al., 2020) the accuracy for the baselines: GraphSAINT (Zeng et al., 2020) and ClusterGCN (Chiang et al., 2019) (both message passing methods); and also node2vec (Grover & Leskovec, 2016) (a node embedding method)." }, { "heading": "Dataset Split # Nodes # Edges # Classes Nodes Edges Tasks", "text": "" }, { "heading": "5.4 RUNTIME AND MEMORY COMPARISON AGAINST OPTIMIZED SOFTWARE FRAMEWORKS", "text": "In addition to the accuracy metrics discussed above, we also care about computational performance. We compare against the software frameworks DGL (Wang et al., 2019) and PyG (Fey & Lenssen, 2019), which offer implementations of many methods. Table 4 summarizes the following. First (left), we show time-per-epoch on large graphs of their implementation of GraphSAGE, compared with GTTF’s, where we make all hyperparameters the same (model architecture, and number of neighbors at message passing layers). Second (middle), we run their GCN implementation on small datasets (Cora, Citeseer, Pubmed) to show peak memory usage. The run times between GTTF, PyG and DGL are similar for these datasets; the comparison can be found in the Appendix. While the aforementioned two comparisons are on popular message passing methods, the third (right) chart shows a popular node embedding method: node2vec’s link prediction test ROC-AUC in relation to its training runtime." }, { "heading": "6 CONCLUSION", "text": "We present a new algorithm, Graph Traversal via Tensor Functionals (GTTF), that can be specialized to re-implement the algorithms of various Graph Representation Learning methods. The specialization takes little effort per method, making it straightforward to port existing methods or introduce new ones. Methods implemented in GTTF run efficiently, as GTTF uses tensor operations to traverse graphs. In addition, the traversal is stochastic and therefore automatically makes the implementations scalable to large graphs. We theoretically show that, for the popular GRL methods we analyze, the learning outcome due to the stochastic traversal is in expectation equivalent to the baseline where the graph is observed at once. Our thorough experimental evaluation confirms that methods implemented in GTTF maintain their empirical performance, and can be trained faster and using less memory even compared to software frameworks that have been thoroughly optimized." }, { "heading": "7 ACKNOWLEDGEMENTS", "text": "We acknowledge support from the Defense Advanced Research Projects Agency (DARPA) under award FA8750-17-C-0106." }, { "heading": "APPENDIX", "text": "" }, { "heading": "A HYPERPARAMETERS", "text": "For the general link prediction tasks we used |B| = |V |, C = 5, f = 3, 10 negative samples per edge, and the Adam optimizer with a learning rate of 0.5, multiplied by a factor of 0.2 every 50 steps, for 200 total iterations.
The differences are listed below.\nThe Reddit dataset was trained using a starting learning rate of 2.0, decaying by 50% every 10 iterations.\nThe LiveJournal task was trained using a fixed learning rate of 0.001, |B| = 5000, f = 2, and 50 negative samples per edge.\nFor the node classification tasks:\nFor F(SimpleGCN) on Amazon, we use f = [15, 15], a batch size of 1024, and a learning rate of 0.02, decaying by a factor of 0.2 after 2 and 6 epochs, for a total of 25 epochs. On Reddit, it is the same except f = [25, 25].\nFor F(SAGE) on Amazon we use f = [20, 10], a two-layer model, a batch size of 256, and fixed learning rates of 0.001 and 0.002 respectively. On Reddit we use f = [25, 20], a fixed learning rate of 0.001, a hidden dimension of 256 and a batch size of 1024. On the Products dataset, we used f = [15, 10], a batch size of 1024, a hidden dimension of 256 and a fixed learning rate of 0.003.\nFor GAT (baseline), we follow the authors’ code and hyperparameters: for Cora and Citeseer, we use Adam with a learning rate of 0.005, L2 regularization of 0.0005, 8 attention heads on the first layer and 1 attention head on the output layer. For Pubmed, we use Adam with a learning rate of 0.01, L2 regularization of 0.01, 8 attention heads on the first layer and 8 attention heads on the output layer. For F(GAT), we use the same aforementioned hyperparameters, a fanout of 3 and a traversal depth of 2 (to cover two layers), i.e. F = [3, 3]. For F(GCN), we use the authors’ recommended hyperparameters: a learning rate of 0.005, 0.001 L2 regularization, and F = [3, 3], for all datasets. For both methods, we apply “patience” and stop the training if the validation loss does not improve for 100 consecutive epochs, reporting the test accuracy at the best validation loss. For F(MixHop), we wrap the authors’ script and use their hyperparameters. For F(GCNII), we use F = [5, 5, 5, 5, 5, 5], as their models are deep (64 layers for Cora). Otherwise, we inherit their network hyperparameters (latent dimensions, number of layers, dropout factor, and their introduced coefficients), as they have tuned them per dataset, but we change the learning rate to 0.005 (half of what they use), extend the patience from 100 to 1000, and extend the maximum number of epochs from 1500 to 5000 – this is because we present a subgraph at each epoch, and therefore we intuitively want to slow down the learning per epoch, similar to the practice of applying Dropout to a neural network. We re-run their shell scripts, with their code modified to use the Rooted Adjacency, which is sampled at every epoch, rather than the real adjacency.\nMLP was trained with 1 layer and a learning rate of 0.01." }, { "heading": "B PROOFS", "text": "" }, { "heading": "B.1 PROOF OF PROPOSITION 1", "text": "Proof. E[T̂^k_{u,v}] = E[(1/f^k) ∑_{i=1}^{f^k} t^{u,v,k}_i] = (1/f^k) ∑_{i=1}^{f^k} E[t^{u,v,k}_i] = (1/f^k) ∑_{i=1}^{f^k} Pr[t^{u,v,k}_i = 1] = (1/f^k) ∑_{i=1}^{f^k} T^k_{u,v} = T^k_{u,v}." }, { "heading": "B.2 PROOF OF PROPOSITION 2", "text": "Proof. Var[T̂^k_{u,v}] = (1/f^{2k}) ∑_{i=1}^{f^k} Var[t^{u,v,k}_i] = f^k T^k_{u,v}(1 − T^k_{u,v}) / f^{2k} = T^k_{u,v}(1 − T^k_{u,v}) / f^k.\nSince 0 ≤ T^k_{u,v} ≤ 1, the quantity T^k_{u,v}(1 − T^k_{u,v}) is maximized at T^k_{u,v} = ½. Hence Var[T̂^k_{u,v}] ≤ 1/(4f^k)." }
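As a quick numerical illustration of the estimator analyzed in Propositions 1–2, the sketch below builds GTTF-style walk trees on a hypothetical 3-node triangle graph and checks that the estimate of Eq. (6), averaged over many trials, matches the exact T^k. All constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)     # toy triangle graph
T = A / A.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
k, f, trials = 2, 3, 2000                  # depth, fanout, Monte Carlo trials

def walk_forest_estimate(u):
    """Fraction of the f**k depth-k walk-tree leaves landing on each node (Eq. 6)."""
    frontier = [u]
    for _ in range(k):
        nxt = []
        for v in frontier:
            nbrs = np.flatnonzero(A[v])
            nxt.extend(rng.choice(nbrs, size=f))   # uniform fanout-f expansion
        frontier = nxt
    counts = np.bincount(np.asarray(frontier), minlength=A.shape[0])
    return counts / f**k

est = np.mean([walk_forest_estimate(0) for _ in range(trials)], axis=0)
exact = np.linalg.matrix_power(T, k)[0]
print(np.round(est, 3), np.round(exact, 3))  # both ~ [0.5, 0.25, 0.25]
```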
, { "heading": "B.3 PROOF OF PROPOSITION 3", "text": "Proof. Consider a d-dimensional factorization of ∑_k c_k T^k, where the c_k's are scalar coefficients:\nL = ½ ||LR − ∑_k c_k T^k||²_F, (7)\nparametrized by L, Rᵀ ∈ R^{n×d}. The gradients of L w.r.t. the parameters are:\n∇_L L = (LR − ∑_k c_k T^k) Rᵀ and ∇_R L = Lᵀ (LR − ∑_k c_k T^k). (8)\nGiven the estimated objective L̂ (replacing T with the GTTF-estimated T̂):\nL̂ = ½ ||LR − ∑_k c_k T̂^k||²_F. (9)\nIt follows that:\nE[∇_L L̂] = E[(LR − ∑_k c_k T̂^k) Rᵀ] = E[LR − ∑_k c_k T̂^k] Rᵀ (scaling property of expectation) = (LR − ∑_k c_k E[T̂^k]) Rᵀ (linearity of expectation) = (LR − ∑_k c_k T^k) Rᵀ (Proposition 1) = ∇_L L.\nThe above steps can similarly be used to show E[∇_R L̂] = ∇_R L." }, { "heading": "B.4 PROOF OF PROPOSITION 4", "text": "Proof. We want to show that E[∇_Z L(T̂^k, Z)] = ∇_Z L(T^k, Z). Since the terms of L1 are unaffected by T̂, they are excluded w.l.o.g. from L in the proof.\nE[∇_Z L(T̂^k, Z)] = E[∇_Z (− ∑_{u,v∈V} ∑_{k∈{1..C}} L2(T̂^k, u, v) L3(Z, u, v))] = (by linearity of expectation, and of L2 over T̂^k) − ∇_Z ∑_{u,v∈V} ∑_{k∈{1..C}} L2(E[T̂^k], u, v) L3(Z, u, v) = (by Prop. 1) − ∇_Z ∑_{u,v∈V} ∑_{k∈{1..C}} L2(T^k, u, v) L3(Z, u, v) = ∇_Z L(T^k, Z).\nThe following table gives the decomposition for DeepWalk, node2vec, and Watch Your Step. Node2vec also introduces a biased sampling procedure based on hyperparameters (they name p and q) instead of uniform transition probabilities. We can equivalently bias the transitions in GTTF to match node2vec's. This would then show up as a change in T̂^k in the objective. This effect can also be included in the objective by multiplying 〈Zu, Zv〉 by the probability of such a transition in L3. In this format, the p and q variables appear in the objective and can be included in the optimization. For WYS, the Q_k are also trainable parameters.\nFor methods in which the transition distribution is not uniform, such as node2vec, there are two options for incorporating this distribution in the loss. The obvious choice is to sample from a biased transition matrix, T_{u,v} = W̃_{u,v}, where W̃ holds the transition weights. Alternatively, the transition bias can be used as a weight on the objective itself. This approach is still unbiased, as\nE_{v∼W̃_u}[L(v, u)] = ∑_{v∈V} Pr_{v∼W̃_u}[v] L(v, u) = ∑_{v∈V} W̃_{u,v} L(v, u)." }, { "heading": "B.5 PROOF OF PROPOSITION 5", "text": "Proof. Let Ã be the neighborhood patch returned by GTTF, and let a tilde indicate a measurement based on the sampled graph Ã, such as the degree vector δ̃ or the diagonal degree matrix D̃. For the remainder of this proof, let all notation for adjacency matrices (A or Ã), diagonal degree matrices (D or D̃), and the degree vector δ refer to the corresponding measure on the graph with self-loops, e.g. A ← A + I_{n×n}. We now show that the expectation of the layer output is unbiased.\nE[◦A^k H^{(l−1)} W^{(l)}] = E[◦A^k] H^{(l−1)} W^{(l)} implies that the output is unbiased if E[◦A^k] = (D^{−1/2} A D^{−1/2})^k.\nE[◦A^k] = E[D^{1/2} (D̃^{−1} Ã)^k D^{−1/2}] = D^{1/2} E[(D̃^{−1} Ã)^k] D^{−1/2}.\nLet P_{u,v,k} be the set of all walks {p = (u, v_1, ..., v_{k−1}, v) | v_i ∈ V }, and let p∃Ã indicate that the path p exists in the graph given by Ã.
Let t_{u,v,k} denote the k-step transition probability from u to v, and let t_p denote the probability of a random walker traversing the graph along path p. Then\nE[(D̃^{−1} Ã)^k_{u,v}] = E[T̃^k_{u,v}] = Pr[t̃_{u,v,k} = 1] = ∑_{p∈P_{u,v,k}} Pr[p∃Ã] Pr[t̃_p = 1 | p∃Ã]\n= ∑_{p∈P_{u,v,k}} (∏_{i=1}^{k} 1[A[p_i, p_{i+1}] = 1] (f+1)/δ[p_i]) (∏_{i=1}^{k} (f+1)^{−1}) = ∑_{p∈P_{u,v,k}} ∏_{i=1}^{k} 1[A[p_i, p_{i+1}] = 1] δ[p_i]^{−1}\n= ∑_{p∈P_{u,v,k}} Pr[t_p = 1] = t_{u,v,k} = (T^k)_{u,v} = (D^{−1}A)^k_{u,v}.\nThus, E[◦A^k] = (D^{−1/2} A D^{−1/2})^k and E[◦A^k H^{(l−1)} W^{(l)}] = (D^{−1/2} A D^{−1/2})^k H^{(l−1)} W^{(l)}.\nFor ease of writing, we assumed nodes have degree δ_u ≥ f, though the proof still holds if that is not the case, as the probability of an outgoing edge being present from u becomes 1 and the transition probability becomes δ_u^{−1}, i.e. the same as no estimate at all." }, { "heading": "B.6 PROOF OF PROPOSITION 6", "text": "GTTF can be seen as a way of applying dropout (Srivastava et al., 2014), and our proof is contingent on the convergence of dropout, which is shown in Baldi & Sadowski (2014). Our dropout is on the adjacency, rather than the features. Denote the output of a graph convolution network5 with H:\nH = GCN_X(A; W) = T X W.\nWe restrict our analysis to GCNs with linear activations. We are interested in quantifying the change of H as A changes, and therefore the fixed (always visible) features X are placed on the subscript. Let Ã denote an adjacency accumulated by GTTF's ROOTEDADJACC (Eq. 1), and\nH̃_c = GCN_X(Ã_c).\nLet A = {Ã_c}_{c=1}^{|A|} denote the (countable) set of all adjacency matrices realizable by GTTF. For the analysis, assume the graph is α-regular; the assumption eases the notation though it is not needed. Therefore, degree δ_u = α for all u ∈ V . Our analysis depends6 on (1/|A|) ∑_{Ã∈A} Ã ∝ A, i.e. the average matrix realizable by GTTF is proportional (entry-wise) to the full adjacency. This can be shown by considering one row at a time: given node u with δ_u = α outgoing neighbors, each of its neighbors has the same appearance probability = 1/δ_u. Summing over all (δ_u choose f) combinations makes each edge appear with the same frequency = |A|/δ_u, noting that |A| evenly divides (δ_u choose f) for all u ∈ V .\nWe define a dropout module Å:\nÅ = ∑_{c=1}^{|A|} z_c Ã_c with z ∼ Categorical(1/|A|, 1/|A|, . . . , 1/|A|) (|A| entries), (10)\nwhere z_c acts as a Multinoulli selector over the elements of A, with one of its entries set to 1 and all others to zero. With these definitions, GCNs can be seen in the dropout framework as H̃ = GCN_X(Å). Nonetheless, in order to inherit the analysis of (Baldi & Sadowski, 2014, see their equations 140 & 141), we need to satisfy the two conditions their analysis is founded upon:\n(i) E[GCN_X(Å)] = GCN_X(A): in the usual (feature-wise) dropout, such a condition is easily verified. (ii) The backpropagated error signal does not vary too much around the mean, across all realizations of Å.\nCondition (i) is satisfied due to the proof of Proposition 5. To analyze the error signal, i.e. the gradient of the error w.r.t. the network, assume the loss function L(H), which outputs a scalar loss, is λ-Lipschitz continuous.\n5The following definition averages the node features (uses non-symmetric normalization) and appears in multiple GCNs, including Hamilton et al. (2017).\n6If not α-regular, it would be (1/|A|) ∑_{Ã∈A} Ã ∝ D^{−1}A.\nThe Lipschitz continuity allows us to bound the difference in error signal between L(H) and L(H̃):\n‖∇_H L(H) − ∇_H L(H̃)‖²₂ ≤(a) λ (∇_H L(H) − ∇_H L(H̃))ᵀ (H − H̃) (11)\n≤(b) λ ‖∇_H L(H) − ∇_H L(H̃)‖₂ ‖H − H̃‖₂ (12)\n≤(w.p. ≥ 1 − 1/Q²) λ ‖∇_H L(H) − ∇_H L(H̃)‖₂ Wᵀ Xᵀ Q √(Var[T]) X W (13)\n= (λQ / (2√f)) ‖∇_H L(H) − ∇_H L(H̃)‖₂ ‖W‖²₁ ‖X‖²₁ (14)\n‖∇_H L(H) − ∇_H L(H̃)‖₂ ≤ (λQ / (2√f)) ‖W‖²₁ ‖X‖²₁ (15)\nwhere (a) is by Lipschitz continuity, (b) is by the Cauchy–Schwarz inequality, “w.p.” means with probability and uses Chebyshev’s inequality, and the following equality holds because the variance of T is shown element-wise in the proof of Prop. 2. Finally, we get the last line by dividing both sides by the common term. This shows that one can make the error signal for the different realizations arbitrarily small, for example by choosing a larger fanout value or by putting (convex) norm constraints on W and X, e.g. through batchnorm and/or weightnorm. Since we can have ∇_H L(H) ≈ ∇_H L(H̃_1) ≈ ∇_H L(H̃_2) ≈ · · · ≈ ∇_H L(H̃_{|A|}) with high probability, the analysis of Baldi & Sadowski (2014) applies. Effectively, it can be thought of as an online learning algorithm where the elements of A are the stochastic training examples, analyzed per (Bottou, 1998; 2004), as explained by Baldi & Sadowski (2014)." }, { "heading": "B.7 PROOF OF PROPOSITION 7", "text": "The storage complexity of CompactAdj is O(sizeof(δ) + sizeof(Â)) = O(n + m). Moreover, for extremely large graphs, the adjacency can be row-wise partitioned across multiple machines, therefore admitting linear scaling. However, we acknowledge that choosing which rows to assign to which machines can drastically affect performance. Balanced partitioning is ideal; it is an NP-hard problem, but many approximations have been proposed. Nonetheless, reducing inter-machine communication when distributing the data structure is outside our scope." }, { "heading": "B.8 PROOF OF PROPOSITION 8", "text": "For each step of GTTF, the computational complexity is O(b f^h). This follows trivially from the GTTF functional: each node in the batch (b of them) builds a tree with depth h and fanout f, i.e. with O(f^h) tree nodes. This calculation assumes random number generation, ACCUMULATEFN and BIASFN take constant time. The searchsorted function is called on a sorted list: the cumulative sum of probabilities." }, { "heading": "C ADDITIONAL GTTF IMPLEMENTATIONS", "text": "" }, { "heading": "C.1 MESSAGE PASSING IMPLEMENTATIONS", "text": "" }, { "heading": "C.1.1 GRAPH ATTENTION NETWORKS (GAT, VELIČKOVIĆ ET AL., 2018)", "text": "One can implement GAT by following the previous subsection, utilizing ACCUMULATEFN and BIASFN defined in (1) and (2), but replacing the model (3) by GAT's:\nGAT(Ã, X; A, W1, W2) = softmax((A ∘ ◦A) ReLU((A ∘ ◦A) X W1) W2); (16)\nwhere ∘ is the Hadamard product and A is an n × n matrix placing a positive scalar (an attention value) on each edge, parametrized by the multi-headed attention described in (Veličković et al., 2018). However, for some high-degree nodes that put most of the attention weight on a small subset of their neighbors, sampling uniformly (with BIASFN=NOREVISITBIAS) might mostly sample neighbors with entries in A with value ≈ 0, and could require more epochs for convergence.
However, our flexible functional allows us to propose a sample-efficient alternative that is, in expectation, equivalent to the above:\nGAT(Ã, X; A, W1, W2) = softmax((√A ∘ ◦A) ReLU((√A ∘ ◦A) X W1) W2); (17)\ndef GATBIAS(T, u): return NOREVISITBIAS(T, u) ∘ √(A[u, Â[u]]); (18)" }, { "heading": "C.1.2 DEEP GRAPH INFOMAX (DGI, VELIČKOVIĆ ET AL., 2019)", "text": "A DGI implementation on GTTF can use ACCUMULATEFN=ROOTEDADJACC, defined in (1). To create the positive graph, it can sample some nodes B ⊂ V , pass B to GTTF's Traverse, and utilize the accumulated adjacency Â for running GCN(Â, X_B) and GCN(Â, X_permute), where the second run randomly permutes the order of nodes in X. Finally, the outputs of those GCNs can be fed into a readout function, whose output goes to a discriminator trying to classify whether the readout latent vector corresponds to the real or the permuted features." }, { "heading": "C.2 NODE EMBEDDING IMPLEMENTATIONS", "text": "" }, { "heading": "C.2.1 NODE2VEC (GROVER & LESKOVEC, 2016)", "text": "A simple implementation follows from above: N2VACC := DEEPWALKACC; but override BIASFN =\ndef N2VBIAS(T, u): return p^{−1[u = T−2]} q^{−1[〈A[T−2], A[u]〉 > 0]}; (19)\nwhere 1 denotes the indicator function, and p, q > 0 are hyperparameters of node2vec assigning (unnormalized) probabilities for transitioning back to the previous node or to a node connected to it. 〈A[T−2], A[u]〉 counts mutual neighbors between the considered node u and the previous node T−2. An alternative implementation is to not override BIASFN but rather fold it into ACCUMULATEFN, as:\ndef N2VACC(T, u, f): DEEPWALKACC(T, u, f); η[u] ← η[u] × N2VBIAS(T, u); (20)\nBoth alternatives are equivalent in expectation. However, the latter directly exposes the parameters p and q to the objective L, allowing them to be differentiable w.r.t. L and therefore trainable via gradient descent, rather than by grid-search. Nonetheless, parameterizing p & q is beyond our scope." }, { "heading": "C.2.2 WATCH YOUR STEP (WYS, ABU-EL-HAIJA ET AL., 2018)", "text": "First, embedding dictionaries R, L ∈ R^{n×d/2} can be initialized randomly. Then, repeatedly over batches B ⊆ V , the loss L can be initialized to estimate the negative part of the objective:\nL ← − ∑_{u∈B} log σ(−E_{v∈U(V)} [〈Ru, Lv〉 + 〈Rv, Lu〉]),\nThen call GTTF's Traverse, passing the following ACCUMULATEFN =\ndef WYSACC(T, u): if T.size() ≠ Q.size(): return; t ← T[0]; U ← T[1:] ∪ [u];\nctx_weighted_L ← ∑_j Q_j L_{U_j}; ctx_weighted_R ← ∑_j Q_j R_{U_j};\nL ← L − log(σ(〈R_t, ctx_weighted_L〉 + 〈L_t, ctx_weighted_R〉));" }, { "heading": "D MISCELLANEOUS", "text": "" }, { "heading": "D.1 SENSITIVITY", "text": "The following figures show the sensitivity of fanout and walk depth for WYS on the Reddit dataset." }, { "heading": "D.2 RUNTIME OF GCN ON CITATION NETWORKS", "text": "Citation networks (Cora, Citeseer, Pubmed) have been popularized by Kipf & Welling (2017), and for completeness, we report in Figure 4 the runtime versus test accuracy of GCN on these networks. We compare against PyG, which is optimized for GCN. We find that the methods are somewhat comparable in terms of training time." } ]
2021
null
SP:1f2d445f78bb495d09e9b796de3662ab6a6b26af
[ "When the number of classes is very large, calculating softmax for classification (e.g., in backpropagation) is computationally costly. Approaches based on negative sampling have been used in literature to alleviate this problem. However, most of existing approaches are (argued to be) either inaccurate or computationally costly. This paper proposes to use the well-known LSH (locality sensitive hashing) method to address this problem. In particular, two variants, LSH label and LSH Embedding are showed to speed up the training in terms of time needed to converge compared with a number of baseline methods over three large scale datasets. " ]
Softmax classifiers with a very large number of classes naturally occur in many applications such as natural language processing and information retrieval. The calculation of full-softmax is very expensive from the computational and energy perspective. There have been a variety of sampling approaches to overcome this challenge, popularly known as negative sampling (NS). Ideally, NS should sample negative classes from a distribution that is dependent on the input data, the current parameters, and the correct positive class. Unfortunately, due to the dynamically updated parameters and data samples, there does not exist any sampling scheme that is truly adaptive and also samples the negative classes in constant time every iteration. Therefore, alternative heuristics like random sampling, static frequency-based sampling, or learning-based biased sampling, which primarily trade either the sampling cost or the adaptivity of samples per iteration, are adopted. In this paper, we show a class of distributions where the sampling scheme is truly adaptive and provably generates negative samples in constant time. Our implementation in C++ on a commodity CPU is significantly faster, in terms of wall-clock time, compared to the most optimized TensorFlow implementations of standard softmax or other sampling approaches on modern GPUs (V100s).
[]
[ { "authors": [ "Alexandr Andoni", "Piotr Indyk" ], "title": "E2lsh: Exact euclidean locality-sensitive hashing", "venue": "Technical report,", "year": 2004 }, { "authors": [ "Robert Bamler", "Stephan Mandt" ], "title": "Extreme classification via adversarial softmax approximation", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Samy Bengio", "Krzysztof Dembczynski", "Thorsten Joachims", "Marius Kloft", "Manik Varma" ], "title": "Extreme classification (dagstuhl seminar 18291)", "venue": "Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik,", "year": 2019 }, { "authors": [ "K. Bhatia", "K. Dahiya", "H. Jain", "A. Mittal", "Y. Prabhu", "M. Varma" ], "title": "The extreme classification repository: Multi-label datasets and code, 2016", "venue": "URL http://manikvarma.org/downloads/ XC/XMLRepository.html", "year": 2016 }, { "authors": [ "Moses Charikar", "Paris Siminelakis" ], "title": "Hashing-based-estimators for kernel density in high dimensions", "venue": "IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2017 }, { "authors": [ "Moses S Charikar" ], "title": "Similarity estimation techniques from rounding algorithms", "venue": "In Proceedings of the thiry-fourth annual ACM symposium on Theory of computing,", "year": 2002 }, { "authors": [ "Beidi Chen", "Anshumali Shrivastava" ], "title": "Densified winner take all (wta) hashing for sparse datasets", "venue": "In Uncertainty in artificial intelligence,", "year": 2018 }, { "authors": [ "Beidi Chen", "Anshumali Shrivastava", "Rebecca C Steorts" ], "title": "Unique entity estimation with application to the syrian conflict", "venue": "The Annals of Applied Statistics,", "year": 2018 }, { "authors": [ "Beidi Chen", "Tharun Medini", "James Farwell", "Sameh Gobriel", "Charlie Tai", "Anshumali Shrivastava" ], "title": "Slide : In defense of smart algorithms over hardware acceleration for large-scale deep learning systems, 2019a", "venue": null, "year": 2019 }, { "authors": [ "Beidi Chen", "Yingchen Xu", "Anshumali Shrivastava" ], "title": "Fast and accurate stochastic gradient estimation", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Beidi Chen", "Yingchen Xu", "Anshumali Shrivastava" ], "title": "Fast and accurate stochastic gradient estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Welin Chen", "David Grangier", "Michael Auli" ], "title": "Strategies for training large vocabulary neural language models", "venue": "arXiv preprint arXiv:1512.04906,", "year": 2015 }, { "authors": [ "Linhao Dong", "Shuang Xu", "Bo Xu" ], "title": "Speech-transformer: a no-recurrence sequence-to-sequence model for speech recognition", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Michel X Goemans", "David P Williamson" ], "title": "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming", "venue": "Journal of the ACM (JACM),", "year": 1995 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Monika Henzinger" ], "title": "Finding near-duplicate web pages: a large-scale evaluation of algorithms", 
"venue": "In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval,", "year": 2006 }, { "authors": [ "Piotr Indyk", "Rajeev Motwani" ], "title": "Approximate nearest neighbors: towards removing the curse of dimensionality", "venue": "In Proceedings of the thirtieth annual ACM symposium on Theory of computing,", "year": 1998 }, { "authors": [ "Piotr Indyk", "David Woodruff" ], "title": "Polylogarithmic private approximations and efficient matching", "venue": "In Theory of Cryptography Conference,", "year": 2006 }, { "authors": [ "Himanshu Jain", "Venkatesh Balasubramanian", "Bhanu Chunduri", "Manik Varma" ], "title": "Slice: Scalable linear extreme classifiers trained on 100 million labels for related searches", "venue": "In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining,", "year": 2019 }, { "authors": [ "Sébastien Jean", "Kyunghyun Cho", "Roland Memisevic", "Yoshua Bengio" ], "title": "On using very large target vocabulary for neural machine translation", "venue": "arXiv preprint arXiv:1412.2007,", "year": 2014 }, { "authors": [ "Tharun Kumar Reddy Medini", "Qixuan Huang", "Yiqiu Wang", "Vijai Mohan", "Anshumali Shrivastava" ], "title": "Extreme classification in log memory using count-min sketch: A case study of amazon search with 50m products", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "arXiv preprint arXiv:1301.3781,", "year": 2013 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Ankit Singh Rawat", "Jiecao Chen", "Felix Xinnan X Yu", "Ananda Theertha Suresh", "Sanjiv Kumar" ], "title": "Sampled softmax with random fourier features", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Anshumali Shrivastava", "Ping Li" ], "title": "Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips)", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Ryan Spring", "Anshumali Shrivastava" ], "title": "A new unbiased and efficient class of lsh-based samplers and estimators for partition function computation in log-linear models", "venue": "arXiv preprint arXiv:1703.05160,", "year": 2017 }, { "authors": [ "Ryan Spring", "Anshumali Shrivastava" ], "title": "Scalable and sustainable deep learning via randomized hashing", "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2017 }, { "authors": [ "Fei Wang", "Mengqing Jiang", "Chen Qian", "Shuo Yang", "Cheng Li", "Honggang Zhang", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Residual attention network for image classification", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Liang Yao", "Chengsheng Mao", 
"Yuan Luo" ], "title": "Graph convolutional networks for text classification", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Xiang Zhang", "Junbo Zhao", "Yann LeCun" ], "title": "Character-level convolutional networks for text classification", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Rajaraman", "Ullman" ], "title": "and preserves the cosine similarity measure. Given a vector x, SRP generates a random w vector with each component generated from i.i.d. normal", "venue": "wi∼N(0,", "year": 2006 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural Networks (NN) have successfully pushed the boundaries of many application tasks, such as image or text classification (Wang et al., 2017; Yao et al., 2019), speech recognition (Dong et al., 2018) and recommendation systems (Zhang et al., 2015; Medini et al., 2019). Many hard AI problems are currently modeled as massive multiclass or multilabel problems leading to a drastic improvement over prior work. For example, popular NLP models predicts the best word, given the full context observed so far. Such models are becoming the state-of-the-art. Recommendation systems and related Information Retrieval (IR) problems are classical examples of machine learning with outrageously large outputs (Medini et al., 2019; Jain et al., 2019). In IR, given the user query, the task is to predict few relevant documents (or products) from among hundreds of millions possible documents, a typical machine learning problem with massive output space.\nOwing to the significance of the problem, machine learning with large output space or alternatively also known as extreme classification is a field in itself (Bengio et al., 2019). A large number of classes naturally brings a new set of computational and memory challenge.\nFortunately, with access to our powerful Graphic Processing Unit (GPU) (Owens et al., 2008), training processes of large models have been accelerated heavily. That is because GPUs have a unique advantage for matrix multiplication, which usually requires a cubic time algebraic operation (O(N3)) and is the major and costly building block of NN computations. However, the number of concurrent operations required in large matrix multiplications for classification with extensive number of classes has reached a limit for further speedups even using GPUs." }, { "heading": "1.1 NEGATIVE SAMPLING", "text": "The common approach to address this challenge is known as negative sampling (Pennington et al., 2014; Jean et al., 2014; Rawat et al., 2019; Mikolov et al., 2013b). In Negative Sampling, we only sample a small subset of classes for each input and compute the softmax and cross-entropy function. This subset usually includes the positive (true) and a small set of negative (false) classes. Negative\nsampling scales down the computations in the most cumbersome last layer, thereby making training efficient.\nHowever, approximating full-softmax with small sub-sample results in poor convergence if the negative samples are not chosen appropriately. For instance, let us take the example of a recommendation system (predicting products relevant to a query) with a large number of products. If the input query is ‘Nike Running Shoes’, the true loss concentrates on the specific small number of confusing (’hard’) negative classes like ‘Adidas Running Shoes’. Since the number of classes is huge, random sampling is unlikely to identify this hard negative class. Other heuristics like frequent class sampling as negative samples are also unlikely to find these hard negatives most of the time. Clearly, without discriminating between closely related negative samples, the classifier cannot achieve good accuracy. Our experiments on recommendations datasets clearly indicate this sub-optimality of current negative sampling heuristics.\nIf there exists a way to sample the subset of confusing classes from the skewed distribution, the training progress would be largely accelerated. 
However, as evident from the example, such a ground-truth distribution depends on the input sample and current model parameters. Moreover, this distribution varies significantly as training progresses. Consider the same query ‘Nike Running Shoes’: initially, when the network has not learned anything and has random weights, all classes are equally confusing. Thus, uniform sampling is optimal initially, as the network has just started to learn. As the training progresses, the network’s belief starts getting more concentrated on a few classes; at this time, a negative sample of, say, ‘baby toys’ is not at all useful because the network has already learned to tell them apart. The sampling distribution keeps changing, often drastically, as the training progresses.\nTo the best of our knowledge, there does not exist any statistical sampling scheme and implementation for adaptive Negative Sampling where the cost of maintaining and updating the distribution, per iteration, is O(1) (independent of the number of classes). This is because the input, current true class, and parameters update all the sampling weights in every iteration. It is widely assumed that there is no such sampling scheme, and hence several heuristic alternatives are proposed.\nThe first set of alternatives uses a static distribution. The most popular ones, implemented in TensorFlow, assume a static distribution such as one based on the frequency of classes. Uniform sampling is another popular choice.\nLearning-based alternatives are also proposed (Bamler & Mandt, 2020), where a machine learning generator predicts (or generates) the negative samples. The sampler is solving the same hard problem, prediction over a large number of classes, as a sub-routine. Most importantly, since the sampling distribution for the same data point shifts drastically throughout training, ML models are likely to suffer.\nNegative sampling alternatives try to balance the sampling cost with quality. So far, negative sampling methods, other than the ones based on static sampling, have failed to demonstrate any training time improvements over the optimized full softmax implementation on GPUs. Static sampling strategies are known to be fast but lead to poor accuracy. With current alternatives, the cost of improving sample quality does not seem worth it against the GPU acceleration of full softmax.\nIn this paper, we change this. Our work provides a truly constant-time adaptive sampling scheme utilizing the recent advances in Locality Sensitive Sampling (Charikar & Siminelakis, 2017; Spring & Shrivastava, 2017a). More impressively, we provide an efficient implementation of our proposal on CPU, which outperforms TensorFlow’s implementation of softmax and other negative sampling strategies on some of the best available GPUs (V100) in terms of wall-clock training time.\nSummary of Contributions:\n1) We propose two efficient schemes for sampling ‘hard’ negatives where the negative sampling distribution provably adapts to changing parameters and the data instance. Furthermore, the sampling cost is provably constant (independent of the number of classes).\n2) We show that our technique is not only provably adaptive but also practical. We provide an efficient CPU implementation, in C++, of our negative sampling approach.
We demonstrate the effectiveness of a truly constant-time negative sampler by showing that our implementation significantly outperforms standard TensorFlow on a V100 GPU in wall-clock speed, while retaining the accuracy.\n3) We provide a rigorous evaluation of our proposal, with its efficient implementation, against full softmax and popular approximations like sampled softmax, frequency-based sampled softmax, top-K activation softmax, differentiated softmax (D-Softmax), and Noise Contrastive Estimation (NCE). We report the time-wise and iteration-wise precision on large recommendation datasets like Amazon-670K and WikiLSH-325K, and the popular natural language processing Text8 corpus." }, { "heading": "1.2 LSH BASED HASH TABLES", "text": "In this section, we briefly describe the recent development of using locality sensitive hashing for sampling and estimation (Chen et al., 2019b; Spring & Shrivastava, 2017a; Charikar & Siminelakis, 2017; Spring & Shrivastava, 2017b). Locality Sensitive Hashing (Indyk & Motwani, 1998; Indyk & Woodruff, 2006) is a widely used paradigm for large-scale similarity search and nearest neighbor search. LSH is a family of hash functions with the unique property that vectors ‘close’ w.r.t. some distance metric are more likely to have the same hash code than vectors that are ‘far’ from each other. Formally, one sufficient condition for a hash family H to be an LSH family is that the collision probability PrH(h(x) = h(y)) is a monotonically increasing function of the similarity:\nPrH(h(x) = h(y)) = f(Sim(x, y)), (1)\nwhere f is a monotonically increasing function.\nThe idea is to use the hash value of x, i.e., h(x), to generate the key of x in the hash table. We first initialize L hash tables by constructing a meta-LSH hash function using K independent hash functions for each of them. For details, see (Andoni & Indyk, 2004). There are three major steps:\nPre-processing Phase: Given a dataset of size n, we first insert all the data points into the hash tables using the meta-LSH formed by concatenating K independent LSH hash functions. We only store the index/pointer of the data point in the hash tables instead of the entire vector. The cost of addition is K × L hash computations followed by L insertions into the buckets.\nQuery Phase: During the query phase, we use the same meta-LSH hash to compute the hash codes for the query. Then we probe the corresponding bucket of each table and retrieve samples from it. The union of candidates from all hash tables constitutes the samples for the particular query.\nUpdate Phase: If an existing element in the database is updated, we can delete it from the hash tables and re-add it. The cost is equivalent to twice the insertion cost of an element, which is 2 × K × L.\n1.3 ADAPTIVE SAMPLING VIEW OF LSH\nDenote by p_{qx} the probability of retrieving x from the dataset when queried with a given query q. In (Indyk & Motwani, 1998), it was shown that for the (K, L)-parametrized LSH algorithm, the precise form is p_{qx} = 1 − (1 − α^K)^L, where α is the collision probability of query q and x under the given LSH function, i.e. α = PrH(h(x) = h(q)). p_{qx} is monotonic in α, which is in turn monotonic in the similarity between the query q and the data element x. Note the similarity measure is dependent on the LSH function in use. See (Spring & Shrivastava, 2017a) for details.\nConstant Time Sampling: It should be noted that the cost of sampling is the cost of querying, which is only K × L. This sampling cost is independent of the number of elements in the data.
Clearly, the probability p_{qx} is dependent on the query, and every element x in the data has a different sampling probability. Thus, even though our sampling scheme induces n different sampling probabilities every time the query q is changed, the sampling cost is independent of n, and in fact is constant if K and L are small constants. All this assumes a one-time O(n) preprocessing.\nThis efficient sampling view of LSH has been used in a wide range of applications, such as deep neural networks (Spring & Shrivastava, 2017b; Chen et al., 2019a), kernel density estimation (Charikar & Siminelakis, 2017), record linkage (Chen et al., 2018), and optimization (Chen et al., 2019c).\nRecent advances in fast inner product search using asymmetric LSH have made it possible to sample large inner products (Shrivastava & Li, 2014). Effectively, given a query q, it is possible to sample an element x from the database with probability proportional to a monotonic function of the inner product, f(qᵀx). Here f is a monotonically increasing function." }, { "heading": "2 OUR PROPOSAL: LOCALITY SENSITIVE NEGATIVE SAMPLING (LNS)", "text": "Notations: We will start by defining a few vectors in the neural network setting illustrated in Figure 2. We are in the large-softmax setting. Here, we will use N to denote the total number of classes. Define a vector wi ∈ Rd (class vector) to be the vector associated with class i in the last layer of the neural network. We will use (x, y) to denote the current input sample to the neural network for which we want to generate negative samples. We will use Ex ∈ Rd (final input embedding) to denote the vector of activations in the penultimate layer of the neural network when fed with input x.\nWe first describe our sampling procedure, and later we argue why it is distribution aware and constant time. Our approach, just like the LSH algorithm, has three phases. The first phase is a one-time costly (O(N)) preprocessing stage. The other two phases, the sampling and update phases, are performed in each iteration, and both of them are constant-time operations independent of N .\nOne-time Preprocessing Phase during Initialization: We start with randomly initializing the neural network parameters. This automatically initializes all the class vectors wi. We now preprocess all these randomly initialized class vectors in (K, L)-parameterized LSH hash tables, as described in Section 1.2. This is a one-time operation during initialization.\nSampling Phase for every input (x, y): In this phase, we process input x to the penultimate layer and get the final input embedding Ex. Now, instead of processing all the N nodes in the last layer, we query the hash tables with either the vector Ex (LSH Embedding) or the vector corresponding to the true label y, i.e., wy (LSH Label). This precisely describes our two sampling schemes. We can obviously mix and match, but we consider these two choices as two different methods for the simplicity of analysis and evaluations.\nWhen we query, we generate a small set of sampled candidates, call them C, forming our negative samples. Thus, we only compute the activation of nodes belonging to C ∪ {y} in the last layer and treat the others as zero activation.\nUpdate Hash Tables with Update in Weights: During backpropagation for input (x, y), we only update the weights of C ∪ {y} in the last layer. We update these changed weights in the LSH hash tables. Next, we first argue why this sampling is distribution aware and adaptive with every parameter and input change.
We will then argue that the sampling and update process is significantly efficient. It is a constant-time operation that is easily parallelizable." }, { "heading": "2.1 WHAT IS THE SAMPLING DISTRIBUTION? IS IT ADAPTIVE?", "text": "We start with two theorems that give the precise probability distribution of sampling a class as a negative sample with the LSH Label and LSH Embedding methods, provided the input (x, y) and current parameters. We will use p_{xy} as the collision probability of the LSH hash values of x and y.\nTheorem 1 (LSH Label Distribution) For an input (x, y) and LSH parameters (K, L), the probability of sampling a class i ≠ y as a negative sample with the LSH Label method is given by\np_i ∝ 1 − (1 − p_{w_y w_i}^K)^L,\nwhere wy and wi are the weights associated with the true class y and class i, respectively. Furthermore, the probability of sampling class i is higher than that of any other class j if and only if sim(wy, wi) > sim(wy, wj). Here sim is the underlying similarity function of the LSH.\nTheorem 2 (LSH Embedding Distribution) For an input (x, y) and LSH parameters (K, L), the probability of sampling a class i ≠ y as a negative sample with the LSH Embedding method is given by\np_i ∝ 1 − (1 − p_{E_x w_i}^K)^L,\nwhere Ex is the embedding vector of input x and wi is the weight vector associated with class i. Furthermore, the probability of sampling class i is higher than that of any other class j if and only if sim(Ex, wi) > sim(Ex, wj). Here sim is the underlying similarity function of the LSH.\nComments: The expressions for the probabilities are immediate from the sampling view of LSH. The expression 1 − (1 − p^K)^L is monotonically increasing in p, the collision probability, which in turn is monotonically increasing in the underlying similarity function sim. Clearly, the distribution is adaptive, as it changes with the input (x, y) as well as the parameters. So any update in the parameters, or any change in the input, changes the sampling distribution completely. However, the sampling cost is constant and independent of the number of classes we are sampling from!\nIntuition of LSH Label: Coming back to our example of the class ‘Nike Running Shoes’, let us focus on the LSH Label distribution. Initially, when all other labels have random weights, the similarity between the label ‘Nike Running Shoes’ and any other label will be random. So initial negative sampling should be like uniform sampling. However, as the learning progresses, it is likely that ‘Nike Running Shoes’ and ‘Adidas Running Shoes’ will get close enough. Their weights will have high similarity (high sim); at that time, LSH Label sampling will select ‘Adidas Running Shoes’ as a likely negative sample for the ‘Nike Running Shoes’ class. (A minimal code sketch of this sampling pipeline is given below.)
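To make the two schemes concrete, the following is a minimal NumPy sketch of the preprocessing and sampling phases, assuming SimHash (signed random projections, Appendix A.2) as the LSH family. All sizes and names are illustrative, not the paper's C++ implementation, which also supports DWTA hashes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, K, L = 10000, 16, 6, 25                # classes, dim, bits, tables

W = rng.normal(size=(N, d))                  # last-layer class vectors w_i
planes = rng.normal(size=(L, K, d))          # K random hyperplanes per table

def bucket_ids(vecs):
    """K-bit SimHash code per vector, one per table: shape (L, len(vecs))."""
    bits = (np.einsum('lkd,nd->lnk', planes, vecs) > 0).astype(np.int64)
    return (bits << np.arange(K)).sum(axis=-1)   # pack K bits into a bucket id

# One-time preprocessing: insert every class id into its bucket in each table.
tables = [dict() for _ in range(L)]
codes = bucket_ids(W)
for l in range(L):
    for i, c in enumerate(codes[l]):
        tables[l].setdefault(int(c), []).append(i)

def negative_samples(query_vec, y):
    """Sampling phase: union of probed buckets = candidate negatives C."""
    q = bucket_ids(query_vec[None, :])
    C = set()
    for l in range(L):
        C.update(tables[l].get(int(q[l, 0]), []))
    C.discard(y)                              # exclude the true class
    return C

# LSH Label queries with w_y; LSH Embedding would query with E_x instead.
# Update phase (not shown): delete a changed w_i from its buckets, re-insert.
y = 0
print(len(negative_samples(W[y], y)))        # larger K gives sparser buckets
```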
Also, in practice K and L are small constants. Furthermore, we have a choice to delay the hash table updates, as it only changes the sampling probability of few elements." }, { "heading": "2.3 ALGORITHM AND IMPLEMENTATION DETAILS", "text": "First we construct K × L hash functions and initialize the weights of the network and L hash tables. The LSH hash code of weight vectors of the last layer are computed and the id of the corresponding neuron is saved into the hash buckets. During the feed-forward path in the last layer, we query whether the embedding vector (LSH-embedding scheme) or the label vector of true class (LSH-label scheme) and retrieve the classes from hash table which are considered as negative classes. Instead of computing the activation of all the output nodes (full softmax), we compute the activations of the true classes and the corresponding retrieved negative classes. For the backpropagation, we backpropagate the errors to calculate the gradient and update the weights for the active nodes. Please refer to Algorithm 1 in the Appendix [A.4]." }, { "heading": "3 EXPERIMENTS", "text": "In this section, we will empirically evaluate the performance of our LSH Negative Sampling (LNS) approach against other sampling schemes that are conducive to GPUs. The real advantage of LNS is noticeable with huge neural networks. The popular extreme classification challenges have models with more than 100 million parameters, which are ideal for our purpose. For these challenges, most of the heavy computations happen in the last layer." }, { "heading": "3.1 DATASETS", "text": "We evaluate our framework and other baselines on three datasets. Amazon-670K and WikiLSH-325K are two datasets from extreme classification repository (Bhatia et al., 2016) and Text8 is a popular NLP dataset. The detailed statistics about the dimensions and samples sizes are included in Table 1 in appendix [A.1]." }, { "heading": "3.2 BASELINES", "text": "We benchmark our proposed framework against full-softmax, sampled-softmax, topK-softmax, frequency-based-softmax, noise contrastive estimation and differentiated softmax (all explained below). All the baselines are implemented on TensorFlow-GPU. To have a fair comparison, the architecture, optimizer and size of hidden layer are exactly the same for all the methods on each dataset. Please note that our proposal only uses CPU and yet outperforms the other methods.\nFull-Softmax: Full-softmax updates the weights of all the output neurons, which makes it computationally expensive and intractable for extreme classification framework. Sampled-Softmax: Sampled-softmax draws negative samples based on log-uniform distribution and updates their corresponding weights plus the weights for the true classes. This approach alleviates the computational\nbottleneck but degrades the performance in terms of accuracy. TopK-Softmax: TopK-softmax updates the weights of the output neurons with k highest activations (including the true classes). This framework maintains better accuracy than sampled-softmax but with a slower convergence rate due to the scoring and sorting of all classes. Frequency-based-Softmax: Frequency-based-softmax samples the classes in proportion to the frequency of their occurrence in the training data. Computationally, this is the same as Sampled-softmax, however it samples negative classes from more frequent classes with higher probability. 
" }, { "heading": "3 EXPERIMENTS", "text": "In this section, we empirically evaluate the performance of our LSH Negative Sampling (LNS) approach against other sampling schemes that are conducive to GPUs. The real advantage of LNS is noticeable with huge neural networks. The popular extreme classification challenges have models with more than 100 million parameters, which are ideal for our purpose. For these challenges, most of the heavy computation happens in the last layer." }, { "heading": "3.1 DATASETS", "text": "We evaluate our framework and the other baselines on three datasets. Amazon-670K and WikiLSH-325K are two datasets from the extreme classification repository (Bhatia et al., 2016), and Text8 is a popular NLP dataset. Detailed statistics about the dimensions and sample sizes are included in Table 1 in appendix [A.1]." }, { "heading": "3.2 BASELINES", "text": "We benchmark our proposed framework against full-softmax, sampled-softmax, topK-softmax, frequency-based-softmax, noise contrastive estimation and differentiated softmax (all explained below). All the baselines are implemented on TensorFlow-GPU. For a fair comparison, the architecture, optimizer and size of the hidden layer are exactly the same for all the methods on each dataset. Please note that our proposal uses only CPU and yet outperforms the other methods.
Full-Softmax: Full-softmax updates the weights of all the output neurons, which makes it computationally expensive and intractable in the extreme classification setting. Sampled-Softmax: Sampled-softmax draws negative samples based on a log-uniform distribution and updates their corresponding weights plus the weights of the true classes. This approach alleviates the computational bottleneck but degrades the performance in terms of accuracy. TopK-Softmax: TopK-softmax updates the weights of the output neurons with the k highest activations (including the true classes). This framework maintains better accuracy than sampled-softmax, but has a slower convergence rate due to the scoring and sorting of all classes. Frequency-based-Softmax: Frequency-based-softmax samples the classes in proportion to the frequency of their occurrence in the training data. Computationally, this is the same as sampled-softmax, but it samples more frequent classes as negatives with higher probability. Noise-Contrastive Estimation (NCE): The NCE loss (Gutmann & Hyvärinen, 2010) tackles the multi-class classification problem with multiple binary classifiers instead. Each binary classifier is trained with a logistic loss to distinguish between true classes and negative classes. Negative classes are sampled from a noise distribution, which is typically a log-uniform distribution or one based on class frequencies. Differentiated Softmax (D-Softmax): D-softmax (Chen et al., 2015) defines the softmax-layer weight matrix as a sparse block-diagonal matrix where the blocks are constructed based on class frequencies, e.g., more frequent classes get more parameters. LNS (our proposal): Our proposed negative sampling algorithm samples classes from an output distribution that is adaptive to the input, the true class and the model parameters. Our model utilizes LSH to sample the most confusing (the most similar but false) classes as negative samples in constant time." }, { "heading": "3.3 ARCHITECTURE AND HYPERPARAMETERS", "text": "For Amazon-670K and WikiLSH-325K we use a standard fully connected neural network with a hidden layer size of 128, where both the input and output are multi-hot encoded vectors. For Text8, we utilize the standard word2vec language model with a hidden layer size of 200, where the input and output are one-hot and multi-hot encoded vectors, respectively. In the word2vec architecture we use the skip-gram model introduced in Mikolov et al. (2013a). The skip-gram model aims to predict nearby words in a document by learning continuous representations of words. In particular, given a word, the skip-gram model aims to predict the m left and m right neighboring words, where the window size m is a hyperparameter. We pick m = 2 for our experiments. We performed hyperparameter tuning for all the baselines to maintain their best trade-off between convergence time and accuracy. The optimizer is Adam with learning rate 0.0001 for all the experiments. The batch sizes for Amazon-670K, WikiLSH-325K and Text8 are 1024, 256 and 512 respectively for all the experiments. We apply hash functions only to the last layer, where we have the computational bottleneck. In the LSH literature, L denotes the number of hash tables and K denotes the number of bits in the hash code for each hash table (thereby having 2^K buckets per hash table). We use the DWTA hash function [A.3] for Amazon-670K and WikiLSH-325K with K=6, L=400 and K=5, L=350 respectively. For Text8 we use the SimHash hash function [A.2] with K=9 and L=50. We update the hash tables with an initial update period of 50 iterations, then decay the update frequency exponentially (as we need fewer updates near convergence). Our experiments are performed on a single machine with 28-core, 224-thread processors. All the baselines are run on state-of-the-art NVIDIA V100 GPUs with 32 GB memory." }, { "heading": "3.4 RESULTS", "text": "Figure 4 shows the plots comparing Precision@1 (denoted hereon by P@1) versus both wall-clock time and the number of iterations for our method and all the baselines. On the WikiLSH-325K dataset, LSH-Label and LSH-Embedding are respectively 4x and 4.3x faster than TensorFlow full softmax on GPU in terms of wall-clock training time. Moreover, both of them outperform all other TensorFlow baselines on GPU by a significant margin. The same is true for Amazon-670K, where LSH-Label and LSH-Embedding are 10x and 9.6x faster than TensorFlow full softmax on GPU, respectively.
On the Text8 dataset, all methods perform more or less equally in terms of P@1, and our LSH-Embedding is the second-fastest negative sampling method after sampled softmax in terms of training time. According to the iteration-wise plots, both the LSH-Label and LSH-Embedding schemes attain the highest P@1 among all sampling methods on the Amazon-670K and WikiLSH-325K datasets, and achieve similar performance to the other sampling methods on the Text8 dataset. Therefore, both variations of our LNS method outperform almost all other TensorFlow baselines on all datasets while being very similar to full softmax iteration-wise. This establishes the earlier statement that LNS does not compromise performance for speed-up. This is particularly noteworthy because our implementation of LNS uses only CPU while all other baselines run TensorFlow-GPU on a V100." }, { "heading": "4 CONCLUSION", "text": "We proposed two provable, efficient and adaptive negative sampling schemes for neural networks with an extreme number of classes. Our method samples negative classes in constant time, while adapting to the continuous change of the input, the true class and the network parameters. We efficiently implemented our algorithm on CPU in C++ and benchmarked it against standard TensorFlow implementations of six baselines on GPU. Our method on CPU outperforms almost all the TensorFlow baselines on GPU by a significant margin on three datasets." }, { "heading": "A APPENDIX", "text": "A.1 DATASETS
The Amazon-670K dataset is a product recommendation dataset with 670K labels. Here, each input is a vector representation of a product, and the corresponding labels are other products (among 670K choices) that a user might be interested in purchasing. This is anonymized and aggregated behavior data from Amazon and poses a significant challenge owing to the large number of classes. WikiLSHTC-325K is based on two main sources: Wikipedia and the ODP web directory. Each instance is in sparse vector format, where the value of a feature corresponds to the frequency and the label corresponds to the category of the instance. Text8 is a popular NLP dataset and a preprocessed version of the first 100 million tokens of English Wikipedia, with a 253K-word vocabulary. We used the standard word2vec language model for the Text8 dataset, where the input and output are one-hot and multi-hot encoded vectors, respectively.
A.2 SIMHASH: SIGNED RANDOM PROJECTIONS
SimHash is a popular LSH that originates from Signed Random Projections (SRP) (Charikar, 2002; Rajaraman & Ullman, 2010; Henzinger, 2006) and preserves the cosine similarity measure. Given a vector x, SRP generates a random vector w with each component drawn i.i.d. from a standard normal, w_i ∼ N(0, 1), and only stores the sign of the projection. Formally, SimHash is given by

h_w^sign(x) = sign(wᵀx). (2)

It was shown in Goemans & Williamson (1995) that the collision probability under SRP satisfies the following formula:

Pr(h_w^sign(x) = h_w^sign(y)) = 1 − θ/π, (3)

where θ = cos⁻¹( xᵀy / (‖x‖₂ · ‖y‖₂) ).
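To illustrate Eqs. (2)–(3), here is a minimal NumPy sketch (ours, not the paper's code) of a one-bit SimHash together with an empirical check of the collision probability 1 − θ/π:

```python
import numpy as np

def simhash(x, W):
    """K-bit SimHash: signs of K random projections (Eq. 2).
    W has shape (K, d), rows drawn i.i.d. from N(0, 1)."""
    return np.sign(W @ x)

rng = np.random.default_rng(0)
d, K, trials = 32, 1, 20000
x = rng.normal(size=d)
y = x + 0.5 * rng.normal(size=d)  # a correlated vector

theta = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
collisions = 0
for _ in range(trials):
    W = rng.normal(size=(K, d))
    collisions += np.all(simhash(x, W) == simhash(y, W))
print("empirical:", collisions / trials, "theory:", 1 - theta / np.pi)
```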
A.3 DWTA HASH: DENSIFIED WINNER TAKE ALL HASH
The DWTA hash (Chen & Shrivastava, 2018) transforms the data into a space in which the Hamming distance correlates with the rank similarity measure in the original space. Densified WTA (DWTA) combines traditional WTA hashing with densification to improve discriminative power over sparse datasets. The DWTA hash generates KLm/d permutations, and each permutation is split into d/m bins, where d is the input dimension and m ≪ d is a hyperparameter. DWTA loops through the nonzero indices of the sparse input and updates the current maximum index of the corresponding bins according to the mapping in each permutation. It has been shown (Chen & Shrivastava, 2018) that the collision probability of DWTA is precisely the collision probability of the WTA hash for nonempty bins, irrespective of the sparsity:

Pr(h_dwta(x) = h_dwta(y)) = Pr(h_wta(x) = h_wta(y) | Θ(x), Θ(y) ≠ Empty), (4)

where Θ(x) is the set of random features of x under the permutation Θ.
A.4 ALGORITHM
Algorithm 1 Locality Sensitive Negative Sampling (LNS)
input: E_x, the final input embedding; w_i, the class vectors
output: N_o, the set of active neurons of the last layer
1: Initialize the weights w_l of the last layer l
2: Create K × L hash functions and initialize L hash tables for the last layer
3: Compute h_l(w_l) for all output neurons
4: for i = 1 : Iterations do
5:   for each batch B do
6:     if LSH Embedding then
7:       N_o = Query(h_l(E_x), HT_l)
8:     end if
9:     if LSH Label then
10:      N_o = Query(h_l(w_i), HT_l)
11:    end if
12:   end for
13: end for" } ]
2,020
A TRULY CONSTANT-TIME DISTRIBUTION-AWARE NEGATIVE SAMPLING
SP:a0281c7b8cc747c8ced8b7ddfcc56fb6e082eb84
[ "This paper proposes a powerful non-learning Kernal based baseline for ImageNet classification. The proposed non-learning Kernal based baseline (which can be interpretable to a vector quantization) shows comparable results (88.5) with AlexNet (89.1) in CIFAR-10 top-1 accuracy. The ImageNet result (39.4) shows that it is still challenging to classify the images without deep features, but about 40% is an impressive baseline without any learning method (e.g., these results is almost comparable to BagNet top-5 error)." ]
A recent line of work showed that various forms of convolutional kernel methods can be competitive with standard supervised deep convolutional networks on datasets like CIFAR-10, obtaining accuracies in the range of 87–90% while being more amenable to theoretical analysis. In this work, we highlight the importance of a data-dependent feature extraction step that is key to obtaining good performance in convolutional kernel methods. This step typically corresponds to a whitened dictionary of patches, and gives rise to data-driven convolutional kernel methods. We extensively study its effect, demonstrating that it is the key ingredient for the high performance of these methods. Specifically, we show that one of the simplest instances of such kernel methods, based on a single layer of image patches followed by a linear classifier, already obtains classification accuracies on CIFAR-10 in the same range as previous, more sophisticated convolutional kernel methods. We scale this method to the challenging ImageNet dataset, showing that such a simple approach can exceed all existing non-learned representation methods. This is a new baseline for object recognition without representation learning, and it initiates the investigation of convolutional kernel models on ImageNet. We conduct experiments to analyze the dictionary that we use; our ablations show that it exhibits low-dimensional properties.
[ { "affiliations": [], "name": "Louis Thiry" }, { "affiliations": [], "name": "Michael Arbel" } ]
[ { "authors": [ "J. Ba", "R. Caruana" ], "title": "Do deep nets really need to be deep? In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "E. Belilovsky", "M. Eickenberg", "E. Oyallon" ], "title": "Greedy layerwise learning can scale to imagenet", "venue": "arXiv preprint arXiv:1812.11446,", "year": 2018 }, { "authors": [ "K. Beyer", "J. Goldstein", "R. Ramakrishnan", "U. Shaft" ], "title": "When is “nearest neighbor", "venue": "In International conference on database theory,", "year": 1999 }, { "authors": [ "M.P. Chandra" ], "title": "On the generalised distance in statistics", "venue": "In Proceedings of the National Institute of Sciences of India,", "year": 1936 }, { "authors": [ "L. Chizat", "E. Oyallon", "F. Bach" ], "title": "On lazy training in differentiable programming", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "P. Chrabaszcz", "I. Loshchilov", "F. Hutter" ], "title": "A downsampled variant of imagenet as an alternative to the CIFAR", "venue": "datasets. CoRR,", "year": 2017 }, { "authors": [ "A. Coates", "A.Y. Ng" ], "title": "The importance of encoding versus training with sparse coding and vector quantization", "venue": null, "year": 2011 }, { "authors": [ "A. Coates", "A. Ng", "H. Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "A. Criminisi", "P. Pérez", "K. Toyama" ], "title": "Region filling and object removal by exemplar-based image inpainting", "venue": "IEEE Transactions on image processing,", "year": 2004 }, { "authors": [ "A.A. Efros", "T.K. Leung" ], "title": "Texture synthesis by non-parametric sampling", "venue": "In Proceedings of the seventh IEEE international conference on computer vision,", "year": 1999 }, { "authors": [ "C. Fefferman", "S. Mitter", "H. Narayanan" ], "title": "Testing the manifold hypothesis", "venue": "Journal of the American Mathematical Society,", "year": 2016 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "S. Ioffe", "C. Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "A. Jacot", "F. Gabriel", "C. Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "R. Jin", "Y. Breitbart" ], "title": "Muoh. Data discretization unification", "venue": "Knowledge and Information Systems,", "year": 2009 }, { "authors": [ "A. Krizhevsky", "I. Sutskever", "G.E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "E. Levina", "P.J. Bickel" ], "title": "Maximum likelihood estimation of intrinsic dimension. In Advances in neural information processing systems 17, pages 777–784", "venue": null, "year": 2004 }, { "authors": [ "Z. Li", "R. Wang", "D. Yu", "S.S. Du", "W. Hu", "R. Salakhutdinov", "S. 
Arora" ], "title": "Enhanced convolutional neural tangent kernels", "venue": null, "year": 1911 }, { "authors": [ "D.G. Lowe" ], "title": "Distinctive image features from scale-invariant keypoints", "venue": "International journal of computer vision,", "year": 2004 }, { "authors": [ "Z. Lu", "A. May", "K. Liu", "A.B. Garakani", "D. Guo", "A. Bellet", "L. Fan", "M. Collins", "B. Kingsbury", "M. Picheny" ], "title": "How to scale up kernel methods to be as good as deep neural nets", "venue": "arXiv preprint arXiv:1411.4000,", "year": 2014 }, { "authors": [ "J. Mairal" ], "title": "End-to-end kernel learning with supervised convolutional kernel networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "J. Mairal", "P. Koniusz", "Z. Harchaoui", "C. Schmid" ], "title": "Convolutional kernel networks", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "S. Mallat" ], "title": "A wavelet tour of signal processing", "venue": null, "year": 1999 }, { "authors": [ "S. Mallat" ], "title": "Group invariant scattering", "venue": "Communications on Pure and Applied Mathematics,", "year": 2012 }, { "authors": [ "N. Montobbio", "A. Sarti", "G. Citti" ], "title": "A metric model for the functional architecture of the visual cortex", "venue": null, "year": 2019 }, { "authors": [ "E. Oyallon", "S. Mallat" ], "title": "Deep roto-translation scattering for object classification", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "E. Oyallon", "E. Belilovsky", "S. Zagoruyko" ], "title": "Scaling the scattering transform: Deep hybrid networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "E. Oyallon", "E. Belilovsky", "S. Zagoruyko", "M. Valko" ], "title": "Compressing the input for cnns with the first-order scattering transform", "venue": "In The European Conference on Computer Vision (ECCV), September 2018a", "year": 2018 }, { "authors": [ "E. Oyallon", "S. Zagoruyko", "G. Huang", "N. Komodakis", "S. Lacoste-Julien", "M. Blaschko", "E. Belilovsky" ], "title": "Scattering networks for hybrid representation learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "F. Perronnin", "J. Sánchez", "T. Mensink" ], "title": "Improving the fisher kernel for large-scale image classification", "venue": "In European conference on computer vision,", "year": 2010 }, { "authors": [ "A. Rahimi", "B. Recht" ], "title": "Random features for large-scale kernel machines", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "B. Recht", "R. Roelofs", "L. Schmidt", "V. Shankar" ], "title": "Do imagenet classifiers generalize to imagenet", "venue": "arXiv preprint arXiv:1902.10811,", "year": 2019 }, { "authors": [ "A. Rudi", "L. Carratino", "L. Rosasco" ], "title": "Falkon: An optimal large scale kernel method", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "M. Samarin", "V. Roth", "D. Belius" ], "title": "On the empirical neural tangent kernel of standard finite-width convolutional neural network architectures", "venue": "arXiv preprint arXiv:2006.13645,", "year": 2020 }, { "authors": [ "V. Shankar", "A. Fang", "W. Guo", "S. Fridovich-Keil", "L. Schmidt", "J. Ragan-Kelley", "B. 
Recht" ], "title": "Neural kernels without tangents", "venue": "arXiv preprint arXiv:2003.02237,", "year": 2020 }, { "authors": [ "G.K. Wallace" ], "title": "The jpeg still picture compression standard", "venue": "IEEE transactions on consumer electronics,", "year": 1992 }, { "authors": [ "J. Zarka", "L. Thiry", "T. Angles", "S. Mallat" ], "title": "Deep network classification by scattering and homotopy dictionary learning", "venue": null, "year": 1910 } ]
[ { "heading": "1 INTRODUCTION", "text": "Understanding the success of deep convolutional neural networks on images remains challenging because images are high-dimensional signals and deep neural networks are highly-non linear models with a substantial amount of parameters: yet, the curse of dimensionality is seemingly avoided by these models. This problem has received a plethora of interest from the machine learning community. One approach taken by several authors (Mairal, 2016; Li et al., 2019; Shankar et al., 2020; Lu et al., 2014) has been to construct simpler models with more tractable analytical properties (Jacot et al., 2018; Rahimi and Recht, 2008), that still share various elements with standard deep learning models. Those simpler models are based on kernel methods with a particular choice of kernel that provides a convolutional representation of the data. In general, these methods are able to achieve reasonable performances on the CIFAR-10 dataset. However, despite their simplicity compared to deep learning models, it remains unclear which ones of the multiple ingredients they rely on are essential. Moreover, due to their computational cost, it remains open to what extend they achieve similar performances on more complex datasets such as ImageNet. In this work, we show that an additional implicit ingredient, common to all those methods, consists in a data-dependent feature extraction step that makes the convolutional kernel data-driven (as opposed to purely handcrafted) and is key for obtaining good performances.\nData driven convolutional kernels compute a similarity between two images x and y, using both their translation invariances and statistics from the training set of images X . In particular, we focus on similarities K that are obtained by first standardizing a representation Φ of the input images and then feeding it to a predefined kernel k:\nKk,Φ,X (x, y) = k(LΦx, LΦy) , (1)\nwhere a rescaling and shift is (potentially) performed by a diagonal affine operator L = L(Φ,X ) and is mainly necessary for the optimization step Jin et al. (2009): it is typically a standardization. The kernel K(x, y) is said to be data-driven if Φ depends on training set X , and data-independent otherwise. This, for instance, is the case if a dictionary is computed from the data (Li et al., 2019; Mairal, 2016; Mairal et al., 2014) or a ZCA (Shankar et al., 2020) is incorporated in this representation. The convolutional structure of the kernel K can come either from the choice of the representation Φ (convolutions with a dictionary of patches (Coates et al., 2011)) or by design of the predefined kernel k (Shankar et al., 2020), or a combination of both (Li et al., 2019; Mairal, 2016). One of the goal of this paper is to clearly state that kernel methods for vision do require to be data-driven and this is explicitly responsible for their success. We thus investigate, to what extent this common step is responsible for the success of those methods, via a shallow model.\nOur methodology is based on ablation experiments: we would like to measure the effect of incorporating data, while reducing other side effects related to the design of Φ, such as the depth of Φ or the implicit bias of a potential optimization procedure. Consequently, we focus on 1-hidden layer neural networks of any widths, which have favorable properties, like the ability to be a universal approximator under non-restrictive conditions. 
The output linear layer is optimized for a classification task, and we consider first layers that are predefined and kept fixed, similarly to Coates et al. (2011). We will see below that simply initializing the weights of the first layer with whitened patches leads to a significant improvement in performance, compared to a random initialization, a wavelet initialization or even a learning procedure. This patch initialization is used by several works (Li et al., 2019; Mairal, 2016) and is implicitly responsible for their good performance. Other works rely on a whitening step followed by very deep kernels (Shankar et al., 2020), yet we noticed that this was not sufficient in our context. Here, we also try to understand why incorporating whitened patches is helpful for classification. Informally, this method can be thought of as one of the simplest possible in the context of deep convolutional kernel methods, and we show that the depth or the non-linearities of such kernels play a minor role compared to the use of patches. In our work, we decompose and analyze each step of our feature design on gold-standard datasets, and find that a method based solely on patches and simple non-linearities is actually a strong baseline for image classification.
We investigate the effect of patch-based pre-processing for image classification through a simple baseline representation that does not involve learning (up to a linear classifier) on both the CIFAR-10 and ImageNet datasets: the path from CIFAR-10 to ImageNet had never been explored until now in this context. Thus, we believe our baseline to be of high interest for understanding ImageNet's convolutional kernel methods, which almost systematically rely on a patch (or descriptor-of-a-patch) encoding step. Indeed, this method is straightforward and involves limited ad-hoc feature engineering compared to the deep learning approach: here, contrary to Mairal (2016); Coates et al. (2011); Recht et al. (2019); Shankar et al. (2020); Li et al. (2019), we employ modern techniques that are necessary for scalability (from thousands to millions of samples) but can still be understood through the lens of kernel methods (e.g., a convolutional classifier, data augmentation, ...). Our work allows us to understand the relative improvement due to such an encoding step, and we show that our method is a challenging baseline for classification on ImageNet: we outperform by a large margin the classification accuracy of former attempts to get rid of representation learning on the large-scale ImageNet dataset.
While the literature provides a detailed analysis of the behavior of a dictionary of patches for image compression (Wallace, 1992), texture synthesis (Efros and Leung, 1999) or image inpainting (Criminisi et al., 2004), we have limited knowledge and understanding of it in the context of image classification.
The behavior of those dictionaries of patches in some classification methods is still not well understood, despite often being the very first component of many classic vision pipelines (Perronnin et al., 2010; Lowe, 2004; Oyallon et al., 2018b). Here, we propose a refined analysis: we define a Euclidean distance between patches, and we show that the decision boundary between image classes can be approximated using a rough description of the image patch neighborhoods, as implied, for instance, by the famous low-dimensional manifold hypothesis (Fefferman et al., 2016).
Our paper is structured as follows: first, we discuss the related works in Sec. 2. Then, Sec.
3 explains precisely how our visual representation is built. In Sec. 4, we present experimental results on the vision datasets CIFAR-10 and the large-scale ImageNet. The final Sec. 4.3 is a collection of numerical experiments to better understand the dictionary of patches that we use. Our code, as well as commands to reproduce our results, is available here: https://github.com/louity/patches." }, { "heading": "2 RELATED WORK", "text": "The seminal works by Coates et al. (2011) and Coates and Ng (2011) study patch-based representations for classification on CIFAR-10. They set the first baseline for a single-layer convolutional network initialized with random patches, and they show it can achieve a non-trivial performance (∼ 80%) on the CIFAR-10 dataset. Recht et al. (2019) published an implementation of this technique and conducted numerous experiments with hundreds of thousands of random patches, improving the accuracy (∼ 85%) on this dataset. However, both works lack two key ingredients: an online optimization procedure (which allows scaling up to ImageNet) and a well-designed linear classifier (we propose a factorization of our linear classifier).
Recently, Li et al. (2019) and Shankar et al. (2020) proposed handcrafted kernels, combined with deep learning tools, in order to obtain high performance on CIFAR-10. Those performances match standard supervised methods (∼ 90%), which involve end-to-end learning of deep neural networks. Note that this line of work (Li et al., 2019; Shankar et al., 2020; Mairal, 2016) employs a well-engineered combination of patch-extracted representations and a cascade of kernels (possibly some neural tangent kernels). While their works suggest that patch extraction is crucial, the relative improvement due to basic hyper-parameters such as the number of patches or the choice of classifier is unclear, as is the limit of their approach on more challenging datasets. We address those issues.
From a kernel methods perspective, a dictionary of random patches can be viewed as the building block of a random features method (Rahimi and Recht, 2008) that makes kernel methods computationally tractable. Rudi et al. (2017) provided convergence rates and released an efficient implementation of such a method. However, the previously mentioned kernel methods (Mairal, 2016; Li et al., 2019; Shankar et al., 2020) have not been tested on ImageNet to our knowledge.
Simple methods involving solely a single layer of features have been tested on the ImageNet-2010 dataset¹, using for example SIFT, color histograms and Gabor texture encodings of the image with K-nearest neighbors, yet there is a substantial gap in accuracy that we attempt to fill in this work on ImageNet-2012 (or simply ImageNet). We note also that CNNs with random weights have been tested on ImageNet, yielding low accuracies (∼ 20% top-1, Arandjelovic et al. (2017)). The Scattering Transform (Mallat, 2012) is also a deep non-linear operator that does not involve representation learning; it has been tested on ImageNet (∼ 45% top-5 accuracy, Zarka et al. (2019)) and CIFAR-10 (∼ 80%, Oyallon and Mallat (2015)) and is related to the HoG and SIFT transforms (Oyallon et al., 2018a). Some works also directly study patch encoders that achieve competitive accuracy on ImageNet, but these involve deep cascades of layers that are difficult to interpret (Oyallon et al., 2017; Zarka et al., 2019; Brendel et al., 2019). Here, we focus on shallow classifiers."
}, { "heading": "3 METHOD", "text": "We first introduce our preliminary notations to describe an image. A patch p of size P of a larger image x, is a restriction of that image to a squared domain of surface P 2. We denote by N2 the size of the natural image x and require that P ≤ N . Hence, for a spatial index i of the image, pi,x represents the patch of image x located at i. We further introduce the collection of all overlapping patches of that image, denoted by: Px = {pi,x, i ∈ I} where I is a spatial index set such that |I| = (N − P + 1)2. Fig. 1 corresponds to an overview of our classification pipeline that consist of 3 steps: an initial whitening step of a dictionary D of random patches, followed by a nearest neighbor quantization of images patches via D that are finally spatially averaged.\n1As one can see on the Imagenet2010 leaderboard http://image-net.org/challenges/LSVRC/2010/results, and the accuracies on ImageNet2010 and ImageNet2012 are comparable.\nWhitening We describe the single pre-processing step that we used on our image data, namely a whitening procedure on patches. Here, we view natural image patches of size P 2 as samples from a random vector of mean µ and covariance Σ. We then consider whitening operators which act at the level of each image patch by first subtracting its mean µ then applying the linear transformation W = (λI + Σ)−1/2 to the centered patch. The additional whitening regularization with parameter λ was used to avoid ill-conditioning effects.\nFigure 1: Our classification pipeline described synthetically to explain how we build the representation Φ(x) of an input image x.\nDictionary\n…\nFind Q-nearest neighbours per patch\n0 1patch 1\npatch 2\n… 01 … 1 1\n…\n01 … 1 …\nSpatial Average pooling\n…\n1 1 …\n…\n…\n…\n01 … 1 …2 0\n2\n2\n1 1\n2 … … 2 0 … 2 …\npatch 1\npatch 2\ninput\nx\nSplit the image in overlapping patches\nRepresentation Dictionary (x)\n<latexit sha1_base64=\"TGV8Qz6p5CvLBilkjVUuHjjuCuo=\">AAACKnicbZDLSsNAFIYn9VbrLerSTbAIFaRMSsF2V3TjsoK9QBPKZDpph04uzEzEGvIaPoWP4FYfwF1xqQ/iJM3Ctv4w8PGfc+YcfidkVEgI51phY3Nre6e4W9rbPzg80o9PuiKIOCYdHLCA9x0kCKM+6UgqGemHnCDPYaTnTG/Teu+RcEED/0HOQmJ7aOxTl2IklTXUYWxlnwz42LFjWK03TFgzrzJQygA2m83Eak9o5ekyGeplWIWZjHUwcyiDXO2h/m2NAhx5xJeYISEGJgylHSMuKWYkKVmRICHCUzQmA4U+8oiw4+yoxLhQzshwA66eL43M/TsRI0+ImeeoTg/JiVitpeZ/tUEk3YYdUz+MJPHxYpEbMUMGRhqTMaKcYMlmChDmVN1q4AniCEsV5tIWSafPaSrmagbr0K1VTcX39XLrJs+nCM7AOagAE1yDFrgDbdABGLyAN/AOPrRX7VOba1+L1oKWz5yCJWk/v7NTo90=</latexit><latexit sha1_base64=\"TGV8Qz6p5CvLBilkjVUuHjjuCuo=\">AAACKnicbZDLSsNAFIYn9VbrLerSTbAIFaRMSsF2V3TjsoK9QBPKZDpph04uzEzEGvIaPoWP4FYfwF1xqQ/iJM3Ctv4w8PGfc+YcfidkVEgI51phY3Nre6e4W9rbPzg80o9PuiKIOCYdHLCA9x0kCKM+6UgqGemHnCDPYaTnTG/Teu+RcEED/0HOQmJ7aOxTl2IklTXUYWxlnwz42LFjWK03TFgzrzJQygA2m83Eak9o5ekyGeplWIWZjHUwcyiDXO2h/m2NAhx5xJeYISEGJgylHSMuKWYkKVmRICHCUzQmA4U+8oiw4+yoxLhQzshwA66eL43M/TsRI0+ImeeoTg/JiVitpeZ/tUEk3YYdUz+MJPHxYpEbMUMGRhqTMaKcYMlmChDmVN1q4AniCEsV5tIWSafPaSrmagbr0K1VTcX39XLrJs+nCM7AOagAE1yDFrgDbdABGLyAN/AOPrRX7VOba1+L1oKWz5yCJWk/v7NTo90=</latexit><latexit 
sha1_base64=\"TGV8Qz6p5CvLBilkjVUuHjjuCuo=\">AAACKnicbZDLSsNAFIYn9VbrLerSTbAIFaRMSsF2V3TjsoK9QBPKZDpph04uzEzEGvIaPoWP4FYfwF1xqQ/iJM3Ctv4w8PGfc+YcfidkVEgI51phY3Nre6e4W9rbPzg80o9PuiKIOCYdHLCA9x0kCKM+6UgqGemHnCDPYaTnTG/Teu+RcEED/0HOQmJ7aOxTl2IklTXUYWxlnwz42LFjWK03TFgzrzJQygA2m83Eak9o5ekyGeplWIWZjHUwcyiDXO2h/m2NAhx5xJeYISEGJgylHSMuKWYkKVmRICHCUzQmA4U+8oiw4+yoxLhQzshwA66eL43M/TsRI0+ImeeoTg/JiVitpeZ/tUEk3YYdUz+MJPHxYpEbMUMGRhqTMaKcYMlmChDmVN1q4AniCEsV5tIWSafPaSrmagbr0K1VTcX39XLrJs+nCM7AOagAE1yDFrgDbdABGLyAN/AOPrRX7VOba1+L1oKWz5yCJWk/v7NTo90=</latexit><latexit sha1_base64=\"TGV8Qz6p5CvLBilkjVUuHjjuCuo=\">AAACKnicbZDLSsNAFIYn9VbrLerSTbAIFaRMSsF2V3TjsoK9QBPKZDpph04uzEzEGvIaPoWP4FYfwF1xqQ/iJM3Ctv4w8PGfc+YcfidkVEgI51phY3Nre6e4W9rbPzg80o9PuiKIOCYdHLCA9x0kCKM+6UgqGemHnCDPYaTnTG/Teu+RcEED/0HOQmJ7aOxTl2IklTXUYWxlnwz42LFjWK03TFgzrzJQygA2m83Eak9o5ekyGeplWIWZjHUwcyiDXO2h/m2NAhx5xJeYISEGJgylHSMuKWYkKVmRICHCUzQmA4U+8oiw4+yoxLhQzshwA66eL43M/TsRI0+ImeeoTg/JiVitpeZ/tUEk3YYdUz+MJPHxYpEbMUMGRhqTMaKcYMlmChDmVN1q4AniCEsV5tIWSafPaSrmagbr0K1VTcX39XLrJs+nCM7AOagAE1yDFrgDbdABGLyAN/AOPrRX7VOba1+L1oKWz5yCJWk/v7NTo90=</latexit>\nD <latexit sha1_base64=\"Rb/kpNpYCeioxAjDXIA8A4KhYjk=\">AAACLnicbZDLSsNAFIYn9VbrLerSTbAILqQksfayK+rCZQXbCkkok+m0HTq5MDMRasiL+BQ+glt9AMGFuHDjYzhJs7CtPwx8/OecOYffDSnhQtc/lMLK6tr6RnGztLW9s7un7h90eRAxhDsooAG7dyHHlPi4I4ig+D5kGHouxT13cpXWew+YcRL4d2IaYseDI58MCYJCWn21GtvZJxYbuU6sV8ya0ayaZ3rlot48rzUlNOt6wzQT24NijCCNr5Okr5b1ip5JWwYjhzLI1e6r3/YgQJGHfYEo5Nwy9FA4MWSCIIqTkh1xHEI0gSNsSfShh7kTZ4cl2ol0BtowYPL5QsvcvxMx9Difeq7sTG/ki7XU/K9mRWLYcGLih5HAPpotGkZUE4GWRqUNCMNI0KkEiBiRt2poDBlEQgY6t0WQyWOairGYwTJ0zYoh+bZabl3m+RTBETgGp8AAddACN6ANOgCBJ/ACXsGb8qy8K5/K16y1oOQzh2BOys8vGACmRQ==</latexit><latexit sha1_base64=\"Rb/kpNpYCeioxAjDXIA8A4KhYjk=\">AAACLnicbZDLSsNAFIYn9VbrLerSTbAILqQksfayK+rCZQXbCkkok+m0HTq5MDMRasiL+BQ+glt9AMGFuHDjYzhJs7CtPwx8/OecOYffDSnhQtc/lMLK6tr6RnGztLW9s7un7h90eRAxhDsooAG7dyHHlPi4I4ig+D5kGHouxT13cpXWew+YcRL4d2IaYseDI58MCYJCWn21GtvZJxYbuU6sV8ya0ayaZ3rlot48rzUlNOt6wzQT24NijCCNr5Okr5b1ip5JWwYjhzLI1e6r3/YgQJGHfYEo5Nwy9FA4MWSCIIqTkh1xHEI0gSNsSfShh7kTZ4cl2ol0BtowYPL5QsvcvxMx9Difeq7sTG/ki7XU/K9mRWLYcGLih5HAPpotGkZUE4GWRqUNCMNI0KkEiBiRt2poDBlEQgY6t0WQyWOairGYwTJ0zYoh+bZabl3m+RTBETgGp8AAddACN6ANOgCBJ/ACXsGb8qy8K5/K16y1oOQzh2BOys8vGACmRQ==</latexit><latexit sha1_base64=\"Rb/kpNpYCeioxAjDXIA8A4KhYjk=\">AAACLnicbZDLSsNAFIYn9VbrLerSTbAILqQksfayK+rCZQXbCkkok+m0HTq5MDMRasiL+BQ+glt9AMGFuHDjYzhJs7CtPwx8/OecOYffDSnhQtc/lMLK6tr6RnGztLW9s7un7h90eRAxhDsooAG7dyHHlPi4I4ig+D5kGHouxT13cpXWew+YcRL4d2IaYseDI58MCYJCWn21GtvZJxYbuU6sV8ya0ayaZ3rlot48rzUlNOt6wzQT24NijCCNr5Okr5b1ip5JWwYjhzLI1e6r3/YgQJGHfYEo5Nwy9FA4MWSCIIqTkh1xHEI0gSNsSfShh7kTZ4cl2ol0BtowYPL5QsvcvxMx9Difeq7sTG/ki7XU/K9mRWLYcGLih5HAPpotGkZUE4GWRqUNCMNI0KkEiBiRt2poDBlEQgY6t0WQyWOairGYwTJ0zYoh+bZabl3m+RTBETgGp8AAddACN6ANOgCBJ/ACXsGb8qy8K5/K16y1oOQzh2BOys8vGACmRQ==</latexit><latexit sha1_base64=\"Rb/kpNpYCeioxAjDXIA8A4KhYjk=\">AAACLnicbZDLSsNAFIYn9VbrLerSTbAILqQksfayK+rCZQXbCkkok+m0HTq5MDMRasiL+BQ+glt9AMGFuHDjYzhJs7CtPwx8/OecOYffDSnhQtc/lMLK6tr6RnGztLW9s7un7h90eRAxhDsooAG7dyHHlPi4I4ig+D5kGHouxT13cpXWew+YcRL4d2IaYseDI58MCYJCWn21GtvZJxYbuU6sV8ya0ayaZ3rlot48rzUlNOt6wzQT24NijCCNr5Okr5b1ip5JWwYjhzLI1e6r3/YgQJGHfYEo5Nwy9FA4MWSCIIqTkh1xHEI0gSNsSfShh7kTZ4cl2ol0BtowYPL5QsvcvxMx9Difeq7sTG/ki7XU/K9mRWLYcGLih5HAPpotGkZUE4GWRqUNCMNI0KkEiBiRt2poDBlEQgY6t0WQyWOairGYwTJ0zYoh+bZabl3m+RTBETgGp8AAddACN6ANOgCBJ/ACXsGb8qy8K5/K16y1oOQzh2BOys8vGACmRQ==</latexit> D\n<latexit 
sha1_base64=\"Rb/kpNpYCeioxAjDXIA8A4KhYjk=\">AAACLnicbZDLSsNAFIYn9VbrLerSTbAILqQksfayK+rCZQXbCkkok+m0HTq5MDMRasiL+BQ+glt9AMGFuHDjYzhJs7CtPwx8/OecOYffDSnhQtc/lMLK6tr6RnGztLW9s7un7h90eRAxhDsooAG7dyHHlPi4I4ig+D5kGHouxT13cpXWew+YcRL4d2IaYseDI58MCYJCWn21GtvZJxYbuU6sV8ya0ayaZ3rlot48rzUlNOt6wzQT24NijCCNr5Okr5b1ip5JWwYjhzLI1e6r3/YgQJGHfYEo5Nwy9FA4MWSCIIqTkh1xHEI0gSNsSfShh7kTZ4cl2ol0BtowYPL5QsvcvxMx9Difeq7sTG/ki7XU/K9mRWLYcGLih5HAPpotGkZUE4GWRqUNCMNI0KkEiBiRt2poDBlEQgY6t0WQyWOairGYwTJ0zYoh+bZabl3m+RTBETgGp8AAddACN6ANOgCBJ/ACXsGb8qy8K5/K16y1oOQzh2BOys8vGACmRQ==</latexit><latexit sha1_base64=\"Rb/kpNpYCeioxAjDXIA8A4KhYjk=\">AAACLnicbZDLSsNAFIYn9VbrLerSTbAILqQksfayK+rCZQXbCkkok+m0HTq5MDMRasiL+BQ+glt9AMGFuHDjYzhJs7CtPwx8/OecOYffDSnhQtc/lMLK6tr6RnGztLW9s7un7h90eRAxhDsooAG7dyHHlPi4I4ig+D5kGHouxT13cpXWew+YcRL4d2IaYseDI58MCYJCWn21GtvZJxYbuU6sV8ya0ayaZ3rlot48rzUlNOt6wzQT24NijCCNr5Okr5b1ip5JWwYjhzLI1e6r3/YgQJGHfYEo5Nwy9FA4MWSCIIqTkh1xHEI0gSNsSfShh7kTZ4cl2ol0BtowYPL5QsvcvxMx9Difeq7sTG/ki7XU/K9mRWLYcGLih5HAPpotGkZUE4GWRqUNCMNI0KkEiBiRt2poDBlEQgY6t0WQyWOairGYwTJ0zYoh+bZabl3m+RTBETgGp8AAddACN6ANOgCBJ/ACXsGb8qy8K5/K16y1oOQzh2BOys8vGACmRQ==</latexit><latexit sha1_base64=\"Rb/kpNpYCeioxAjDXIA8A4KhYjk=\">AAACLnicbZDLSsNAFIYn9VbrLerSTbAILqQksfayK+rCZQXbCkkok+m0HTq5MDMRasiL+BQ+glt9AMGFuHDjYzhJs7CtPwx8/OecOYffDSnhQtc/lMLK6tr6RnGztLW9s7un7h90eRAxhDsooAG7dyHHlPi4I4ig+D5kGHouxT13cpXWew+YcRL4d2IaYseDI58MCYJCWn21GtvZJxYbuU6sV8ya0ayaZ3rlot48rzUlNOt6wzQT24NijCCNr5Okr5b1ip5JWwYjhzLI1e6r3/YgQJGHfYEo5Nwy9FA4MWSCIIqTkh1xHEI0gSNsSfShh7kTZ4cl2ol0BtowYPL5QsvcvxMx9Difeq7sTG/ki7XU/K9mRWLYcGLih5HAPpotGkZUE4GWRqUNCMNI0KkEiBiRt2poDBlEQgY6t0WQyWOairGYwTJ0zYoh+bZabl3m+RTBETgGp8AAddACN6ANOgCBJ/ACXsGb8qy8K5/K16y1oOQzh2BOys8vGACmRQ==</latexit><latexit sha1_base64=\"Rb/kpNpYCeioxAjDXIA8A4KhYjk=\">AAACLnicbZDLSsNAFIYn9VbrLerSTbAILqQksfayK+rCZQXbCkkok+m0HTq5MDMRasiL+BQ+glt9AMGFuHDjYzhJs7CtPwx8/OecOYffDSnhQtc/lMLK6tr6RnGztLW9s7un7h90eRAxhDsooAG7dyHHlPi4I4ig+D5kGHouxT13cpXWew+YcRL4d2IaYseDI58MCYJCWn21GtvZJxYbuU6sV8ya0ayaZ3rlot48rzUlNOt6wzQT24NijCCNr5Okr5b1ip5JWwYjhzLI1e6r3/YgQJGHfYEo5Nwy9FA4MWSCIIqTkh1xHEI0gSNsSfShh7kTZ4cl2ol0BtowYPL5QsvcvxMx9Difeq7sTG/ki7XU/K9mRWLYcGLih5HAPpotGkZUE4GWRqUNCMNI0KkEiBiRt2poDBlEQgY6t0WQyWOairGYwTJ0zYoh+bZabl3m+RTBETgGp8AAddACN6ANOgCBJ/ACXsGb8qy8K5/K16y1oOQzh2BOys8vGACmRQ==</latexit>\n… The whitening operation is defined up to an isometry, but the Euclidean distance between whitened patches (i.e., the Mahanobolis distance (Chandra et al., 1936)) is not affected by the choice of such isometry (choices leading to PCA, ZCA, ...), as discussed in Appendix A. In practice, the mean and covariance are estimated empirically from the training set to construct the whitening operators. For the sake of simplicity, we only consider whitened patches, and unless explicitly stated, we assume that each patch p is already whitened, which holds in particular for the collection of patches in Px of any image x. Once this whitening step is performed, the Euclidean distance over patches is approximatively isotropic and is used in the next section to represent our signals.\nQ-Nearest Neighbors on patches The basic idea of this algorithm is to compare the distances between each patch of an image and a fixed dictionary of patches D, with size |D| that is the number of patches extracted. Note that we also propose a variant where we simply use a soft-assignment operator. For a fixed dataset, this dictionary D is obtained by uniformly sampling patches from images over the whole training set. 
Q-Nearest Neighbors on patches The basic idea of this algorithm is to compare the distances between each patch of an image and a fixed dictionary of patches D, whose size |D| is the number of patches extracted. Note that we also propose a variant where we simply use a soft-assignment operator. For a fixed dataset, this dictionary D is obtained by uniformly sampling patches from images over the whole training set. We augment D into ∪_{d∈D}{d, −d} because it makes the dictionary of patches contrast invariant, and we observe that it leads to better classification accuracies; we still refer to it as D. An illustration is given by Fig. 2. Once the dictionary D is fixed, for each patch p_{i,x} we consider the set C_{i,x} of pairwise distances, C_{i,x} = {‖p_{i,x} − d‖, d ∈ D}. For each whitened patch we encode the Q nearest neighbors of p_{i,x} in the set D, for some Q ∈ ℕ. More formally, we consider τ_{i,x}, the Q-th smallest element of C_{i,x}, and we define the Q-Nearest-Neighbors binary encoding as follows, for (d, i) ∈ D × I:

φ(x)_{d,i} = 1 if ‖p_{i,x} − d‖ ≤ τ_{i,x}, and 0 otherwise. (2)

Eq. 2 can be viewed as a Vector Quantization (VQ) step with hard-assignment (Coates and Ng, 2011). The representation φ encodes the patch neighborhood in a subset of randomly selected patches and can be seen as a crude description of the topological geometry of the image patches. Moreover, it allows viewing the distance between two images x, y as a Hamming distance between the patch-neighborhood encodings:

‖φ(x) − φ(y)‖² = Σ_{i,d} 1[φ(x)_{d,i} ≠ φ(y)_{d,i}]. (3)

In order to reduce the computational burden of our method, we perform an intermediary average-pooling step. Indeed, we subdivide I into squared overlapping regions I_j ⊂ I, leading to the representation Φ defined, for d ∈ D and each j, by:

Φ(x)_{d,j} = Σ_{i∈I_j} φ(x)_{d,i}. (4)

Hence, the resulting kernel is simply given by K(x, y) = ⟨Φ(x), Φ(y)⟩. Implementation details can be found in Appendix B. The next section describes our classification pipeline, as we feed our representation Φ to a linear classifier on challenging datasets.
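To make Eqs. (2)–(4) concrete, here is a minimal NumPy sketch of the encoding (ours, for illustration only; the released code at https://github.com/louity/patches differs, e.g., it uses the convolution trick of Appendix B, and the pooling below is simplified to non-overlapping regions):

```python
import numpy as np

def extract_patches(x, P):
    """x: (N, N, 3) image -> ((N-P+1)**2, 3*P*P) array of flattened overlapping patches."""
    N = x.shape[0]
    return np.stack([x[i:i+P, j:j+P].ravel()
                     for i in range(N - P + 1) for j in range(N - P + 1)])

def encode(x, D, Q, P):
    """Q-nearest-neighbor binary encoding phi of Eq. (2); x and D assumed whitened."""
    patches = extract_patches(x, P)
    d2 = ((patches**2).sum(1)[:, None] - 2 * patches @ D.T
          + (D**2).sum(1)[None, :])                    # squared distances, (|I|, |D|)
    tau = np.partition(d2, Q - 1, axis=1)[:, Q - 1:Q]  # Q-th smallest distance per patch
    return (d2 <= tau).astype(np.float32)              # phi(x)_{i,d}

def pool(phi, n_side, k):
    """Eq. (4), simplified here to non-overlapping k x k regions I_j."""
    m = n_side // k
    phi = phi.reshape(n_side, n_side, -1)[:m*k, :m*k]
    return phi.reshape(m, k, m, k, -1).sum(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 32, 3))       # stand-in for a whitened CIFAR-10 image
D = rng.normal(size=(256, 3 * 6 * 6))  # stand-in dictionary, |D| = 256, P = 6
phi = encode(x, D, Q=int(0.4 * 256), P=6)
print(pool(phi, n_side=27, k=3).shape)  # (9, 9, 256)
```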
" }, { "heading": "4 EXPERIMENTS", "text": "We train shallow classifiers, i.e. a linear classifier and a 1-hidden-layer CNN (1-layer), on top of our representation Φ on two major image classification datasets, CIFAR-10 and ImageNet, which consist respectively of 50k small and 1.2M large color images divided respectively into 10 and 1k classes. For training, we systematically use mini-batch SGD with a momentum of 0.9, no weight decay, and the cross-entropy loss.
Classifier parametrization In each experiment, the spatial subdivisions I_j are implemented as an average pooling with kernel size k1 and stride s1. We then apply a 2D batch-normalization (Ioffe and Szegedy, 2015) in order to standardize our features on the fly before feeding them to a linear classifier. In order to reduce the memory footprint of this linear classifier (following the idea of a \"bottleneck\" (He et al., 2016)), we factorize it into two convolutional operators. The first one, with kernel size k2 and stride 1, reduces the number of channels from |D| to c2, and the second one, with kernel size k3 and stride 1, outputs a number of channels equal to the number of image classes. Then we apply a global average pooling. For the 1-hidden-layer experiment, we simply add a ReLU non-linearity between the first and the second convolutional layers." }, { "heading": "4.1 CIFAR-10", "text": "Implementation details Our data augmentation consists of horizontal random flips and random crops of size 32² after reflect-padding with 4 pixels. For the dictionary, we choose a patch size of P = 6 and test various dictionary sizes |D|, with whitening regularization λ = 0.001. In all cases, we use Q = 0.4|D|. The classifier is trained for 175 epochs with a learning rate decay of 0.1 at epochs 100 and 150. The initial learning rate is 0.003 for |D| = 2k and 0.001 for larger |D|.
Single layer experiments For the linear classification experiments, we used an average pooling of size k1 = 5 and stride s1 = 3, k2 = 1 and c2 = 128 for the first convolutional operator, and k3 = 6 for the second one. Our results are reported and compared in Tab. 1a.
Table 1: Classification accuracies on CIFAR-10. VQ indicates whether vector quantization with hard-assignment is applied on the first layer.
(a) One-layer patch-based classification accuracies on CIFAR-10. Amongst methods relying on random patches, ours is the only approach operating online (and therefore allowing for scalable training).
Method |D| VQ Online P Acc.
Coates et al. (2011) 1k X × 6 68.6
Ba and Caruana (2014) 4k × X - 81.6
Wavelets (Oyallon and Mallat, 2015) - × × 8 82.2
Recht et al. (2019) 0.2M × × 6 85.6
SimplePatch (Ours) 10k X X 6 85.6
SimplePatch (Ours) 60k X X 6 86.7
SimplePatch (Ours) 60k × X 6 86.9
(b) Supervised accuracies on CIFAR-10 with comparable shallow supervised classifiers. Here, e2e stands for an end-to-end classifier and 1-layer for a 1-hidden-layer classifier.
Method VQ Depth Classifier Acc.
SimplePatch (Ours) X 2 1-layer 88.5
AlexNet (Krizhevsky et al., 2012) × 5 e2e 89.1
NK (Shankar et al., 2020) × 5 e2e 89.8
CKN (Mairal, 2016) × 9 e2e 89.8
(c) Accuracies on CIFAR-10 with handcrafted-kernel classifiers with and without data-driven representations. For SimplePatch we replace patches with random Gaussian noise. D-D stands for Data-Driven and D-I for Data-Independent.
Method VQ Online Depth D-I Accuracy (D-D Improvement) Data used
Linearized (Samarin et al., 2020) × X 5 65.6 (13.2) e2e
NK (Shankar et al., 2020) × × 5 77.7 (8.1) ZCA
Simple (random) Patch (Ours) X X 1 78.6 (8.1) Patches
CKN (Mairal, 2016) × × 2 81.1 (5.1)ᵃ Patches
NTK (Li et al., 2019) × × 8 82.2 (6.7) Patches
ᵃThis result was obtained from a private communication with the author.
First, note that contrary to the experiments done by Coates et al. (2011), our method has surprisingly good accuracy despite the hard assignment due to VQ. The sparse coding, soft-thresholding and orthogonal matching pursuit based representations used by Coates and Ng (2011) and Recht et al. (2019) can be seen as soft-assignment VQ and yield comparable classification accuracy (resp. 81.5% with 6·10³ patches and 85.6% with 2·10⁵ patches). However, these representations contain much more information than hard-assignment VQ, as they allow reconstructing a large part of the signal. We get better accuracy with only coarse topological information on the image patches, suggesting that this information is highly relevant for classification. To obtain comparable accuracies with a linear classifier, we use a single binary encoding step compared to Mairal (2016), and we need a much smaller number of patches than Recht et al. (2019) and Coates and Ng (2011). Moreover, Recht et al. (2019) is the only work in the literature, besides ours, that achieves good performance using solely a linear model of depth one. To test the importance of VQ, we replace the hard-assignment VQ implemented with the binary non-linearity 1[‖p_{i,x} − d‖ ≤ τ_{i,x}] (see Eq. 2) by a soft-assignment VQ with a sigmoid function, (1 + e^{‖p_{i,x} − d‖ − τ_{i,x}})^{−1}.
The accuracy increases by 0.2%, showing that using soft-assignment in VQ, which is crucial for performance in Coates and Ng (2011), does not much affect the performance of our representation.
Importance of data-driven representations As we see in Tab. 1c, the data-driven representation is crucial for the good performance of handcrafted kernel classifiers. Recall that a data-independent kernel is built without using the dataset, which is for instance the case with a randomly initialized neural network. The accuracies from Shankar et al. (2020) correspond to Myrtle5 (CNN and kernel), because the authors only report an accuracy without ZCA for this model. As a sanity check, we consider a D whose atoms are sampled from Gaussian white noise: this leads to a drop of 8.1%. This is aligned with the findings of each work we compare to: performances drop if no ZCA is applied or if patches are not extracted. Using a dictionary of size |D| = 2048, the same model trained end-to-end (including the learning of D) yields the same accuracy (−0.1%), showing that here, sampling patches is as efficient as optimizing them. Note that our method also outperforms linearized deep neural networks (Samarin et al., 2020), i.e., networks trained in a lazy regime (Chizat et al., 2019).
Non-linear classification experiments To test the discriminative power of our features, we use a 1-hidden-layer classifier with a ReLU non-linearity and an average pooling of size k1 = 3 and stride s1 = 2, k2 = 3, c2 = 2048 and k3 = 7. Our results are reported and compared with other non-linear classification methods in Tab. 1b. Using a shallow non-linear classifier, our method is competitive with end-to-end trained methods (Li et al., 2019; Shankar et al., 2020; Krizhevsky et al., 2012). This further indicates the relevance of patch-neighborhood information for the classification task.
Hyper-parameter analysis CIFAR-10 is a relatively small dataset that allows fast benchmarking, so we conducted several ablation experiments in order to understand the relative improvement due to each hyper-parameter of our pipeline. We vary the size of the dictionary |D|, the patch size P, the number of nearest neighbors Q and the whitening regularization λ, which are the hyper-parameters of Φ. Results are shown in Fig. 3. Note that even a relatively small number of patches is competitive with much more complicated representations, such as Oyallon and Mallat (2015). While it is possible to slightly optimize the performance according to P or Q, the fluctuations remain minor compared to other factors, which indicates that the performance of our method is relatively stable w.r.t. this set of hyper-parameters. The whitening regularization behaves similarly to a thresholding operator on the eigenvalues of Σ^{1/2}, as it penalizes larger eigenvalues. Interestingly, we note that below a certain threshold, this hyper-parameter barely affects the classification performance. This goes hand in hand with both a fast eigenvalue decay and a stability to noise, which we discuss further in Sec. 4.3." }, { "heading": "4.2 IMAGENET", "text": "Implementation details To reduce the computational overhead of our method on ImageNet, we followed the same approach as Chrabaszcz et al. (2017): we reduce the resolution to 64², instead of the standard 224².
They observed that this does not alter the top performance of standard models much (a 5% to 10% drop in accuracy on average), and we also believe it introduces a useful dimensionality reduction, as it removes the high-frequency part of images, which is unstable (Mallat, 1999). We set the patch size to P = 6 and the whitening regularization to λ = 10⁻². Since ImageNet is a much larger dataset than CIFAR-10, we restrict ourselves to |D| = 2048 patches. As for CIFAR-10, we set Q = 0.4|D|. The parameters of the linear convolutional classifier are chosen to be: k1 = 10, s1 = 6, k2 = 1, c2 = 256, k3 = 7. For the 1-hidden-layer experiment, we use a kernel size of k2 = 3 for the first convolution. Our models are trained for 60 epochs with an initial learning rate of 0.003, decayed by a factor of 10 at epochs 40 and 50. During training, similarly to Chrabaszcz et al. (2017), we use random flips and select random crops of size 64 after a reflect-padding of size 8. At testing, we simply resize the image to 64. Note that this procedure differs slightly from the usual one, which consists of resizing images while maintaining aspect ratios, before random cropping.
Classification experiments Tab. 2a reports the accuracy of our method, as well as the accuracy of comparable methods. Despite a smaller image resolution, our method outperforms by a large margin (∼ 10% top-5) the Scattering Transform (Mallat, 2012), which was the previous state-of-the-art method in the context of no representation learning. Note that our representation uses only 2·10³ randomly selected patches, which is a tiny fraction of the billions of ImageNet patches.
(b) Supervised accuracies on ImageNet, for which our model uses |D| = 2048 patches. e2e and 1-layer respectively stand for an end-to-end and a 1-hidden-layer classifier.
Method VQ P Depth Resolution Classifier Top1 Top5
Belilovsky et al. (2018) × - 1 224 e2e - 26
Belilovsky et al. (2018) × - 2 224 e2e - 44
SimplePatch (Ours) X 6 2 64 1-layer 39.4 62.1
BagNet (Brendel et al., 2019) × 9 50 224 e2e - 70.0
In Tab. 2b, we compare our performance with supervised models trained end-to-end, which also use convolutions with small receptive fields. Here, |D| = 2k. BagNets (Brendel et al., 2019) have shown that competitive classification accuracies can be obtained with a patch encoding that consists of 50 layers. The performance obtained by our shallow experiment with a 1-hidden-layer classifier is competitive with a BagNet of similar patch size. This suggests once again that hard-assignment VQ does not degrade the classification information much. We also note that our approach with a linear classifier outperforms supervised shallow baselines consisting of 1- or 2-hidden-layer CNNs (Belilovsky et al., 2018), which indicates that a patch-based representation is a non-trivial baseline.
To measure the importance of the resolution on the performance, we run a linear classification experiment on ImageNet images with twice the resolution (N = 128², Q = 12, k1 = 20, s1 = 12). We observe that it improves classification performance. Note that the patches used are in a space of dimension 432: this improvement is surprising, since distances to nearest neighbors are known to be meaningless in high dimension (Beyer et al., 1999). This shows a form of low-dimensionality in natural image patches, which we study in the next section."
}, { "heading": "4.3 DICTIONARY STRUCTURE", "text": "The performance obtained with the surprisingly simple classifier hints to a low dimensional structure in the classification problem that is exploited by the patch based classifier we proposed. This motivates us to further analyse the structure of the dictionary of patches to uncover a lower dimensional structure and to investigate how the whitening, which highly affects performance, relates to such lower-dimensional structure.\nSpectrum of D As a preliminary analysis, we propose to analyse the singular values (spectrum) of Σ1/2 sorted by a decreasing order as λ1 ≥ ... ≥ λdext with dext = 3P 2 being the extrinsic dimension (number of colored pixels in each patch). From this spectrum, it is straightforward to compute the covariance dimension dcov of the patches defined as the smallest number of dimensions needed to explain 95% of the total variance. In other words, dcov is the smallest index such that∑dcov i=1 λi ≥ 0.95 ∑dext i=1 λi. Fig. 4 (top) shows the spectrum for several values of P , normalized by λ1 on CIFAR-10 and ImageNet-32. The first observation is that patches from ImageNet-32 dataset tend to be better conditioned than those from CIFAR-10 with a conditioning ratio of 102 for ImageNet vs 103 for CIFAR-10. This is probably due to the use of more diverse images than on CIFAR-10. Second, note that the spectrum tends to decay at an exponential rate (linear rate in semi-logarithmic scale). This rate decreases as the size of the patch increases (from dark brown to light brown) suggesting an increased covariance dimension for larger patches. This is further confirmed in Fig. 4(bottom-left) which shows the covariance dimension dcov as a function of the\nextrinsic dimension dext, with and without whitening. Before whitening, this linear dimension is much smaller than the ambient dimension: whitening the patches increases the linear dimensionality of the patches, which still increases at a linear growth as a function of P 2.\nIntrinsic dimension of D We propose to refine our measure of linear dimensionality to a nonlinear measure of the intrinsic dimension. Under the assumption of low-dimensional manifold, if the manifold is non-linear, the linear dimensionality is only an upper bound of the true dimensionality of image patches. To get more accurate non-linear estimates, we propose to use the notion of intrinsic dimension dint introduced in (Levina and Bickel, 2004). It relies on a local estimate of the dimension around a patch point p, obtained by finding the k-Nearest Neighbors to this patch in the whole dataset and estimating how much the Euclidean distance τk(p) between the k-Nearest Neighbor and patch p varies as k increases up to K ∈ N:\ndint(p) =\n( 1\nK − 1 K−1∑ k=1 log τK(p) τk(p)\n)−1 . (5)\nIn high dimensional spaces, it is possible to have many neighbors that are equi-distant to p, thus τk(p) would barely vary as k increases. As a result the estimate dint(p) will have large values. Similarly, a small dimension means large variations of τk(p) since it is not possible to pack as many equidistant neighbors of p. This results in a smaller value for dint(p). An overall estimate of the dint is then obtained by averaging the local estimate dint(p) over all patches, i.e. dint = 1|D| ∑ p∈D dint(p). Fig. 4 (bottom-right) shows the intrinsic dimension estimated using K = 4 · 103 and a dictionary of size |D| = 16 · 103. In all cases, the estimated intrinsic dimension dint is much smaller than the extrinsic dimension dext = 3P 2. 
Fig. 4 (bottom-right) shows the intrinsic dimension estimated using K = 4·10³ and a dictionary of size |D| = 16·10³. In all cases, the estimated intrinsic dimension d_int is much smaller than the extrinsic dimension d_ext = 3P². Moreover, it grows even more slowly than the linear dimension when the patch size P increases. Finally, even after whitening, d_int is only about 10% of the total dimension, which is strong evidence that natural image patches are low-dimensional." }, { "heading": "5 CONCLUSION", "text": "In this work, we shed light on data-driven kernels: we emphasize that they are a necessary ingredient of any method that performs well on challenging datasets. We study this phenomenon through ablation experiments: we use a shallow, predefined visual representation, which is not optimized by gradient descent. Surprisingly, this method is highly competitive with others, despite using only whitened patches. Due to limited computational resources, we restricted ourselves on ImageNet to small image resolutions and a relatively small number of patches. Conducting proper large-scale experiments is thus one of the next research directions." }, { "heading": "ACKNOWLEDGEMENTS", "text": "EO was supported by a GPU donation from NVIDIA. This work was granted access to the HPC resources of IDRIS under the allocation 2021-AD011011216R1 made by GENCI. This work was partly supported by ANR-19-CHIA “SCAI”. EB acknowledges funding from an IVADO fundamentals grant. The authors would like to thank Alberto Bietti, Bogdan Cirstea, Lénaic Chizat, Arnak Dalayan, Corentin Dancette, Stéphane Mallat, Arthur Mensch, Thomas Pumir, John Zarka for helpful comments and suggestions. Julien Mairal provided additional numerical results that were helpful to this project." }, { "heading": "A MAHALANOBIS DISTANCE AND WHITENING", "text": "The Mahalanobis distance (Chandra et al., 1936; McLachlan, 1999) between two samples x and x′ drawn from a random vector X with covariance Σ is defined as

D_M(x, x′) = √((x − x′)ᵀ Σ⁻¹ (x − x′)).

If the random vector X has identity covariance, it is simply the usual Euclidean distance:

D_M(x, x′) = ‖x − x′‖.

Using the diagonalization of the covariance matrix, Σ = PΛPᵀ, the affine whitening operators of the random vector X are the operators

w : X ↦ OΛ^{−1/2}Pᵀ(X − µ), ∀O ∈ O_n(ℝ). (6)

For example, the PCA whitening operator is w_PCA : X ↦ Λ^{−1/2}Pᵀ(X − µ), and the ZCA whitening operator is w_ZCA : X ↦ PΛ^{−1/2}Pᵀ(X − µ). For every whitening operator w we have ‖w(x) − w(x′)‖ = D_M(x, x′), since

‖w(x) − w(x′)‖ = ‖OΛ^{−1/2}Pᵀ(x − x′)‖ = √((x − x′)ᵀPΛ^{−1/2}OᵀOΛ^{−1/2}Pᵀ(x − x′)) = √((x − x′)ᵀPΛ⁻¹Pᵀ(x − x′)) = D_M(x, x′).

B IMPLEMENTATION OF THE PATCHES K-NEAREST-NEIGHBORS ENCODING

In this section, we explicitly write the whitened patches with the whitening operator W. Recall that we consider the following set of Euclidean pairwise distances:

C_{i,x} = {‖Wp_{i,x} − Wd‖, d ∈ D}.

For each image patch we encode the K nearest neighbors of Wp_{i,x} in the set {Wd, d ∈ D}, for some K ∈ {1, ..., |D|}. We can use the squared distance instead of the distance, since it does not change the K nearest neighbors. We have

‖Wp_{i,x} − Wd‖² = ‖Wp_{i,x}‖² − 2⟨p_{i,x}, WᵀWd⟩ + ‖Wd‖².

The term ‖Wp_{i,x}‖² does not affect the K nearest neighbors, so the K nearest neighbors are the K smallest values of

{ ‖Wd‖²/2 + ⟨p_{i,x}, −WᵀWd⟩, d ∈ D }.

This can be implemented as a convolution of the image using the −WᵀWd as filters and the ‖Wd‖²/2 as bias terms, followed by a \"vector-wise\" non-linearity that binary-encodes the K smallest values along the channel dimension. Once this is computed, we can easily compute

{ ‖Wd‖²/2 + ⟨p_{i,x}, WᵀWd⟩, d ∈ D },

which is the quantity needed to compute the K nearest neighbors in the set of negated patches {−d, d ∈ D}. This is a computationally efficient way of doubling the number of patches while making the representation invariant to the negative transform.
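Here is a minimal PyTorch-style sketch (ours) of this convolution trick, in the simplified setting where the image is already whitened, so the filters are directly the whitened atoms Wd; the function name is ours.

```python
import torch
import torch.nn.functional as F

def knn_encode_conv(x_white, Wd, Q):
    """Appendix-B trick: score_d(i) = ||Wd||^2/2 - <p_i, Wd> computed as a convolution,
    then binary-encode the Q smallest scores (= Q nearest atoms) per spatial location.
    x_white: (1, 3, N, N) whitened image; Wd: (|D|, 3, P, P) whitened dictionary."""
    bias = 0.5 * (Wd ** 2).sum(dim=(1, 2, 3))           # ||Wd||^2 / 2
    scores = F.conv2d(x_white, -Wd) + bias[None, :, None, None]
    tau = scores.kthvalue(Q, dim=1, keepdim=True).values  # Q-th smallest score
    return (scores <= tau).float()                      # (1, |D|, N-P+1, N-P+1)

# Toy usage with random stand-ins (P = 6, |D| = 256, 32x32 image).
torch.manual_seed(0)
x = torch.randn(1, 3, 32, 32)
Wd = torch.randn(256, 3, 6, 6)
phi = knn_encode_conv(x, Wd, Q=102)
print(phi.shape, phi.sum(dim=1).unique())  # each location has exactly Q ones (up to ties)
```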
This is a computationally efficient way of doubling the number of patches while making the representation invariant to the negative transform." }, { "heading": "C ABLATION STUDY ON CIFAR-10", "text": "For this ablation study on CIFAR-10, the reference experiment uses $|D| = 2048$ patches, a patch size $Q = 6$, a number of neighbors $K = 0.4 \times 2048 = 820$, and a whitening regularizer $\lambda = 10^{-3}$, and yields 82.5% accuracy. Figure 5 shows the results in high resolution. We further performed an experiment where we replaced the patches of CIFAR-10 by the patches of ImageNet: this leads to a drop of 0.4% accuracy compared to the reference model. Note that the same model without data augmentation performs about 2% worse than the reference model.
D INTRINSIC DIMENSION ESTIMATE
The following estimate of the intrinsic dimension $d_{\mathrm{int}}$ is introduced in Levina and Bickel (2004) as follows:
$$d_{\mathrm{int}}(p) = \left( \frac{1}{K-1} \sum_{k=1}^{K-1} \log \frac{\tau_K(p)}{\tau_k(p)} \right)^{-1} , \quad (7)$$
where $\tau_k(p)$ is the Euclidean distance between the patch $p$ and its $k$-th nearest neighbor in the training set." } ]
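As a concrete illustration of the estimator in eqs. (5) and (7), here is a minimal NumPy sketch; the function name and the brute-force pairwise-distance computation are our own illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def intrinsic_dimension(patches, K):
    """Levina-Bickel estimate of eq. (7), averaged over all patches.

    patches: (N, d) array of flattened (whitened) patches.
    K: number of nearest neighbors used by the estimator.
    """
    dists = cdist(patches, patches)            # brute-force pairwise distances
    tau = np.sort(dists, axis=1)[:, 1:K + 1]   # tau_1(p), ..., tau_K(p); drop self-distance
    # Local estimate: d_int(p) = ( (1/(K-1)) * sum_{k<K} log(tau_K(p) / tau_k(p)) )^{-1}
    log_ratios = np.log(tau[:, -1:] / tau[:, :-1])
    d_local = 1.0 / log_ratios.mean(axis=1)
    # Overall estimate: average of the local estimates over the dictionary.
    return d_local.mean()

# Sanity check: points near a 2-dimensional plane embedded in 27 dimensions
# (the extrinsic dimension of 3x3 color patches) should give d_int close to 2.
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 27))
x += 0.01 * rng.normal(size=x.shape)
print(intrinsic_dimension(x, K=20))
```

Note that the brute-force distance matrix is quadratic in the number of patches; for dictionary sizes like $|D| = 16 \cdot 10^3$ an approximate nearest-neighbor index would be the practical choice.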
2,021
IN DEEP CONVOLUTIONAL KERNELS METHODS
SP:5724a8799e8b77f19887fb7925405a7f151523cc
[ "This paper studies the problem of modeling spatial-temporal point clouds which are sampled at irregular space and time points. It proposes the Temporal PointConv model which is an extension of the PointConv model (Wu et al., 2019). In particular, PointConv computes a convolution by aggregating the features of nearby points of a point p as the new feature of p. Temporal PointConv extends this by aggregating the features of points near p in both space and time in a two-step process: first weighting the aggregation by the space distance and then weighting the aggregation by the temporal distance. " ]
We consider the problem of modeling the dynamics of continuous spatial-temporal processes represented by irregular samples through both space and time. Such processes occur in sensor networks, citizen science, multi-robot systems, and many others. We propose a new deep model that is able to directly learn and predict over this irregularly sampled data, without voxelization, by leveraging a recent convolutional architecture for static point clouds. The model also easily incorporates the notion of multiple entities in the process. In particular, the model can flexibly answer prediction queries about arbitrary space-time points for different entities regardless of the distribution of the training or test-time data. We present experiments on real-world weather station data and battles between large armies in StarCraft II. The results demonstrate the model’s flexibility in answering a variety of query types and demonstrate improved performance and efficiency compared to state-of-the-art baselines.
[]
[ { "authors": [ "Peter Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Jimenez Rezende" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Christopher Choy", "JunYoung Gwak", "Silvio Savarese" ], "title": "4d spatio-temporal convnets: Minkowski convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Nan Du", "Hanjun Dai", "Rakshit Trivedi", "Utkarsh Upadhyay", "Manuel Gomez-Rodriguez", "Le Song" ], "title": "Recurrent marked temporal point processes: Embedding event history to vector", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Benjamin Graham", "Laurens van der Maaten" ], "title": "Submanifold sparse convolutional networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "P. Hermosilla", "T. Ritschel", "P-P Vazquez", "A. Vinacua", "T. Ropinski" ], "title": "Monte carlo convolution for learning on non-uniformly sampled point clouds", "venue": "ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2018),", "year": 2018 }, { "authors": [ "Max Horn", "Michael Moor", "Christian Bock", "Bastian Rieck", "Karsten Borgwardt" ], "title": "Set functions for time series", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Shuang Li", "Shuai Xiao", "Shixiang Zhu", "Nan Du", "Yao Xie", "Le Song" ], "title": "Learning temporal point processes via reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Yaguang Li", "Rose Yu", "Cyrus Shahabi", "Yan Liu" ], "title": "Diffusion convolutional recurrent neural network: Data-driven traffic forecasting", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Charles Ruizhongtai Qi", "Li Yi", "Hao Su", "Leonidas J Guibas" ], "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Yulia Rubanova", "Ricky TQ Chen", "David K Duvenaud" ], "title": "Latent ordinary differential equations for irregularly-sampled time series", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Satya Narayan Shukla", "Benjamin Marlin" ], "title": "Interpolation-prediction networks for irregularly sampled time series", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Martin Simonovsky", "Nikos Komodakis" ], "title": "Dynamic edge-conditioned filters in convolutional neural networks on graphs", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Hang Su", "Varun Jampani", "Deqing Sun", "Subhransu Maji", "Evangelos Kalogerakis", "Ming-Hsuan Yang", "Jan Kautz" ], "title": "Splatnet: Sparse lattice networks for point cloud processing", "venue": "In Proceedings of the 
IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Hugues Thomas", "Charles R Qi", "Jean-Emmanuel Deschaud", "Beatriz Marcotegui", "François Goulette", "Leonidas J Guibas" ], "title": "Kpconv: Flexible and deformable convolution for point clouds", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Oriol Vinyals", "Timo Ewalds", "Sergey Bartunov", "Petko Georgiev", "Alexander Sasha Vezhnevets", "Michelle Yeo", "Alireza Makhzani", "Heinrich Küttler", "John Agapiou", "Julian Schrittwieser" ], "title": "Starcraft ii: A new challenge for reinforcement learning", "venue": "arXiv preprint arXiv:1708.04782,", "year": 2017 }, { "authors": [ "Yue Wang", "Yongbin Sun", "Ziwei Liu", "Sanjay E. Sarma", "Michael M. Bronstein", "Justin M. Solomon" ], "title": "Dynamic graph cnn for learning on point clouds", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "Wenxuan Wu", "Zhongang Qi", "Li Fuxin" ], "title": "Pointconv: Deep convolutional networks on 3d point clouds", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "SHI Xingjian", "Zhourong Chen", "Hao Wang", "Dit-Yan Yeung", "Wai-Kin Wong", "Wang-chun Woo" ], "title": "Convolutional lstm network: A machine learning approach for precipitation nowcasting", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Yifan Xu", "Tianqi Fan", "Mingye Xu", "Long Zeng", "Yu Qiao" ], "title": "Spidercnn: Deep learning on point sets with parameterized convolutional filters", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Huaxiu Yao", "Xianfeng Tang", "Hua Wei", "Guanjie Zheng", "Zhenhui Li" ], "title": "Revisiting spatial-temporal similarity: A deep learning framework for traffic prediction", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Bing Yu", "Haoteng Yin", "Zhanxing Zhu" ], "title": "Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting", "venue": "In International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Qiang Zhang", "Aldo Lipani", "Omer Kirnap", "Emine Yilmaz" ], "title": "Self-attentive hawkes processes", "venue": "arXiv preprint arXiv:1907.07561,", "year": 2019 }, { "authors": [ "Simiao Zuo", "Haoming Jiang", "Zichong Li", "Tuo Zhao", "Hongyuan Zha" ], "title": "Transformer hawkes process", "venue": "arXiv preprint arXiv:2002.09291,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many real-world problems feature observations that are sparse and irregularly sampled in both space and time. Weather stations scattered across the landscape reporting at variable rates without synchronization; citizen-science applications producing observations at the whim of individuals; or even opportunistic reports of unit positions in search-and-rescue or military operations. These sparse and irregular observations naturally map to a set of discrete space-time points – forming a spatiotemporal point cloud representing the underlying process. Critically, the dynamics of these points are often highly related to the other points in their spatio-temporal neighborhood.\nModelling spatio-temporal point clouds is difficult with standard deep networks which assume observations are dense and regular – at every grid location in CNNs, every time step in RNNs, or both for spatio-temporal models like Convolutional LSTMs (Xingjian et al., 2015). While there has been work examining irregularly sampled data through time (Rubanova et al., 2019; Shukla & Marlin, 2018) and in space (Wu et al., 2019), modeling both simultaneously has received little attention (Choy et al., 2019). This is due in part to the difficulty of scaling prior solutions across both space and time. For instance, voxelization followed by sparse convolution (Choy et al., 2019) or dense imputation (Shukla & Marlin, 2018) now face a multiplicative increase in the number of cells.\nRather than forcing irregular data into dense representations, an emerging line of research treats spatial point-clouds as first-class citizens (Qi et al., 2017a;b; Su et al., 2018; Xu et al., 2018). Several works directly extend 2D convolutions to point clouds (Simonovsky & Komodakis, 2017; Wang et al., 2019; Hermosilla et al., 2018), with (Wu et al., 2019) being the first that allows efficient exact computation of convolution with dozens of layers. In this work, we build on this line of research to model spatio-temporal point clouds. Specifically, we extend the work of Wu et al. (2019) with an additional module to reason about point representations through time.\nOur new model, TemporalPointConv (TPC), is a simple but powerful extension that can learn from an arbitrary number of space-time points. Each layer in TemporalPointConv updates the representation of each point by applying two operators in sequence – one that considers the spatial neighborhood in a narrow temporal window and another that models how this spatial representation changes over time. By factorizing the representation update into separate spatial and temporal operators, we gain significant modeling flexibility. Further, by operating directly on point clouds, we can predict observations at arbitrary space-time, regardless of the distribution of observations.\nWe demonstrate TemporalPointConv on two distinct problems: 1) predicting future states of a custom Starcraft II environment involving battles between variable-sized groups, and 2) predicting the weather at stations distributed throughout the state of Oklahoma. Further, we show the utility of these networks in identifying damaged or anomalous weather sensors after being trained exclusively on the associated prediction problem. The results show that TemporalPointConv outperforms both state of the art set functions and a discrete sparse convolution algorithm in terms of raw performance, ability to detect anomalies, and generalization to previously unseen input and query distributions." 
}, { "heading": "2 RELATED WORK", "text": "Xingjian et al. (2015) gives an early approach to spatio-temporal modeling via convolution by incorporating a standard convolutional structure into the latent memory of an LSTM. This approach is appropriate for situations where the data is regularly sampled in both space and time, which is different from our setting. Interaction networks (Battaglia et al., 2016) and related approaches allow for modeling sets of interacting objects or points over time, with an original motivation to model physics processes. These models are more flexible in their modeling of spatial relationships among points. However, there is an assumption of uniform temporal sampling, which is violated in our setting.\nA significant amount of work on spatio-temporal modeling for non-uniform spatial sampling uses Graph Convolutional Networks (GCNs) for modeling spatial interactions. For example, Li et al. (2018b) used a GCN followed by an RNN and Yu et al. (2018) used GCNs for spatial correlation and temporal convolution for temporal correlations. They require sampling at continuous temporal intervals and did not deal with generalization outside the fixed given graph. Rather, our approach generalizes to any spatio-temporal point outside of the training data. Yao et al. (2019) introduces an attention model to deal with dynamic spatial relationships, however this is only possible for the dense CNN version in their paper, whereas their version with irregular spatial sampling utilizes the GCN and shares the same issues with the above GCN approaches.\nPointNet (Qi et al., 2017a) sparked significant interest in networks for 3D Point cloud processing. A number of networks have been proposed (Qi et al., 2017a;b; Su et al., 2018; Xu et al., 2018) with the highest performing using either sparse convolutional networks (Graham & van der Maaten, 2018; Choy et al., 2019) or point convolutional networks (Wu et al., 2019; Thomas et al., 2019). Set networks, such as DeepSets (Zaheer et al., 2017b), are similar to PointNet (Qi et al., 2017a) with neither explicitly considering neighborhood information of elements/points, making them less powerful than convolutional methods. Recently, Horn et al. (2020) proposed a set network approach for non-uniform time-series prediction, which encodes time into the feature vector of points. Our experiments show that this approach is outperformed by our convolutional method.\nSparse convolutional networks are similar to dense volumetric convolutional networks that use a regular grid to discretize space-time, but they are only computed at locations with occupied points. Minkowski networks (Choy et al., 2019) is a sparse convolutional network that models spatio-\ntemporal correlations by concatentating the spatial location and time for each point sample into a 4D tesseract. It is thus sensitive to an appropriate resolution for the discretization since excess sparsity can result in empty neighborhoods and trivial convolutions, and too coarse a resolution may result in an inaccurate representation of the data. Furthermore, the approach has difficulties accounting for the case where points should be treated as moving entities themselves.\nOn the other hand, point convolutions discretize 3D volumetric convolutions on each point directly and hence easily generalize to the entire space under irregular sampling density. 
Early versions (Simonovsky & Komodakis, 2017; Hermosilla et al., 2018; Wang et al., 2019) require explicit discretization and hence cannot scale to large networks. Recently, PointConv (Wu et al., 2019) proposed an equivalent form that avoids explicit discretization and significantly improves scalability. However, so far it has been applied only to static point clouds. Our work builds on PointConv by extending it in the temporal direction and demonstrating that space-time convolutions can be effectively learned and used for modeling and anomaly detection.
On the temporal side, a significant amount of recent state-of-the-art work is based on point processes, which study time-series models from a statistical perspective (Du et al., 2016; Li et al., 2018a; Zuo et al., 2020; Zhang et al., 2019). These support irregular temporal sampling, but generally do not consider the spatial correlation among points." }, { "heading": "3 PROBLEM SETUP", "text": "We consider extrapolative tasks in which the value at new locations must be inferred from existing observations. Let $P$ be a spatio-temporal point cloud with each individual point $p_j \in P$ defined as $p_j = (l_j, t_j, o_j)$, where $p_j$ exists at location $l_j$ at time $t_j$ and has associated features $o_j$ (e.g. temperature and humidity values for a weather station). Further, let $Q$ be a set of query locations at which the model is to make predictions given $P$. For example, a forecasting model might be given queries $q_k = (l_k, t_k)$ for locations in the future and be tasked with predicting the corresponding features $o_k$ representing the desired properties to be predicted. We place no restrictions on the regularity of either $P$ or $Q$, such that this corresponds to a setting where both input and output may be sparse and irregularly sampled through space and time. Further, query points may be in the future, the past, or concurrent with those in $P$ – corresponding to weather forecasting, backcasting, or nowcasting respectively. We aim to train models that can accurately answer queries as represented via a training set of point-cloud / query-set pairs $D = \{(P_i, Q_i)\}_{i=1}^{N}$." }, { "heading": "4 TEMPORAL POINTCONV ARCHITECTURE", "text": "Given a spatio-temporal point-cloud containing points $p_j = (l_j, t_j, o_j)$, a Temporal PointConv layer is an operator that produces an updated point representation $p'_j = (l_j, t_j, o'_j)$ for each point. The updated feature representation $o'_j$ incorporates information from a spatio-temporal neighborhood around $p_j$. This is accomplished by applying two point-based convolutional operators in sequence for each point – first a spatial PointConv over points within a narrow temporal band, and then a temporal PointConv over points within a narrow spatial band. These Temporal PointConv layers can be stacked to arbitrary depth. Below we give background on PointConv and describe our model." }, { "heading": "4.1 PRELIMINARIES: POINTCONV", "text": "PointConv is based on the idea of discretizing continuous convolution on irregularly sampled points:
$$\mathrm{Conv}(P, p_0; w, d(\cdot, \cdot)) = \sum_{p_i \in N_d(p_0)} \langle w(p_i - p_0), o_i \rangle \quad (1)$$
where $P$ is a point cloud with features at each point, $w(\cdot)$ is a vector-valued weight function of the positional difference between a point $p_i$ in the neighborhood $N_d$ of a centroid $p_0$, defined by a metric $d$, and $o_i$ is the input features at $p_i$. $w(\cdot)$ can be learned with a neural network (Simonovsky & Komodakis, 2017).
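To make eq. (1) concrete, the following is a minimal PyTorch sketch of the naive continuous convolution, with $w(\cdot)$ parameterized by a small MLP as suggested; the class name, tensor shapes, and the hard radius-based neighborhood are our illustrative assumptions, not the memory-efficient formulation of Wu et al. (2019) discussed next.

```python
import torch
import torch.nn as nn

class NaivePointConv(nn.Module):
    """Direct implementation of eq. (1): sum over p_i in N_d(p_0) of <w(p_i - p_0), o_i>."""

    def __init__(self, coord_dim, in_channels, out_channels, hidden=32):
        super().__init__()
        # w(.) maps a positional difference to an (in_channels x out_channels) matrix.
        self.weight_net = nn.Sequential(
            nn.Linear(coord_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, in_channels * out_channels),
        )
        self.in_channels, self.out_channels = in_channels, out_channels

    def forward(self, coords, feats, centroid, radius):
        # coords: (N, coord_dim), feats: (N, in_channels), centroid: (coord_dim,)
        diffs = coords - centroid                            # p_i - p_0
        mask = diffs.norm(dim=-1) <= radius                  # neighborhood N_d(p_0)
        w = self.weight_net(diffs[mask])                     # (M, Cin * Cout)
        w = w.view(-1, self.in_channels, self.out_channels)  # (M, Cin, Cout)
        # <w(p_i - p_0), o_i>, summed over the neighborhood.
        return torch.einsum('mio,mi->o', w, feats[mask])

conv = NaivePointConv(coord_dim=3, in_channels=4, out_channels=8)
coords, feats = torch.randn(100, 3), torch.randn(100, 4)
out = conv(coords, feats, centroid=torch.zeros(3), radius=0.5)
print(out.shape)  # torch.Size([8])
```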
PointConv (Wu et al., 2019) introduces an equivalent form so that $w$ does not need to be computed explicitly, saving computation and memory.
This approach is flexible, since $w(\cdot)$ as a function can apply to any point in the space of $P$; hence convolution can be computed over any irregularly sampled neighborhood $N_d$. We note that this even holds when we do not have any feature at $p_0$, since a neighborhood can still be found even in this case and eq. (1) can still be used. Previously, PointConv has only been used in spatial domains in cases where $p_0$ has features associated with it. In this paper we generalize it to spatio-temporal neighborhoods and to $p_0$ that are featureless query points.
For expositional clarity, we denote PointConv as an operator that transforms a feature-augmented point-cloud $P$ into a new point-cloud $P'$ consisting of points at target locations $Q$ with eq. (1): $P' = \mathrm{PointConv}(P, Q; d(\cdot, \cdot))$, where we will omit $Q$ if $Q = P$." }, { "heading": "4.2 TEMPORAL POINTCONV", "text": "Given a spatio-temporal point-cloud $P_{\mathrm{in}} = \{(l_j, t_j, o^{(\mathrm{in})}_j) \mid j\}$ and a set of queries $Q$, the Temporal PointConv operation considers the relative position from each query to the elements of $P_{\mathrm{in}}$ and their representative features to produce a set of predictions $X$ corresponding to the query set $Q$.
Spatial Convolution. First, each point's feature is updated based on the spatial neighborhood of temporally co-occurring points. However, as the points may be irregularly spaced in time, there may be no points that precisely co-occur. We instead consider those in a fixed window of time. Thanks to the flexibility of PointConv operations, we describe this by defining the piece-wise distance function:
$$d_{\mathrm{spatial}}(p_i, p_j) = \begin{cases} \|l_i - l_j\|_2 & \text{if } |t_i - t_j| \leq \epsilon_t \\ \infty & \text{otherwise} \end{cases} . \quad (2)$$
We then apply a PointConv operator to update features: $P_{\mathrm{spatial}} = \mathrm{PointConv}(P_{\mathrm{in}}; d_{\mathrm{spatial}})$, where each point in $P_{\mathrm{spatial}}$ has updated feature $(l_i, t_i, o^{(s)}_i)$.
Temporal Convolution. We then perform an analogous operation through time. We would like to consider the past and future of each point; however, this requires determining correspondence between points through time. If the underlying point-cloud represents static points such as weather stations, this can simply be based on a small spatial window. If the points correspond to known entities that are moving, we instead assume tracking and can use those entity labels to determine temporal neighborhoods, each consisting exclusively of a single entity's samples throughout time. For clarity, we present the distance function for the first case below:
$$d_{\mathrm{temporal}}(p_i, p_j) = \begin{cases} \|t_i - t_j\|_2 & \text{if } \|l_i - l_j\|_2 \leq \epsilon_s \\ \infty & \text{otherwise} \end{cases} . \quad (3)$$
Before applying the temporal PointConv, we first apply a residual connection for each point, concatenating the input and spatial features. We denote this as $P_{\mathrm{res}} = \{(l_j, t_j, [o^{(\mathrm{in})}_j, o^{(s)}_j]) \mid j\}$, where $[\cdot, \cdot]$ denotes concatenation. As before, we apply a PointConv operator with kernels defined only over differences in time: $P_{\mathrm{temporal}} = \mathrm{PointConv}(P_{\mathrm{res}}; d_{\mathrm{temporal}}(\cdot, \cdot))$, where $P_{\mathrm{temporal}} = \{(l_j, t_j, o^{(\mathrm{tmp})}_j) \mid j\}$.
Combined Representation. To compute the final output point-cloud, we concatenate the original, spatial, and temporal representations and transform them through an MLP $f$ such that
$$P_{\mathrm{out}} = \{(l_j, t_j, f([o^{(\mathrm{in})}_j, o^{(s)}_j, o^{(\mathrm{tmp})}_j])) \mid j\} . \quad (4)$$
We denote multiple stacked layers via $P^{(d+1)} = \mathrm{TemporalPointConv}(P^{(d)})$."
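The factorized update of eqs. (2)-(4) can be sketched schematically as follows; here a simple masked mean aggregation stands in for the full PointConv step, and all module names, window sizes (eps_t, eps_s), and layer widths are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

def neighborhood_mean(dist, feats, radius):
    """Stand-in for a PointConv step: mean of neighbor features within `radius`
    under a given (N, N) pairwise distance matrix."""
    mask = (dist <= radius).float()
    mask = mask / mask.sum(dim=1, keepdim=True).clamp(min=1.0)
    return mask @ feats

class TemporalPointConvLayer(nn.Module):
    """Schematic spatial-then-temporal update of eqs. (2)-(4)."""

    def __init__(self, channels, radius=1.0, eps_t=0.5, eps_s=0.5):
        super().__init__()
        self.radius, self.eps_t, self.eps_s = radius, eps_t, eps_s
        self.spatial = nn.Linear(channels, channels)
        self.temporal = nn.Linear(2 * channels, channels)
        self.f = nn.Linear(3 * channels, channels)  # f in eq. (4)

    def forward(self, locs, times, feats):
        d_space = torch.cdist(locs, locs)    # ||l_i - l_j||
        d_time = torch.cdist(times, times)   # ||t_i - t_j||
        # eq. (2): spatial distance, infinite outside a narrow temporal window.
        d_sp = d_space.masked_fill(d_time > self.eps_t, float('inf'))
        o_s = self.spatial(neighborhood_mean(d_sp, feats, self.radius))
        # Residual concatenation: P_res carries [o_in, o_s].
        res = torch.cat([feats, o_s], dim=-1)
        # eq. (3): temporal distance, infinite outside a narrow spatial band.
        d_tm = d_time.masked_fill(d_space > self.eps_s, float('inf'))
        o_t = self.temporal(neighborhood_mean(d_tm, res, self.radius))
        # eq. (4): o'_j = f([o_in, o_s, o_tmp]).
        return self.f(torch.cat([feats, o_s, o_t], dim=-1))

layer = TemporalPointConvLayer(channels=8)
locs, times, feats = torch.rand(50, 2), torch.rand(50, 1), torch.randn(50, 8)
print(layer(locs, times, feats).shape)  # torch.Size([50, 8])
```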
}, { "heading": "4.3 EXTRAPOLATING TO NEW POINTS", "text": "After applying one or more layers of Temporal PointConv as described above, we apply one final query PointConv to the latent spatio-temporal point cloud Pout resulting from this encoding process. For this, we define a new problem-dependent query distance function dquery(·, ·), which could be dspatial, dtemporal, or a combination of both. This enables us to calculate a corresponding latent feature y for the each query point.\nY = PointConv(Pout, Q; dquery(·, ·)) (5)\nFinally, we apply an MLP g to transform each latent query representation into a final predictions X = {g(oy)|y ∈ Y } corresponding to the set of queries Q." }, { "heading": "5 EXPERIMENTS", "text": "We consider two problem domains for our experiments which we describe below.\nStarcraft II. To evaluate TemporalPointConv on entity-based dynamics, we designed a custom Starcraft II scenario in which two opposing armies consisting of random numbers of three distinct unit types are created and then fight on a featureless battlefield. Each episode is allowed to run without any external influence until one team has been eliminated or the time limit expires. This allows us to learn the dynamics of a battle between a large group of units without any confounding factors such as player inputs. We use the PySC2 library (Vinyals et al., 2017) to record regular observations of the game state as each episode plays out.\nWe use these regularly sampled episode histories to generate individual training examples. Specifically, we select a ‘reference timestep’ t within the episode, sample a set of ‘history offsets’ H from a provided history distribution, and a set of ‘query offsets’R from a provided query distribution. We collect unit properties corresponding to these sampled relative time steps to serve as point features. We determine the prediction targets with the same procedure using the sampled query offsets. This procedure is used to sample an arbitrary number of training examples from the set of episode histories by varying the reference timestep t and re-sampling the history and query offsets as desired. Following this procedure on our dataset of 92,802 episodes yields 2.5 million training examples.\nWe define the ‘property loss’ for a unit state prediction as the sum of the mean squared error of each of the unit’s predicted numeric properties (i.e. health, shields, position) and the cross entropy loss of the unit’s predicted categorical properties (i.e. orientation). Similarly, the ‘alive loss’ is the cross entropy loss between the network’s alive/dead prediction values and a flag indicating if the unit was present and alive in the given timestep. We then define the total loss for a set of unit state predictions as the sum of the alive loss for all units and with property loss for every unit that is actually alive at the given timesteps. This additional condition is necessary due to the fact that dead units do not have recorded properties we can use to determine property loss.\nAs PySC2 assigns a unique, consistent ID to each unit which provides perfect tracking across all timesteps, we use an entity-based temporal distance function when instantiating the query PointConv layer for this problem as described in section 4.2 above.\nWeather Nowcasting. To evaluate the ability of the TemporalPointConv architecture to reason about spatio-temporal dynamics, we derive weather nowcasting problems from a dataset of weather conditions as recorded by weather stations throughout Oklahoma. 
The original dataset consists of weather sensor readings from each weather station every five minutes throughout the entirety of the year 2008, associated quality metrics for each sensor in each reading, and metadata about each weather station such as its position and local soil properties.
10% of the weather stations are randomly selected to be held out as test stations and excluded from the training process, while the remaining 90% are used to generate problems for training. We derive training problems from the larger dataset by selecting a time point t and randomly selecting 10% of the remaining training stations to be targets. All non-target training station readings and their associated station metadata within the hour preceding t are collected as input weather data. Any sample within the collected data with an associated quality metric indicating a malfunctioning or missing sensor is discarded. Furthermore, we randomly discard an additional 20% of the remaining samples to decrease the level of time synchronization in the input. Following this procedure on our dataset of weather sensor readings results in over 14,000 training examples.
The model is then tasked with predicting weather properties at time t for each of the target stations using the provided input data from the preceding hour. Specifically, the networks are evaluated on their ability to predict the relative humidity, air temperature, air pressure, and wind speed at each specified target location. We define the prediction loss as the sum of the mean square error between the network’s prediction for each of these properties and the actual recorded values. Due to the large difference in magnitudes between these readings, we normalize each prediction and target measurement value such that the 10th percentile value to 90th percentile value of that measurement within the entire dataset is mapped to the range [0, 1]. This prevents the training process from naturally favoring measurements with a much higher average magnitude than the others.
As our queries for this problem are purely spatial, we use the spatial distance function eq. (2) as the query distance function when instantiating the query PointConv layer for this problem." }, { "heading": "5.1 BASELINE IMPLEMENTATIONS", "text": "Set Functions for Time Series & DeepSets. Our Temporal PointConv architecture leverages PointConv as a convolution-equivalent set function. We can evaluate this choice by replacing each PointConv module with a different set function, such as DeepSets (Zaheer et al., 2017a) or Set Functions for Time Series (SeFT) (Horn et al., 2020). Whereas PointConv takes as input a set of point locations and a set of point features, SeFT and DeepSets only consume a single set of features. However, the neighborhood and distance function mechanisms introduced for Temporal PointConv can still be applied. Therefore, we evaluate the other set functions by simply replacing each instance of $\mathrm{PointConv}(P)$ with $\mathrm{SeFT}(\{[l_i, t_i, o_i] \mid i\})$ or $\mathrm{DeepSets}(\{[l_i, t_i, o_i] \mid i\})$.
Minkowski Networks. We evaluate Minkowski networks (Choy et al., 2019) by replacing each spatial-temporal PointConv step with a Minkowski convolution layer that operates on the combined spatio-temporal vector space inhabited by the raw input samples. This necessarily requires discretizing said vector space into a sparse voxel grid. We choose a voxel resolution of 6 km for the weather domain, and 0.05 in game units for the Starcraft domain.
We use NVIDIA’s MinkowskiEngine codebase to provide the Minkowski convolution implementation.
We trained Temporal PointConv (TPC), Set Functions for Time Series (SeFT), DeepSets, and Minkowski networks instantiated with the hyperparameter settings described in appendix B on both the Starcraft II and weather nowcasting domains. For the Starcraft II domain, models were trained for one epoch (owing to the massive size of the generated Starcraft II dataset), whereas for weather nowcasting they were trained for 24 epochs. All networks were trained with a cosine learning rate decay with warm restarts, configured such that the learning rate cycles from its maximum value to its minimum three times throughout each training run." }, { "heading": "5.2 RESULTS", "text": "Dynamics Prediction Accuracy. To evaluate prediction accuracy, three of each model were trained on both domains. Unless otherwise specified, the Starcraft history distribution was set to be a uniform distribution over [−10,−1] and the query distribution was set to fixed time offsets {1, 2, 4, 7}. Figure 2 shows the validation loss for each model throughout training, and tables 1 and 2 show in detail the average error on each individual query that the final trained networks predict for the test datasets. Our results show that TPC is significantly more accurate than the baseline algorithms, especially on the Starcraft II unit state prediction problem. In all cases, the Minkowski network was unable to outperform either of the set function-based models, and in the weather nowcasting domain it consistently failed to find a good solution, as indicated by losses orders of magnitude higher than the set function approaches. We believe this failure is due to the difficulty of selecting a suitably sized kernel and voxelization resolution for a spatio-temporal problem at the scale of an entire state. We were unable to increase the size of the kernel without driving the network’s parameter count prohibitively high, and we were unable to decrease the resolution of voxelization without starting to ‘lose’ a significant number of weather stations which would be occupying the same cell. This result suggests that applying ‘true’ point cloud convolution that directly exploits sample positions is preferable for these domains, as opposed to discretizing or voxelizing the samples’ locations so that a traditional fixed-size filter convolution such as Minkowski networks can be applied.
Impact of Train and Test Distributions. We investigate the robustness of TPC to a change in the distribution of input samples or query points. Since the TPC architecture is completely decoupled from the distribution of the input samples, we can accomplish this comparison by simply defining several distribution types, training a model with each type of input distribution on the Starcraft II domain, and comparing the results after evaluating each trained model across each of the input distribution types selected for evaluation.
We selected four input distributions for evaluation: two ‘fixed’ distributions that always return the same set of time offsets, the uniform distribution over the range [−10, 0], and half of a normal distribution over the range [−10, 0]. Figure 6 visualizes the difference between these distributions, and presents a bar chart plotting the average loss when each model is evaluated on each distribution type. In all cases, the query distribution was kept constant and fixed.
The results show that TPC and SeFT trained on fixed distributions perform poorly when evaluated on any distribution they were not trained on, while the Minkowski network suffers much less of a penalty despite worse absolute performance. Alternatively, the networks trained on the uniform and normal distributions suffer much less degradation when switching to different input distributions. The only case with a noticeable performance drop is for networks trained on the normal distribution and evaluated on the uniform distribution, which is unsurprising since the normal distribution is biased toward t = 0.
We perform a similar experiment to evaluate the behavior of TPC when trained on different query distributions. Figure 4 visualizes the query distributions selected for training alongside a plot of the average loss for each query by its offset from the reference time (e.g. t = 0). As before, the models trained on fixed distributions only consistently perform well on the exact query points they were trained on, with the prediction error of the model trained on the Fixed1 distribution rising sharply as the distance from its small cluster of expected query points increases. In contrast, the models trained on the variable distributions saw a relatively small increase in prediction error, even for query points that are outside of the range of query points they were trained on. This suggests that the ability to train the TemporalPointConv architecture on randomized input and query distributions is key to enabling it to generalize well across timesteps and behave reasonably in off-distribution scenarios.
Application to Anomaly Detection. We now consider the utility of our TPC model for anomaly detection, where the goal is to detect which samples in a temporal point cloud are anomalous. We focus on the weather dataset, where anomalies correspond to broken sensors. We introduce anomalies to the set of test station samples by randomly selecting 33% of the stations. For these, we randomly increase or decrease the value of one station property by a factor of 25%. The models are then tasked with predicting each of the test samples’ properties given the preceding hour of weather data. Their prediction error on each individual sample is then used as an anomaly score for detection purposes.
As expected based on prior prediction results, TPC significantly outperforms SeFT owing to its superior nowcasting accuracy, with an area under the receiver operating characteristic curve (AUROC) of 0.927 compared to SeFT’s 0.836. The Minkowski network struggles to perform above chance level. See appendix A for the complete ROC curves.
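To illustrate how prediction error is turned into an anomaly score in the experiment above, here is a self-contained sketch on synthetic data that mimics the described perturbation protocol (33% of samples, one property scaled by ±25%); the noise model, names, and synthetic values are our illustrative assumptions, not the paper's data or code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def anomaly_scores(predictions, measurements):
    """Per-sample anomaly score: squared prediction error summed over the
    normalized weather properties (higher = more anomalous)."""
    return ((predictions - measurements) ** 2).sum(axis=1)

rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 1.0, size=(300, 4))     # 4 normalized properties per sample
measured = truth.copy()

# Corrupt one property of 33% of the samples by +/- 25%, as described above.
is_anomaly = rng.random(300) < 0.33
idx = np.where(is_anomaly)[0]
prop = rng.integers(0, 4, size=idx.size)
sign = rng.choice([-1.0, 1.0], size=idx.size)
measured[idx, prop] *= 1.0 + 0.25 * sign

# Pretend the nowcasting model recovers the uncorrupted values with small noise.
predictions = truth + 0.02 * rng.normal(size=truth.shape)
scores = anomaly_scores(predictions, measured)
print("AUROC:", roc_auc_score(is_anomaly, scores))
```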
These promising results and TemporalPointConv’s flexible parameterization suggest that it can be effectively applied to a wide range of problems with an irregular structure that prevents most other deep learning approaches from functioning efficiently." }, { "heading": "A ANOMALY DETECTION ROC CURVES", "text": "" }, { "heading": "B HYPERPARAMETER SETTINGS", "text": "" }, { "heading": "C JOINT SPACE-TIME NEIGHBORHOODS", "text": "Though TemporalPointConv decomposes spatio-temporal processes into separate ‘space’ and ‘time’ neighborhoods, this is not strictly necessary. Space and time could be combined into one single vector space, allowing a single PointConv layer to jointly consider samples’ spatial and temporal distances to determine their local neighborhood.
We investigate this possibility by training TemporalPointConv networks to do exactly that. This requires specifying a space-time distance function, which we define as follows: $D_{st} = \sqrt{D_s^2 + x D_t^2}$, where $D_s$ and $D_t$ are spatial and temporal distance functions, respectively. $x$ then represents the tradeoff factor that dictates whether spatially distant samples should be favored over temporally distant samples when constructing a neighborhood.
Specifically, we test three values for $x$ for these ‘combined’ PointConv models: 0.2, 1, and 5. The results in figure C show that all of the networks with combined spatial-temporal neighborhood functions were outperformed by our approach, which considers spatial and temporal relationships separately but sequentially. Additionally, this combined distance function depends on a hyperparameter $x$ which is likely domain-specific and nontrivial to find a good value for. These results validate our decision to treat spatial and temporal distances separately." } ]
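A minimal sketch of the joint space-time distance from Appendix C, assuming Euclidean spatial distance and absolute temporal difference (the function name and example values are illustrative):

```python
import numpy as np

def combined_distance(loc_a, t_a, loc_b, t_b, x=1.0):
    """Joint space-time distance D_st = sqrt(D_s^2 + x * D_t^2) from Appendix C.

    `x` trades off spatial vs. temporal proximity when forming a single joint
    neighborhood instead of the factorized ones in eqs. (2)-(3)."""
    d_s = np.linalg.norm(np.asarray(loc_a) - np.asarray(loc_b))
    d_t = abs(t_a - t_b)
    return np.sqrt(d_s ** 2 + x * d_t ** 2)

# Larger x makes temporally distant samples less likely to enter a
# point's neighborhood; the appendix sweeps x over {0.2, 1, 5}.
print(combined_distance([0.0, 0.0], 0.0, [3.0, 4.0], 2.0, x=5.0))
```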
2,020
null
SP:55e0dbbbabecb54def7b092761ee4e6bd41095c4
[ "In this work, the authors propose a greedy search approach for designing biological sequences in an active learning, batch setting. The algorithm is a fairly standard evolutionary algorithm which identifies the set of candidates at each iteration by adapting the best candidates from the previous iteration, and then evaluating those adaptations using a surrogate model. A set of experiments (using an evaluation sandbox also proposed by the authors for this work) suggest the proposed approach finds more high-quality solutions than competing approaches; however, especially in the experiments using the realistic/empirical surrogate model (an ensemble of 3 CNNs), the quality of the best solutions found by several approaches are statistically similar." ]
Efficient design of biological sequences will have a great impact across many industrial and healthcare domains. However, discovering improved sequences requires solving a difficult optimization problem. Traditionally, this challenge was approached by biologists through a model-free method known as “directed evolution”, the iterative process of random mutation and selection. As the ability to build models that capture the sequence-to-function map improves, such models can be used as oracles to screen sequences before running experiments. In recent years, interest in better algorithms that effectively use such oracles to outperform model-free approaches has intensified. These span from approaches based on Bayesian Optimization, to regularized generative models and adaptations of reinforcement learning. In this work, we implement an open-source Fitness Landscape EXploration Sandbox (FLEXS) environment to test and evaluate these algorithms based on their optimality, consistency, and robustness. Using FLEXS, we develop an easy-to-implement, scalable, and robust evolutionary greedy algorithm (AdaLead). Despite its simplicity, we show that AdaLead is a remarkably strong benchmark that out-competes more complex state of the art approaches in a variety of biologically motivated sequence design challenges.
[]
[ { "authors": [ "Babak Alipanahi", "Andrew Delong", "Matthew T Weirauch", "Brendan J Frey" ], "title": "Predicting the sequence specificities of DNA-and RNA-binding proteins by deep learning", "venue": "Nature biotechnology,", "year": 2015 }, { "authors": [ "Christof Angermueller", "David Dohan", "David Belanger", "Ramya Deshpande", "Kevin Murphy", "Lucy Colwell" ], "title": "Model-based reinforcement learning for biological sequence design", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Frances H Arnold" ], "title": "Design by directed evolution", "venue": "Accounts of chemical research,", "year": 1998 }, { "authors": [ "Claudia Bank", "Sebastian Matuszewski", "Ryan T Hietpas", "Jeffrey D Jensen" ], "title": "On the (un) predictability of a large intragenic fitness landscape", "venue": "Proceedings of the National Academy of Sciences,", "year": 2016 }, { "authors": [ "Luis A Barrera", "Anastasia Vedenko", "Jesse V Kurland", "Julia M Rogers", "Stephen S Gisselbrecht", "Elizabeth J Rossin", "Jaie Woodard", "Luca Mariani", "Kian Hong Kock", "Sachi Inukai" ], "title": "Survey of variation in human transcription factors reveals prevalent dna binding", "venue": "changes. Science,", "year": 2016 }, { "authors": [ "David Belanger", "Suhani Vora", "Zelda Mariet", "Ramya Deshpande", "David Dohan", "Christof Angermueller", "Kevin Murphy", "Olivier Chapelle", "Lucy Colwell" ], "title": "Biological sequence design using batched bayesian optimization. 2019", "venue": null, "year": 2019 }, { "authors": [ "Surojit Biswas", "Gleb Kuznetsov", "Pierce J Ogden", "Nicholas J Conway", "Ryan P Adams", "George M Church" ], "title": "Toward machine-guided design of proteins", "venue": "bioRxiv, pp", "year": 2018 }, { "authors": [ "Surojit Biswas", "Grigory Khimulya", "Ethan C Alley", "Kevin M Esvelt", "George M Church" ], "title": "Low-n protein engineering with data-efficient deep learning", "venue": "bioRxiv,", "year": 2020 }, { "authors": [ "David H Brookes", "Jennifer Listgarten" ], "title": "Design by adaptive sampling", "venue": "arXiv preprint arXiv:1810.03714,", "year": 2018 }, { "authors": [ "David H Brookes", "Hahnbeom Park", "Jennifer Listgarten" ], "title": "Conditioning by adaptive sampling for robust design", "venue": "arXiv preprint arXiv:1901.10060,", "year": 2019 }, { "authors": [ "Sidhartha Chaudhury", "Sergey Lyskov", "Jeffrey J Gray" ], "title": "Pyrosetta: a script-based interface for implementing molecular modeling algorithms using rosetta", "venue": null, "year": 2010 }, { "authors": [ "J Arjan GM de Visser", "Santiago F Elena", "Inês Fragata", "Sebastian Matuszewski" ], "title": "The utility of fitness landscapes and big data for predicting evolution, 2018", "venue": null, "year": 2018 }, { "authors": [ "Carl I DeLuca", "Peter L Davies", "Qilu Ye", "Zongchao Jia" ], "title": "The effects of steric mutations on the structure of type iii antifreeze protein and its interaction with ice", "venue": "Journal of molecular biology,", "year": 1998 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Firdaus Janoos", "Larry Rudolph", "Aleksander Madry" ], "title": "Implementation matters in deep RL: A case study on PPO and TRPO", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Richard J Fox", "S Christopher Davis", "Emily C Mundorff", "Lisa M Newman", "Vesna Gavrilovic", "Steven K Ma", "Loleta M Chung", "Charlene Ching", "Sarena Tam", "Sheela Muley" ], 
"title": "Improving catalytic function by ProSAR-driven enzyme evolution", "venue": "Nature biotechnology,", "year": 2007 }, { "authors": [ "Peter I Frazier" ], "title": "A tutorial on bayesian optimization", "venue": "arXiv preprint arXiv:1807.02811,", "year": 2018 }, { "authors": [ "Javier Gonzalez", "Joseph Longworth", "David C James", "Neil D Lawrence" ], "title": "Bayesian optimization for synthetic gene design", "venue": "arXiv preprint arXiv:1505.01627,", "year": 2015 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Sergio Guadarrama", "Anoop Korattikara", "Oscar Ramirez", "Pablo Castro", "Ethan Holly", "Sam Fishman", "Ke Wang", "Ekaterina Gonina", "Neal Wu", "Efi Kokiopoulou", "Luciano Sbaiz", "Jamie Smith", "Gabor Bartok", "Jesse Berent", "Chris Harris", "Vincent Vanhoucke", "Eugene Brevdo" ], "title": "TF-Agents: A library for reinforcement learning in tensorflow", "venue": "https://github.com/tensorflow/agents,", "year": 2018 }, { "authors": [ "Anvita Gupta", "James Zou" ], "title": "Feedback GAN (FBGAN) for DNA: a novel feedback-loop architecture for optimizing protein functions", "venue": "arXiv preprint arXiv:1804.01694,", "year": 2018 }, { "authors": [ "Nikolaus Hansen", "Andreas Ostermeier" ], "title": "Completely derandomized self-adaptation in evolution strategies", "venue": "Evolutionary computation,", "year": 2001 }, { "authors": [ "Artem Kaznatcheev" ], "title": "Computational complexity as an ultimate constraint on", "venue": "evolution. Genetics,", "year": 2019 }, { "authors": [ "Nathan Killoran", "Leo J Lee", "Andrew Delong", "David Duvenaud", "Brendan J Frey" ], "title": "Generating and designing DNA with deep generative models", "venue": "arXiv preprint arXiv:1712.06148,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Brian Kuhlman", "Gautam Dantas", "Gregory C Ireton", "Gabriele Varani", "Barry L Stoddard", "David Baker" ], "title": "Design of a novel globular protein fold with atomic-level", "venue": "accuracy. 
science,", "year": 2003 }, { "authors": [ "Ronny Lorenz", "Stephan H Bernhart", "Christian Höner Zu Siederdissen", "Hakim Tafer", "Christoph Flamm", "Peter F Stadler", "Ivo L Hofacker" ], "title": "ViennaRNA package 2.0", "venue": "Algorithms for molecular biology,", "year": 2011 }, { "authors": [ "Ali Madani", "Bryan McCann", "Nikhil Naik", "Nitish Shirish Keskar", "Namrata Anand", "Raphael R Eguchi", "Po-Ssu Huang", "Richard Socher" ], "title": "Progen: Language modeling for protein generation", "venue": null, "year": 2004 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "H Allen Orr" ], "title": "The population genetics of beneficial mutations", "venue": "Philosophical Transactions of the Royal Society B: Biological Sciences,", "year": 2010 }, { "authors": [ "Jakub Otwinowski", "Colin LaMont" ], "title": "Information-geometric optimization with natural selection", "venue": "arXiv, pp. arXiv–1912,", "year": 2019 }, { "authors": [ "Jakub Otwinowski", "David Martin McCandlish", "Joshua Plotkin" ], "title": "Inferring the shape of global epistasis", "venue": "bioRxiv, pp", "year": 2018 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Manish Purohit", "Zoya Svitkina", "Ravi Kumar" ], "title": "Improving online algorithms via ml predictions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Roshan Rao", "Nicholas Bhattacharya", "Neil Thomas", "Yan Duan", "Xi Chen", "John Canny", "Pieter Abbeel", "Yun S Song" ], "title": "Evaluating protein transfer learning with tape", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Adam J Riesselman", "John B Ingraham", "Debora S Marks" ], "title": "Deep generative models of genetic variation capture mutation effects", "venue": "arXiv preprint arXiv:1712.06527,", "year": 2017 }, { "authors": [ "Philip A Romero", "Andreas Krause", "Frances H Arnold" ], "title": "Navigating the protein fitness landscape with Gaussian processes", "venue": "Proceedings of the National Academy of Sciences,", "year": 2013 }, { "authors": [ "Reuven Rubinstein" ], "title": "The cross-entropy method for combinatorial and continuous optimization", "venue": "Methodology and computing in applied probability,", "year": 1999 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Sam Sinai", "Eric Kelsic", "George M Church", "Martin A Nowak" ], "title": "Variational auto-encoding of protein sequences", "venue": "arXiv preprint arXiv:1712.03346,", "year": 2017 }, { "authors": [ "David H Wolpert", "William G Macready" ], "title": "No free lunch theorems for optimization", "venue": "IEEE transactions on evolutionary computation,", "year": 1997 }, { "authors": [ "Kevin K Yang", "Zachary Wu", 
"Frances H Arnold" ], "title": "Machine-learning-guided directed evolution for protein engineering", "venue": "Nature methods,", "year": 2019 }, { "authors": [ "Brookes" ], "title": "The parameters and the generator are identical to the DbAS implementation, see Section A.2.3. The difference here is that, compared to a DbAS cycle, where the top sequences in each cycle are selected based on the fitness predicted by the oracle, in a CbAS cycle the sequences are weighted by the score (in this case", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "An important problem across many domains in biology is the challenge of finding DNA, RNA, or protein sequences which perform a function of interest at a desired level. This task is challenging for two reasons: (i) the map φ between sequences X = {x1, · · · ,xn} and their biological function y = {y1, · · · , yn} is non-convex and (ii) has sparse support in the space of possible sequences AL, which also grows exponentially in the length of the sequence L for alphabet A. This map φ is otherwise known as a “fitness landscape” (de Visser et al., 2018). Currently, the most widely used practical approach in sequence design is “directed evolution\" (Arnold, 1998), where populations of biological entities are selected through an assay according to their function y, with each iteration becoming more stringent in the selection criteria. However, this model-free approach relies on evolutionary random walks through the sequence space, and most attempted optimization steps (mutations) are discarded due to their negative impact on y.\nRecent advances in DNA sequencing and synthesis technologies allow large assays which query y for specific sequences x with up to 105 physical samples per batch (Barrera et al., 2016). This development presents an opening for machine learning to contribute in building better surrogate models φ′ : X → y which approximate the oracle φ that maps each sequence to its true function. We may use these models φ′ as proxies of φ, in order to generate and screen sequences in silico before they are sent to synthesis (Yang et al., 2019; Fox et al., 2007). While a large body of work has focused on building better local approximate models φ′ on already published data (Otwinowski et al., 2018; Alipanahi et al., 2015; Riesselman et al., 2017; Sinai et al., 2017), the more recent work is being done on optimization in this setting (Biswas et al., 2018; Angermueller et al., 2020; Brookes & Listgarten, 2018; Brookes et al., 2019). Although synthesizing many sequences within a batch is now possible, because of the labor-intensive nature of the process, only a handful of iterations of learning can be performed. Hence data is often collected in serial batches bi, comprising data Dt = {b0, · · · , bt} and the problem of sequence design is generally cast as that of proposing batches so that we may find the optimal sequence x∗t = argmaxx∈Dt φ(x) over the course of these experiments.\nIn this paper, we focus our attention on ML-augmented exploration algorithms which use (possibly non-differentiable) surrogate models φ′ to improve the process of sequence design. While the work is under an active learning setting, in which an algorithm may select samples to be labelled, with data arriving in batches bi, our primary objective is black-box optimization, rather than improving the accuracy of surrogate model. We define Eθ(D, φ′) to denote an exploration algorithm with parameters θ, which relies on dataset D and surrogate model φ′. When the context is clear, we will simply use E as shorthand.\nIn most contexts, the sequence space is large enough that even computational evaluation is limited to a very small subset of possible options. 
For this reason, we consider the optimization as sample-restricted, not only in the number of queries to the ground truth oracle, but also in the number of queries to the surrogate model. (Among other reasons, this allows us to thoroughly study the algorithms on landscapes that can be brute-forced, simulating a similar situation when the sequence space is very large compared to computation power, a very common setting.) The algorithm E may perform v sequence evaluations in silico for every sequence proposed. For example, v × B samples may be evaluated by the model before B samples are proposed for measurement. Ideally, E should propose strong sequences even when v is small; that is, the algorithm should not need to evaluate many sequences to arrive at a strong one." }, { "heading": "2 CONTRIBUTIONS", "text": "In this study, we make three main contributions towards improving algorithms for sequence design:
1. To build on recent progress in biological sequence design, the research community needs good benchmarks and reference algorithm implementations against which to compare new methods. We implement an open-source simulation environment FLEXS that can emulate complex biological landscapes and can be readily used for training and evaluating sequence design algorithms. We hope that FLEXS will help ensure meaningful and reproducible research results and accelerate the process of algorithm development for ML-guided biological sequence design.
2. We introduce an abstracted oracle to allow the empirical study of exploration strategies, independent of the underlying models. This helps us understand relevant properties, such as robustness and consistency of the algorithms.
3. Inspired by evolutionary and Follow the Perturbed Leader approaches in combinatorial optimization, we propose a simple model-guided greedy approach, termed Adapt-with-the-Leader (ADALEAD). ADALEAD is simple to implement and is competitive with previous state-of-the-art algorithms. We propose ADALEAD as a strong, accessible baseline for testing sequence design algorithms. We show that in general, simple evolutionary algorithms are strong benchmarks to compete against and should be included in future analyses of new methods." }, { "heading": "3 EVALUATION", "text": "We evaluate the algorithms on a set of criteria designed to be relevant to both the biological applicability as well as the soundness of the algorithms considered (Purohit et al., 2018). We run the algorithms using FLEXS, where all of these algorithms and criteria evaluators are implemented.
• Optimization: We let maximization be the objective. Most optimization algorithms operate under the assumption that critical information such as the best possible $y^*$ or the set of all local maxima $\mathcal{M}$ is unknown. While it is reasonable to assume that the best sequence is the one with the highest fitness, this is not necessarily the case in reality. For instance, we might wish to bind a particular target, but binding it too strongly may be less desirable than binding it at a moderate level. As measurements of this criterion, we consider the maximum $y = \max_x \varphi(x)$ over all sequences considered, as well as the cardinality $|S|$, where $S = \{x_i \mid \varphi(x_i) > y_\tau\}$ and $y_\tau > 0$ is a minimum threshold value.
It is noteworthy that we often do not know whether any solutions with y > y_τ exist, hence finding many solutions is a sign of an algorithm's strength.\n• Robustness: A major challenge for input design in model-guided algorithms is that optimizing directly on the surrogate φ′ can result in proposing a sequence x with large error, instead of approximating x* (e.g. if the proposed input x is far outside D). Additionally, while biological models φ may contain regularities that are universally shared, those regularities are not known. Hence, a desired property of the algorithm is that it is robust in the face of a poor model.\n• Consistency: The algorithm E should produce better performing sequences if it has access to a higher quality model φ′.\nAdditionally, we desire that the high-performing sequences proposed by the algorithm are diverse. Because a sequence may be disqualified for reasons which are unknown during the design phase, we would like to find distinct solutions which meet the optimality criteria. However, metrics for measuring diversity can be ambiguous, and we only focus on measuring diversity in a narrow sense. We measure diversity using |S|. When the ground truth model can be fully enumerated to find its local optima (peaks) by brute force (i.e. we know the maxima M), we can use the number of found maxima |M′_{y_τ}| as a measure of diversity, where M′_{y_τ} ⊂ M represents the maxima found by the algorithm above fitness y_τ." },
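As referenced in the Optimization criterion above, these evaluation metrics are straightforward to compute. The sketch below is our own minimal illustration (function and variable names are ours), assuming access to the full set of measured sequences and, for brute-forceable landscapes, a precomputed set of true local maxima.

```python
# Sketch of the evaluation metrics from Section 3 (our illustration, not FLEXS code).
from typing import Dict, Optional, Set

def evaluate_run(
    measured: Dict[str, float],           # all sequences queried against phi, with fitnesses
    y_tau: float,                         # minimum fitness threshold y_tau
    true_maxima: Optional[Set[str]] = None,  # known local maxima M, for small landscapes
) -> Dict[str, float]:
    best = max(measured.values())                               # max y over all considered sequences
    solutions = {x for x, y in measured.items() if y > y_tau}   # S = {x_i | phi(x_i) > y_tau}
    metrics = {"max_y": best, "num_solutions": len(solutions)}
    if true_maxima is not None:
        # |M'_{y_tau}|: how many of the true peaks above y_tau the algorithm found
        metrics["num_maxima_found"] = len(solutions & true_maxima)
    return metrics
```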
{ "heading": "4 RELATED WORK", "text": "Bayesian Optimization (BO). BO algorithms are designed to optimize black-box functions which are expensive to query. Importantly, these algorithms make use of the uncertainty of model estimates to negotiate exploration versus exploitation. In a pioneering study, Romero et al. (2013) demonstrate the use of BO for protein engineering. Many successful attempts have followed since (e.g. Gonzalez et al. (2015) and Yang et al. (2019)); however, in each of these cases a specific model of the design space is assumed to shrink the search space significantly. Otherwise, BO is known to perform poorly in higher-dimensional spaces (Frazier, 2018), and to our knowledge, no general-purpose sequence design algorithm using BO has performed better than the models considered below. For this reason, while we implement variations of BO as benchmarks (see similar interpretations in Belanger et al. (2019)), we do not consider these implementations as competitive standards. In our figures, we use the EI (Expected Improvement) acquisition function with an evolutionary sequence generator as our BO algorithm, and show comparisons with alternatives (on TF landscapes) in the supplement.\nGenerative models. Another class of algorithms approaches the task of sequence design by using regularized generative models. At a high level, these approaches pair a generative model G_ϕ with an oracle φ′, and construct a feedback loop that updates G_ϕ (and sometimes φ′) to produce high-performing sequences. In Feedback-GAN (FBGAN), Gupta & Zou (2018) pair a generative adversarial network (Goodfellow et al., 2014) that is trained to predict whether sequences belong to the functional set with a frozen oracle φ′ which filters a subset of sequences at each training step. They bias the training of the generator and discriminator towards high-performing sequences. Killoran et al. (2017) pursue sequence optimization by regularizing an “inverted model”, φ′_θ^{-1}(y_i) = x_i, with a Wasserstein GAN (Arjovsky et al., 2017) which is trained to produce realistic samples. In this case, both φ′_θ and the generator are trained jointly.\nBrookes & Listgarten (2018) propose an algorithm, Design by Adaptive Sampling (DbAS), that works by training a generative model G_ϕ on a set of sequences X_0, and generating a set of proposal sequences X̂ ∼ G_ϕ. They then use φ′_θ to filter X̂ for high-performing sequences, retrain G_ϕ, and iterate this process until convergence. This scheme is identical to the cross-entropy method with a VAE as the generative model, an important optimization scheme (Rubinstein, 1999). In follow-up work, termed Conditioning by Adaptive Sampling (CbAS) (Brookes et al., 2019), the authors aim to address the pitfall in which the oracle is biased and gives poor estimates outside its training domain D. The authors enforce a soft pessimism penalty for samples that are very distinct from those that the oracle could have possibly learned from. Specifically, they modify the paradigm so that as the generator updates its parameters ϕ_t → ϕ′_t while training on samples in the tail of the distribution, it discounts the weight of each sample x_i by Pr(x_i | G; ϕ_0) / Pr(x_i | G; ϕ_t). In other words, if the generative model that was trained on the original data was more enthusiastic about a sample than the one that has been updated according to the oracle's recommendations, that sample is up-weighted in the next training round (and vice versa). Notably, as the oracle is not updated during the process, there are two rounds of experiments where they maximize the potential gains from their oracle given what it already knows: a round to create the oracle, and a round to improve the generative model. While it is trivial to repeat the process for multiple rounds, the process can be improved by incorporating information about how many rounds it will be used for. We use DbAS and CbAS as state of the art representatives in this class of regularized generative algorithms.
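The CbAS reweighting step described above reduces to a simple importance weight per sample. The sketch below is our paraphrase of that single step (function and argument names are ours), assuming the generative model exposes a per-sample log-likelihood.

```python
import numpy as np

def cbas_sample_weights(log_p_initial, log_p_current):
    """Per-sample training weights for CbAS (our sketch of the published rule).

    log_p_initial: log Pr(x_i | G; phi_0) under the generator trained on the
    original data. log_p_current: log Pr(x_i | G; phi_t) under the generator
    after t rounds of oracle-guided retraining.
    """
    # Weight each sample by Pr(x | G; phi_0) / Pr(x | G; phi_t): samples the
    # original generator liked more than the current one are up-weighted.
    return np.exp(np.asarray(log_p_initial) - np.asarray(log_p_current))
```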
Reinforcement Learning (RL). RL algorithms are another method used to approach this problem. As these algorithms learn to perform tasks by experience, their success is often dependent on whether interactions with the environment are cheap or on whether there is a good simulator of the environment which they can practice on (Mnih et al., 2015). However, in our setting, good simulators often do not exist, and sampling the environment directly can be very expensive. As a result, recent approaches to this problem have built locally accurate simulators and used them to train an RL agent. With DyNA-PPO, Angermueller et al. (2020) achieve state-of-the-art results in sequence design tasks following this approach. They train a policy network based on Proximal Policy Optimization (PPO) (Schulman et al., 2017) by simulating φ through an ensemble of models φ′′. The models are trained on all measured data so far, and those that achieve a high R^2 score in cross-validation are selected as part of the ensemble. Additionally, to increase the diversity of proposed sequences, they add a penalty for proposing samples that are close to previously proposed samples. They compare their results against some of the methods mentioned above, as well as other RL training schemes, showing superior results. While DyNA-PPO reaches state of the art performance, a major drawback of policy gradient algorithms is that they are complex to implement, and their performance is highly sensitive to implementation (Engstrom et al., 2019). We use DyNA-PPO as the representative state of the art among all sequence design algorithms in our study.\nMost recent sequence design algorithms do not use evolution-inspired methods. Despite their lack of popularity, as we show here, they are fairly strong baselines that should be considered. Here we investigate classic algorithms like the standard Wright-Fisher process, with and without models, the Covariance Matrix Adaptation evolution strategy (CMA-ES) (Hansen & Ostermeier, 2001), and ADALEAD.\nIt is important to note that the focus of our study is distinct from work focused on better local approximate models (e.g. Rao et al. (2019); Riesselman et al. (2017)) or semi-supervised and transfer learning approaches (e.g. Madani et al. (2020); Biswas et al. (2020))." }, { "heading": "5 METHODS", "text": "In this study, we evaluate representative state of the art algorithms in addition to a simple greedy algorithm which we call Adapt-with-the-Leader (ADALEAD) (see Algorithm 1). The main advantage of ADALEAD as a benchmark is that it is easy to implement and reproduce. The algorithm selects a set of seed sequences x such that φ(x) is within (1 − κ) of the maximum y observed in the previous batch. These seeds are then iteratively recombined and mutated (see appendix for details), and the ones that show improvement on their seed according to φ′ are added to a set of candidates M. Finally, all candidates are sorted according to φ′ and the top B are proposed for the next batch. We consider the recombination rate r and mutation rate µ, as well as the threshold κ, as hyperparameters. However, the algorithm is fairly robust in performance for κ, r < 0.5. We note that recombination is known to be a powerful generative method which promotes diversity and avoids local maxima (Otwinowski & LaMont, 2019). Despite this, the performance gain due to recombination in ADALEAD is small (see supplement Fig. A2 for details).\nAlgorithm 1 ADALEAD\nInput: model φ′, batch b_t, threshold κ, virtual evaluations v\nInitialize parents P ← ∅\nInitialize mutants M ← ∅\nUpdate φ′ with data from b_t\nS = {x | φ(x) ≥ max_{y ∈ b_t} y · (1 − κ), ∀x ∈ b_t}\nwhile |M| < v · |b_t| do\n  P = P ∪ RECOMBINE(S)\n  for x_i ∈ P do\n    {x_{i1}, . . . , x_{ik}} = ROLLOUT(x_i, φ′)\n    M = M ∪ {x_{i1}, . . . , x_{ik}}\n  end for\nend while\nUse φ′ to select b_{t+1}, the top |b_t| sequences from M\nRETURN b_{t+1}\nAs ADALEAD perturbs the best (known) sequences, it is a greedy algorithm where the threshold κ determines the greediness. However, it is adaptive in the sense that, given a fixed threshold κ, when the optimisation surface is flat, many sequences will clear the (1 − κ) · max_{y ∈ b_t} y filter, and therefore the algorithm encourages diversity. When the surface has a prominent peak, it will opt to climb rapidly from the best-known samples. As we will see, this yields a robust, yet surprisingly effective algorithm that uses the same principle of hill-climbing as a Wright-Fisher process, but is faster and more scalable to compute (as it does not require fitness-based sampling), and is supported by the intuition behind Fisher's fundamental theorem (see Otwinowski & LaMont (2019) for a helpful discussion). Notably, this is distinct from rank-based and quantile-based algorithms, where diversity may be compromised due to the dominance of sequences that are trivial changes to x*_t, and hence are more likely to remain at the same local optima.\nThe ROLLOUT procedure mutates (with mutation rate 1/L) proposed candidates in S until φ′(x_{i,k}) < φ′(x_{i,0}), where k is the number of times a candidate has been subject to a mutation operation. Finally, all candidates are sorted according to φ′ and the top B are proposed for the next batch. We find that the rollout process has a small beneficial effect on the performance of the algorithm (see supplement Fig. A4). Overall, we speculate that ADALEAD takes advantage of the correlation structure of biological sequences, namely that sequences close to each other are more likely to have similar values. Since it always starts from the best known sequences, it ensures better robustness (less dependence on the model). The optimization side is greedy hill-climbing with the help of the model's foresight. We found it surprising that such a simple method can compete with the more sophisticated state of the art approaches presented here.
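To make Algorithm 1 and the ROLLOUT step concrete, here is a compact Python rendering. This is our own sketch of the pseudocode above, not the reference FLEXS implementation; the helper names (`mutate`, `recombine`, `adalead_propose`) and the alphabet choice are ours.

```python
import random

ALPHABET = "ACGT"  # illustrative; ADALEAD itself is alphabet-agnostic

def mutate(seq: str, rate: float) -> str:
    # Substitute each position independently with probability `rate` (1/L in the paper).
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in seq)

def recombine(seeds, r: float) -> str:
    # Single random pairwise crossover with per-position crossover probability r.
    a, b = random.sample(seeds, 2)
    child, src = [], 0
    for ca, cb in zip(a, b):
        if random.random() < r:
            src = 1 - src  # crossover point
        child.append((ca, cb)[src])
    return "".join(child)

def adalead_propose(batch, surrogate, kappa=0.05, r=0.2, v=20):
    """One round of ADALEAD (our sketch of Algorithm 1).

    `batch` maps sequence -> measured fitness; `surrogate` is phi'."""
    y_max = max(batch.values())
    seeds = [x for x, y in batch.items() if y >= (1 - kappa) * y_max]  # seed set S
    B = len(batch)
    candidates = {}
    while len(candidates) < v * B:
        x = recombine(seeds, r) if len(seeds) > 1 else seeds[0]
        # ROLLOUT: greedily mutate while the surrogate does not predict a
        # decline relative to the rollout's starting point.
        score0 = surrogate(x)
        child = mutate(x, 1.0 / len(x))
        score = surrogate(child)
        candidates[child] = score
        while score >= score0 and len(candidates) < v * B:
            child = mutate(child, 1.0 / len(child))
            score = surrogate(child)
            candidates[child] = score
    # Propose the top B candidates according to the surrogate.
    return sorted(candidates, key=candidates.get, reverse=True)[:B]
```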
In order to focus our attention on comparing the power of exploration algorithms, rather than the power of an oracle, we make use of, but do not limit ourselves to, an abstract model φ′_α, which is a noise-corrupted version of the ground truth landscape. Specifically,\nφ′_α = α^d φ + (1 − α^d) ε   (1)\nwhere d is the distance from the closest measured neighbor and ε is a noise parameter sampled from an exponential distribution with λ equal to φ operating on the closest measured neighbor.¹ We find that this setup allows us to emulate the performance of trained models well, while controlling for model accuracy. We also define the null model φ′_null as an exponential distribution with λ equal to the average measured fitness (Orr, 2010). The null model is a special case of the abstract model where α = 0. Importantly, this abstraction allows us to control α to investigate consistency and robustness (see supplement for additional description and Fig. 1B for a comparison of abstract and empirical models on RNA landscapes).\n¹An alternative approach is to let the noise be a random sample from the empirical distribution of known mutants; since ε behaves more stochastically in this setting, we do not evaluate the models with this approach.
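A minimal sketch of this abstract oracle follows. It is our reading of Eq. (1): we assume Hamming distance to the nearest measured neighbour for d, and we follow the appendix wording in treating λ as the rate parameter of the exponential noise (the main text leaves the parameterization ambiguous, so this choice is an assumption). All names here are ours.

```python
import random

def hamming(a: str, b: str) -> int:
    return sum(ca != cb for ca, cb in zip(a, b))

def abstract_oracle(x, phi, measured, alpha):
    """Noise-corrupted oracle phi'_alpha from Eq. (1) (our sketch).

    measured: sequences whose true fitness is known; alpha in [0, 1]
    controls model quality (alpha = 0 recovers the null model).
    """
    nearest = min(measured, key=lambda m: hamming(x, m))
    d = hamming(x, nearest)
    # Exponential noise with rate lambda = phi(nearest), per the appendix
    # wording; note this gives the noise a mean of 1 / phi(nearest).
    eps = random.expovariate(max(phi(nearest), 1e-8))
    w = alpha ** d
    return w * phi(x) + (1 - w) * eps
```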
To validate the applicability of this strategy, we also use it with trainable sequence models, including a simple linear regressor, random forest regressors, Gaussian process regressors, and several neural network architectures. As we show using the abstract models, ADALEAD is consistent, and any improvement in the model is beneficial for the algorithm. Although ADALEAD is compatible with single models, we implement and recommend using it with an ensemble of models. Additionally, our implementation is adaptive: model estimates are re-weighted based on how well they predicted the labels of sequences proposed in the previous batch. Herein, we show the results from an ensemble of 3 CNN models as a representative choice of the empirical models, denoted by φ′′. This ensemble was our strongest empirical model across different landscapes, among those mentioned above." }, { "heading": "6 EXPERIMENTS", "text": "Prior to discussing the empirical experiments, two preliminaries are noteworthy. Firstly, proposing a general algorithm that can optimize on arbitrary landscapes is not possible, and even local optima can take an exponential number of samples to find with hill-climbing algorithms (Wolpert et al., 1997; Kaznatcheev, 2019). Nonetheless, since biological landscapes are governed by physio-chemical laws that may constrain and structure them in specific ways, some generalizability can be expected. Secondly, the problem of choosing suitable optimization challenges that are biologically representative requires some consideration. Biological fitness landscapes are notoriously hard to model outside the locality where rich data is available (see Bank et al. (2016) as well as Fig. A3 in the supplement). As such, models that are built on data around some natural sequence are often only representative in that context, and the model becomes pathological rapidly as the distance from the context increases. This is the motivation for Brookes et al. (2019) as an improvement on Brookes & Listgarten (2018), to ensure the model φ′ is not trusted outside its zone of applicability. Here, we are interested in exploring deep into the sequence space, through multiple “experimental design” rounds. Hence it is highly preferable that the quality of our ground-truth simulator φ remains consistent as we drift away from the starting sequence. Note that this is not the case for models that are trained on empirical data, such as those in Rao et al. (2019), where the model is accurate in local contexts where data is available, but the behavior outside the trust region is unknown and may be pathological.\nWe test our algorithms on multiple sequence design tasks. We first choose contexts in which the ground truth and the optimal solutions are known (to the evaluator). We then challenge the algorithms with more complex landscapes, in which the ground truth may be queried, but the optimal solution is unknown.\nTF binding landscapes. Barrera et al. (2016) surveyed the binding affinity of more than one hundred transcription factors (TF) to all possible DNA sequences of length 8. Since the oracle φ is entirely characterized, and biological, it is a relevant benchmark for our purpose (Angermueller et al., 2020). We randomly sample five of these measured landscapes and evaluate the performance of the algorithms on them. For each landscape, we start the process at 13 random initial sequences which are then fixed. We do not pre-train the algorithms on other landscapes before using them, since TF binding sites for different proteins can be correlated (e.g. high-performing sequences in some landscapes may bind many proteins well). Should pre-training be required, we impose a cost for collecting the data required. We shift the function distribution such that y ∈ [0, 1], and therefore y* = 1. We show the results of our optimization tasks on these landscapes in Fig. 2A. All algorithms perform similarly well in terms of optimization in these landscapes, suggesting that while the task itself is biologically suitable, the landscape is rather easy to optimize on, partially due to its size.\nRNA landscapes. The task of predicting RNA secondary structures has been well-studied and is relatively mature. Classic algorithms use dynamic programming to efficiently fold RNA structures. RNA landscapes are particularly suitable because they provide a relatively accurate model of biological sequences that is consistent across the domain, even for landscapes of size 4^100, and are non-convex with many local optima (see Fig. A3). We use the ViennaRNA package to simulate binding landscapes of RNA sequences as φ (Lorenz et al., 2011).
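As an illustration of this kind of ground-truth oracle, the sketch below scores a candidate sequence by its duplex energy against a hidden target using the ViennaRNA Python bindings. This is our own example of the general setup, not the FLEXS code; we assume the `RNA.duplexfold` binding behaves as documented (returning a duplex object with an `energy` field), and both the hidden target and the normalization constant are arbitrary choices here.

```python
# Our sketch of a ViennaRNA-based binding oracle (requires the ViennaRNA
# Python bindings, e.g. `pip install ViennaRNA`).
import RNA

HIDDEN_TARGET = "GAUGGCAUCGAUCGGCUAAUCGACUGAUCGAUGCAUGGCAUCGAUCGGCU"  # illustrative 50-nt target

def binding_fitness(seq: str, target: str = HIDDEN_TARGET) -> float:
    # duplexfold computes the minimum-free-energy duplex of the two strands;
    # lower (more negative) energy means stronger binding.
    duplex = RNA.duplexfold(seq, target)
    # Map the energy to a positive fitness; the scaling constant is arbitrary.
    return max(0.0, -duplex.energy / 30.0)
```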
We test our algorithms on many RNA binding landscapes of various complexity and size (e.g. sequence length 14–100). In short, ADALEAD outperforms the other algorithms on all of the attempted landscapes. As a basic ground-truth test, we optimize sequences of size 14 for binding hidden 50 nucleotide targets. We use the duplex energy of the sequence and the target as our objective y, which means that the sequence can bind the target in multiple different ways, producing many local minima. We also consider more complex landscapes with two hidden targets and compute the objective as √(y_1 y_2). Due to the size of these landscapes we can enumerate all 4^14 sequences with the oracle and, as with the TF binding landscapes, find out how well the algorithms explore the space. Table 1 summarizes the number of local optima with fitness greater than y_τ that each algorithm finds. We define local optima in this case as sequences whose immediate neighbors all have lower y.\nWe also test RNA sequences of size 100 that bind hidden targets of size 100. In these cases we do not know the actual optima in advance; hence, we estimate y* by computing the binding energy of the perfect complement of the target. Using this normalization, y* ≈ 1. Like before, we use both single and double target binding objectives. Additionally, we define conserved patterns in sequences which would not allow mutations, meaning the sequence needs to preserve those positions in order to remain viable. In these cases, we conserve a fifth of the sequence, which results in roughly a fifth of the landscape providing no gradients. This resembles the scenario for many biological objectives, where gradients are not available over large regions of the space. We refer to this challenging landscape as “Swampland” (see a breakdown in Fig. A3, and Section A.3 for a more detailed discussion).\nDespite using a greedy heuristic, ADALEAD outperforms the other algorithms in landscapes that are highly epistatic (include a lot of local peaks). As shown in Fig. A3, even the set of shortest path permutations on the landscapes between one of the starting positions and the global peak may include valleys of multiple deleterious steps. The time evolution of the best sequence found at each batch, shown in Fig. 2C, suggests that some algorithms are faster to climb in the first couple of batches, but none outperform ADALEAD over the longer horizon. As we show in Fig. 2B, ADALEAD is also more robust (performs well even with an uninformative model), and as consistent as all the other algorithms. The relative ranking of algorithms remains similar to the α = 1 case when CNN ensembles are used (Fig. 2C).\n[Figure 2. A: On a simple landscape, even a model-free evolution algorithm can optimize well. B: Consistency (performance vs. model quality α) and robustness (performance at low α) of the algorithms on 2-target RNA landscapes of L = 14. C: Time evolution of the cumulative maximum over an RNA landscape with sequence length 14 and 2 hidden targets (5 initializations, α = 1); top: φ′_{α=1}, bottom: φ′′, an ensemble of 3 CNNs. D: Comparison of overall performance for 3 landscape classes. Swampland landscapes show high variance due to the difficulty of finding good sequences starting from dead sequences. ADALEAD, and evolutionary algorithms in general, tend to be strongly competitive in more complex landscapes.]\nProtein design. As a final challenge we also compare the performance of the algorithms on multiple protein design tasks.
While ground-truth simulators for protein design are much less accurate than the RNA landscapes, the larger alphabet size of ∼20 and the complexity of the landscapes are of high relevance. In this case we use PyRosetta (Chaudhury et al., 2010) as φ. The Rosetta design objective function is a scaled estimate of the folding energy, which has been found to be an indicator of the probability that a sequence will fold to the desired structure (Kuhlman et al., 2003). We optimize for the structure of 3MSI, a 66 amino acid antifreeze protein found in the ocean pout (DeLuca et al., 1998), starting from 5 sequences with 3–27 mutations from the wildtype. Here, we normalize energy scores by scaling and shifting their distribution and then applying the sigmoid function." }, { "heading": "7 CONCLUSIONS AND FUTURE DIRECTIONS", "text": "We implement an open-source simulation environment FLEXS that can emulate complex biological landscapes and can be readily used for training and evaluating sequence design algorithms. We also include “Swampland” landscapes with large areas of no gradient, a biological aspect of sequence design rarely explored (see Fig. A3). We additionally provide interfaces for protein design based on trained black-box oracles (e.g. Rao et al. (2019)) that we do not study here, for the reasons explained above. Additionally, we propose a simple evolutionary algorithm, ADALEAD, as a competitive benchmark. We demonstrate that ADALEAD is able to robustly optimize in challenging settings, and consistently performs better as model performance improves. We show that in general, simple evolutionary algorithms are strong benchmarks to compete against. While we have investigated consistency and robustness with respect to queries to the ground truth oracle, the same concepts also apply to variations in v; these would affect the sample efficiency and scalability of the algorithms. There are also other properties of interest, also mentioned in Purohit et al. (2018) (e.g. independence), which are closely connected to consistency and robustness, where the algorithm can operate with oracles with different biases and noise profiles. Additionally, in the online batch setting, for any fixed total number of proposed sequences, the algorithm is expected to pay a performance penalty as the batch size grows. This is due to the lack of model updates for the sequences proposed within each batch. Algorithms that incur lower penalties can be desirable in the low-round batch setting; this property is known as adaptivity. We do not evaluate these properties directly in this work, but implement tools that allow for their study within FLEXS.\nWe hope that FLEXS provides a useful environment for future development of better sequence design algorithms, and that ADALEAD helps discipline such efforts towards simpler approaches that are reproducible and translatable in practice." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 NOISE MODELS", "text": "" }, { "heading": "A.1.1 NOISY ORACLE", "text": "In order to control for model accuracy, we define noise-corrupted versions of the landscape oracle, φ′_α = α^d φ + (1 − α^d) ε, where d is the distance from the closest measured neighbor and ε is a noise parameter sampled from an exponential distribution with the rate parameter λ equal to φ operating on the closest measured neighbor.
An alternative approach is to sample from the empirical distribution of known mutants; since ε behaves more stochastically in this setting, we use the former approach, while we confirm that the results are qualitatively the same if the second approach is used.\nIn Fig. 1B we show the performance of these models on an RNA14_B1 landscape as α is varied between 0 and 1. We also show the predictive power of the empirical CNN ensembles, when trained on 100 random mutants with varying distances from the wild type on the same landscape. The noise-corrupted oracles allow testing for consistency and robustness in particular." }, { "heading": "A.1.2 EMPIRICAL MODELS", "text": "For our sandbox, we implement a suite of empirical models. We use linear regression and random forest regression from the scikit-learn library (Pedregosa et al., 2011). We also implement two neural architectures: (1) a “global epistasis” network that first collapses the input to a single linear unit, then applies a non-linear transform through two 50-unit ReLU layers and a final single linear unit (Otwinowski et al., 2018); (2) a convolutional neural network. We tested a variety of other architectures and sklearn models, but found these sufficient as representative models. We find that the CNN is often the most accurate model. For ensembling, we use ensembles of 3 initializations of the CNN architecture. Ensembles may be run in “adaptive mode”, where the model outputs are averaged with weights proportional to the R^2 of each model on the data generated in the previous step. While ensembles come with the additional benefit of allowing uncertainty estimation (which is useful in most of our algorithms), the performance gain of using them is often small. In the paper, we show the results for an adaptive ensemble of 3 CNNs, which was the strongest empirical model we attempted." },
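The adaptive re-weighting in this ensemble reduces to a weighted average with per-model R² weights. The snippet below is our sketch of that idea (function and argument names are ours), not the FLEXS implementation; the clipping and the uniform fallback are our own choices for handling degenerate weights.

```python
import numpy as np
from sklearn.metrics import r2_score

def adaptive_ensemble_predict(models, X, X_prev, y_prev):
    """Average model predictions, weighting each model by its R^2 on the
    sequences proposed in the previous batch (our sketch of 'adaptive mode')."""
    # Clip negative R^2 to zero so a badly mis-calibrated model gets no weight.
    weights = np.array([max(r2_score(y_prev, m.predict(X_prev)), 0.0) for m in models])
    if weights.sum() == 0:
        weights = np.ones(len(models))  # fall back to a uniform average
    weights = weights / weights.sum()
    preds = np.stack([m.predict(X) for m in models])  # shape: (n_models, n_samples)
    return weights @ preds
```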
{ "heading": "A.2 ALGORITHMS", "text": "" }, { "heading": "A.2.1 BAYESIAN OPTIMIZATION", "text": "We first test classical GP-BO algorithms on TF landscapes, as they are small enough (4^8 total sequences) to enumerate fully. We use a Gaussian process regressor (GPR) with default settings from the sklearn library. Furthermore, we lift the virtual screen restriction for this particular optimization method. We propose a batch of sequences as determined by an acquisition function. We use the standard EI and UCB acquisition functions, as well as Thompson sampling (where the posterior is sampled for each sequence, and the top B are selected for the next round). The GPR-based, enumerative approach scales poorly and cannot accommodate domains where the sequence space is much larger (e.g. RNA landscapes or protein landscapes).\nAs a compromise, we also implement a BO-guided evolutionary algorithm, where mutants are generated at random from sequences, defining a tractable action space. To generate the batch of sequences in each round, we take a state sequence s and sample v sequences with a per-position mutation probability of 1/L. Instead of GPRs, we use ensembles of the empirical models to compute uncertainty in the posterior, similar to Belanger et al. (2019). We evaluate the ensemble on each proposed mutant s′_{i,0}, and use the EI and UCB acquisition functions for selecting which s′_{i,0} to mutate further to s′_{i,1}. We stop adding mutations once Var(φ′′(s_{i,t})) > 2 · Var(φ′′(s_{i,0})), or once we reach the virtual screen limit. We collect all candidates that were generated through this process, and we then use Thompson sampling based on φ′′(s) to choose a subset of size B as our batch. After the batch is filled and the ensemble is updated, the state sequence for the next batch is chosen to be the sequence from the previous batch with the highest predicted score (Frazier, 2018). As shown in Fig. A1A, the evolutionary BO is competitive with, or better than, the enumerative BO algorithms we attempted on the TF landscapes. We therefore use the evolutionary BO in the main paper.\n[Figure A1. A: Running GP-based BO on the TF binding landscape with full enumeration of the sequence space (v = ∞); for comparison, the evolutionary BO used in the paper as a benchmark outperforms these methods. B: A comparison of multiple evolutionary algorithms run on RNA binding landscapes of length 14 and one target, with similar µ = 1/L, r = 0.2.]" }, { "heading": "A.2.2 ADALEAD", "text": "We performed basic parameter tuning for ADALEAD, testing recombination rates, mutation rates and thresholds. For comparisons with other algorithms, we use a recombination rate of 0.2 (i.e. 1/5 probability of a crossover at each position), a mutation rate of 1/L, where L is the sequence size, and a threshold of κ = 0.05. We find that the presence of recombination helps the performance of the algorithm both in optimality and in the ability to find diverse peaks. However, both of these effects are small (see Fig. A2) and most of the benefits are present within rate < 0.3, above which the stochastic effects tend to be detrimental with noisier models. While higher mutation rates can help with exploration, we chose 1/L to accommodate the simpler models and uniformity across all algorithms (some of which introduce one change at a time). The κ parameter begins to be effective below 0.5, but tends to be too restrictive (resulting in fewer than enough seeds when generating) when κ < 0.05. We use κ = 0.05 in the experiments shown. Overall, ADALEAD is fairly robust to these parameters. Since ADALEAD is an evolutionary algorithm, we also compare its performance to other evolutionary algorithms as benchmarks. Our results, shown in Fig. A1B, show that ADALEAD is roughly equivalent to a model-guided Wright-Fisher process (this is expected, as ADALEAD operates on the same hill-climbing principle, although it is faster to compute and less memory intensive, and hence scales better). It consistently outperforms model-free evolution (WF), CMA-ES, and Evolutionary BO.\nA.2.3 DBAS\nWe implement the sampling algorithm introduced in Brookes & Listgarten (2018) with a variational autoencoder (VAE) (Kingma & Welling, 2013) as the generator. The encoder and decoder of the VAE both consist of three dense layers with 250 exponential linear units. There is a 30% dropout layer after the first encoder and the second decoder layer, as well as batch normalization after the second encoder layer. The input is a one-hot encoding of a sequence, and the output layer has sigmoid neurons that produce a positional weight matrix as a reconstruction of the input. The latent space dimension is 2. We use the Adam optimizer (Kingma & Ba, 2014). In each cycle, the DbAS algorithm starts by training the VAE on the sequences whose fitness is known and selects the top 20% as scored by the oracle. New samples are then generated by drawing from the multivariate Gaussian distribution of the latent space of the VAE.
Once again, the top 20% of the generated sequences (or the ones whose fitness is above the 80th percentile of the previous batch, whichever is the stricter threshold) are selected, and the process is repeated until convergence (which in our case is defined as not having improved the top scoring sequence for 10 consecutive rounds) or until the computational budget is exhausted. At that point the batch of sequences proposed in the latest iteration is returned.\n[Figure A2: Effects of hyperparameters on the performance of ADALEAD, on an RNA landscape of length 14 and two hidden targets. ADALEAD is robust to hyperparameter choices for κ, r. The setting used in the paper is shown in blue (r = 0.2, κ = 0.05). The same κ with no recombination is shown in red. All other hyperparameters r ∈ [0, 0.5] and κ are shown as “alt. HP”. µ was set to mirror the µ used for all other algorithms, and v = 20. A: The case where α = 1, i.e. the model has perfect information. B: When an ensemble of 3 CNNs was used.]\nA.2.4 CBAS\nWe implement the adaptive sampling algorithm proposed in Brookes et al. (2019). The parameters and the generator are identical to the DbAS implementation; see Section A.2.3. The difference is that, compared to a DbAS cycle, where the top sequences in each cycle are selected based on the fitness predicted by the oracle, in a CbAS cycle the sequences are weighted by the score (in this case the reconstruction probability of the VAE) assigned by the generator trained on the ground truth data divided by the score assigned by the generator trained on all sequences proposed so far.\nA.2.5 CMA-ES\nWe adapt the covariance matrix adaptation evolution strategy (CMA-ES) (Hansen & Ostermeier, 2001) for the purpose of sequence generation. Let n = |A| × L be the product of the alphabet length and the sequence length. We initialize the mean value m ∈ R^n of the search distribution to a zero vector, and the covariance matrix C ∈ R^{n×n} to an identity matrix of the same dimension. At every iteration, we propose λ sequences, where λ is equal to the batch size. We sample λ × V sequences, where V is the virtual screening ratio. Every sample x ∼ N(m, C) is converted into a one-hot representation of a sequence by computing the argmax at each sequence position. A model provides a fitness value for the proposed sequence. Out of the λ × V proposed samples, the top λ sequences (by fitness value) are returned to be evaluated by the oracle. Because the sampled sequences are continuous but the one-hot representations are discrete, we normalize m such that ‖m‖_2 = 1 to prevent value explosion.
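The continuous-to-discrete decoding step described above is small enough to show directly. This is our own sketch of that conversion, not the FLEXS code; in particular, the (L, |A|) layout of the flattened vector is our assumption.

```python
import numpy as np

ALPHABET = "ACGT"  # illustrative alphabet A

def decode_cmaes_sample(x: np.ndarray, L: int) -> str:
    """Convert a continuous CMA-ES sample x in R^(|A|*L) into a sequence by
    taking the argmax over alphabet entries at each position (our sketch)."""
    scores = x.reshape(L, len(ALPHABET))  # one row of alphabet scores per position (assumed layout)
    return "".join(ALPHABET[i] for i in scores.argmax(axis=1))

# Example: decode a sample drawn from N(m, C) for a length-5 sequence.
rng = np.random.default_rng(0)
m = np.zeros(5 * len(ALPHABET))
sample = rng.multivariate_normal(m, np.eye(m.size))
print(decode_cmaes_sample(sample, L=5))
```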
A.2.6 PPO\nAs a sanity check to ensure that DyNA-PPO is implemented well, we also test proximal policy optimization (PPO) to train the policy network which selects the best action given a state. The policy network is pretrained on (B × V)/2 sequences, where B is the batch size and V is the virtual screening ratio. The remaining budget for evaluating sequences is then amortized among the remaining iterations; that is, each sequence proposal step trades some of its evaluation power in order to train a stronger policy network in the beginning. In each iteration where we propose sequences, we allow the agent to propose B × V sequences, and take the top B sequences according to their fitnesses as predicted by the model.\nWe use the TF-Agents library (Guadarrama et al., 2018) to implement the PPO algorithm. Our policy and value networks each consist of one fully-connected layer with 128 hidden units. Our results are consistent with those in Angermueller et al. (2020)." }, { "heading": "A.2.7 DYNA-PPO", "text": "We closely follow the algorithm presented in Angermueller et al. (2020). We perform 10 experiment rounds. In each experiment round, the policy network is trained on samples collected from an oracle. Each model comprising the ensemble is then trained on this data. The top models are retained in the ensemble, while the remaining models are removed. Initially, the ensemble is composed of models with the same sklearn architectures as in Angermueller et al. (2020), as well as several neural network architectures which we add. We compare models based on their cross-validation score; models that cross a predetermined threshold (which we also set to be 0.5) are kept. We compute the cross-validation score via five-fold cross-validation, using the same folds for each model.\nThe ensemble model serves to approximate the oracle function. The mean prediction of all models in the ensemble is used to approximate the true fitness of a sequence. For each experiment round, we perform up to 20 virtual rounds of model-based optimization on each sequence based on the outputs of this ensemble model. A model-based round is ended early if the ensemble model uncertainty (measured by the standard deviation of individual model rewards) is over twice as high as the uncertainty of the ensemble in the first model-based evaluation.\nAs mentioned in the main text, PPO-style policy gradient algorithms can be sensitive to implementation (Engstrom et al., 2019). To ensure that we build a fair comparison, we implement a variant of DyNA-PPO that uses a sequence generation process akin to evolutionary algorithms (which we call mutative), as well as one that follows closely that of the paper (which we call constructive). In the first case, when proposing a new sequence, the agent will begin at a previously measured sequence and mutate individual residues until the reward (which is the fitness value of a sequence) is no longer increasing, or until the same move is made twice (signalling that the agent thinks that no better action can be taken). In the constructive case of Angermueller et al. (2020), the agent will add residues onto an originally empty sequence until it reaches the desired sequence length, which is fixed beforehand. In this case, the step reward is zero until the last step is reached, at which point it is the fitness value of the sequence. We find that the mutative version of the algorithm performs better than the constructive version, likely due to the rewards no longer being sparse in the mutative setting.\nAdditionally, the DyNA-PPO algorithm as presented in Angermueller et al. (2020) trains the policy on a set of true fitnesses of sequences before entering the proposal loop. In our setting, all explorers are allowed to make B queries to assess the true fitness of sequences, equal to the batch size of sequences proposed at the end of a round. We further limit the computational queries to v × B samples (a difference with the original algorithm). No hyper-parameter tuning on held-out landscapes is done, as opposed to the original paper." }, { "heading": "A.3 LANDSCAPES", "text": "" }, { "heading": "A.3.1 RNA LANDSCAPES", "text": "To better clarify the structure of the RNA landscapes, and to demonstrate their complexity, particularly of the Swampland fitness landscapes, we break down the components that go into them in Fig. A3.
Even small landscapes of size 14 show the type of non-convexity that is seen in the figure. We pick 6 sequences of interest: the max peak for hidden target 1, the max peak for hidden target 2, the starting position (termed wildtype), the top known sequence in the Swampland landscape, and the top sequence found by a model-free WF process. We then compute 30 direct (shortest mutational) paths between each pair of these sequences, and show their fitness on the landscape. It is clear that there is significant non-monotonicity, and permutations of the order of mutations can change the shape of the trajectory. We believe that these landscapes provide an appropriate challenge for sequence design algorithms, while enabling full information about the ground truth (which can be queried quickly) that is consistent across the domain.\n[Figure A3 (panels: Binding target 1; Binding target 2; Combined; Composite; x-axis: pairwise shortest-path walk; y-axis: proportion of maximum y): A tour of a composite “Swampland” fitness landscape with sequence size 100 by direct walks between sequences of interest. Colored circles represent sequences of significance. Each grey line is the fitness of a walk from one sequence to another. Walks are defined as shortest paths between two sequences, and different walks between the same sequences represent different orders of making the same set of substitutions (30 walks shown for each pair). The x-axis shows the number of steps between two sequences. The third panel shows the combined binding landscape of the first two panels (computed as √(y_1 y_2)). The composite “Swampland” landscape has the same targets as the combined landscape, but is also subject to the constraint that 20/100 nucleotides cannot be mutated. We also train a CNN on 1000 mutants around the wildtype. The points underneath the plots represent the CNN's predictions on random samples from the paths. Cyan points show predictions within the same distance as the training set (here of max distance 12), and black points are extrapolations outside that range.
The model fits the in-domain samples with high accuracy (R^2 > 0.7), but often misses global structure, as can be seen with the black points.]\n[Figure A4: A: Effects of batch size on ADALEAD, on an RNA landscape of length 14 and two hidden targets; the case where α = 1, i.e. the model has perfect information. B: Ablation study for ADALEAD without ROLLOUT on the same landscape.]" } ]
2020
ADALEAD: A SIMPLE AND ROBUST ADAPTIVE GREEDY SEARCH ALGORITHM FOR SEQUENCE DESIGN
SP:aff048d4b28972615e99e9d2e82258fb0e35f656
[ "This paper tackles zero-shot learning by leveraging the large-scale knowledge graph i.e., ConceptNet to propagate the knowledge learned from seen classes to unseen classes. The authors propose a novel propagation rule that aggregates node embeddings by the self-attention technique. It is infeasible to run GCN on such large-scale knowledge graph. Therefore they reduce the knolwedge graph by adopting a neighborhood sampling strategy based on random walks. The method is evaluated on multiple zero-shot learning tasks including object classification, intent classification and fine-grained entity typing. The SOTA results are achieved." ]
Zero-shot learning relies on semantic class representations such as hand-engineered attributes or learned embeddings to predict classes without any labeled examples. We propose to learn class representations from common sense knowledge graphs. Common sense knowledge graphs are an untapped source of explicit high-level knowledge that requires little human effort to apply to a range of tasks. To capture the knowledge in the graph, we introduce ZSL-KG, a general-purpose framework with a novel transformer graph convolutional network (TrGCN) to generate class representations. Our proposed TrGCN architecture computes non-linear combinations of the node neighbourhood and leads to significant improvements on zero-shot learning tasks. We report new state-of-the-art accuracies on six zero-shot benchmark datasets in object classification, intent classification, and fine-grained entity typing tasks. ZSL-KG outperforms the specialized state-of-the-art method for each task by an average 1.7 accuracy points and outperforms the general-purpose method with the best average accuracy by 5.3 points. Our ablation study on ZSL-KG with alternate graph neural networks shows that our transformer-based aggregator adds up to 2.8 accuracy points improvement on these tasks.
[]
[ { "authors": [ "Ashutosh Adhikari", "Xingdi Yuan", "Marc-Alexandre Côté", "Mikuláš Zelinka", "Marc-Antoine Rondeau", "Romain Laroche", "Pascal Poupart", "Jian Tang", "Adam Trischler", "William L Hamilton" ], "title": "Learning dynamic knowledge graphs to generalize on text-based games", "venue": "arXiv preprint arXiv:2002.09127,", "year": 2020 }, { "authors": [ "Zeynep Akata", "Florent Perronnin", "Zaid Harchaoui", "Cordelia Schmid" ], "title": "Label-embedding for image classification", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI),", "year": 2015 }, { "authors": [ "Joost Bastings", "Ivan Titov", "Wilker Aziz", "Diego Marcheggiani", "Khalil Sima’an" ], "title": "Graph convolutional encoders for syntax-aware neural machine translation", "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2017 }, { "authors": [ "Soravit Changpinyo", "Wei-Lun Chao", "Boqing Gong", "Fei Sha" ], "title": "Synthesized classifiers for zero-shot learning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: fast learning with graph convolutional networks via importance sampling", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Long Chen", "Hanwang Zhang", "Jun Xiao", "Wei Liu", "Shih-Fu Chang" ], "title": "Zero-shot visual recognition using semantics-preserving adversarial embedding networks", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Alice Coucke", "Alaa Saade", "Adrien Ball", "Théodore Bluche", "Alexandre Caulier", "David Leroy", "Clément Doumouro", "Thibault Gisselbrecht", "Francesco Caltagirone", "Thibaut Lavril" ], "title": "Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces", "venue": "ICML workshop on Privacy in Machine Learning and Artificial Intelligence,", "year": 2018 }, { "authors": [ "Yann N Dauphin", "Gokhan Tur", "Dilek Hakkani-Tur", "Larry Heck" ], "title": "Zero-shot learning for semantic utterance classification", "venue": "arXiv preprint arXiv:1401.0509,", "year": 2013 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Ali Farhadi", "Ian Endres", "Derek Hoiem", "David A. Forsyth" ], "title": "Describing objects by their attributes", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2009 }, { "authors": [ "Andrea Frome", "Greg S Corrado", "Jon Shlens", "Samy Bengio", "Jeff Dean", "Marc’Aurelio Ranzato", "Tomas Mikolov" ], "title": "Devise: A deep visual-semantic embedding model", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2013 }, { "authors": [ "Matt Gardner", "Joel Grus", "Mark Neumann", "Oyvind Tafjord", "Pradeep Dasigi", "Nelson F. Liu", "Matthew Peters", "Michael Schmitz", "Luke S. 
Zettlemoyer" ], "title": "Allennlp: A deep semantic natural language processing", "venue": null, "year": 2017 }, { "authors": [ "Dan Gillick", "Nevena Lazic", "Kuzman Ganchev", "Jesse Kirchner", "David Huynh" ], "title": "Context-dependent fine-grained entity type tagging", "venue": "arXiv preprint arXiv:1412.1820,", "year": 2014 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "William L Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Representation learning on graphs: Methods and applications", "venue": "IEEE Data Engineering Bulletin,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Chaitanya Joshi" ], "title": "Transformers are graph neural networks", "venue": "The Gradient,", "year": 2020 }, { "authors": [ "Michael Kampffmeyer", "Yinbo Chen", "Xiaodan Liang", "Hao Wang", "Yujia Zhang", "Eric P Xing" ], "title": "Rethinking knowledge graph propagation for zero-shot learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Anjishnu Kumar", "Pavankumar Reddy Muddireddy", "Markus Dreyer", "Björn Hoffmeister" ], "title": "Zero-shot learning across heterogeneous overlapping domains", "venue": "Proc. Interspeech 2017,", "year": 2017 }, { "authors": [ "Vinay Kumar Verma", "Gundeep Arora", "Ashish Mishra", "Piyush Rai" ], "title": "Generalized zero-shot learning via synthesized examples", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Christoph H. 
Lampert", "Hannes Nickisch", "Stefan Harmeling" ], "title": "Attribute-based classification for zero-shot visual object categorization", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI),", "year": 2014 }, { "authors": [ "Jingjing Li", "Mengmeng Jing", "Ke Lu", "Zhengming Ding", "Lei Zhu", "Zi Huang" ], "title": "Leveraging the invariant side of generative zero-shot learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Bill Yuchen Lin", "Xinyue Chen", "Jamin Chen", "Xiang Ren" ], "title": "Kagnet: Knowledge-aware graph networks for commonsense reasoning", "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Xiao Ling", "Daniel S Weld" ], "title": "Fine-grained entity recognition", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2012 }, { "authors": [ "Han Liu", "Xiaotong Zhang", "Lu Fan", "Xuandi Fu", "Qimai Li", "Xiao-Ming Wu", "Albert YS Lam" ], "title": "Reconstructing capsule networks for zero-shot intent classification", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Hugo Liu", "Push Singh" ], "title": "Conceptnet—a practical commonsense reasoning tool-kit", "venue": "BT technology journal,", "year": 2004 }, { "authors": [ "Lu Liu", "Tianyi Zhou", "Guodong Long", "Jing Jiang", "Chengqi Zhang" ], "title": "Attribute propagation network for graph zero-shot learning", "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Yang Liu", "Jishun Guo", "Deng Cai", "Xiaofei He" ], "title": "Attribute attention for semantic disambiguation in zero-shot learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Yukun Ma", "Erik Cambria", "Sa Gao" ], "title": "Label embedding for zero-shot fine-grained named entity typing", "venue": "In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers,", "year": 2016 }, { "authors": [ "Sébastien Marcel", "Yann Rodriguez" ], "title": "Torchvision: The machine-vision package of torch", "venue": "In International Conference on Multimedia,", "year": 2010 }, { "authors": [ "Diego Marcheggiani", "Ivan Titov" ], "title": "Encoding sentences with graph convolutional networks for semantic role labeling", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2017 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2013 }, { "authors": [ "Ashish Mishra", "Shiva Krishna Reddy", "Anurag Mittal", "Hema A Murthy" ], "title": "A generative model for zero shot learning using conditional variational autoencoders", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2018 }, { "authors": [ "Ryan L Murphy", "Balasubramaniam Srinivasan", "Vinayak Rao", "Bruno Ribeiro" ], "title": "Janossy pooling: 
Learning deep permutation-invariant functions for variable-size inputs", "venue": "International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Jinseok Nam", "Eneldo Loza Mencı́a", "Johannes Fürnkranz" ], "title": "All-in text: Learning document, label, and word representations jointly", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2016 }, { "authors": [ "Rasha Obeidat", "Xiaoli Fern", "Hamed Shahbazi", "Prasad Tadepalli" ], "title": "Description-based zero-shot fine-grained entity typing", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2019 }, { "authors": [ "Nikolaos Pappas", "James Henderson" ], "title": "Gile: A generalized input-label embedding for text classification", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Xiang Ren", "Wenqi He", "Meng Qu", "Lifu Huang", "Heng Ji", "Jiawei Han" ], "title": "Afet: Automatic fine-grained entity typing by hierarchical partial-label embedding", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Bernardino Romera-Paredes", "Philip Torr" ], "title": "An embarrassingly simple approach to zero-shot learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Sara Sabour", "Nicholas Frosst", "Geoffrey E Hinton" ], "title": "Dynamic routing between capsules", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Michael Schlichtkrull", "Thomas N Kipf", "Peter Bloem", "Rianne Van Den Berg", "Ivan Titov", "Max Welling" ], "title": "Modeling relational data with graph convolutional networks", "venue": "In European Semantic Web Conference,", "year": 2018 }, { "authors": [ "Edgar Schonfeld", "Sayna Ebrahimi", "Samarth Sinha", "Trevor Darrell", "Zeynep Akata" ], "title": "Generalized zero-and few-shot learning via aligned variational autoencoders", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Chao Shang", "Yun Tang", "Jing Huang", "Jinbo Bi", "Xiaodong He", "Bowen Zhou" ], "title": "End-to-end structureaware convolutional networks for knowledge base completion", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Sonse Shimaoka", "Pontus Stenetorp", "Kentaro Inui", "Sebastian Riedel" ], "title": "Neural architectures for fine-grained entity type classification", "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume", "year": 2017 }, { "authors": [ "Richard Socher", "Milind Ganjoo", "Christopher D Manning", "Andrew Ng" ], "title": "Zero-shot learning through cross-modal transfer", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2013 }, { "authors": [ "Robyn Speer", "Joshua Chin", "Catherine Havasi" ], "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", 
"year": 2017 }, { "authors": [ "Niket Tandon", "Gerard De Melo", "Gerhard Weikum" ], "title": "Webchild 2.0: Fine-grained commonsense knowledge distillation", "venue": "In Proceedings of ACL 2017, System Demonstrations,", "year": 2017 }, { "authors": [ "Shikhar Vashishth", "Soumya Sanyal", "Vikram Nitin", "Partha Talukdar" ], "title": "Composition-based multirelational graph convolutional networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Vinay Kumar Verma", "Dhanajit Brahma", "Piyush Rai" ], "title": "A meta-learning framework for generalized zero-shot learning", "venue": "AAAI Conference on Artificial Intelligence (AAAI),", "year": 2020 }, { "authors": [ "Wei Wang", "Vincent W Zheng", "Han Yu", "Chunyan Miao" ], "title": "A survey of zero-shot learning: Settings, methods, and applications", "venue": "ACM Transactions on Intelligent Systems and Technology (TIST),", "year": 2019 }, { "authors": [ "Xiaolong Wang", "Yufei Ye", "Abhinav Gupta" ], "title": "Zero-shot recognition via semantic embeddings and knowledge graphs", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Ralph Weischedel", "Ada Brunstein" ], "title": "Bbn pronoun coreference and entity type corpus", "venue": "Linguistic Data Consortium, Philadelphia,", "year": 2005 }, { "authors": [ "Felix Wu", "Tianyi Zhang", "Amauri Holanda de Souza Jr.", "Christopher Fifty", "Tao Yu", "Kilian Q Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "S Yu Philip" ], "title": "A comprehensive survey on graph neural networks", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Congying Xia", "Chenwei Zhang", "Xiaohui Yan", "Yi Chang", "Philip S Yu" ], "title": "Zero-shot user intent detection via capsule neural networks", "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Yongqin Xian", "Zeynep Akata", "Gaurav Sharma", "Quynh Nguyen", "Matthias Hein", "Bernt Schiele" ], "title": "Latent embeddings for zero-shot classification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Yongqin Xian", "Christoph H Lampert", "Bernt Schiele", "Zeynep Akata" ], "title": "Zero-shot learning—a comprehensive evaluation of the good, the bad and the ugly", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI),", "year": 2018 }, { "authors": [ "Yongqin Xian", "Tobias Lorenz", "Bernt Schiele", "Zeynep Akata" ], "title": "Feature generating networks for zero-shot learning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Guo-Sen Xie", "Li Liu", "Xiaobo Jin", "Fan Zhu", "Zheng Zhang", "Jie Qin", "Yazhou Yao", "Ling Shao" ], "title": "Attentive region embedding network for zero-shot learning", "venue": "In Proceedings of the IEEE Conference on Computer 
Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Wenhan Xiong", "Jiawei Wu", "Deren Lei", "Mo Yu", "Shiyu Chang", "Xiaoxiao Guo", "William Yang Wang" ], "title": "Imposing label-relational inductive bias for extremely fine-grained entity typing. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (NAACL), 2019", "venue": null, "year": 2019 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Wenjia Xu", "Yongqin Xian", "Jiuniu Wang", "B. Schiele", "Zeynep Akata" ], "title": "Attribute prototype network for zero-shot learning", "venue": "ArXiv, abs/2008.08290,", "year": 2020 }, { "authors": [ "Liang Yao", "Chengsheng Mao", "Yuan Luo" ], "title": "Graph convolutional networks for text classification", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Wenpeng Yin", "Jamaal Hay", "Dan Roth" ], "title": "Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural networks for web-scale recommender systems", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Dani Yogatama", "Daniel Gillick", "Nevena Lazic" ], "title": "Embedding methods for fine grained entity type classification", "venue": "In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "year": 2015 }, { "authors": [ "Y. Yu", "Z. Ji", "J. Han", "Z. 
Zhang" ], "title": "Episode-based prototype generating network for zero-shot learning", "venue": "In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Zheng Yuan", "Doug Downey" ], "title": "Otyper: A neural architecture for open named entity typing", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2018 }, { "authors": [ "Seongjun Yun", "Minbyul Jeong", "Raehyun Kim", "Jaewoo Kang", "Hyunwoo J Kim" ], "title": "Graph transformer networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Chenrui Zhang", "Xiaoqing Lyu", "Zhi Tang" ], "title": "Tgg: Transferable graph generation for zero-shot and few-shot learning", "venue": "In Proceedings of the 27th ACM International Conference on Multimedia,", "year": 2019 }, { "authors": [ "Hongguang Zhang", "Piotr Koniusz" ], "title": "Zero-shot kernel learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Hongming Zhang", "Daniel Khashabi", "Yangqiu Song", "Dan Roth" ], "title": "Transomcs: From linguistic graphs to commonsense knowledge", "venue": "Proceedings of International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2020 }, { "authors": [ "Jingqing Zhang", "Piyawat Lertvittayakumjorn", "Yike Guo" ], "title": "Integrating semantic knowledge to tackle zero-shot text classification", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association", "year": 2019 }, { "authors": [ "Li Zhang", "Tao Xiang", "Shaogang Gong" ], "title": "Learning a deep embedding model for zero-shot learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Ben Zhou", "Daniel Khashabi", "Chen-Tse Tsai", "Dan Roth" ], "title": "Zero-shot open entity typing as typecompatible grounding", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "Yizhe Zhu", "Jianwen Xie", "Zhiqiang Tang", "Xi Peng", "Ahmed Elgammal" ], "title": "Semantic-guided multiattention localization for zero-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ren" ], "title": "prediction, p times the same output is predicted. B DATASET DETAILS Here, we provide details about the datasets used in our experiments. Table 6 shows the statistics about the datasets. We obtain the OntoNotes and BBN dataset", "venue": null, "year": 2016 }, { "authors": [ "Farhadi" ], "title": "15px padding on all sides and crop them. C EXPERIMENTAL SETUP C.1 GRAPH NEURAL NETWORK SETUP GCNZ (Wang et al., 2018) uses symmetrically normalized graph Laplacian to generate the class representations", "venue": "SGCN (Kampffmeyer et al.,", "year": 2019 }, { "authors": [ "Adam (Kingma", "Ba" ], "title": "2015) to train our parameters with a learning rate of 0.001. For intent classification, we experiment with a weight decay of 1e-05 and 5e-05. We found that weight decay of 5e-05 gives the best performance overall in intent classification for all the baseline graph aggregators. In intent classification, ZSL-KG use weight decay of 1e-05. We add a weight decay of 5e-05 for the OntoNotes experiments. Finally, all experiments in zero-shot object classification have a weight", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks require large amounts of labeled training data to achieve optimal performance. This is a severe bottleneck, as obtaining large amounts of hand-labeled data is an expensive process. Zero-shot learning is a training strategy which allows a machine learning model to predict novel classes without the need for any labeled examples for the new classes (Romera-Paredes & Torr, 2015; Socher et al., 2013; Wang et al., 2019). Zero-shot models learn parameters for seen classes along with their class representations. During inference, new class representations are provided for the unseen classes. Previous zero-shot learning systems have used hand-engineered attributes (Akata et al., 2015; Farhadi et al., 2009; Lampert et al., 2014), pretrained embeddings (Frome et al., 2013) and learned embeddings (e.g. sentence embeddings) (Xian et al., 2016) as class representations.\nClass representations in a zero-shot learning framework should satisfy the following properties: (1) they should adapt to unseen classes without requiring additional human effort, (2) they should provide rich features such that the unseen classes have sufficient distinguishing characteristics among themselves, (3) they should be applicable to a range of downstream tasks. Previous approaches for class representations have various limitations. On one end of the spectrum, attribute-based methods provide rich features but the attributes have to be fixed ahead of time for the unseen classes. On the other end of the spectrum, pretrained embeddings such as GloVe (Pennington et al., 2014) and Word2Vec (Mikolov et al., 2013) offer the flexibility of easily adapting to new classes but rely on unsupervised training on large corpora—which may not provide distinguishing characteristics necessary for zero-shot learning. Many methods lie within the spectrum and learn class representations for zero-shot tasks from descriptions such as attributes, text, and image prototypes. Existing approaches that have achieved state-of-the-art performance make task-specific adjustments and cannot exactly be adapted to tasks in different domains (Liu et al., 2019a; Verma et al., 2020). Methods using graph neural networks on the ImageNet graph to learn class representations have achieved strong performance on zero-shot object classification (Kampffmeyer et al., 2019; Wang et al., 2018). These methods are general-purpose, since we show that they can be adapted to other tasks as well. However, the ImageNet graph may not provide rich features suitable for a wide range of downstream tasks.\nIn our work, we propose to learn class representations from common sense knowledge graphs. Common sense knowledge graphs (Liu & Singh, 2004; Speer et al., 2017; Tandon et al., 2017; Zhang et al., 2020) are an untapped source of explicit high-level knowledge that requires little human effort to apply to a range of tasks. These graphs have explicit edges between related concept nodes and provide valuable information to distinguish between different concepts. However, adapting existing zero-shot learning frameworks to learn class representations from common sense knowledge graphs is challenging in several ways. GCNZ (Wang et al., 2018) learns graph neural networks with a symmetrically normalized graph Laplacian, which not only requires the entire graph structure during training but also needs retraining if the graph structure changes, i.e., GCNZ is not inductive. 
Common sense knowledge graphs can be large (2 million to 21 million edges) and training a graph neural network on the entire graph can be prohibitively expensive. DGP (Kampffmeyer et al., 2019) is an inductive method and aims to generate expressive class representations but assumes a directed acyclic graph such as WordNet. Common sense knowledge graphs do not have a directed acyclic graph structure.\nTo address these limitations, we propose ZSL-KG, a general-purpose framework with a novel transformer graph convolutional network (TrGCN) to learn class representations. Graph neural networks learn to represent the structure of graphs by aggregating information from each node’s neighbourhood. Aggregation techniques used in GCNZ, DGP, and most other graph neural network approaches are linear, in the sense that they take a (possibly weighted) mean or maximum of the neighbourhood features. To capture the complex information in the common sense knowledge graph, TrGCN learns a transformer-based aggregator to compute the non-linear combination of the node neighbours. A few prior works have considered LSTM-based aggregators (Hamilton et al., 2017a) as a way to increase the expressivity of graph neural networks, but their outputs can be sensitive to the ordering imposed on the nodes in each neighborhood. For example, on the Animals with Attributes 2 dataset, we find that when given the same test image 10 times with different neighborhood orderings, an LSTM-based graph neural network outputs inconsistent predictions 16% of the time (Appendix A). One recent work considers trying to make LSTMs less sensitive by averaging the outputs over permutations, but this significantly increases the computational cost and provides only a small boost to prediction accuracy (Murphy et al., 2019). In contrast, TrGCN learns a transformer-based aggregator, which is non-linear and naturally permutation invariant. Additionally, our framework is inductive, i.e., the graph neural network can be executed on graphs that are different from the training graph, which is necessary for inductive zero-shot learning under which the test classes are unknown during training.\nWe demonstrate the effectiveness of our framework on three zero-shot learning tasks in vision and language: object classification, intent classification, and fine-grained entity typing. We report new state-of-the-art accuracies on six zero-shot benchmark datasets (Xian et al., 2018a; Farhadi et al., 2009; Deng et al., 2009; Coucke et al., 2018; Gillick et al., 2014; Weischedel & Brunstein, 2005). ZSL-KG outperforms the state-of-the-art specialized method for each task by an average 1.7 accuracy points. ZSL-KG also outperforms GCNZ, the best general-purpose method on average by 5.3 accuracy points. Our ablation study on ZSL-KG with alternate graph neural networks shows that our transformer-based aggregator adds up to 2.8 accuracy points improvement on these tasks.\nIn summary, our main contributions are the following:\n1. We propose to learn class representations from common sense knowledge graphs for zeroshot learning. 2. We present ZSL-KG, a general-purpose framework based on graph neural networks with a novel transformer-based architecture. Our proposed architecture learns non-linear combination of the nodes neighbourhood and generates expressive class representations. 3. 
ZSL-KG achieves new state-of-the-art accuracies on Animals with Attributes 2 (Xian et al., 2018a), aPY (Farhadi et al., 2009), ImageNet (Deng et al., 2009), SNIPS-NLU (Coucke et al., 2018), Ontonotes (Gillick et al., 2014), and BBN (Weischedel & Brunstein, 2005)." }, { "heading": "2 BACKGROUND", "text": "In this section, we summarize zero-shot learning and graph neural networks." }, { "heading": "2.1 ZERO-SHOT LEARNING", "text": "Zero-shot learning has several variations (Wang et al., 2019; Xian et al., 2018a). Our work focuses on inductive zero-shot learning, under which we do not have access to the unseen classes during training. We train a zero-shot classifier by optimizing over the seen classes. But, unlike traditional methods, zero-shot classifiers are trained along with class representations such as attributes, pretrained embeddings, etc.\nRecent approaches learn a class encoder φ(y) ∈ Rd to produce vector-valued class representations from an initial input, such as a string or other identifier of the class. (In our case, y is a node in a graph and its k-hop neighborhood.) During inference, the class representations are used to label examples with the unseen classes by passing the examples through an example encoder θ(x) ∈ Rd and predicting the class whose representation has the highest inner product with the example representation.\nRecent work in zero-shot learning commonly uses one of two approaches to learn the class encoder φ(y). One approach uses a bilinear similarity function defined by a compatibility matrix W ∈ Rd×d (Frome et al., 2013; Xian et al., 2018a):\nf (θ(x),W , φ(y)) = θ(x)TWφ(y) . (1) The bilinear similarity function gives a score for each example-class pair. The parameters of θ, W , and φ are learned by taking a softmax over f for all possible seen classes y ∈ YS and minimizing either the cross entropy loss or a ranking loss with respect to the true labels. In other words, f should give a higher score for the correct class(es) and lower scores for the incorrect classes. W is often constrained to be low rank, to reduce the number of learnable parameters (Obeidat et al., 2019; Yogatama et al., 2015). Lastly, other variants of the similarity function add minor variations such as non-linearities between factors of W (Socher et al., 2013; Xian et al., 2016).\nThe other common approach is to first train a neural network classifier in a supervised fashion. The final fully connected layer of this network has a vector representation for each seen class, and the remaining layers are used as the example encoder θ(x). Then, the class encoder φ(y) is trained by minimizing the L2 loss between the representations from supervised learning and φ(y) (Kampffmeyer et al., 2019; Socher et al., 2013; Wang et al., 2018).\nThe class encoder that we propose in Section 3 can be plugged into either approach." }, { "heading": "2.2 GRAPH NEURAL NETWORKS", "text": "The basic idea behind graph neural networks is to learn node embeddings that reflect the structure of the graph (Hamilton et al., 2017b). Consider the graph G = (V,E,R), where V is the set of vertices with node features Xv and (vi, r, vj) ∈ E are the labeled edges and r ∈ R are the relation types. Graph neural networks learn node embeddings by iterative aggregation of the k-hop neighbourhood. 
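Before continuing with graph neural networks, the first training approach of Section 2.1 can be made concrete with a short sketch: the bilinear compatibility score of Eq. (1) trained with softmax cross-entropy over the seen classes. This assumes PyTorch; the dimensions and the low-rank factorization W = BA are illustrative choices, not the paper's exact implementation.

import torch
import torch.nn.functional as F

d, h, num_seen, batch = 300, 20, 40, 8
A = torch.randn(h, d, requires_grad=True)   # low-rank factors, W = B @ A
B = torch.randn(d, h, requires_grad=True)

def bilinear_scores(x_enc, class_reps):
    # Eq. (1): f(theta(x), W, phi(y)) = theta(x)^T W phi(y), for every class.
    W = B @ A                                # (d, d) compatibility matrix
    return x_enc @ W @ class_reps.t()        # (batch, num_seen)

x_enc = torch.randn(batch, d)                # theta(x): example representations
class_reps = torch.randn(num_seen, d)        # phi(y): seen-class representations
labels = torch.randint(num_seen, (batch,))
loss = F.cross_entropy(bilinear_scores(x_enc, class_reps), labels)
loss.backward()  # trains A and B (and, in practice, the encoders as well)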
Each layer of a graph neural network has two main components, AGGREGATE and COMBINE (Xu et al., 2019):

$$a_v^{(l)} = \mathrm{AGGREGATE}^{(l)}\left(\left\{ h_u^{(l-1)} \;\; \forall u \in \mathcal{N}(v) \right\}\right) \tag{2}$$

where $a_v^{(l)} \in \mathbb{R}^{d_{l-1}}$ is the aggregated node feature of the neighbourhood, and $h_u^{(l-1)}$ is the node feature in the neighbourhood $\mathcal{N}(\cdot)$ of node $v$, including a self loop. The aggregated node feature is passed to COMBINE to generate the node representation $h_v^{(l)} \in \mathbb{R}^{d_l}$ for the $l$-th layer:

$$h_v^{(l)} = \mathrm{COMBINE}^{(l)}\left( h_v^{(l-1)}, a_v^{(l)} \right) \tag{3}$$

$h_v^{(0)} = x_v$, where $x_v$ is the initial feature vector for the node. Previous works on graph neural networks for zero-shot learning have used GloVe (Pennington et al., 2014) to represent the initial features (Kampffmeyer et al., 2019; Wang et al., 2018)." }, { "heading": "3 THE ZSL-KG FRAMEWORK", "text": "Here we introduce ZSL-KG: a general-purpose framework with a novel Transformer Graph Convolutional Network to learn class representations from common sense knowledge graphs. Figure 1 shows the ZSL-KG architecture with an example elephant concept." }, { "heading": "3.1 COMMON SENSE KNOWLEDGE GRAPHS FOR ZERO-SHOT LEARNING.", "text": "Common sense knowledge graphs organize high-level knowledge implicit to humans in a graph. The nodes in the graph are concepts associated with each other via edges. These associations in the graph offer a rich source of information, which makes them applicable to a wide range of tasks. Publicly available common sense knowledge graphs range roughly from 100,000 to 8 million nodes and 2 million to 21 million edges (Speer et al., 2017; Zhang et al., 2020). To learn class representations from common sense knowledge graphs, we look to graph neural networks.\nExisting zero-shot learning methods such as GCNZ (Wang et al., 2018) and DGP (Kampffmeyer et al., 2019) that learn class representations from structured knowledge are applicable only to small graphs such as ImageNet or WordNet, as they make restrictive assumptions. GCNZ requires the entire graph structure during training, and learns class representations from the ImageNet graph, which is significantly smaller than common sense knowledge graphs. For instance, the ImageNet graph used in GCNZ has about 32,000 nodes and 65,000 edges. DGP learns a more expressive class representation but requires a directed acyclic graph or parent-child relationships in the graph. We are not restricted to parent-child relationships in a common sense knowledge graph." }, { "heading": "3.2 TRANSFORMER GRAPH CONVOLUTIONAL NETWORK.", "text": "To overcome these limitations, we propose to learn class representations with a novel graph neural network: transformer graph convolutional networks (TrGCN). Transformers (Vaswani et al., 2017) are non-linear modules typically used for machine translation and language modeling tasks. They achieve a non-linear combination of the input sequences using two-layer feedforward neural networks and scaled dot product attention. We exploit this property to learn a non-linear aggregator that captures the complex structure of a common sense knowledge graph. Finally, transformers are not tied to the graph structure, which makes them well suited for inductive zero-shot learning.\nHere we describe TrGCN. We pass the neighbourhood node features $h_u^{(l-1)}$ through a two-layer feedforward neural network with a ReLU activation between the layers.
The previous features are added to the output features with a skip connection, followed by layer normalization (Ba et al., 2016):

$$h'^{(l-1)}_u = \mathrm{LayerNorm}\left( W^{(l)}_{fh}\left[ \mathrm{ReLU}\left( W^{(l)}_{hf} h^{(l-1)}_u \right) \right] + h^{(l-1)}_u \right) \;\; \forall u \in \mathcal{N}(v) \tag{4}$$

where $W^{(l)}_{hf} \in \mathbb{R}^{d^{(l-1)} \times d^{(f)}}$ and $W^{(l)}_{fh} \in \mathbb{R}^{d^{(f)} \times d^{(l-1)}}$ are learnable weight matrices for the feedforward neural network. The non-linear neighbourhood features are then passed through the scaled dot product attention layer to compute the weighted combination of the features for each query node:

$$\left\{ z^{(l)}_u \;\; \forall u \in \mathcal{N}(v) \right\} = \mathrm{softmax}\left( \frac{QK^T}{\sqrt{d^{(p)}}} \right) V \tag{5}$$

where $Q = W^{(l)}_q h'^{(l-1)}_u$ is the set of all neighbourhood query vectors, $K = W^{(l)}_k h'^{(l-1)}_u$ is the set of all key vectors, $V = W^{(l)}_v h'^{(l-1)}_u$ is the set of value vectors, and $W_q \in \mathbb{R}^{d^{(l-1)} \times d^{(p)}}$, $W_k \in \mathbb{R}^{d^{(l-1)} \times d^{(p)}}$, $W_v \in \mathbb{R}^{d^{(l-1)} \times d^{(p)}}$ are learnable weight matrices with projection dimension $d^{(p)}$. The output features from the attention layer are projected with another linear layer and added to the previous features with a skip connection, followed by layer normalization:

$$\left\{ z'^{(l)}_u \;\; \forall u \in \mathcal{N}(v) \right\} = \mathrm{LayerNorm}\left( W^{(l)}_z z^{(l)}_u + h'^{(l-1)}_u \right) \tag{6}$$

where $W^{(l)}_z \in \mathbb{R}^{d^{(p)} \times d^{(l-1)}}$ is a learnable weight matrix.

To get the aggregated vector $a^{(l)}_v$ for node $v$, we pass the output vectors $\{ z'^{(l)}_u \; \forall u \in \mathcal{N}(v) \}$ from the transformer through a permutation-invariant pooling function $\mu(\cdot)$ such as mean-pooling. The aggregated vector is passed through a linear layer followed by a non-linearity $\sigma(\cdot)$ such as ReLU or LeakyReLU:

$$a^{(l)}_v = \mu\left(\left\{ z'^{(l)}_u \;\; \forall u \in \mathcal{N}(v) \right\}\right) \qquad h^{(l)}_v = \sigma\left( W^{(l)} \cdot a^{(l)}_v \right) \tag{7}$$

where $W^{(l)} \in \mathbb{R}^{d^{(l-1)} \times d^{(l)}}$ is a learnable weight matrix. Existing work has drawn parallels between transformers and graph attention networks (GAT), suggesting that they are equivalent (Joshi, 2020). GAT (Veličković et al., 2018) computes the aggregated vector by taking a linear combination of the node features in the neighbourhood. In contrast, TrGCN learns a transformer-based aggregator and computes a non-linear combination of the node features in the neighbourhood, followed by a pooling function to get the aggregated vector, which leads to the difference in architecture.\nNeighbourhood Sampling. In our experiments, we use ConceptNet (Speer et al., 2017) as our common sense knowledge graph, but our approach applies to other knowledge graphs. ConceptNet has high node degree, which poses a challenge to training the graph neural network. To solve this problem, we explored numerous neighbourhood sampling strategies. Existing work on sampling neighbourhoods includes random sampling (Hamilton et al., 2017a), importance sampling (Chen et al., 2018a), random walks (Ying et al., 2018), etc. Similar to PinSage (Ying et al., 2018), we simulate random walks for the nodes in the graph and assign hitting probabilities to the neighbourhood nodes. During training and testing of the graph neural network, we select the top N nodes from the neighbourhood based on their hitting probability." }, { "heading": "4 TASKS AND RESULTS", "text": "We evaluate our framework on three zero-shot learning tasks: object classification, intent classification, and fine-grained entity typing. In all our experiments, we compare ZSL-KG with the state-of-the-art specialized methods for the task as well as general-purpose methods: GCNZ, SGCN, and DGP. The code and hyperparameters are included in the supplementary material, which will be released upon acceptance.\nSetup. We use two-layer graph neural networks for the general-purpose baselines and ZSL-KG.
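To make Eqs. (4)-(7) concrete, the following is a minimal single-layer sketch of the TrGCN aggregator, such as would be stacked twice to form the two-layer class encoders mentioned above. It is one possible reading of the equations rather than the released implementation: PyTorch, single-head attention, mean pooling, and the dimension names are all assumptions here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TrGCNLayer(nn.Module):
    def __init__(self, d_in, d_out, d_ff, d_p):
        super().__init__()
        # Eq. (4): two-layer feedforward net (W_hf, W_fh) with skip + LayerNorm
        self.ff = nn.Sequential(nn.Linear(d_in, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_in))
        self.ln1 = nn.LayerNorm(d_in)
        # Eq. (5): scaled dot-product attention projections (W_q, W_k, W_v)
        self.q = nn.Linear(d_in, d_p, bias=False)
        self.k = nn.Linear(d_in, d_p, bias=False)
        self.v = nn.Linear(d_in, d_p, bias=False)
        # Eq. (6): output projection W_z with skip + LayerNorm
        self.z = nn.Linear(d_p, d_in, bias=False)
        self.ln2 = nn.LayerNorm(d_in)
        # Eq. (7): linear layer W^(l) applied after pooling
        self.out = nn.Linear(d_in, d_out)

    def forward(self, h_neigh):                 # h_neigh: (num_neighbours, d_in)
        h = self.ln1(self.ff(h_neigh) + h_neigh)
        q, k, v = self.q(h), self.k(h), self.v(h)
        attn = F.softmax(q @ k.t() / (q.shape[-1] ** 0.5), dim=-1)
        z = self.ln2(self.z(attn @ v) + h)
        a = z.mean(dim=0)                       # permutation-invariant pooling mu(.)
        return F.relu(self.out(a))              # node representation h_v^(l)

layer = TrGCNLayer(d_in=300, d_out=128, d_ff=512, d_p=64)
h_v = layer(torch.randn(50, 300))               # e.g., 50 sampled neighbours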
We use the code from Kampffmeyer et al. (2019) and adapt GCNZ (Wang et al., 2018), SGCN, and DGP (Kampffmeyer et al., 2019) for zero-shot learning tasks in language (see Appendix C.1). In all our experiments with ZSL-KG, we map each class to a node in ConceptNet 5.7 (Speer et al., 2017) and query its 2-hop neighbourhood. We simulate random walks for each node in the ConceptNet graph and compute the hitting probability for the nodes in the neighbourhood. Then, we sample 50 and 100 node neighbours with the highest hitting probabilities for the first and the second hop, respectively. See Appendix C.2 for more details." }, { "heading": "4.1 OBJECT CLASSIFICATION", "text": "Object classification is a computer vision task of categorizing objects into different class categories.\nDatasets. We evaluate on the Animals with Attributes 2 (AWA2) (Xian et al., 2018a), attribute Pascal Yahoo (aPY) (Farhadi et al., 2009), and ImageNet (Deng et al., 2009) datasets. AWA2 contains images of animals with 40 classes in the train set and 10 classes in the test set. aPY contains images of objects with 20 classes in the train set and 12 classes in the test set. ImageNet contains images from 1000 classes in the train set and 21K classes in the test set.

Table 1: Results for object classification on the AWA2 and aPY datasets. We report the average class-balanced accuracy of the models over 5 random seeds and the standard error. The results for SP-AEN, LisGAN, and ZSML are obtained from Verma et al. (2020).

Method  | AWA2 Accuracy | aPY Accuracy
SP-AEN  | 59.7          | 37.0
LisGAN  | 55.8          | 36.8
ZSML    | 77.5          | 64.0
GCNZ    | 77.00 ± 2.05  | 56.02 ± 0.63
SGCN    | 77.76 ± 1.60  | 54.98 ± 0.66
DGP     | 76.26 ± 1.69  | 52.22 ± 0.49
ZSL-KG  | 78.08 ± 0.84  | 64.36 ± 0.59

Table 2: Results for intent classification on the SNIPS-NLU dataset. We report the average accuracy of the models over 5 random seeds and the standard error. The results for Zero-shot DNN, IntentCapsNet, and ReCapsNet-ZS are obtained from Liu et al. (2019a).

Method        | SNIPS-NLU Accuracy
Zero-shot DNN | 71.16
IntentCapsNet | 77.52
ReCapsNet-ZS  | 79.96
GCNZ          | 82.47 ± 03.09
SGCN          | 50.27 ± 14.13
DGP           | 64.41 ± 12.87
ZSL-KG        | 88.98 ± 01.22

Experiment. Following prior work (Kampffmeyer et al., 2019; Wang et al., 2018), here we use the L2 loss architecture for zero-shot learning. The example encoder and seen class representations come from the ResNet 101 model (He et al., 2016) in Torchvision (Marcel & Rodriguez, 2010) pretrained on ILSVRC 2012 (Russakovsky et al., 2015). We map the ILSVRC 2012 training and validation classes, and the AWA2 and aPY test classes, to ConceptNet. The model is trained for 1000 epochs on 950 random classes, and the remaining 50 ILSVRC 2012 classes are used for validation. We use the same setting for GCNZ, SGCN, and DGP using the authors' implementation. The model with the least loss on the validation classes is used to make predictions on the test classes. For the ImageNet experiment, we train the model for 3000 epochs on 1000 classes from ILSVRC 2012 with the common sense knowledge graph and switch to the ImageNet graph at test time. Similar to DGP, we freeze the final layer with the generated class representations and fine-tune the ResNet backbone on the ILSVRC images for 15 epochs using SGD with a learning rate of 0.0001 and momentum of 0.9. Following prior work, we report the class-balanced accuracy (Xian et al., 2018a) on unseen classes for AWA2 and aPY. We follow the train/test split from Frome et al. (2013), and evaluate ZSL-KG on two levels of difficulty: 2-hops and 3-hops.
The hops refer to the distance of the classes from the ILSVRC train classes. We evaluate ZSL-KG on two settings: zero-shot learning (ZSL), where only unseen classes are present, and generalized zero-shot learning (GZSL), where both seen and unseen classes are present. Following previous work (Kampffmeyer et al., 2019) on ImageNet evaluation, we report the class-balanced top-K accuracy.\nWe compare ZSL-KG against state-of-the-art specialized methods for AWA2 and aPY: SP-AEN (Chen et al., 2018b), LisGAN (Li et al., 2019), and ZSML (Verma et al., 2020). ZSML is a GAN-based approach and has reported the highest results on AWA2 and aPY, while DGP and SGCN have reported the highest results on ImageNet.\nResults. Table 1 shows the results for zero-shot object classification. ZSL-KG outperforms existing state-of-the-art methods on the AWA2 and aPY datasets. The general-purpose methods show a significant drop in accuracy from AWA2 to aPY, whereas our method consistently achieves the highest accuracy on both datasets. This suggests that the class representations generated from a richer graph help performance. In contrast to specialized methods, ZSL-KG trains only one model and does not require any specialized training datasets to achieve state-of-the-art performance on both datasets. Finally, our experiments with ImageNet show that ZSL-KG, despite being trained on a noisier graph, reports the state-of-the-art on zero-shot learning and generalized zero-shot learning. ZSL-KG offers accuracy improvements of up to 2.3 points and relative improvements of up to 20% over the state-of-the-art methods." }, { "heading": "4.2 INTENT CLASSIFICATION", "text": "To assess ZSL-KG's versatility, we experiment on zero-shot intent classification. Intent classification is a text classification task of identifying users' intents expressed in chatbots and personal voice assistants.\nDataset. We evaluate on the main open-source benchmark for intent classification: SNIPS-NLU (Coucke et al., 2018). The dataset was collected using crowdsourcing to benchmark the performance of voice assistants. The training set has 5 seen classes, which we split into 3 train classes and 2 development classes.\nExperiment. Zero-shot intent classification is a multi-class classification task. The example encoder used in our experiments is a biLSTM with attention (Appendix F). We train the model for 10 epochs by minimizing the cross-entropy loss and pick the model with the least loss on the development set. We measure accuracy on the test classes.\nWe compare ZSL-KG against existing specialized state-of-the-art methods in the literature for zero-shot intent classification: Zero-shot DNN (Kumar et al., 2017), IntentCapsNet (Xia et al., 2018), and ReCapsNet-ZS (Liu et al., 2019a). IntentCapsNet and ReCapsNet-ZS are CapsuleNet-based (Sabour et al., 2017) approaches and have reported the best performance on the task.\nResults. Table 2 shows the results. ZSL-KG significantly outperforms the existing approaches and improves the state-of-the-art accuracy to 88.98%. The general-purpose methods have mixed performance on intent classification, which suggests that ZSL-KG works well on a broader range of tasks." }, { "heading": "4.3 FINE-GRAINED ENTITY TYPING", "text": "To test ZSL-KG's ability to classify fine-grained types, we experiment on zero-shot fine-grained entity typing. Fine-grained entity typing is the task of classifying named entities into one or more narrowly scoped semantic types.
This task also tests generalized zero-shot learning, since seen classes appear in the test set.\nDatasets. We evaluate on popular fine-grained entity typing datasets: OntoNotes (Gillick et al., 2014) and BBN (Weischedel & Brunstein, 2005).\nWe split each dataset into two: coarse-grained labels and fine-grained labels. Following prior work (Obeidat et al., 2019), we train on the coarse-grained labels and predict on both coarse-grained and fine-grained labels in the test set. Details about the datasets can be found in Appendix B.\nExperiment. Fine-grained entity typing is a zero-shot multi-label classification task because each entity can be associated with more than one type. We reconstructed OTyper (Yuan & Downey, 2018) and DZET (Obeidat et al., 2019), the state-of-the-art specialized methods for this task. Both methods use the AttentiveNER biLSTM (Shimaoka et al., 2017) as the example encoder. See Appendix G for more details.\nWe train each model for 5 epochs by minimizing the cross-entropy loss. During inference, we pass the scores from the bilinear similarity model through a sigmoid and pick the labels that have a probability of 0.5 or greater as our prediction. As is common for this task (Obeidat et al., 2019), we evaluate the performance of our model on strict accuracy, loose micro F1, and loose macro F1 (Appendix H). Strict accuracy penalizes the model for incorrect label predictions and requires the set of predicted labels to exactly match the ground truth, whereas loose micro F1 and loose macro F1 measure whether the correct labels are predicted, possibly among other false positive predictions.\nResults. Table 4 shows the results. ZSL-KG outperforms the existing state-of-the-art specialized methods on strict accuracy for both OntoNotes and BBN. DZET has higher loose micro F1 on both datasets because it overpredicts labels and produces more false positives than ZSL-KG. ZSL-KG has higher precision in its label predictions and therefore higher strict accuracy compared to the other methods. These results demonstrate that our method works well even in a generalized multi-label setting where the test set has multiple seen and unseen labels." }, { "heading": "4.4 COMPARISON OF GRAPH AGGREGATORS", "text": "We conduct an ablation study of different aggregators within our framework. Existing graph neural networks include GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), RGCN (Schlichtkrull et al., 2018), and LSTM-based aggregation (Hamilton et al., 2017a). We provide all the architectural details in Appendix I. We train these models with the same experimental settings for the tasks mentioned in their respective sections.\nResults. Table 5 shows results for our ablation study. Our results show that ZSL-KG almost always outperforms existing graph neural networks with linear aggregators. ZSL-KG-LSTM, which uses an LSTM-based aggregator, shows inconsistent performance across the tasks, indicating that LSTM-based aggregators may be more useful on low-dimensional tasks such as node classification. The relational aggregator (ZSL-KG-RGCN) does not outperform ZSL-KG and may even reduce overall performance (as seen on AWA2 and aPY). Finally, it is worth comparing SGCN and ZSL-KG-GCN, as they use the same linear aggregator to learn the class representations but train on different graphs. We see that ZSL-KG-GCN trained on common sense knowledge graphs adds an average improvement of 8.1 accuracy points across the tasks, suggesting that the choice of graph is crucial for downstream performance.
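As a small illustration of the multi-label inference procedure described in the fine-grained entity typing experiment above (a sigmoid over the bilinear similarity scores, keeping every label with probability at least 0.5), the following sketch assumes PyTorch; the shapes are illustrative only.

import torch

def predict_types(scores, threshold=0.5):
    # Multi-label zero-shot inference: sigmoid over the bilinear scores,
    # keep every label whose probability is >= threshold.
    probs = torch.sigmoid(scores)            # (num_mentions, num_labels)
    return probs >= threshold                # boolean prediction mask

scores = torch.randn(4, 89)                  # e.g., 4 mentions, 89 type labels
mask = predict_types(scores)
predicted = [row.nonzero().flatten().tolist() for row in mask]  # label indices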
}, { "heading": "5 RELATED WORK", "text": "We broadly describe related works on zero-shot learning and graph neural networks.\nZero-Shot Learning. Zero-shot learning has been thoroughly researched in the computer vision community for object classification (Akata et al., 2015; Farhadi et al., 2009; Frome et al., 2013; Lampert et al., 2014; Wang et al., 2019; Xian et al., 2018a). Recent works in zero-shot learning have used graph neural networks for object classification (Kampffmeyer et al., 2019; Wang et al., 2018). In our work, we extend their approach to common sense knowledge graphs to generate class representations. Furthermore, we learn TrGCN, a novel graph neural network with a non-linear aggreagtor to learn the structure of common sense knowledge graphs. Recent work on zero-shot object classification has used attention over image regions to achieve strong results on the task (Zhu et al., 2019; Liu et al., 2019b; Xie et al., 2019; Liu et al., 2020). In contrast, our work focuses on the class encoder (TrGCN), where we learn attention over the graph and could potentially complement methods that focus on image encoders. Recently work on zero-shot learning has focused on applying attention over the image regions. (Zhu et al., 2019; Liu et al., 2019b; Xie et al., 2019; Liu et al., 2020) but they focus on the example encoder, particularly, they apply attention over the image regions. Other notable works use generative methods for generalized zero-shot learning where both seen and unseen classes are evaluated at test time (Kumar Verma et al., 2018; Schonfeld et al., 2019). However, these methods still rely on hand-crafted attributes for classification. Zero-shot learning has been studied in text classification as well (Dauphin et al., 2013; Nam et al., 2016; Pappas & Henderson, 2019; Zhang et al., 2019b; Yin et al., 2019). Previously, ConceptNet has been used for transductive zero-shot text classification as shallow features for class representation (Zhang et al., 2019b). They use ConceptNet to generate a sparse vector which is combined with pretrained embeddings and natural language description to obtain the class representation. On the other hand, we use ConceptNet to generate dense vector representations from a graph neural network and use them as our class representation. Finally, zero-shot fine-grained entity typing has been previously studied with a variety of class representations (Ma et al., 2016; Obeidat et al., 2019; Yuan & Downey, 2018). We note that ZOE (Zhou et al., 2018) is a specialized method for zero-shot fine-grained entity typing and achieves best results on the task. However, they use a subset of the test set that contains both seen and unseen types to tune the threshold parameters which reveals information about the unseen types and makes ZOE a transductive zero-shot learning method.\nGraph Neural Networks. Recent work on graph neural networks has demonstrated significant improvements for several downstream tasks such as node classification and graph classification (Hamilton et al., 2017a;b; Kipf & Welling, 2017; Veličković et al., 2018; Wu et al., 2019). Extensions of graph neural networks to relational graphs have produced significant results in several graph-related tasks (Marcheggiani & Titov, 2017; Schlichtkrull et al., 2018; Shang et al., 2019; Vashishth et al., 2020). Previous work has used transformers with graph neural networks as a method to generate metapaths in the graph rather than as a neighbourhood aggregation technique (Yun et al., 2019). 
A related work, Zhang et al. (2019a), combines common sense knowledge graphs and graph neural networks for zero-shot learning. However, Zhang et al. (2019a) learns to transfer hand-engineered attributes to new nodes in the graph, whereas ZSL-KG learns a graph neural network over the common sense knowledge graph to map nodes to rich class representations without any other human input. Finally, several diverse applications using graph neural networks have been explored: common sense reasoning (Lin et al., 2019), fine-grained entity typing (Xiong et al., 2019), text classification (Yao et al., 2019), reinforcement learning (Adhikari et al., 2020) and neural machine translation (Bastings et al., 2017). For a more in-depth review, we point readers to Wu et al. (2020)." }, { "heading": "6 CONCLUSION", "text": "ZSL-KG is a flexible framework for zero-shot learning with common sense knowledge graphs and can be adapted to a wide range of tasks without requiring additional human effort. Our framework introduces a novel transformer graph convolutional network (TrGCN) which captures the complex associations in the graph to learn class representations. We achieve state-of-the-art performance on six benchmark datasets across three zero-shot tasks. Our work demonstrates that common sense knowledge graphs are a powerful source of high-level knowledge and can benefit a range of tasks." }, { "heading": "A LSTM PREDICTIONS", "text": "LSTMs have been used in graph neural networks as aggregators to generate more expressive node embeddings. However, LSTMs assume an ordering of the inputs, which is not present in a graph neighbourhood. To apply LSTMs to an unordered set of nodes, Hamilton et al. (2017a) randomly permute the nodes in the neighbourhood.\nWe test whether randomly permuting the node neighbours makes the LSTM-based aggregator permutation invariant. We replicate the LSTM-based aggregator to work with the ZSL-KG framework. The model is trained with the setup described in Section 4.1. We run predictions on the Animals with Attributes 2 dataset by computing 10 class representations for each of the classes using the trained LSTM-based aggregator model.\nThe experiments reveal that 1325 out of 7913 images (16.78%) receive multiple distinct predictions for the same image. For images that have multiple predictions, we take the count of the mode prediction and plot the histogram. Figure 2 shows the resulting inconsistency in predictions: a value p on the x-axis is read as, out of every 10 predictions, the same output being predicted p times." }, { "heading": "B DATASET DETAILS", "text": "Here, we provide details about the datasets used in our experiments. Table 6 shows the statistics of the datasets.\nWe obtain the OntoNotes and BBN datasets from Ren et al. (2016). OntoNotes has three levels of types, such as /location, /location/structure, and /location/structure/government, where /location and /location/structure are treated as coarse-grained entity types and /location/structure/government is treated as a fine-grained entity type. Similarly, BBN has two levels of types, and we consider the level-two types as fine-grained types. Furthermore, we process the datasets by removing all the fine-grained labels from the train set. We also remove all the examples from the train set whose coarse-grained entity types are not present in the test set.
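The coarse/fine split of the label space described above can be illustrated with a small sketch that partitions type paths by depth. The helper itself is ours; the depth conventions (two coarse levels for OntoNotes, one for BBN) follow the text.

def type_depth(type_path):
    # '/location/structure/government' -> 3
    return len(type_path.strip("/").split("/"))

def split_types(types, coarse_depth):
    # Coarse-grained: depth <= coarse_depth; fine-grained: everything deeper.
    coarse = [t for t in types if type_depth(t) <= coarse_depth]
    fine = [t for t in types if type_depth(t) > coarse_depth]
    return coarse, fine

onto_types = ["/location", "/location/structure", "/location/structure/government"]
coarse, fine = split_types(onto_types, coarse_depth=2)   # OntoNotes
# For BBN, coarse_depth=1 keeps only the level-one types as coarse-grained.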
Another key point to remember is that the /other type present in the OntoNotes dataset cannot be mapped to a meaningful concept or a Wikipedia article. To overcome this limitation, we learn an embedding for the /other type during training.\nFor the object classification datasets, AWA2 and aPY, we do not require the training examples because we use pretrained weights from ResNet101 to learn the class representations. We crop objects from the aPY test dataset, as multiple objects are present in the same image. To crop the objects, we use the bounding boxes provided in Farhadi et al. (2009), add 15px padding on all sides, and crop them." }, { "heading": "C EXPERIMENTAL SETUP", "text": "C.1 GRAPH NEURAL NETWORK SETUP\nGCNZ (Wang et al., 2018) uses a symmetrically normalized graph Laplacian to generate the class representations. SGCN (Kampffmeyer et al., 2019) uses an asymmetrically normalized graph Laplacian to learn the class representations. Finally, DGP (Kampffmeyer et al., 2019) exploits the hierarchical graph structure and avoids dilution of knowledge from intermediate nodes. It uses a dense graph connectivity scheme with a two-stage propagation from ancestors and descendants to learn the class representations.\nC.2 CONCEPTNET SETUP\nWe further preprocess the queried graph from ConceptNet for all the datasets. We remove all the non-English concepts and their edges from the graph and make all the edges bidirectional. For fine-grained entity typing and object classification, we also take the union of the neighbourhoods of concepts that share the same prefix. For example, we take the union of /c/en/elephant and /c/en/elephant/n. Then, we compute the embedding for each concept using pretrained GloVe 840B (Pennington et al., 2014), averaging the embeddings of the individual words in the concept. These embeddings serve as the initial features for the graph neural network.\nFor the random walk, the number of steps is 20 and the number of restarts is 10. We apply add-one smoothing to the visit counts and normalize the counts over the neighbouring nodes.\nC.3 FINE-GRAINED ENTITY TYPING SETUP\nWe reconstructed OTyper (Yuan & Downey, 2018) and DZET (Obeidat et al., 2019) for fine-grained entity typing. OTyper averages the GloVe embeddings of the words in the name of each class to represent it. For DZET, we manually mapped the classes to Wikipedia articles. We pass each article's first paragraph through a learnable biLSTM with attention to obtain the class representations (Appendix F)." }, { "heading": "D PSEUDOCODE", "text": "Our framework can be trained in two ways: (1) joint training and (2) class encoder training. Joint training means that the example encoder and class encoder are trained jointly with the task. We use joint training for intent classification and fine-grained entity typing. Class encoder training means that we use a pretrained example encoder such as ResNet101 and train only the class encoder. In object classification, we train only the class encoder and use a pretrained ResNet101.
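Algorithm 1 below assumes a neighbourhood sampler N(v, s). A minimal sketch of the random-walk variant from Appendix C.2 (20 steps, 10 restarts, add-one smoothing of visit counts, top-N by normalized count) follows; representing the graph as a plain adjacency dict is a simplification on our part.

import random
from collections import Counter

def sample_neighbourhood(graph, v, n, steps=20, restarts=10):
    # graph: dict mapping each node to a list of adjacent nodes.
    visits = Counter()
    for _ in range(restarts):                 # every walk restarts from v
        node = v
        for _ in range(steps):
            node = random.choice(graph[node])
            visits[node] += 1
    neigh = graph[v]
    total = sum(visits[u] + 1 for u in neigh)  # add-one smoothing
    prob = {u: (visits[u] + 1) / total for u in neigh}
    return sorted(neigh, key=prob.get, reverse=True)[:n]

g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
top_neighbours = sample_neighbourhood(g, "a", n=2)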
In Algorithm 1, we describe the forward pass with the ZSL-KG framework.

Algorithm 1: Forward pass with the ZSL-KG framework
Input: example x, example encoder θ(x), linear layer W, class encoder φ(y), graph G(V, E, R), class nodes {v_y^1, v_y^2, ..., v_y^n}, node initialization H = [h_0, ..., h_v], depth L, graph neural network weights {W^1, ..., W^L}, transformers {T^1, ..., T^L}, neighbourhood sample sizes {s^1, ..., s^L}
Output: logits for classes y = {y_1, y_2, ..., y_n}

function TrGCN(V, l):
  if l = L then
    return H(V)
  else
    i ← 0
    for v ∈ V do
      N ← N(v, s^{l+1})
      H_N ← TrGCN(N, l + 1)
      Z_N ← T^{(l+1)}(H_N)
      a_v^{(l+1)} ← mean(Z_N)
      h_v^{(l+1)} ← σ(W^{(l+1)} · a_v^{(l+1)})
      H_i^{(l+1)} ← h_v^{(l+1)} / ||h_v^{(l+1)}||_2
      i ← i + 1
    end
    return H^{(l+1)}
  end

φ(y) ← TrGCN({v_y^1, v_y^2, ..., v_y^n}, l = 0)
return θ(x)^T W φ(y)" }, { "heading": "E OBJECT CLASSIFICATION RESULTS ON AWA2 AND APY", "text": "Table 7 shows results comparing ZSL-KG with other related work on zero-shot object classification." }, { "heading": "F BILSTM WITH ATTENTION", "text": "The biLSTM with attention is used as the example encoder in intent classification and to produce class representations for DZET (Obeidat et al., 2019). The biLSTM with attention has two components: (1) a biLSTM model and (2) an attention model. The input tokens $w = w_0, w_1, \ldots, w_n$, represented by GloVe embeddings, are passed through the biLSTM to get the hidden states $\overrightarrow{h}$ and $\overleftarrow{h}$. The hidden states are concatenated to get $h = h_0, h_1, \ldots, h_n$ and passed through the attention module, which computes scalar attention values and normalizes them:

$$\alpha_i = W_\alpha\left(\tanh\left(W_e \cdot h_i\right)\right) \tag{8}$$

$$a_i = \frac{\exp(\alpha_i)}{\sum_i \exp(\alpha_i)} \tag{9}$$

The scalar values are multiplied with their respective hidden vectors to get the final representation $t_w$:

$$t_w = \sum_{i=0}^{n} a_i h_i \tag{10}$$" }, { "heading": "G ATTENTIVENER", "text": "We describe AttentiveNER (Shimaoka et al., 2017), which we use to represent the example in the fine-grained entity typing task. Each mention $m$ comprises $n$ tokens mapped to pretrained GloVe word embeddings. We average the embeddings to obtain a single vector $v_m$:

$$v_m = \frac{1}{n} \sum_{j=1}^{n} m_j \tag{11}$$

where $m_j \in \mathbb{R}^d$ are the pretrained GloVe word embeddings. We learn the context of the mention using two biLSTM-with-attention modules. The left context $l$ is represented by $\{l_1, l_2, \ldots, l_s\}$ and the right context $r$ by $\{r_1, r_2, \ldots, r_s\}$, where $l_i \in \mathbb{R}^d$ and $r_j \in \mathbb{R}^d$ are the word embeddings for the left and the right context, and $s$ is the window size for the context. We pass $l$ and $r$ through their separate biLSTM layers to get the hidden states $\overleftarrow{h}^l, \overrightarrow{h}^l$ for the left context and $\overleftarrow{h}^r, \overrightarrow{h}^r$ for the right context.\nThe hidden states are passed through the attention layer to compute the attention scores. The attention layer is a two-layer feedforward neural network that computes the normalized attention for each of the hidden states:

$$\alpha_i^l = W_\alpha\left(\tanh\left(W_e \left[\overleftarrow{h}_i^l ; \overrightarrow{h}_i^l\right]\right)\right) \tag{12}$$

$$a_i^l = \frac{\exp(\alpha_i^l)}{\sum_i \exp(\alpha_i^l) + \sum_j \exp(\alpha_j^r)} \tag{13}$$

The scalar values are multiplied with their respective hidden states to get the final context vector representation $v_c \in \mathbb{R}^h$:

$$v_c = \sum_{i=0}^{s} a_i^l h_i^l + \sum_{j=0}^{s} a_j^r h_j^r \tag{14}$$

Finally, we concatenate the context vector $v_c$ and $v_m$ to get the example representation $x$." }, { "heading": "H FINE-GRAINED ENTITY TYPING EVALUATION", "text": "Fine-grained entity typing is a multi-label classification task. We follow the standard evaluation metrics introduced in Ling & Weld (2012): Strict Accuracy, Loose Micro F1, and Loose Macro F1.\nWe denote the set of ground truth types as T and the set of predicted types as P.
We use the F1 computed from the precision p and recall r for the evaluation metrics described below.\nStrict Accuracy. The prediction is considered correct if and only if $t_e = \hat{t}_e$:

$$p = \frac{\sum_{e \in P \cap T} \mathbb{1}(t_e = \hat{t}_e)}{|P|} \tag{15}$$

$$r = \frac{\sum_{e \in P \cap T} \mathbb{1}(t_e = \hat{t}_e)}{|T|} \tag{16}$$

Loose Micro. The precision and recall scores are computed as:

$$p = \frac{\sum_{e \in P} |t_e \cap \hat{t}_e|}{\sum_{e \in P} |\hat{t}_e|} \tag{17}$$

$$r = \frac{\sum_{e \in T} |t_e \cap \hat{t}_e|}{\sum_{e \in T} |t_e|} \tag{18}$$

Loose Macro. The precision and recall scores are computed as:

$$p = \frac{1}{|P|} \sum_{e \in P} \frac{|t_e \cap \hat{t}_e|}{|\hat{t}_e|} \tag{19}$$

$$r = \frac{1}{|T|} \sum_{e \in T} \frac{|t_e \cap \hat{t}_e|}{|t_e|} \tag{20}$$" }, { "heading": "I GRAPH NEURAL NETWORKS ARCHITECTURE DETAILS", "text": "Table 8: Graph Aggregators

Method      | Aggregate                                                                                      | Combine
ZSL-KG-GCN  | $a_v^{(l)} = \mathrm{Mean}(\{ h_u^{(l-1)}, u \in \mathcal{N}(v) \})$                           | $h_v^{(l)} = \sigma(W^{(l)} a_v^{(l)})$
ZSL-KG-GAT  | $\alpha_u^{(l)} = \mathrm{Attn}(\{ (h'^{(l-1)}_u \| h'^{(l-1)}_v), u \in \mathcal{N}(v) \})$   | $h_v^{(l)} = \sigma(\sum_{u=1}^{|\mathcal{N}(v)|+1} \alpha_u^{(l)} h'^{(l-1)}_u)$
ZSL-KG-RGCN | $a_v^{(l)} = \sum_{r \in R} \sum_{j \in \mathcal{N}(v)_r} \frac{1}{c_{i,r}} \sum_{b \in B} \alpha_{b,r}^{(l)} V_b^{(l)} h_j^{(l-1)}$ | $h_v^{(l)} = \sigma(a_v^{(l)} + W_s^{(l)} h_v^{(l-1)})$
ZSL-KG-LSTM | $a_v^{(l)} = \mathrm{LSTM}^{(l)}(h_u^{(l-1)} \; \forall u \in \mathcal{N}(v))$                 | $h_v^{(l)} = \sigma(W \cdot [h_v^{(l-1)} \| a_v^{(l)}])$

ZSL-KG-GCN uses a mean aggregator to learn the neighbourhood structure. ZSL-KG-GAT projects the neighbourhood nodes to new features $h'^{(l-1)}_u = W h^{(l-1)}_u$. The neighbourhood node features are concatenated with the self feature and passed through a self-attention module to get the attention coefficients. The attention coefficients are multiplied with the neighbourhood features to get the node embedding for the l-th layer in the combine function. ZSL-KG-RGCN uses a relational aggregator to learn the structure of the neighbourhood. To avoid overparameterization from the relational weights, we perform a basis decomposition of the weight vector into B bases. We learn |B| relational coefficients and |B| weight vectors in the aggregate function and add the result to the self feature in the combine function. ZSL-KG-LSTM uses an LSTM as an aggregator to combine the neighbourhood features. The nodes in the graph are passed through an LSTM and the last hidden state is taken as the aggregated vector. The aggregated vector is concatenated with the node's previous-layer feature and passed to the combine function to get the node representation." }, { "heading": "J HYPERPARAMETERS", "text": "In this section, we detail the hyperparameters used in our experiments.\nJ.1 TRAINING DETAILS\nOur framework is built using PyTorch and AllenNLP (Gardner et al., 2017). In all our experiments, we use Adam (Kingma & Ba, 2015) to train our parameters with a learning rate of 0.001. For intent classification, we experiment with weight decays of 1e-05 and 5e-05. We found that a weight decay of 5e-05 gives the best overall performance in intent classification for all the baseline graph aggregators; ZSL-KG uses a weight decay of 1e-05 in intent classification. We add a weight decay of 5e-05 for the OntoNotes experiments. Finally, all experiments in zero-shot object classification use a weight decay of 5e-04.\nFor intent classification and fine-grained entity typing, we assume a low-rank structure for the compatibility matrix W. The matrix $W \in \mathbb{R}^{d \times d}$ is factorized into $A \in \mathbb{R}^{h \times d}$ and $B \in \mathbb{R}^{d \times h}$, where h is the low-rank dimension. Table 9 summarizes the hyperparameters used in the example encoders, each of which is a biLSTM with attention or a task-specific variant of it.\nIn fine-grained entity typing, we have two baselines that do not use graph neural networks: OTyper and DZET. OTyper averages 300-dimensional GloVe embeddings for the class representations.
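Stepping back to the metrics of Appendix H for a moment, the three scores can be written as a short, self-contained sketch; treating each mention's gold and predicted types as Python sets is our reading of Eqs. (15)-(20).

def strict_accuracy(gold, pred):
    # Correct only when the predicted type set matches the gold set exactly.
    exact = sum(1 for g, p in zip(gold, pred) if set(g) == set(p))
    return exact / len(gold)

def loose_micro_f1(gold, pred):
    # Pool intersection sizes and totals over all mentions (Eqs. 17-18).
    inter = sum(len(set(g) & set(p)) for g, p in zip(gold, pred))
    prec = inter / max(sum(len(p) for p in pred), 1)
    rec = inter / max(sum(len(g) for g in gold), 1)
    return 2 * prec * rec / max(prec + rec, 1e-12)

def loose_macro_f1(gold, pred):
    # Average per-mention precision and recall (Eqs. 19-20).
    prec = sum(len(set(g) & set(p)) / max(len(p), 1)
               for g, p in zip(gold, pred)) / len(gold)
    rec = sum(len(set(g) & set(p)) / max(len(g), 1)
              for g, p in zip(gold, pred)) / len(gold)
    return 2 * prec * rec / max(prec + rec, 1e-12)

gold = [["/person"], ["/location", "/location/city"]]
pred = [["/person"], ["/location"]]
print(strict_accuracy(gold, pred), loose_micro_f1(gold, pred), loose_macro_f1(gold, pred))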
DZET uses a biLSTM with attention for the class encoder, with the same hyperparameters as fine-grained entity typing from Table 9.\nJ.2 GRAPH AGGREGATOR SUMMARY\nTable 10 describes the output dimensions of the node embeddings after each graph neural network layer. ZSL-KG-GCN, DGP, GCNZ, and SGCN are linear aggregators and learn only one weight matrix in each of the layers. ZSL-KG-GAT learns a weight matrix for the attention, where $W_a \in \mathbb{R}^{2d^{(l)} \times 1}$, and uses a LeakyReLU activation in the attention with a negative slope of 0.2. ZSL-KG-RGCN learns B basis weight vectors in the baseline. We found that B = 1 performs the best for fine-grained entity typing and object classification. For intent classification, we use 10 bases, i.e., B = 10. In intent classification and fine-grained entity typing, the non-linear activation function after the graph neural network layer is ReLU, and in object classification the activation function is LeakyReLU with a negative slope of 0.2.\nZSL-KG with the Transformer Graph Convolutional Network is a complex architecture with numerous parameters. In our transformer module, there are four hyperparameters: the input dimension $d^{(l-1)}$, the output dimension $d^{(l-1)}$, the feedforward layer hidden dimension $d^{(f)}$, and the projection dimension $d^{(p)}$. The input and output dimensions are the same in the aggregator. We tuned the hyperparameters on the held-out validation classes for object classification and intent classification. For fine-grained entity typing, we manually tuned the hyperparameters. Table 11 details the hyperparameters used in our experiments for all the datasets." } ]
2020
ZERO-SHOT LEARNING WITH COMMON SENSE KNOWLEDGE GRAPHS
SP:42f0f05d335d004a58b91ec986ddd5af72a35a15
[ "The paper proposes to combine MPC and model-free RL to overcome the possible modelling errors. Thereby the approach achieves the sample-efficiency of MPC and the control quality of model-free RL. The resulting MPQ(\\lambda) algorithm uses MPPI to obtain the actions by optimizing the blended MPC objective. The Q-targets for fitting the q function also use the blended Q-estimate." ]
Model-Predictive Control (MPC) is a powerful tool for controlling complex, real-world systems that uses a model to make predictions about future behavior. For each state encountered, MPC solves an online optimization problem to choose a control action that will minimize future cost. This is a surprisingly effective strategy, but real-time performance requirements warrant the use of simple models. If the model is not sufficiently accurate, then the resulting controller can be biased, limiting performance. We present a framework for improving on MPC with model-free reinforcement learning (RL). The key insight is to view MPC as constructing a series of local Q-function approximations. We show that by using a parameter λ, similar to the trace decay parameter in TD(λ), we can systematically trade off learned value estimates against the local Q-function approximations. We present a theoretical analysis that shows how error from inaccurate models in MPC and value function estimation in RL can be balanced. We further propose an algorithm that changes λ over time to reduce the dependence on MPC as our estimates of the value function improve, and test the efficacy of our approach on challenging high-dimensional manipulation tasks with biased models in simulation. We demonstrate that our approach can obtain performance comparable with MPC with access to true dynamics even under severe model bias and is more sample efficient as compared to model-free RL.
[ { "affiliations": [], "name": "Mohak Bhardwaj" }, { "affiliations": [], "name": "Sanjiban Choudhury" }, { "affiliations": [], "name": "Byron Boots" } ]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Exploration and apprenticeship learning in reinforcement learning", "venue": "In Proceedings of the 22nd international conference on Machine learning,", "year": 2005 }, { "authors": [ "Pieter Abbeel", "Adam Coates", "Andrew Y Ng" ], "title": "Autonomous helicopter aerobatics through apprenticeship learning", "venue": "The International Journal of Robotics Research,", "year": 2010 }, { "authors": [ "Thomas Anthony", "Zheng Tian", "David Barber" ], "title": "Thinking fast and slow with deep learning and tree search", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Mohak Bhardwaj", "Ankur Handa", "Dieter Fox", "Byron Boots" ], "title": "Information theoretic model predictive q-learning", "venue": "In Learning for Dynamics and Control,", "year": 2020 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tom Erez", "Kendall Lowrey", "Yuval Tassa", "Vikash Kumar", "Svetoslav Kolev", "Emanuel Todorov" ], "title": "An integrated system for real-time model predictive control of humanoid robots", "venue": "In 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids),", "year": 2013 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Kristian Hartikainen", "George Tucker", "Sehoon Ha", "Jie Tan", "Vikash Kumar", "Henry Zhu", "Abhishek Gupta", "Pieter Abbeel" ], "title": "Soft actor-critic algorithms and applications", "venue": "arXiv preprint arXiv:1812.05905,", "year": 2018 }, { "authors": [ "Sham Kakade", "Michael J Kearns", "John Langford" ], "title": "Exploration in metric state spaces", "venue": "In Proceedings of the 20th International Conference on Machine Learning", "year": 2003 }, { "authors": [ "Michael Kearns", "Satinder Singh" ], "title": "Near-optimal reinforcement learning in polynomial time", "venue": "Machine learning,", "year": 2002 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Gilwoo Lee", "Brian Hou", "Sanjiban Choudhury", "Siddhartha S Srinivasa" ], "title": "Bayesian residual policy optimization: Scalable bayesian reinforcement learning with clairvoyant experts", "venue": "arXiv preprint arXiv:2002.03042,", "year": 2020 }, { "authors": [ "Kendall Lowrey", "Aravind Rajeswaran", "Sham Kakade", "Emanuel Todorov", "Igor Mordatch" ], "title": "Plan online, learn offline: Efficient learning and exploration via model-based control", "venue": "arXiv preprint arXiv:1811.01848,", "year": 2018 }, { "authors": [ "David Q Mayne", "James B Rawlings", "Christopher V Rao", "Pierre OM Scokaert" ], "title": "Constrained model predictive control: Stability and optimality", "venue": null, "year": 2000 }, { "authors": [ "David Q Mayne", "Erric C Kerrigan", "EJ Van Wyk", "Paola Falugi" ], "title": "Tube-based robust nonlinear model predictive control", "venue": "International Journal of Robust and Nonlinear Control,", "year": 2011 }, { "authors": [ "Anusha Nagabandi", "Gregory Kahn", "Ronald S Fearing", "Sergey Levine" ], "title": "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", 
"year": 2018 }, { "authors": [ "Aravind Rajeswaran", "Vikash Kumar", "Abhishek Gupta", "Giulia Vezzani", "John Schulman", "Emanuel Todorov", "Sergey Levine" ], "title": "Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations", "venue": "In Proceedings of Robotics: Science and Systems (RSS),", "year": 2018 }, { "authors": [ "Fabio Ramos", "Rafael Carvalhaes Possas", "Dieter Fox" ], "title": "Bayessim: adaptive domain randomization via probabilistic inference for robotics simulators", "venue": null, "year": 1906 }, { "authors": [ "Stephane Ross", "J Andrew Bagnell" ], "title": "Agnostic system identification for model-based reinforcement learning", "venue": "arXiv preprint arXiv:1203.1007,", "year": 2012 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Pranav Shyam", "Wojciech Jaśkowski", "Faustino Gomez" ], "title": "Model-based active exploration", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. nature,", "year": 2016 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel" ], "title": "Mastering chess and shogi by self-play with a general reinforcement learning algorithm", "venue": "arXiv preprint arXiv:1712.01815,", "year": 2017 }, { "authors": [ "Colin Summers", "Kendall Lowrey", "Aravind Rajeswaran", "Siddhartha Srinivasa", "Emanuel Todorov" ], "title": "Lyceum: An efficient and scalable ecosystem for robot learning", "venue": "arXiv preprint arXiv:2001.07343,", "year": 2020 }, { "authors": [ "Wen Sun", "J Andrew Bagnell", "Byron Boots" ], "title": "Truncated horizon policy search: Combining reinforcement learning & imitation learning", "venue": "arXiv preprint arXiv:1805.11240,", "year": 2018 }, { "authors": [ "Nolan Wagener", "Ching-An Cheng", "Jacob Sacks", "Byron Boots" ], "title": "An online learning approach to model predictive control", "venue": "arXiv preprint arXiv:1902.08967,", "year": 2019 }, { "authors": [ "Grady Williams", "Paul Drews", "Brian Goldfain", "James M Rehg", "Evangelos A Theodorou" ], "title": "Aggressive driving with model predictive path integral control", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2016 }, { "authors": [ "Grady Williams", "Nolan Wagener", "Brian Goldfain", "Paul Drews", "James M Rehg", "Byron Boots", "Evangelos A Theodorou" ], "title": "Information theoretic mpc for model-based reinforcement learning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2017 }, { "authors": [ "Mingyuan Zhong", "Mikala Johnson", "Yuval Tassa", "Tom Erez", "Emanuel Todorov" ], "title": "Value function approximation and model predictive control", "venue": null, "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "Model-free Reinforcement Learning (RL) is increasingly used in challenging sequential decision-making problems including high-dimensional robotics control tasks (Haarnoja et al., 2018; Schulman et al., 2017) as well as video and board games (Silver et al., 2016; 2017). While these approaches are extremely general, and can theoretically solve complex problems with little prior knowledge, they also typically require a large quantity of training data to succeed. In robotics and engineering domains, data may be collected from real-world interaction, a process that can be dangerous, time consuming, and expensive.\nModel-Predictive Control (MPC) offers a simpler, more practical alternative. While RL typically uses data to learn a global model offline, which is then deployed at test time, MPC solves for a policy online by optimizing an approximate model for a finite horizon at a given state. This policy is then executed for a single timestep and the process repeats. MPC is one of the most popular approaches for control of complex, safetycritical systems such as autonomous helicopters (Abbeel et al., 2010), aggressive off-road vehicles (Williams et al., 2016) and humanoid robots (Erez et al., 2013), owing to its ability to use approximate models to optimize complex cost functions with nonlinear constraints (Mayne et al., 2000; 2011).\nHowever, approximations in the model used by MPC can significantly limit performance. Specifically, model bias may result in persistent errors that eventually compound and become catastrophic. For example, in non-prehensile manipulation, practitioners often use a simple quasi-static model that assumes an object does not roll or slide away when pushed. For more dynamic objects, this can lead to aggressive pushing policies that perpetually over-correct, eventually driving the object off the surface.\nRecently, there have been several attempts to combine MPC with model free RL, showing that the combination can improve over the individual approaches alone. Many of these approaches involve using RL to learn a terminal cost function, thereby increasing the effective horizon of MPC (Zhong et al., 2013; Lowrey et al., 2018; Bhardwaj et al., 2020). However, the learned value function is only applied at the end of the MPC horizon. Model errors would still persist in horizon, leading to sub-optimal policies. Similar approaches have also been applied to great effect in discrete games with known models (Silver et al., 2016; 2017; Anthony et al., 2017), where value functions and policies learned via model-free RL are used to\nguide Monte-Carlo Tree Search. In this paper, we focus on a somewhat broader question: can machine learning be used to both increase the effective horizon of MPC, while also correcting for model bias?\nOne straightforward approach is to try to learn (or correct) the MPC model from real data encountered during execution; however there are some practical barriers to this strategy. Hand-constructed models are often crude-approximations of reality and lack the expressivity to represent encountered dynamics. Moreover, increasing the complexity of such models leads to computationally expensive updates that can harm MPC’s online performance. Model-based RL approaches such as Chua et al. (2018); Nagabandi et al. (2018); Shyam et al. (2019) aim to learn general neural network models directly from data. 
However, learning globally consistent models is an exceptionally hard task due to issues such as covariate shift (Ross & Bagnell, 2012).\nWe propose a framework, MPQ(λ), for weaving together MPC with learned value estimates to trade off errors in the MPC model and approximation error in a learned value function. Our key insight is to view MPC as tracing out a series of local Q-function approximations. We can then blend each of these Q-functions with value estimates from reinforcement learning. We show that by using a blending parameter λ, similar to the trace decay parameter in TD(λ), we can systematically trade off errors between these two sources. Moreover, by smoothly decaying λ over learning episodes we can achieve the best of both worlds: a policy can depend on a prior model before it has encountered any data and then gradually become more reliant on learned value estimates as it gains experience.\nTo summarize, our key contributions are: 1. A framework that unifies MPC and model-free RL through value function approximation. 2. Theoretical analysis of finite-horizon planning with approximate models and value functions. 3. Empirical evaluation on challenging manipulation problems with varying degrees of model bias." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 REINFORCEMENT LEARNING", "text": "We consider an agent acting in an infinite-horizon discounted Markov Decision Process (MDP). An MDP is defined by a tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, c, P, \gamma, \mu)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $c(s,a)$ is the per-step cost function, $s_{t+1} \sim P(\cdot|s_t,a_t)$ is the stochastic transition dynamics, $\gamma$ is the discount factor, and $\mu(s_0)$ is a distribution over initial states. A closed-loop policy $\pi(\cdot|s)$ outputs a distribution over actions given a state. Let $\mu^{\pi}_{\mathcal{M}}$ be the distribution over state-action trajectories obtained by running policy $\pi$ on $\mathcal{M}$. The value function for a given policy $\pi$ is defined as $V^{\pi}_{\mathcal{M}}(s) = \mathbb{E}_{\mu^{\pi}_{\mathcal{M}}}\left[\sum_{t=0}^{\infty}\gamma^{t}c(s_t,a_t) \mid s_0 = s\right]$ and the action-value function as $Q^{\pi}_{\mathcal{M}}(s,a) = \mathbb{E}_{\mu^{\pi}_{\mathcal{M}}}\left[\sum_{t=0}^{\infty}\gamma^{t}c(s_t,a_t) \mid s_0 = s, a_0 = a\right]$. The objective is to find an optimal policy $\pi^{*} = \arg\min_{\pi}\mathbb{E}_{s_0 \sim \mu}\left[V^{\pi}_{\mathcal{M}}(s_0)\right]$. We can also define the (dis-)advantage function $A^{\pi}_{\mathcal{M}}(s,a) = Q^{\pi}_{\mathcal{M}}(s,a) - V^{\pi}_{\mathcal{M}}(s)$, which measures how good an action is compared to the action taken by the policy in expectation. It can be equivalently expressed in terms of the Bellman error as $A^{\pi}_{\mathcal{M}}(s,a) = c(s,a) + \gamma\,\mathbb{E}_{s' \sim P, a' \sim \pi}\left[Q^{\pi}_{\mathcal{M}}(s',a')\right] - \mathbb{E}_{a \sim \pi}\left[Q^{\pi}_{\mathcal{M}}(s,a)\right]$." }, { "heading": "2.2 MODEL-PREDICTIVE CONTROL", "text": "MPC is a widely used technique for synthesizing closed-loop policies for MDPs. Instead of trying to solve for a single, globally optimal policy, MPC follows a more pragmatic approach of optimizing simple, local policies online. At every timestep on the system, MPC uses an approximate model of the environment to search for a parameterized policy that minimizes cost over a finite horizon. An action is sampled from the policy and executed on the system. The process is then repeated from the next state, often by warm-starting the optimization from the previous solution.\nWe formalize this process as solving a simpler surrogate MDP $\hat{\mathcal{M}} = (\mathcal{S}, \mathcal{A}, \hat{c}, \hat{P}, \gamma, \hat{\mu}, H)$ online, which differs from $\mathcal{M}$ by using an approximate cost function $\hat{c}$ and transition dynamics $\hat{P}$, and by limiting the horizon to $H$. Since it plans to a finite horizon, it is also common to use a terminal state-action value function $\hat{Q}$ that estimates the cost-to-go. The start state distribution $\hat{\mu}$ is a Dirac delta function centered on the current state, $s_0 = s_t$.
MPC can be viewed as iteratively constructing an estimate of the Q-function of the original MDP $\mathcal{M}$, given policy $\pi_\phi$ at state $s$:\n$$Q^{\phi}_{H}(s,a) = \mathbb{E}_{\mu^{\pi_\phi}_{\hat{\mathcal{M}}}}\left[\sum_{i=0}^{H-1}\gamma^{i}\hat{c}(s_i,a_i) + \gamma^{H}\hat{Q}(s_H,a_H) \,\middle|\, s_0 = s, a_0 = a\right] \quad (1)$$\nMPC then iteratively optimizes this estimate (at the current system state $s_t$) to update the policy parameters:\n$$\phi^{*}_{t} = \arg\min_{\phi} Q^{\phi}_{H}(s_t, \pi_\phi(s_t)) \quad (2)$$\nAlternatively, we can also view the above procedure from the perspective of disadvantage minimization. Let us define an estimator for the 1-step disadvantage with respect to the potential function $\hat{Q}$ as $A(s_i,a_i) = \hat{c}(s_i,a_i) + \gamma\hat{Q}(s_{i+1},a_{i+1}) - \hat{Q}(s_i,a_i)$. We can then equivalently write the above optimization as minimizing the discounted sum of disadvantages over time via the telescoping-sum trick:\n$$\arg\min_{\pi \in \Pi}\;\mathbb{E}_{\mu^{\pi_\phi}_{\hat{\mathcal{M}}}}\left[\hat{Q}(s_0,a_0) + \sum_{i=0}^{H-1}\gamma^{i}A(s_i,a_i) \,\middle|\, s_0 = s_t\right] \quad (3)$$\nAlthough the above formulation queries $\hat{Q}$ at every timestep, it is still exactly equivalent to the original problem and hence does not mitigate the effects of model bias. In the next section, we build a concrete method to address this issue by formulating a novel way to blend Q-estimates from MPC and a learned value function that can balance their respective errors." }, { "heading": "3 MITIGATING BIAS IN MPC VIA REINFORCEMENT LEARNING", "text": "In this section, we develop our approach to systematically deal with model bias in MPC by blending in learned value estimates. First, we take a closer look at the different sources of error in the estimate in (1) and then propose an easy-to-implement, yet effective strategy for trading them off." }, { "heading": "3.1 SOURCES OF ERROR IN MPC", "text": "The performance of MPC algorithms critically depends on the quality of the Q-function estimator $Q^{\phi}_{H}(s,a)$ in (1). There are three major sources of approximation error. First, model bias can cause compounding errors in predicted state trajectories, which biases the estimates of the costs of different action sequences. The effect of model error becomes more severe as $H \rightarrow \infty$. Second, the error in the terminal value function gets propagated back to the estimate of the Q-function at the start state. With discounting, the effect of error due to an inaccurate terminal value function diminishes as $H$ increases. Third, using a small $H$ with an inaccurate terminal value function can make the MPC algorithm greedy and myopic to rewards further out in the future.\nWe can formally bound the performance of the policy with approximate models and approximate learned value functions. In Theorem 3.1, we show the loss in performance of the resulting policy as a function of the model error, value function error, and the planning horizon.\nTheorem 3.1 (Proof Appendix A.1.2). Let MDP $\hat{\mathcal{M}}$ be an $\alpha$-approximation of $\mathcal{M}$ such that $\forall(s,a)$ we have $\|\hat{P}(\cdot|s,a) - P(\cdot|s,a)\|_{1} \le \alpha$ and $|\hat{c}(s,a) - c(s,a)| \le \alpha$. Let the learned value function $\hat{Q}(s,a)$ be an $\epsilon$-approximation of the true value function, $\|\hat{Q}(s,a) - Q^{\pi^{*}}_{\mathcal{M}}(s,a)\|_{\infty} \le \epsilon$. The performance of the MPC policy is bounded w.r.t. the optimal policy as\n$$\left\|V^{\pi^{*}}_{\mathcal{M}}(s) - V^{\hat{\pi}}_{\mathcal{M}}(s)\right\|_{\infty} \le 2\left(\frac{\gamma(1-\gamma^{H-1})}{(1-\gamma^{H})(1-\gamma)}\,\alpha H\left(\frac{c_{\max}-c_{\min}}{2}\right) + \frac{\gamma^{H}\alpha H}{1-\gamma^{H}}\left(\frac{V_{\max}-V_{\min}}{2}\right) + \frac{\alpha}{1-\gamma} + \frac{\gamma^{H}\epsilon}{1-\gamma^{H}}\right) \quad (4)$$\nThis theorem generalizes over various established results. Setting $H = 1$, $\epsilon = 0$ gives us the 1-step simulation lemma in Kearns & Singh (2002) (Appendix A.1.1). Setting $\alpha = 0$, i.e. a true model, recovers the cost-shaping result in Sun et al. (2018). Further inspecting terms in (4), we see that the model error increases with horizon $H$ (the first two terms), while the learned value error decreases with $H$, which matches our intuitions.
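To make this trade-off tangible, the short numeric sketch below evaluates the right-hand side of the bound (4) for several horizons. The values of $\gamma$, $\alpha$, $\epsilon$, and the cost/value ranges are hypothetical, chosen purely for illustration and not taken from the paper.

```python
import numpy as np

# Hypothetical constants for illustration only.
gamma, alpha, eps = 0.99, 0.05, 0.5
c_half, v_half = 0.5, 50.0  # (c_max - c_min)/2 and (V_max - V_min)/2

def perf_bound(H):
    """Right-hand side of Eq. (4) for an H-step MPC policy."""
    model_cost = gamma * (1 - gamma ** (H - 1)) / ((1 - gamma ** H) * (1 - gamma)) \
                 * alpha * H * c_half
    model_value = gamma ** H * alpha * H / (1 - gamma ** H) * v_half
    cost_err = alpha / (1 - gamma)
    value_err = gamma ** H * eps / (1 - gamma ** H)
    return 2 * (model_cost + model_value + cost_err + value_err)

for H in [1, 2, 4, 8, 16, 32, 64]:
    print(H, round(perf_bound(H), 2))
```

Under such values, the first two (model-error) terms grow with H while the last (value-error) term shrinks, so the bound is minimized at an intermediate horizon that depends on the unknown α and ε.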
In practice, the errors in the model and value function are usually unknown and hard to estimate, making it impossible to set the MPC horizon to the optimal value. Instead, we next propose a strategy to blend the Q-estimates from MPC and the learned value function at every timestep along the horizon, instead of just the terminal step, so that we can properly balance the different sources of error." }, { "heading": "3.2 BLENDING MODEL PREDICTIVE CONTROL AND VALUE FUNCTIONS", "text": "A naive way to blend Q-estimates from MPC with Q-estimates from the value function would be to consider a convex combination of the two:\n$$(1-\lambda)\underbrace{\hat{Q}(s,a)}_{\text{model-free}} + \lambda\underbrace{Q^{\phi}_{H}(s,a)}_{\text{model-based}} \quad (5)$$\nwhere $\lambda \in [0,1]$. Here, the value function contributes a residual that is added to the MPC output, an approach commonly used to combine model-based and model-free methods (Lee et al., 2020). However, this solution is rather ad hoc. If we have a value function at our disposal, why invoke it only at the first and last timesteps? As the value function gets better, it should be useful to invoke it at all timesteps.\nInstead, consider the following recursive formulation for the Q-estimate. Given $(s_i,a_i)$, the state-action pair encountered at horizon $i$, the blended estimate $Q^{\lambda}(s_i,a_i)$ is expressed as\n$$\underbrace{Q^{\lambda}(s_i,a_i)}_{\text{current blended estimate}} = (1-\lambda)\underbrace{\hat{Q}(s_i,a_i)}_{\text{model-free}} + \lambda\Big(\underbrace{\hat{c}(s_i,a_i)}_{\text{model-based}} + \gamma\underbrace{Q^{\lambda}(s_{i+1},a_{i+1})}_{\text{future blended estimate}}\Big) \quad (6)$$\nwhere $\lambda \in [0,1]$. The recursion ends at $Q^{\lambda}(s_H,a_H) = \hat{Q}(s_H,a_H)$. In other words, the current blended estimate is a convex combination of the model-free value function and the one-step model-based return, where the return in turn uses the future blended estimate. Note that unlike (5), the model-free estimate is invoked at every timestep.\nWe can unroll (6) in time to show that $Q^{\lambda}_{H}(s,a)$, the blended $H$-horizon estimate, is simply an exponentially weighted average of all horizon estimates:\n$$Q^{\lambda}_{H}(s,a) = (1-\lambda)\sum_{i=1}^{H-1}\lambda^{i-1}Q^{\phi}_{i}(s,a) + \lambda^{H-1}Q^{\phi}_{H}(s,a) \quad (7)$$\nwhere $Q^{\phi}_{k}(s,a) = \mathbb{E}_{\mu^{\pi_\phi}_{\hat{\mathcal{M}}}}\left[\sum_{i=0}^{k-1}\gamma^{i}\hat{c}(s_i,a_i) + \gamma^{k}\hat{Q}(s_k,a_k) \,\middle|\, s_0 = s, a_0 = a\right]$ is a $k$-horizon estimate. When $\lambda = 0$ the estimator reduces to (essentially) just using $\hat{Q}$ via the one-step estimate $Q^{\phi}_{1}$, and when $\lambda = 1$ we recover the original MPC estimate $Q_H$ in (1). For intermediate values of $\lambda$, we interpolate smoothly between the two by interpolating all $H$ estimates.\nImplementing (7) naively would require running $H$ versions of MPC and then combining their outputs, which is far too expensive. However, we can switch to the disadvantage formulation by applying a similar telescoping trick:\n$$Q^{\lambda}_{H}(s,a) = \mathbb{E}_{\mu^{\pi_\phi}_{\hat{\mathcal{M}}}}\left[\hat{Q}(s_0,a_0) + \sum_{i=0}^{H-1}(\gamma\lambda)^{i}A(s_i,a_i)\right] \quad (8)$$\nThis estimator has a similar form to the TD($\lambda$) estimator for the value function. However, while TD($\lambda$) uses the $\lambda$ parameter for a bias-variance trade-off, our blended estimator aims to trade off bias in the dynamics model against bias in the learned value function.\nWhy use blending $\lambda$ when one can simply tune the horizon $H$? First, $H$ limits the resolution we can tune, since it is an integer: as $H$ gets smaller, the resolution becomes worse. Second, the blended estimator $Q^{\lambda}_{H}(s,a)$ uses far more samples. Say we have access to the optimal horizon $H^{*}$. Even if both $Q^{\lambda}_{H}$ and $Q^{\phi}_{H^{*}}$ had the same bias, the latter uses a strict subset of the samples used by the former. Hence the variance of the blended estimator will be lower, with high probability.
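As a concrete illustration of the telescoped form (8), the sketch below computes the blended estimate from a single rollout of the approximate model. The cost and value arrays are placeholders standing in for the surrogate model's predicted costs and the learned value network's outputs.

```python
import numpy as np

def blended_q(costs, q_hat, gamma, lam):
    """Blended H-horizon Q-estimate of Eq. (8) for one model rollout.

    costs: model-predicted costs [c(s_0,a_0), ..., c(s_{H-1},a_{H-1})]
    q_hat: learned value estimates [Q_hat(s_0,a_0), ..., Q_hat(s_H,a_H)]
    """
    H = len(costs)
    # 1-step disadvantages: A(s_i,a_i) = c_i + gamma * Q_hat_{i+1} - Q_hat_i
    adv = np.asarray(costs) + gamma * np.asarray(q_hat[1:]) - np.asarray(q_hat[:-1])
    weights = (gamma * lam) ** np.arange(H)
    return q_hat[0] + np.sum(weights * adv)

# lam = 1 telescopes back to the full H-horizon MPC estimate in (1);
# lam = 0 collapses to the one-step estimate c_0 + gamma * Q_hat(s_1, a_1).
```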
4 THE MPQ(λ) ALGORITHM\nWe develop a simple variant of Q-learning, called Model-Predictive Q-Learning with λ Weights (MPQ(λ)), that learns a parameterized Q-function estimate $\hat{Q}_{\theta}$. Our algorithm, presented in Alg. 1, modifies Q-learning to use blended Q-estimates as described in (8) for both action selection and generating value targets. The parameter λ is used to trade off the errors due to model bias and the learned Q-function $\hat{Q}_{\theta}$. This can be viewed as an extension of the MPQ algorithm from Bhardwaj et al. (2020) that explicitly deals with model bias by incorporating the learned Q-function at all timesteps. Unlike MPQ, we do not explicitly consider the entropy-regularized formulation, although our framework can be modified to incorporate soft-Q targets.\nAlgorithm 1: MPQ(λ)\nInput: initial Q-function weights $\theta$, approximate dynamics $\hat{P}$ and cost function $\hat{c}$\nParameters: MPC horizon $H$, λ schedule $[\lambda_1, \lambda_2, \ldots]$, discount factor $\gamma$, minibatch size $K$, number of minibatches $N$, update frequency $t_{update}$\n1: $D \leftarrow \emptyset$\n2: for $t = 1 \ldots \infty$ do\n3:   $\lambda \leftarrow \lambda_t$  // update λ\n4:   $\phi_t \leftarrow \arg\min_{\phi}\mathbb{E}_{\mu^{\pi_\phi}_{\hat{\mathcal{M}}}}\big[\hat{Q}_{\theta}(s_0,a_0) + \sum_{i=0}^{H-1}(\gamma\lambda)^{i}A(s_i,a_i) \mid s_0 = s_t\big]$  // blended MPC action selection\n5:   $a_t \sim \pi_{\phi_t}$\n6:   execute $a_t$ on the system and observe $(c_t, s_{t+1})$\n7:   $D \leftarrow D \cup \{(s_t, a_t, c_t, s_{t+1})\}$\n8:   if $t \bmod t_{update} = 0$ then\n9:     sample $N$ minibatches $\big(\{s_{k,n}, a_{k,n}, c_{k,n}, s'_{k,n}\}_{k=1}^{K}\big)_{n=1}^{N}$ from $D$\n10:    generate blended MPC value targets $\hat{y}_{k,n}$ via Eq. (11)\n11:    update $\theta$ with SGD on the loss $L$ in Eq. (10)\nAt every timestep $t$, MPQ(λ) proceeds by using $H$-horizon MPC from the current state $s_t$ to optimize a policy $\pi_\phi$ with parameters $\phi$. We modify the MPC algorithm to optimize for the greedy policy with respect to the blended Q-estimator in (8), that is,\n$$\phi^{*}_{t} = \arg\min_{\phi}\mathbb{E}_{\mu^{\pi_\phi}_{\hat{\mathcal{M}}}}\left[\hat{Q}_{\theta}(s_0,a_0) + \sum_{i=0}^{H-1}(\gamma\lambda)^{i}A(s_i,a_i) \,\middle|\, s_0 = s_t\right] \quad (9)$$\nAn action sampled from the resulting policy is then executed on the system. A commonly used heuristic is to warm-start the above optimization by shifting forward the solution from the previous timestep, which serves as a good initialization if the noise in the dynamics is small (Wagener et al., 2019). This can significantly cut computational cost by reducing the number of iterations required to optimize (9) at every timestep.\nPeriodically, the parameters $\theta$ are updated via stochastic gradient descent to minimize the following loss function, with $N$ minibatches of experience tuples of size $K$ sampled from the replay buffer:\n$$L(\theta) = \frac{1}{N}\frac{1}{K}\sum_{n=1}^{N}\sum_{k=1}^{K}\left(\hat{y}_{k,n} - \hat{Q}_{\theta}(s_{k,n}, a_{k,n})\right)^{2} \quad (10)$$\nThe $H$-horizon MPC with the blended Q-estimator is again invoked to calculate the targets:\n$$\hat{y}_{k,n} = c_{k,n} + \gamma\min_{\phi}\mathbb{E}_{\mu^{\pi_\phi}_{\hat{\mathcal{M}}}}\left[\hat{Q}_{\theta}(s_0,a_0) + \sum_{i=0}^{H-1}(\gamma\lambda)^{i}A(s_i,a_i) \,\middle|\, s_0 = s'_{k,n}\right] \quad (11)$$\nUsing MPC to reduce error in Q-targets has been previously explored in the literature (Lowrey et al., 2018; Bhardwaj et al., 2020), where the model is either assumed to be perfect or model error is not explicitly accounted for. MPC with the blended Q-estimator and an appropriate λ allows us to generate more stable Q-targets than using $Q_{\theta}$ or model-based rollouts with a terminal Q-function alone. However, running $H$-horizon optimization for all samples in a minibatch can be time-consuming, forcing the use of smaller batch sizes and sparse updates. In our experiments, we employ a practical modification where, during the action selection step, MPC is also queried for value targets, which are then stored in the replay buffer, thus allowing us to use larger batch sizes and updates at every timestep.
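A minimal skeleton of the loop in Alg. 1 might look as follows. Here `env`, `mppi_optimize`, `buffer`, and `q_net` are hypothetical components standing in for the system, the blended-objective MPPI solver of Eq. (9), the replay buffer, and the Q-network, and the λ schedule anticipates the sublinear decay reported in Appendix A.2 (Eq. 26). This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def lam_schedule(t, lam0=1.0, kappa=0.05):
    # Sublinear decay lam_t = lam0 / (1 + kappa * sqrt(t)); kappa would be
    # chosen to reach the desired final lambda (cf. Eq. (26) in the appendix).
    return lam0 / (1.0 + kappa * np.sqrt(t))

def mpq_lambda(env, mppi_optimize, buffer, q_net, n_steps, t_update=1, K=64):
    s = env.reset()
    for t in range(1, n_steps + 1):
        lam = lam_schedule(t)
        # Blended MPC action selection; also returns the blended value target
        # computed during optimization (the practical modification above).
        a, y = mppi_optimize(s, lam)
        s_next, c, done = env.step(a)
        buffer.add(s, a, c, y)
        if t % t_update == 0 and len(buffer) >= K:
            batch = buffer.sample(K)
            q_net.sgd_step(batch)   # regress Q_theta toward targets, Eq. (10)
        s = env.reset() if done else s_next
```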
Finally, we also allow λ to vary over time. In practice, λ is decayed as more data is collected on the system. Intuitively, in the early stages of learning the bias in $\hat{Q}_{\theta}$ dominates, and hence we want to rely more on the model; a larger value of λ is appropriate, as it up-weights longer-horizon estimates in the blended Q-estimator. As the $\hat{Q}_{\theta}$ estimates improve over time, a smaller λ is favorable to reduce the reliance on the approximate model." }, { "heading": "5 EXPERIMENTS", "text": "Task Details: We evaluate MPQ(λ) on simulated robot control tasks, including a complex manipulation task with a 7-DOF arm and in-hand manipulation with a 24-DOF anthropomorphic hand (Rajeswaran et al., 2018), as shown in Fig. 1. For each task, we provide the agent with a biased version of the simulation that is used as the dynamics model for MPC. We use Model Predictive Path Integral control (MPPI) (Williams et al., 2017), a state-of-the-art sampling-based algorithm, as our MPC algorithm throughout.\n1. CARTPOLESWINGUP: A classic control task where the agent slides a cart along a rail to swing up the pole attached via an unactuated hinge joint. Model bias is simulated by providing the agent incorrect masses for the cart and pole. The masses are set lower than the true values to make the problem harder for MPC, as the algorithm will always input smaller controls than desired, as also noted in Ramos et al. (2019). The initial positions of the cart and pole are randomized at every episode.\n2. SAWYERPEGINSERTION: The agent controls a 7-DOF Sawyer arm to insert a peg attached to the end-effector into a hole at different locations on a table in front of the robot. We test the effects of inaccurate perception by simulating a sensor at the target location that provides noisy position measurements at every timestep. MPC uses a deterministic model that does not take sensor noise into account, as is commonly done in controls. This biases the cost of simulated trajectories, causing MPC to fail to reach the target.\n3. INHANDMANIPULATION: A challenging in-hand manipulation task with a 24-DOF dexterous hand from Rajeswaran et al. (2018). The agent must align the pen with a target orientation within a certain tolerance for success. The initial orientation of the pen is randomized at every episode. Here, we simulate bias by providing larger estimates of the mass and inertia of the pen as well as the friction coefficients, which causes the MPC algorithm to optimize overly aggressive policies and drop the pen.\nPlease refer to Appendix A.2 for more details on the tasks, success criteria, and biased simulations.\nBaselines: We compare MPQ(λ) against both model-based and model-free baselines: MPPI with true dynamics and no value function, MPPI with biased dynamics and no value function, and Proximal Policy Optimization (PPO; Schulman et al., 2017).\nLearning Details: We represent the Q-function with a feed-forward neural network. We bias simulation parameters such as mass or friction coefficients using the formula $m = (1+b)\,m_{true}$, where $b$ is a bias factor. We also employ a practical modification to Alg. 1 in order to speed up training times, as discussed in Section 4.
Instead of maintaining a large replay buffer and re-calculating targets for every experience tuple in a minibatch, as done by approaches such as Bhardwaj et al. (2020); Lowrey et al. (2018), we simply query MPC for the value targets online and store them in a smaller buffer, which allows us to perform updates at every timestep. We use the publicly available implementation at https://bit.ly/38RcDrj for PPO. Refer to Appendix A.2 for more details.\n[Figure 2 plots omitted; panels: (a) Fixed λ; (b) Fixed vs. decaying λ; (c) Varying model bias; (d) λ decay with H=64; (e) λ decay with H=32; (f) Varying horizon vs. λ; (g) Bias-variance trade-off. Axes show average reward over validation iterations.]\nFigure 2: CARTPOLESWINGUP experiments. Solid lines show average rewards over 30 validation episodes (fixed start states) of length 100 steps and 3 runs with different seeds. The dashed lines are the average reward of MPPI for the same validation episodes. The shaded region depicts the standard error of the mean, which denotes the confidence in the average reward estimated from finite samples. Training is performed for 100k steps with validation after every 4k steps. When decaying λ according to a schedule, it is fixed to its current value during validation. In (b), (d), (e), and (f), $\lambda_F$ denotes the λ value at the end of training. PPO asymptotic performance is reported as the average reward of the last 10 validation iterations. (g) shows the best validation reward at the end of training for different horizon values and numbers of MPPI trajectory samples (particles), using λ=1.0 and λ=0.8." }, { "heading": "5.1 ANALYSIS OF OVERALL PERFORMANCE", "text": "O 1. MPQ(λ) is able to overcome model bias in MPC for a wide range of λ values. Fig. 2(a) shows a comparison of MPQ(λ) with MPPI using true and biased dynamics with $b = -0.5$ and $H = 64$ for various settings of λ.
There exists a wide range of λ values for which MPQ(λ) can efficiently trade off model bias against the bias in the learned Q-function and outperform MPPI with biased dynamics. However, setting λ to a high value of 1.0 or 0.95, which weighs longer horizons heavily, leads to poor performance, as the compounding effects of model bias are not compensated by $Q_{\theta}$. Performance also begins to drop as λ decreases below 0.6. MPQ(λ) outperforms MPPI with access to the true dynamics and reaches close to the asymptotic performance of PPO. This is not surprising, as the learned Q-function adds global information to the optimization and λ corrects for errors in optimizing longer horizons. O 2. Faster convergence can be achieved by decaying λ over time. As more data is collected on the system, we expect the bias in $Q_{\theta}$ to decrease, whereas model bias remains constant. A larger value of λ that favors longer horizons is better during the initial steps of training, as the effect of a randomly initialized $Q_{\theta}$ is diminished due to discounting and better exploration is achieved by forward lookahead. Conversely, as $Q_{\theta}$ gets more accurate, model bias begins to hurt performance and a smaller λ is favorable. We test this by decaying λ in $[1.0, \lambda_F]$ using a fixed schedule and observe that faster convergence is indeed obtained by reducing the dependence on the model over training steps, as shown in Fig. 2(b). Figures 2(d) and 2(e) present ablations showing that MPQ(λ) is robust to a wide range of decay rates with $H = 64$ and $H = 32$, respectively. When provided with true dynamics, MPPI with $H = 32$ performs better than $H = 64$ due to optimization issues with long horizons. MPQ(λ) reaches performance comparable to MPPI with $H = 32$ and the asymptotic performance of PPO in both cases, showing robustness to horizon values, which is important since in practice we wish to set the horizon as large as our computation budget permits. However, decaying λ too fast or too slow can have adverse effects on performance. An interesting question for future work is whether λ can be adapted in a state-dependent manner. Refer to Appendix A.2 for details on the decay schedule. O 3. MPQ(λ) is much more sample efficient than model-free RL on high-dimensional continuous control tasks, while maintaining asymptotic performance. Figures 2 and 3 show a comparison of MPQ(λ) with the model-free PPO baseline. In all cases, we observe that MPQ(λ), through its use of approximate models, learned value functions, and a dynamically-varying λ parameter to trade off different sources of error, rapidly improves its performance and achieves average reward and success rate comparable to MPPI with access to ground-truth dynamics and to model-free RL in the limit. In INHANDMANIPULATION, PPO performance does not improve at all over 150k training steps. In SAWYERPEGINSERTION, the small magnitude of the reward difference between MPPI with true and biased models is due to the fact that, despite model bias, MPC is able to get the peg close to the table, but sensor noise inhibits the precise control needed to consistently insert it into the hole. Here, the value function learned by MPQ(λ) can adapt to sensor noise and allow for fine-grained control near the table. O 4. MPQ(λ) is robust to a large degree of model misspecification. Fig. 2(c) shows the effects of different values of the bias factor $b$, used to vary the mass of the cart and pole, for MPQ(λ) with a fixed λ decay schedule of $[1.0, 0.75]$.
MPQ(λ) achieves performance better than MPPI ($H = 64$) with true dynamics, and comparable to model-free RL in the limit, for a wide range of bias factors $b$; the convergence rate is generally faster for smaller bias. For large values of $b$, MPQ(λ) either fails to improve or diverges, as the compounding effects of model bias hurt learning, making model-free RL the more favorable alternative. A similar trend is observed in Figures 3(a) and 3(b), where MPQ(λ) outperforms MPPI with corresponding bias in the mass, inertia, and friction coefficients of the pen by a margin of at least 30% in terms of success rate. It also achieves performance comparable to MPPI with true dynamics and to model-free RL in the limit, but is unable to do so for $b = 1.0$. We conclude that while MPQ(λ) is robust to a large amount of model bias, if the model is extremely uninformative, relying on MPC can degrade performance.\nO 5. MPQ(λ) is robust to the planning horizon and the number of trajectory samples in sampling-based MPC. TD(λ)-based approaches are used for the bias-variance trade-off in value function estimation in model-free RL. In our framework, λ plays a similar role, but it trades off bias due to the dynamics model and learned value function against variance due to long-horizon rollouts. We empirically quantify this on the CARTPOLESWINGUP task by training MPQ(λ) with different values of the horizon and number of particles, for λ = 1.0 and λ = 0.8 respectively. The results in Fig. 2(g) show that (1) using λ can overcome the effects of model bias irrespective of the planning horizon (except for very small values of $H = 1$ or $2$), and (2) using λ can overcome variance due to a limited number of particles with long-horizon rollouts. The ablative study in Fig. 2(f) lends evidence to the fact that it is preferable to simply decay λ over time rather than trying to tune the discrete horizon value to balance model bias. Not only does decaying λ achieve a better convergence rate and asymptotic performance than tuning the horizon, the performance is also more robust to different decay rates (as evidenced by Fig. 2(d)), whereas the same does not hold for varying the horizon." }, { "heading": "6 CONCLUSION", "text": "In this paper, we presented a general framework to mitigate model bias in MPC by blending in model-free value estimates using a parameter λ, to systematically trade off different sources of error. Our practical algorithm achieves performance close to MPC with access to the true dynamics and the asymptotic performance of model-free methods, while being sample efficient. An interesting avenue for future research is to vary λ in a state-adaptive fashion. In particular, reasoning about model and value function uncertainty may allow us to vary λ to rely more or less on our model in certain parts of the state space. Another promising direction is to extend the framework to explicitly incorporate constraints by leveraging different constrained MPC formulations." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported in part by ARL SARA CRA W911NF-20-2-0095. The authors would like to thank Aravind Rajeswaran for help with code for the peg insertion task." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROOFS", "text": "We present upper bounds on the performance of a greedy policy when using approximate value functions and models. We also analyze the case of finite-horizon planning with an approximate dynamics model and terminal value function, which can be seen as a generalization of Sun et al. (2018).
For simplicity, we switch to using $\hat{V}(s)$ for the learned model-free value function (instead of $\hat{Q}(s)$).\nLet $\hat{V}(s)$ be an $\epsilon$-approximation, $\|\hat{V}(s) - V^{\pi^{*}}_{\mathcal{M}}(s)\|_{\infty} \le \epsilon$. Let MDP $\hat{\mathcal{M}}$ be an $\alpha$-approximation of $\mathcal{M}$ such that $\forall(s,a)$ we have $\|\hat{P}(\cdot|s,a) - P(\cdot|s,a)\|_{1} \le \alpha$ and $|\hat{c}(s,a) - c(s,a)| \le \alpha$." }, { "heading": "A.1.1 A GENTLE START: BOUND ON PERFORMANCE OF 1-STEP GREEDY POLICY", "text": "Theorem A.1. Let the one-step greedy policy be\n$$\hat{\pi}(s) = \arg\min_{a \in \mathcal{A}}\;\hat{c}(s,a) + \gamma\sum_{s'}\hat{P}(s'|s,a)\hat{V}(s') \quad (12)$$\nThe performance loss of $\hat{\pi}(s)$ w.r.t. the optimal policy $\pi^{*}$ on MDP $\mathcal{M}$ is bounded by\n$$\left\|V^{\hat{\pi}}_{\mathcal{M}}(s) - V^{\pi^{*}}_{\mathcal{M}}(s)\right\|_{\infty} \le \frac{2\left(\gamma\epsilon + \alpha + \gamma\alpha\left(\frac{V_{\max}-V_{\min}}{2}\right)\right)}{1-\gamma} \quad (13)$$\nProof. From (12) we have, $\forall s \in \mathcal{S}$,\n$$\hat{c}(s,\hat{\pi}(s)) + \gamma\sum_{s'}\hat{P}(s'|s,\hat{\pi}(s))\hat{V}(s') \le \hat{c}(s,\pi^{*}(s)) + \gamma\sum_{s'}\hat{P}(s'|s,\pi^{*}(s))\hat{V}(s')$$\n$$\hat{c}(s,\hat{\pi}(s)) - \hat{c}(s,\pi^{*}(s)) \le \gamma\left(\sum_{s'}\hat{P}(s'|s,\pi^{*}(s))\hat{V}(s') - \sum_{s'}\hat{P}(s'|s,\hat{\pi}(s))\hat{V}(s')\right)$$\nUsing $\|\hat{V}(s) - V^{\pi^{*}}_{\mathcal{M}}(s)\|_{\infty} \le \epsilon$,\n$$\hat{c}(s,\hat{\pi}(s)) - \hat{c}(s,\pi^{*}(s)) \le \gamma\left(\sum_{s'}\hat{P}(s'|s,\pi^{*}(s))V^{\pi^{*}}_{\mathcal{M}}(s') - \sum_{s'}\hat{P}(s'|s,\hat{\pi}(s))V^{\pi^{*}}_{\mathcal{M}}(s')\right) + 2\gamma\epsilon$$\nand using $|\hat{c}(s,a) - c(s,a)| \le \alpha$,\n$$c(s,\hat{\pi}(s)) - c(s,\pi^{*}(s)) \le 2\gamma\epsilon + 2\alpha + \gamma\left(\sum_{s'}\hat{P}(s'|s,\pi^{*}(s))V^{\pi^{*}}_{\mathcal{M}}(s') - \sum_{s'}\hat{P}(s'|s,\hat{\pi}(s))V^{\pi^{*}}_{\mathcal{M}}(s')\right) \quad (14)$$\nNow, let $s$ be the state with the maximum loss $V^{\hat{\pi}}_{\mathcal{M}}(s) - V^{\pi^{*}}_{\mathcal{M}}(s)$:\n$$V^{\hat{\pi}}_{\mathcal{M}}(s) - V^{\pi^{*}}_{\mathcal{M}}(s) = c(s,\hat{\pi}) - c(s,\pi^{*}) + \gamma\sum_{s'}\left(P(s'|s,\hat{\pi})V^{\hat{\pi}}_{\mathcal{M}}(s') - P(s'|s,\pi^{*})V^{\pi^{*}}_{\mathcal{M}}(s')\right)$$\nSubstituting from (14),\n$$V^{\hat{\pi}}_{\mathcal{M}}(s) - V^{\pi^{*}}_{\mathcal{M}}(s) \le 2\gamma\epsilon + 2\alpha + \gamma\sum_{s'}\hat{P}(s'|s,\pi^{*}(s))V^{\pi^{*}}_{\mathcal{M}}(s') - \gamma\sum_{s'}\hat{P}(s'|s,\hat{\pi}(s))V^{\pi^{*}}_{\mathcal{M}}(s') - \gamma\sum_{s'}P(s'|s,\pi^{*})V^{\pi^{*}}_{\mathcal{M}}(s') + \gamma\sum_{s'}P(s'|s,\hat{\pi})V^{\hat{\pi}}_{\mathcal{M}}(s')$$\nAdding and subtracting $\gamma\sum_{s'}P(s'|s,\hat{\pi})V^{\pi^{*}}_{\mathcal{M}}(s')$ and re-arranging,\n$$V^{\hat{\pi}}_{\mathcal{M}}(s) - V^{\pi^{*}}_{\mathcal{M}}(s) \le 2\gamma\epsilon + 2\alpha + \gamma\sum_{s'}\left(\hat{P}(s'|s,\pi^{*}) - P(s'|s,\pi^{*})\right)V^{\pi^{*}}_{\mathcal{M}}(s') - \gamma\sum_{s'}\left(\hat{P}(s'|s,\hat{\pi}) - P(s'|s,\hat{\pi})\right)V^{\pi^{*}}_{\mathcal{M}}(s') + \gamma\sum_{s'}P(s'|s,\hat{\pi})\left(V^{\hat{\pi}}_{\mathcal{M}}(s') - V^{\pi^{*}}_{\mathcal{M}}(s')\right)$$\n$$\le 2\gamma\epsilon + 2\alpha + 2\gamma\alpha\left(\frac{V_{\max}-V_{\min}}{2}\right) + \gamma\sum_{s'}P(s'|s,\hat{\pi})\left(V^{\hat{\pi}}_{\mathcal{M}}(s') - V^{\pi^{*}}_{\mathcal{M}}(s')\right)$$\nSince $s$ is the state with the largest loss,\n$$\left\|V^{\hat{\pi}}_{\mathcal{M}}(s) - V^{\pi^{*}}_{\mathcal{M}}(s)\right\|_{\infty} \le 2\gamma\epsilon + 2\alpha + 2\gamma\alpha\left(\frac{V_{\max}-V_{\min}}{2}\right) + \gamma\left\|V^{\hat{\pi}}_{\mathcal{M}}(s) - V^{\pi^{*}}_{\mathcal{M}}(s)\right\|_{\infty}$$\nRe-arranging terms we get\n$$\left\|V^{\hat{\pi}}_{\mathcal{M}}(s) - V^{\pi^{*}}_{\mathcal{M}}(s)\right\|_{\infty} \le \frac{2\left(\gamma\epsilon + \alpha + \gamma\alpha\left(\frac{V_{\max}-V_{\min}}{2}\right)\right)}{1-\gamma} \quad (15)$$\nwhich concludes the proof." }, { "heading": "A.1.2 BOUND ON PERFORMANCE OF H-STEP GREEDY POLICY", "text": "Notation: For brevity, let us define the following macro:\n$$\langle V, \pi, \mathcal{M}\rangle_{H} = \mathbb{E}_{\mu^{\pi}_{\mathcal{M}}}\left[\sum_{i=0}^{H-1}\gamma^{i}c(s_i,a_i) + \gamma^{H}V(s_H)\right] \quad (16)$$\nwhich represents the expected cost achieved when executing policy $\pi$ on $\mathcal{M}$ using $V$ as the terminal cost. We can substitute different policies, terminal costs, and MDPs. For example, $\langle\hat{V},\hat{\pi},\hat{\mathcal{M}}\rangle_{H}$ is the expected cost obtained by running policy $\hat{\pi}$ on the simulator $\hat{\mathcal{M}}$ for $H$ steps with the approximate learned terminal value function $\hat{V}$.\nLemma A.1. For a given policy $\pi$, the optimal value function $V^{\pi^{*}}_{\mathcal{M}}$, and MDPs $\mathcal{M}$, $\hat{\mathcal{M}}$, the following performance difference holds:\n$$\left\|\langle V^{\pi^{*}}_{\mathcal{M}},\pi,\mathcal{M}\rangle_{H} - \langle V^{\pi^{*}}_{\mathcal{M}},\pi,\hat{\mathcal{M}}\rangle_{H}\right\|_{\infty} \le \gamma\left(\frac{1-\gamma^{H-1}}{1-\gamma}\right)\alpha H\left(\frac{c_{\max}-c_{\min}}{2}\right) + \gamma^{H}\alpha H\left(\frac{V_{\max}-V_{\min}}{2}\right) + \frac{1-\gamma^{H}}{1-\gamma}\alpha$$\nProof. We temporarily introduce a new MDP $\mathcal{M}'$ that has the same cost function as $\mathcal{M}$ but the transition function of $\hat{\mathcal{M}}$:\n$$\langle V^{\pi^{*}}_{\mathcal{M}},\pi,\mathcal{M}\rangle_{H} - \langle V^{\pi^{*}}_{\mathcal{M}},\pi,\hat{\mathcal{M}}\rangle_{H} = \left(\langle V^{\pi^{*}}_{\mathcal{M}},\pi,\mathcal{M}\rangle_{H} - \langle V^{\pi^{*}}_{\mathcal{M}},\pi,\mathcal{M}'\rangle_{H}\right) + \left(\langle V^{\pi^{*}}_{\mathcal{M}},\pi,\mathcal{M}'\rangle_{H} - \langle V^{\pi^{*}}_{\mathcal{M}},\pi,\hat{\mathcal{M}}\rangle_{H}\right) \quad (17)$$\nLet $\Delta P(s_0 \ldots s_H) = P(s_0 \ldots s_H) - \hat{P}(s_0 \ldots s_H)$ represent the difference in the distribution of states encountered by executing $\pi$ on $\mathcal{M}$ and $\hat{\mathcal{M}}$ respectively, starting from state $s_0$.
Expanding the RHS of (17):\n$$= \sum_{s_0,\ldots,s_H}\Delta P(s_0 \ldots s_H)\left(\sum_{i=0}^{H-1}\gamma^{i}c(s_i,a_i) + \gamma^{H}V^{\pi^{*}}_{\mathcal{M}}(s_H)\right) + \mathbb{E}_{\mu^{\pi}_{\hat{\mathcal{M}}}}\left[\sum_{i=0}^{H-1}\gamma^{i}\left(c(s_i,a_i) - \hat{c}(s_i,a_i)\right)\right] \quad (18)$$\nSince the first state $s_1$ is the same,\n$$= \sum_{s_1,\ldots,s_H}\Delta P(s_1 \ldots s_H)\left(\sum_{i=1}^{H-1}\gamma^{i}c(s_i,a_i) + \gamma^{H}V^{\pi^{*}}_{\mathcal{M}}(s_H)\right) + \mathbb{E}_{\mu^{\pi}_{\hat{\mathcal{M}}}}\left[\sum_{i=0}^{H-1}\gamma^{i}(c(s_i,a_i) - \hat{c}(s_i,a_i))\right]$$\n$$\le \left\|\sum_{s_1,\ldots,s_H}\Delta P(s_1 \ldots s_H)\left(\sum_{i=1}^{H-1}\gamma^{i}c(s_i,a_i) + \gamma^{H}V^{\pi^{*}}_{\mathcal{M}}(s_H)\right)\right\|_{\infty} + \frac{1-\gamma^{H}}{1-\gamma}\alpha \quad (19)$$\nwhere the inequality is obtained by applying the triangle inequality together with the upper bound on the cost-function error.\n$$\le \left\|\sum_{s_2,\ldots,s_{H+1}}\Delta P(s_2 \ldots s_{H+1})\right\|_{\infty}\sup\left(\sum_{i=1}^{H-1}\gamma^{i}c(s_i,a_i) + \gamma^{H}V^{\pi^{*}}_{\mathcal{M}}(s_H) - K\right) + \frac{1-\gamma^{H}}{1-\gamma}\alpha \quad (20)$$\nBy choosing $K = \sum_{i=1}^{H-1}\gamma^{i}(c_{\max}+c_{\min})/2 + \gamma^{H}(V_{\max}+V_{\min})/2$ we can ensure that the term inside the sup is upper-bounded by $\gamma(1-\gamma^{H-1})/(1-\gamma)\cdot(c_{\max}-c_{\min})/2 + \gamma^{H}(V_{\max}-V_{\min})/2$, giving\n$$\le \gamma\left(\frac{1-\gamma^{H-1}}{1-\gamma}\right)\alpha H\left(\frac{c_{\max}-c_{\min}}{2}\right) + \gamma^{H}\alpha H\left(\frac{V_{\max}-V_{\min}}{2}\right) + \frac{1-\gamma^{H}}{1-\gamma}\alpha \quad (21)$$\nThe above lemma builds on similar results in Kakade et al. (2003); Abbeel & Ng (2005); Ross & Bagnell (2012).\nWe are now ready to prove our main theorem, i.e. the performance bound of an MPC policy that uses an approximate model and an approximate value function.\nProof of Theorem 3.1.\nProof. Since $\hat{\pi}$ is the greedy policy when using $\hat{\mathcal{M}}$ and $\hat{V}$,\n$$\langle\hat{V},\hat{\pi},\hat{\mathcal{M}}\rangle_{H} \le \langle\hat{V},\pi^{*},\hat{\mathcal{M}}\rangle_{H}$$\n$$\langle V^{\pi^{*}}_{\mathcal{M}},\hat{\pi},\hat{\mathcal{M}}\rangle_{H} \le \langle V^{\pi^{*}}_{\mathcal{M}},\pi^{*},\hat{\mathcal{M}}\rangle_{H} + 2\gamma^{H}\epsilon \quad \text{(using }\|\hat{V} - V^{\pi^{*}}_{\mathcal{M}}\|_{\infty} \le \epsilon\text{)} \quad (22)$$\nAlso, we have\n$$\langle V^{\pi^{*}}_{\mathcal{M}},\hat{\pi},\mathcal{M}\rangle_{H} - \langle V^{\pi^{*}}_{\mathcal{M}},\pi^{*},\mathcal{M}\rangle_{H} = \left(\langle V^{\pi^{*}}_{\mathcal{M}},\hat{\pi},\mathcal{M}\rangle_{H} - \langle V^{\pi^{*}}_{\mathcal{M}},\hat{\pi},\hat{\mathcal{M}}\rangle_{H}\right) - \left(\langle V^{\pi^{*}}_{\mathcal{M}},\pi^{*},\mathcal{M}\rangle_{H} - \langle V^{\pi^{*}}_{\mathcal{M}},\pi^{*},\hat{\mathcal{M}}\rangle_{H}\right) + \left(\langle V^{\pi^{*}}_{\mathcal{M}},\hat{\pi},\hat{\mathcal{M}}\rangle_{H} - \langle V^{\pi^{*}}_{\mathcal{M}},\pi^{*},\hat{\mathcal{M}}\rangle_{H}\right) \quad (23)$$\nThe first two terms can be bounded using Lemma A.1 and the third term using Eq. (22) to get\n$$\langle V^{\pi^{*}}_{\mathcal{M}},\hat{\pi},\mathcal{M}\rangle_{H} - \langle V^{\pi^{*}}_{\mathcal{M}},\pi^{*},\mathcal{M}\rangle_{H} \le 2\left(\gamma\frac{1-\gamma^{H-1}}{1-\gamma}\alpha H\left(\frac{c_{\max}-c_{\min}}{2}\right) + \gamma^{H}\alpha H\left(\frac{V_{\max}-V_{\min}}{2}\right) + \frac{1-\gamma^{H}}{1-\gamma}\alpha + \gamma^{H}\epsilon\right) \quad (24)$$\nNow, let $s$ be the state with the maximum loss $V^{\hat{\pi}}_{\mathcal{M}}(s) - V^{\pi^{*}}_{\mathcal{M}}(s)$:\n$$V^{\hat{\pi}}_{\mathcal{M}}(s) - V^{\pi^{*}}_{\mathcal{M}}(s) = \langle V^{\hat{\pi}}_{\mathcal{M}},\hat{\pi},\mathcal{M}\rangle_{H} - \langle V^{\pi^{*}}_{\mathcal{M}},\pi^{*},\mathcal{M}\rangle_{H} = \left(\langle V^{\hat{\pi}}_{\mathcal{M}},\hat{\pi},\mathcal{M}\rangle_{H} - \langle V^{\pi^{*}}_{\mathcal{M}},\hat{\pi},\mathcal{M}\rangle_{H}\right) + \left(\langle V^{\pi^{*}}_{\mathcal{M}},\hat{\pi},\mathcal{M}\rangle_{H} - \langle V^{\pi^{*}}_{\mathcal{M}},\pi^{*},\mathcal{M}\rangle_{H}\right)$$\n$$= \gamma^{H}\,\mathbb{E}\left[V^{\hat{\pi}}_{\mathcal{M}}(s_H) - V^{\pi^{*}}_{\mathcal{M}}(s_H)\right] + \left(\langle V^{\pi^{*}}_{\mathcal{M}},\hat{\pi},\mathcal{M}\rangle_{H} - \langle V^{\pi^{*}}_{\mathcal{M}},\pi^{*},\mathcal{M}\rangle_{H}\right)$$\n$$\le \gamma^{H}\left(V^{\hat{\pi}}_{\mathcal{M}}(s) - V^{\pi^{*}}_{\mathcal{M}}(s)\right) + 2\left(\frac{\gamma(1-\gamma^{H-1})}{1-\gamma}\alpha H\left(\frac{c_{\max}-c_{\min}}{2}\right) + \gamma^{H}\alpha H\left(\frac{V_{\max}-V_{\min}}{2}\right) + \frac{1-\gamma^{H}}{1-\gamma}\alpha + \gamma^{H}\epsilon\right) \quad (25)$$\nwhere the last inequality comes from applying Eq. (24) and the fact that $s$ is the state with the maximum loss. The final expression follows from simple algebraic manipulation." }, { "heading": "A.2 EXPERIMENT DETAILS", "text": "" }, { "heading": "A.2.1 TASK DETAILS", "text": "" }, { "heading": "CARTPOLESWINGUP", "text": "• Reward function: $x^{2}_{cart} + \theta^{2}_{pole} + 0.01\,v_{cart} + 0.01\,\omega_{pole} + 0.01\,a^{2}$\n• Observation: $[x_{cart}, \theta_{pole}, v_{cart}, \omega_{pole}]$ (4-dim)\nSAWYERPEGINSERTION: We simulate sensor noise by placing a simulated position sensor at the target location in the MuJoCo physics engine that adds Gaussian noise with $\sigma = 10$ cm to the observed 3D position vector. MPPI uses a deterministic model that does not take sensor noise into account for planning.
Every episode lasts for 75 steps with a timestep of 0.02 seconds between steps.\n• Reward function: $-1.0\,\|x_{ee} - x_{target}\|_{1} - 5.0\,\|x_{ee} - x_{target}\|_{2} + 5\cdot\mathbb{1}\left(\|x_{ee} - x_{target}\|_{2} < 0.06\right)$\n• Observation: $\left[q_{pos}, q_{vel}, x_{ee}, x_{target}, x_{ee} - x_{target}, \|x_{ee} - x_{target}\|_{1}, \|x_{ee} - x_{target}\|_{2}\right]$ (25-dim)\nAn episode is considered successful if the peg stays within the hole for at least 5 steps.\nINHANDMANIPULATION: This environment was used without modification from the accompanying codebase for Rajeswaran et al. (2018) and is available at https://bit.ly/3f6MNBP\n• Reward function: $-\|x_{obj} - x_{des}\|_{2} + z_{obj}^{T}z_{des} +$ bonus for proximity to the desired position and orientation, where $z_{obj}^{T}z_{des}$ is the dot product between the object axis and the target axis, measuring orientation similarity.\n• Observation: $\left[q_{pos}, x_{obj}, v_{obj}, z_{obj}, z_{des}, x_{obj} - x_{des}, z_{obj} - z_{des}\right]$ (45-dim)\nEvery episode lasts 75 steps with a timestep of 0.01 seconds between steps. An episode is considered successful if the orientation of the pen stays within a specified range of the desired orientation for at least 20 steps. Orientation similarity is measured by the dot product between the pen's current longitudinal axis and the desired axis, with a threshold of 0.95." }, { "heading": "A.2.2 LEARNING DETAILS", "text": "Validation: Validation is performed after every $N$ training episodes during training, for $N_{eval}$ episodes, using a fixed set of start states that the environment is reset to. We ensure that the same start states are sampled at every validation iteration by setting the seed value to a pre-defined validation seed, which is kept constant across different runs of the algorithm with different training seeds. This helps ensure consistency in evaluating different runs of the algorithm. For all our experiments we set $N = 40$ and $N_{eval} = 30$.\nMPQ(λ): For all tasks, we represent the Q-function using a 2-layer fully-connected neural network with 100 units in each layer and ReLU activation. We use ADAM (Kingma & Ba, 2014) for optimization with a learning rate of 0.001 and discount factor $\gamma = 0.99$. Further, the buffer size is 1500 for CARTPOLESWINGUP and 3000 for the others, with a batch size of 64 for all. We smoothly decay λ according to the following sublinear decay schedule:\n$$\lambda_{t} = \frac{\lambda_{0}}{1 + \kappa\sqrt{t}} \quad (26)$$\nwhere the decay rate $\kappa$ is calculated based on the desired final value of λ. For the batch size we performed a search over [16, 64] with a step size of 16, and the buffer size was chosen from {1500, 3000, 5000}. While the batch size was tuned for cartpole and then fixed for the remaining two environments, the buffer size was chosen independently for all three.\nProximal Policy Optimization (PPO): Both the policy and value functions are represented by feed-forward networks with 2 layers each, with 64 and 128 units for the policy and value networks respectively. All other parameters are left at their default values. The number of trajectories collected per iteration is set to correspond to the same number of samples collected between validation iterations for MPQ(λ). We collect 40 trajectories per iteration. Asymptotic performance is reported as the average of the last 10 validation iterations, after 500 training iterations in SAWYERPEGINSERTION and 2k in INHANDMANIPULATION.\nMPPI parameters: Table 1 shows the MPPI parameters used for the different experiments. In addition to the standard MPPI parameters, in certain cases we also use a step-size parameter as introduced by Wagener et al. (2019).
For INHANDMANIPULATION and SAWYERPEGINSERTION we also apply autoregressive filtering to the sampled MPPI trajectories to induce smoothness in the sampled actions, with tuned filter coefficients. This has been found useful in prior work (Summers et al., 2020; Lowrey et al., 2018) and was important for getting MPQ(λ) to work on high-dimensional control tasks. The temperature, initial covariance, and step-size parameters for MPPI were tuned using a grid search with true dynamics. Temperature and initial covariance were searched within the range [0.0, 1.0] and the step size within [0.5, 1.0], with a discretization of 0.05. The number of particles was searched over [40, 120] with a step size of 10, and the horizon was chosen from four values [16, 20, 32, 64]. The best-performing parameters were then chosen based on the average reward over 30 episodes with a fixed seed value to ensure reproducibility. The same parameters were then used in the case of biased dynamics and MPQ(λ), to clearly demonstrate that MPQ(λ) can overcome sources of error in the base MPC implementation." } ]
2021
BLENDING MPC & VALUE FUNCTION APPROXIMATION FOR EFFICIENT REINFORCEMENT LEARNING
SP:2227006cc52059641d5d7d2fca467d5e392bce65
[ "The authors leveraged and repurposed Noise Conditioned Score Network (NCSN) that was originally introduced by Song & Ermon (2019) for generative modeling to be used for detection out-of-distribution (OOD) images. The authors unfold the intuition and rationale behind score matching followed by the equivalence of denoising autoencoder (DAE) to derive NCSN as a score estimator and provide an analysis to demonstrate the value of multiscale score analysis. In an experimental analysis on SVHN and CIFAR datasets they demonstrate superiority of their method (MSMA) over previously reported findings in the literature using state-of-the-art models (ODIN, JEM, Likelihood Ratios) on OOD task." ]
We present a new methodology for detecting out-of-distribution (OOD) images by utilizing norms of the score estimates at multiple noise scales. A score is defined to be the gradient of the log density with respect to the input data. Our methodology is completely unsupervised and follows a straightforward training scheme. First, we train a deep network to estimate scores for L levels of noise. Once trained, we calculate the noisy score estimates for N in-distribution samples and take the L2 norms across the input dimensions (resulting in an N×L matrix). Then we train an auxiliary model (such as a Gaussian Mixture Model) to learn the in-distribution spatial regions in this L-dimensional space. This auxiliary model can now be used to identify points that reside outside the learned space. Despite its simplicity, our experiments show that this methodology significantly outperforms the state-of-the-art in detecting out-of-distribution images. For example, our method can effectively separate CIFAR-10 (inlier) and SVHN (OOD) images, a setting which has been previously shown to be difficult for deep likelihood models. We make our code and results publicly available on GitHub1.
[ { "affiliations": [], "name": "Ahsan Mahmood" }, { "affiliations": [], "name": "Junier Oliva" }, { "affiliations": [], "name": "Martin Styner" } ]
[ { "authors": [ "He" ], "title": "2016)) to classify between inliers and 1 year olds and tested", "venue": null, "year": 2016 }, { "authors": [ "REFERENCES Dario Amodei", "Chris Olah", "Jacob Steinhardt", "Paul Christiano", "John Schulman", "Dan Mané" ], "title": "Concrete problems in ai safety", "venue": "arXiv preprint arXiv:1606.06565,", "year": 2016 }, { "authors": [ "BJ Casey", "Tariq Cannonier", "May I Conley", "Alexandra O Cohen", "Deanna M Barch", "Mary M Heitzeg", "Mary E Soules", "Theresa Teslovich", "Danielle V Dellarco", "Hugh Garavan" ], "title": "The adolescent brain cognitive development (abcd) study: imaging acquisition across 21 sites", "venue": "Developmental cognitive neuroscience,", "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Terrance DeVries", "Graham W. Taylor" ], "title": "Learning confidence for out-of-distribution detection in neural networks, 2018", "venue": null, "year": 2018 }, { "authors": [ "John H Gilmore", "Benjamin Langworthy", "Jessica B Girault", "Jason Fine", "Shaili C Jha", "Sun Hyung Kim", "Emil Cornea", "Martin Styner" ], "title": "Individual Variation of Human Cortical Structure Is Established in the First Year of Life. Biological psychiatry", "venue": "Cognitive neuroscience and neuroimaging,", "year": 2020 }, { "authors": [ "Will Grathwohl", "Kuan-Chieh Wang", "Jorn-Henrik Jacobsen", "D. Duvenaud", "Mohammad Norouzi", "Kevin Swersky" ], "title": "Your classifier is secretly an energy based model and you should treat it like", "venue": "one. ArXiv,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "In 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings. International Conference on Learning Representations, ICLR,", "year": 2017 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "arXiv preprint arXiv:1812.04606,", "year": 2018 }, { "authors": [ "Aapo Hyvärinen" ], "title": "Estimation of non-normalized statistical models by score matching", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Kimin Lee", "Honglak Lee", "Kibok Lee", "Jinwoo Shin" ], "title": "Training confidence-calibrated classifiers for detecting out-of-distribution samples", "venue": "In 6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings. International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Shiyu Liang", "Yixuan Li", "R. 
Srikant" ], "title": "Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks", "venue": "6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings,", "year": 2018 }, { "authors": [ "Antonio Matosevic", "Eliisabet Hein", "Francesco Nuzzo" ], "title": "Reproducibility challenge–generative modeling by estimating gradients of the data distribution", "venue": null, "year": 2019 }, { "authors": [ "Eric Nalisnick", "Akihiro Matsukawa", "Yee Whye Teh", "Dilan Gorur", "Balaji Lakshminarayanan" ], "title": "Do deep generative models know what they don’t", "venue": null, "year": 2018 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": null, "year": 2011 }, { "authors": [ "Anh Nguyen", "Jason Yosinski", "Jeff Clune" ], "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "George Papamakarios" ], "title": "Neural density estimation and likelihood-free inference", "venue": "arXiv preprint arXiv:1910.13233,", "year": 2019 }, { "authors": [ "George Papamakarios", "Theo Pavlakou", "Iain Murray" ], "title": "Masked autoregressive flow for density estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "George Papamakarios", "Eric Nalisnick", "Danilo Jimenez Rezende", "Shakir Mohamed", "Balaji Lakshminarayanan" ], "title": "Normalizing flows for probabilistic modeling and inference", "venue": null, "year": 1912 }, { "authors": [ "J. Ren", "Peter J. Liu", "E. Fertig", "Jasper Snoek", "Ryan Poplin", "Mark A. DePristo", "Joshua V. Dillon", "Balaji Lakshminarayanan" ], "title": "Likelihood ratios for out-of-distribution detection", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Thomas Schlegl", "Philipp Seeböck", "Sebastian M Waldstein", "Georg Langs", "Ursula" ], "title": "SchmidtErfurth. f-anogan: Fast unsupervised anomaly detection with generative adversarial networks", "venue": "Medical image analysis,", "year": 2019 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Generative modeling by estimating gradients of the data distribution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Improved techniques for training score-based generative models", "venue": "arXiv preprint arXiv:2006.09011,", "year": 2020 }, { "authors": [ "Rebecca L Stephens", "Benjamin W Langworthy", "Sarah J Short", "Jessica B Girault", "Martin Styner", "John H Gilmore" ], "title": "White Matter Development from Birth to 6 Years of Age: A Longitudinal Study. 
Cerebral cortex", "venue": null, "year": 1991 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Pascal Vincent" ], "title": "A connection between score matching and denoising autoencoders", "venue": "Neural computation,", "year": 2011 }, { "authors": [ "Pingmei Xu", "Krista A Ehinger", "Yinda Zhang", "Adam Finkelstein", "Sanjeev R Kulkarni", "Jianxiong Xiao" ], "title": "Turkergaze: Crowdsourcing saliency with webcam based eye tracking", "venue": "arXiv preprint arXiv:1504.06755,", "year": 2015 }, { "authors": [ "Fisher Yu", "Ari Seff", "Yinda Zhang", "Shuran Song", "Thomas Funkhouser", "Jianxiong Xiao" ], "title": "Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop", "venue": "arXiv preprint arXiv:1506.03365,", "year": 2015 }, { "authors": [ "Shuangfei Zhai", "Yu Cheng", "W. Lu", "Zhongfei Zhang" ], "title": "Deep structured energy based models for anomaly detection", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Zongwei Zhou", "Vatsal Sodha", "Md Mahfuzur Rahman Siddiquee", "Ruibin Feng", "Nima Tajbakhsh", "Michael B Gotway", "Jianming Liang" ], "title": "Models genesis: Generic autodidactic models for 3d medical image analysis", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Modern neural networks do not tend to generalize well to out-of-distribution samples. This phenomenon has been observed in both classifier networks (Hendrycks & Gimpel (2017); Nguyen et al. (2015); Szegedy et al. (2013)) and deep likelihood models (Nalisnick et al. (2018); Hendrycks et al. (2018); Ren et al. (2019)). This certainly has implications for AI safety (Amodei et al. (2016)), as models need to be aware of uncertainty when presented with unseen examples. Moreover, an out-ofdistribution detector can be applied as an anomaly detector. Ultimately, our research is motivated by the need for a sensitive outlier detector that can be used in a medical setting. Particularly, we want to identify atypical morphometry in early brain development. This requires a method that is generalizable to highly variable, high resolution, unlabeled real-world data while being sensitive enough to detect an unspecified, heterogeneous set of atypicalities. To that end, we propose multiscale score matching to effectively detect out-of-distribution samples.\nHyvärinen (2005) introduced score matching as a method to learn the parameters of a nonnormalized probability density model, where a score is defined as the gradient of the log density with respect to the data. Conceptually, a score is a vector field that points in the direction where the log density grows the most. The authors mention the possibility of matching scores via a nonparametric model but circumvent this by using gradients of the score estimate itself. However, Vincent (2011) later showed that the objective function of a denoising autoencoder (DAE) is equivalent to matching the score of a non-parametric Parzen density estimator of the data. Thus, DAEs provide a methodology for learning score estimates via the objective:\n1 2 Ex̃∼qσ(x̃|x)pdata(x)[||sθ(x̃)−∇x̃ log qσ(x̃|x)||] (1)\nHere sθ(x) is the score network being trained to estimate the true score ∇x log pdata(x), and qσ(x̃) = ∫ qσ(x̃|x)pdata(x)dx. It should be noted that the score of the estimator only matches\n1https://github.com/ahsanMah/msma\nthe true score when the noise perturbation is minimal i.e qσ(x̃) ≈ pdata(x). Recently, Song & Ermon (2019) employed multiple noise levels to develop a deep generative model based on score matching, called Noise Conditioned Score Network (NCSN). Let {σi}Li=1 be a positive geometric sequence that satisfies σ1σ2 = ... = σL−1 σL\n> 1. NCSN is a conditional network, sθ(x, σ), trained to jointly estimate scores for various levels of noise σi such that ∀σ ∈ {σi}Li=1 : sθ(x, σ) ≈ ∇x log qσ(x). In practice, the network is explicitly provided a one-hot vector denoting the noise level used to perturb the data. The network is then trained via a denoising score matching loss. They choose their noise distribution to beN (x̃|x, σ2I); therefore∇x̃ log qσ(x̃|x) = −(x̃−x/σ2). Thus the objective function is:\n1\nL L∑ i=1 λ(σi)\n[ 1\n2 Ex̃∼qσi (x̃|x)pdata(x) [∣∣∣∣∣∣∣∣sθ(x̃, σi) + ( x̃− xσ2i ) ∣∣∣∣∣∣∣∣2 2 ]] (2)\nSong & Ermon (2019) set λ(σi) = σ2 after empirically observing that ||σsθ(x, σ)||2 ∝ 1. We similarly scaled our score norms for all our experiments. Our work directly utilizes the training objective proposed by Song & Ermon (2019) i.e. we use an NCSN as our score estimator. However, we use the score outputs for out-of-distribution (OOD) detection rather than for generative modeling. 
We demonstrate how the space of multiscale score estimates can separate in-distribution samples from outliers, outperforming state-of-the-art methods. We also apply our method on real-world medical imaging data of brain MRI scans." }, { "heading": "2 MULTISCALE SCORE ANALYSIS", "text": "Consider taking the L2-norm of the score function: $\lVert s(x)\rVert = \lVert\nabla_x \log p(x)\rVert = \left\lVert \frac{\nabla_x p(x)}{p(x)} \right\rVert$. Since the data density term appears in the denominator, a high likelihood will correspond to a low norm. Since out-of-distribution samples should have a low likelihood with respect to the in-distribution log density (i.e. p(x) is small), we can expect them to have high score norms. However, if these outlier points reside in "flat" regions with very small gradients (e.g. in a small local mode), then their score norms can be low despite the point belonging to a low density region. This is our first indicator informing us that a true score norm may not be sufficient for detecting outliers. We empirically validate our intuition by considering score estimates for a relatively simple toy dataset: FashionMNIST. Following the denoising score matching objective (Equation 2), we can obtain multiple estimates of the true score by using different noise distributions qσ(x̃|x). Like Song & Ermon (2019), we choose the noise distributions to be zero-centered Gaussians scaled according to σi. Recall that the scores for samples perturbed by the lowest σ noise should be closest to the true score. Our analyses show that this alone was inadequate at separating inliers from OOD samples.\nWe trained a score network sFM(x, σL) on FashionMNIST and used it to estimate scores of FashionMNIST (x ∼ DFM), MNIST (x ∼ DM) and CIFAR-10 (x ∼ DC) test sets. Figure 1a shows the distribution of the score norms corresponding to the lowest noise level used. Note that CIFAR-10 samples are appropriately given a high score by the model. However, the model is unable to distinguish FashionMNIST from MNIST, giving MNIST roughly the same scores as in-distribution samples. Though far from ideal, this result is still a considerable improvement on existing likelihood methods, which have been shown to assign higher likelihoods to OOD samples (Nalisnick et al. (2018)). Our next line of inquiry was to utilize multiple noise levels. That is, instead of simply considering sFM(x, σL), we analyze the L-dimensional space [||sFM(x, σ1)||, ..., ||sFM(x, σL)||] for x ∼ {DFM, DM, DC}. Our observations showed that datasets did tend to be separable in the L-dimensional space of score norms. Figure 1b visualizes the UMAP embeddings of scores calculated via a network trained to estimate L = 10 scales of σs, with the lowest σ being the same as the one in Figure 1a." }, { "heading": "2.1 SCALES AND NEIGHBORHOODS", "text": "To our knowledge, multiscale score analysis has not been explored in the context of OOD detection. In this section, we present an analysis in order to give an intuition for why multiple scales can be beneficial. Consider the toy distribution shown in Figure 2. We have three regions of interest: an inlier region with high density centered around x = 10, an outlier region with low density around x = 30, and a second outlier region with a local mode centered around x = 50. Recall that adding Gaussian noise to a distribution is equivalent to convolving it with the Gaussian distribution. This not only allows us to visualize perturbations of our toy distribution, but also analytically compute the score estimates given any σi.
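To make this concrete, the analytic score of the smoothed density can be written down for a toy Gaussian mixture; the mode locations and weights below are illustrative assumptions, not the exact parameters of Figure 2:

```python
import numpy as np
from scipy.stats import norm

# Toy 1-D mixture: a heavy inlier mode near x = 10 and a small local mode
# near x = 50; convolving with N(0, sigma^2) simply widens each component.
means   = np.array([10.0, 50.0])
stds    = np.array([2.0, 2.0])
weights = np.array([0.95, 0.05])

def smoothed_score(x, sigma):
    """d/dx log p_sigma(x) for the Gaussian-smoothed mixture."""
    var = stds**2 + sigma**2
    comp = weights * norm.pdf(x[:, None], means, np.sqrt(var))  # (n, 2)
    p = comp.sum(axis=1)
    dp = (comp * (means - x[:, None]) / var).sum(axis=1)
    return dp / p

x = np.array([10.0, 30.0, 50.0])   # inlier, low-density and local-mode points
for sigma in [0.1, 10.0, 20.0]:
    print(sigma, np.abs(smoothed_score(x, sigma)))
```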
Initially, with no perturbation, both a point in the low-density region and one very close to (or at) the local mode will have small gradients. As we perturb the samples, we smooth the original density, causing it to widen. The relative change in density at each point is dependent on neighboring modes. A large scale perturbation will proportionally take a larger neighborhood into account at each point of the convolution. Therefore, at a sufficiently large scale, nearby outlier points gain context from in-distribution modes. This results in an increased gradient signal in the direction of inliers.\nFigure 3 plots the score norms of samples generated from the original density along with markers indicating our key regions. Note how even a small scale perturbation (σL = 0.1) is enough to bias the density of the Low-Density outliers towards the nearby in-distribution mode. A medium scale (σM = 10) Gaussian perturbation is still not wide enough to reach the inlier region from the Local-Mode outlier densities, causing them to simply smooth away into flat nothingness. It is only after we perform a large scale (σH = 20) perturbation that the in-distribution mode gets taken into account, resulting in a higher gradient norm. This analysis allows us to intuit that larger noise levels account for a larger neighborhood context. We surmise that given a sufficiently large scale, we can capture gradient signals from distant outliers.\nIt is imperative to note that one large scale is not guaranteed to work for all outliers. Consider outliers close to inlier modes, such as the samples between Low-Density outliers and Inliers in Figure 2. σH results in an overlap in the score distribution of inliers and Low-Density outliers. This makes it difficult to differentiate the aforementioned "in-between" outliers from the in-distribution samples. However, this large scale was necessary to get a big enough neighborhood context in order to capture the more distant Local-Mode outliers. Therefore, we postulate that a range of scales is necessary for separating outliers. Admittedly, selecting this range according to the dataset is not a trivial problem. In a very recent work, Song & Ermon (2020) outlined some techniques for selecting {σi}Li=1 for NCSNs from the perspective of generative modelling. Perhaps there is a similar analog for OOD detection. We leave such analyses for future work and use the default range for NCSN in all our experiments. However, we observed that our defaults are surprisingly generalizable, as evidenced by the fact that all our experiments in Section 5 were performed with the same scale range. In Section 5.5, we further analyze how varying the scale range affects downstream accuracy and observe that our defaults already provide near optimal performance." }, { "heading": "2.2 PROPOSED TRAINING SCHEME", "text": "In this work, we propound the inclusion of all noisy score estimates for the task of separating in- and out-of-distribution points, allowing for a Multiscale Score Matching Analysis (MSMA). Concretely, given L noise levels, we calculate the L2-norms of per-sample scores for each level, resulting in an L-dimensional vector for each input sample (see the sketch below). Motivated by our observations, we posit that in-distribution data points occupy distinct and dense regions in the L-dimensional score space. The cluster assumption states that decision boundaries should not pass through high-density regions, but instead lie in low-density regions.
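As a concrete sketch of the feature construction just described, the following computes the per-sample, per-scale score norms; score_model is a stand-in for a trained NCSN and is not part of the original text:

```python
import numpy as np

def msma_features(score_model, X, sigmas):
    """Build the (N, L) matrix of score norms: one L2-norm per noise scale."""
    feats = []
    for sigma in sigmas:
        scores = score_model(X, sigma)                # (N, D) score estimates
        feats.append(np.linalg.norm(scores, axis=1))  # (N,) norms at this scale
    return np.stack(feats, axis=1)                    # (N, L) feature matrix
```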
The cluster assumption thus implies that any auxiliary method trained to learn in-distribution regions should be able to identify OOD data points that reside outside the learned space. Thus, we propose a two-step unsupervised training scheme. First, we train an NCSN model sIN(x, σL) to estimate scores for inlier samples, given {σi}Li=1 levels of noise. Once trained, we calculate all L noisy score estimates for the N training samples and take the L2-norms across the input dimensions: $[\lVert s_{IN}(X,\sigma_1)\rVert_2^2, \ldots, \lVert s_{IN}(X,\sigma_L)\rVert_2^2]$. This results in an N×L matrix. We now train an auxiliary model (such as a Gaussian Mixture Model) on this matrix to learn the spatial regions of in-distribution samples in the L-dimensional space." }, { "heading": "3 LEARNING CONCENTRATION IN THE SCORE SPACE", "text": "We posit that learning the "density" of the inlier data in the L-dimensional score (norm) space is sufficient for detecting out-of-distribution samples. The term "density" can be interpreted in a myriad of ways. We primarily focus on models that fall under three related but distinct notions of denseness: spatial clustering, probability density, and nearest (inlier) neighbor graphs. All three allow us to threshold the associated metric to best separate OOD samples.\nSpatial clustering is conceptually the closest to our canonical understanding of denseness: points are tightly packed under some metric (usually Euclidean distance). Ideally, OOD data should not occupy the same cluster as the inliers. We train Gaussian Mixture Models (GMMs) to learn clusters in the inlier data. GMMs work under the assumption that the data is composed of k components whose shapes can be described by a (multivariate) Gaussian distribution. Thus, for a given datum, we can calculate the joint probability of it belonging to any of the k components.\nProbability density estimation techniques aim to learn the underlying probability density function pdata(x) which describes the population. Normalizing flows are a family of flexible methods that can learn tractable density functions (Papamakarios et al. (2019)). They transform complex distributions into a simpler one (such as a Gaussian) through a series of invertible, differentiable mappings. The simpler base distribution can then be used to infer the density of a given sample. We use Masked Autoregressive Flows introduced by Papamakarios et al. (2017), which allow us to use neural networks as the transformation functions. Once trained, we can use the likelihood of the inliers to determine a threshold below which samples will be considered outliers.\nFinally, we consider building k-nearest neighbor (k-NN) graphs to allow for yet another thresholding metric. Conceptually, the idea is to sort all samples according to the distances to their k-closest (inlier) neighbors. Presumably, samples from the same distribution as the inliers will have very short distances to training data points. Despite its simplicity, this method works surprisingly well. Practically, k-NN distances can be computed quickly by using efficient data structures (such as KD Trees)." }, { "heading": "4 RELATED WORK", "text": "Hendrycks & Gimpel (2017) should be commended for creating an OOD baseline and establishing an experimental test-bed which has served as a template for all OOD work since. Their proposed method was thresholding the softmax probabilities of a well-trained classifier. Their results have since been beaten by more recent work. Liang et al.
(2017) propose ODIN as a post-hoc method that utilizes a pretrained network to reliably separate OOD samples from the inlier distribution. They achieve this via i) perturbing the input image in the gradient direction of the highest (inlier) softmax probability and ii) scaling the temperature of the softmax outputs of the network for the best OOD separation. They follow the setting from Hendrycks & Gimpel (2017) and show very promising results for the time. However, ODIN heavily depends on careful tuning of its hyperparameters.\nDeVries & Taylor (2018) train their networks to predict confidence estimates in addition to softmax probabilities, which can then be used to threshold outliers. They show significant improvements over Hendrycks & Gimpel (2017) and some improvements over ODIN. Another concurrent work by Lee et al. (2018) jointly trained a GAN alongside the classifier network to generate 'realistic' OOD examples, requiring an additional OOD set during training time. The final trained network is also unable to generalize to other unseen datasets. It is important to note that our method is trained completely unsupervised while the baselines are not, potentially giving them additional information about the idiosyncrasies of the inlier distribution.\nRen et al. (2019) proposed to jointly train deep likelihood models alongside a "background" likelihood model that learns the population level background statistics, taking the ratio of the two resulting likelihoods to produce a "contrastive score". They saw very good results for grayscale images (FashionMNIST vs MNIST) and saw a considerable improvement in separating CIFAR and SVHN compared to Nalisnick et al. (2018). Some prior work has indeed used gradients of the log likelihoods for OOD, but it does not frame them in the context of score matching. Grathwohl et al. (2020) posit that a discriminative model can be reinterpreted as a joint energy (negative log-likelihood) based model (JEM). One of their evaluation experiments used the energy norms (which they dub 'Approximate Mass JEM') for OOD detection. Even though they saw improvements over only using log-likelihoods, their reported AUCs did not beat ODIN or other competitors. Peculiarly, they also observed that for tractable likelihood models, scores were anti-correlated with the model's likelihood and that neither were reliable for OOD detection. Zhai et al. (2016) also used energy (negative log probability) gradient norms, but their experiments were limited to intra-dataset anomalies. To our knowledge, no prior work has explicitly used score matching for OOD detection." }, { "heading": "5 EXPERIMENTS", "text": "In this section we demonstrate MSMA's potential as an effective OOD detector. We first train an NCSN model as our score estimator, and then an auxiliary model on the score estimates of the training set. Following Liang et al. (2017) and DeVries & Taylor (2018), we use CIFAR-10 and SVHN as our "inlier" datasets alongside a collection of natural images as "outlier" datasets. We retrieve the natural image datasets from ODIN's publicly available GitHub repo2. This helps maintain a fair comparison (e.g. it ensures we test on the same random crops as ODIN). We will denote Liang et al. (2017) as ODIN and DeVries & Taylor (2018) as Confidence in our tables. We also distinguish between CIFAR and SVHN and compare our results to the state-of-the-art. Additionally, we report our results for FashionMNIST vs MNIST in Section A.6."
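Before detailing the datasets, a minimal end-to-end sketch of the two-step scheme may be helpful; the feature matrices below are random placeholders for the score norms of Section 2.2, and the 5-component GMM is an illustrative choice rather than the tuned model:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 10))        # placeholder (N, L) inlier norms
test_in = rng.normal(size=(500, 10))             # placeholder in-distribution test
test_out = rng.normal(loc=3.0, size=(500, 10))   # placeholder OOD test

gmm = GaussianMixture(n_components=5, random_state=0).fit(train_feats)
ll_in, ll_out = gmm.score_samples(test_in), gmm.score_samples(test_out)

tau = np.percentile(ll_in, 5)           # keep 95% of inliers (TPR = 95%)
fpr_at_95tpr = np.mean(ll_out >= tau)   # outliers wrongly accepted as inliers
print(fpr_at_95tpr)
```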
}, { "heading": "5.1 DATASETS AND EVALUATION METRICS", "text": "We consider CIFAR-10 (Krizhevsky et al. (2009)) and SVHN (Netzer et al. (2011)) as our inlier datasets. For out-of-distribution datasets, we choose the same as Liang et al. (2017): TinyImageNet3, LSUN (Yu et al. (2015)), iSUN (Xu et al. (2015)). Similar to DeVries & Taylor (2018), in our main experiments we report only resized versions of LSUN and TinyImageNet. We also leave out the synthetic Uniform and Gaussian noise samples from our main experiments as we performed extremely well in all of those experiments. We refer the reader to A.4 for the full report including all datasets. Lastly following DeVries & Taylor (2018), we consider All Images: a combination of all non-synthetic OOD datasets outlined above (including cropped versions). Note that this collection effectively requires a single threshold for all datasets, thus arguably reflecting a real world out-of-distribution setting. To measure thresholding performance we use the metrics established by previous baselines (Hendrycks & Gimpel (2017), Liang et al. (2017)). FPR at 95% TPR is the False Positive Rate (FPR) when the True Positive Rate (TPR) is 95%. Detection Error is the minimum possible misclassification probability over all thresholds. AUROC is Area Under the ROC Curve. AUPR is Area Under the Precision Recall Curve. More details are given in A.3." }, { "heading": "5.2 COMPARISON AGAINST PREVIOUS OOD METHODS", "text": "We compare our work against Confidence Thresholding (DeVries & Taylor (2018)) and ODIN (Liang et al. (2017)). For all experiments we report the results for the in-distribution testset vs the out-of-distribution datasets. Note that All Images* is a version of All Images where both ODIN and Confidence Thresholding perform input preprocessing. Particularly, they perturb the samples in the direction of the softmax gradient of the classifier: x̃ = x − sign(−∇xlogSy(x;T )). They then perform a grid search over ranges, selecting the value that achieves best separation on 1000 samples randomly held for each out-of-distribution set. ODIN performs an additional search over T ranges, while Confidence Thresholding uses a default of T = 1000. We do not perform any such input modification. Note that ODIN uses input thresholding for individual OOD datsets as well, while Confidence Thresholding does not. Finally, for the sake of brevity we only report FPR (95% TPR) and AUROC. All other metric comparisons are available in the appendix (A.4).\n2https://github.com/facebookresearch/odin 3https://tiny-imagenet.herokuapp.com/" }, { "heading": "5.3 SEPARATING CIFAR-10 FROM SVHN", "text": "Since this setting (CIFAR-10 as in-distribution and SVHN as out-of-distribution) is not considered in the testbed used by ODIN or Confidence Thresholding, we consider these results separately and evaluate them in the context of likelihood methods. This experiment has recently gained attention following Nalisnick et al. (2018) showing how deep generative models are particularly inept at separating high dimensional complex datasets such as these two. We describe our results for each auxiliary model in Table 2. Here we note that all three methods definitively outperform the previous state of the art (see Table 3), with KD-Trees preforming the best. Likelihood Ratios Ren et al. (2019) JEM (Grathwohl et al. (2020)) are two unsupervised methods that have tackled this problem and have reported current state-of-the-art results. Table 3 summarizes the results that were reported by these papers. 
Both report AUROCs, with Ren et al. (2019) additionally reporting AUPR(In) and FPR at 80% TPR. Since each method proposes a different detection function, we also provide them for reference." }, { "heading": "5.4 AGE BASED OOD FROM BRAIN MRI SCANS", "text": "In this section we report our method's performance on a real-world dataset. Here the task is to detect brain Magnetic Resonance Images (MRIs) from pediatric subjects at an age (1 - 6 years) that is younger than the inlier data (9 - 11 years of age). We expect visible differences in image contrast and local brain morphometry between the brains of a toddler and an adolescent. As a child grows, their brain matures and the corresponding scans appear more like the prototypical adult brain. This provides an interesting gradation of samples being considered out-of-distribution with respect to age. We employ 3500 high resolution T1-weighted MR images obtained through the NIH large scale ABCD study (Casey et al. (2018)), which represent data from the general adolescent population (9-11 years of age). This implies that our in-distribution dataset will have high variation. After standard preprocessing, we extracted for each dataset three mid-axial slices and resized them to be 90x110 pixels, resulting in ∼11k axial images (10k training, 1k testing). For our outlier data, we employ MRI datasets of children aged 1, 2, 4 and 6 years (500 each) from the UNC EBDS database (Stephens et al. (2020); Gilmore et al. (2020)). Our methodology was effectively able to identify younger age groups as out-of-distribution. Table 4 reports the results for GMMs trained for this task. As expected, the separation performance decreases as age increases. Note that we kept the same hyperparameters for our auxiliary methods as in the previous experiments despite this being a higher resolution scenario. We also note that our Flow model and KD Tree perform equally as well and refer the reader to A.5. Following Liang et al. (2017) and DeVries & Taylor (2018), we compare our results to the baseline methodology proposed by Hendrycks & Gimpel (2017). We trained a standard ResNet-50 (He et al. (2016)) to classify between inliers and 1 year olds and tested its performance on unseen outliers. We observed that MSMA generally outperforms the baseline. Note that the baseline classifier is biased towards the out-of-distribution detection task since it was trained to separate 1 year olds, whereas MSMA is trained completely unsupervised. Lastly, in Section A.7 we also report results of f-AnoGAN (Schlegl et al. (2019)) applied on this out-of-distribution task." }, { "heading": "5.5 HYPERPARAMETER ANALYSIS", "text": "We performed experiments to analyze how sensitive MSMA is to two main hyperparameters utilized in NCSN: the number of noise levels (L) and the largest scale used (σH). We opted out of varying the smallest noise scale and kept it the same across experiments (σL = 0.01). Our rationale is that noise perturbations to images at such a small scale are imperceptible to the human eye and are adequate to give an estimate of the true score. For all experiments, we train on CIFAR-10 and evaluate on the All Images dataset described in Section 5.1 and plot AUROC in Figure 5. In order to reduce GPU memory usage and computation time, we had to reduce the batch size from 128 to 64, which is why we see a performance dip from the main experiment in Section 5.2.
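For reference, the noise schedules varied in this analysis can be generated directly; this is a minimal sketch assuming the geometric schedule described below, not code from the original paper:

```python
import numpy as np

sigma_L = 0.01  # smallest scale, fixed across all runs

# Varying the number of scales L (largest scale fixed at the default 1.0):
for L in [1, 3, 10, 15, 20]:
    print(L, np.geomspace(1.0, sigma_L, num=L))

# Varying the largest scale sigma_H (number of scales fixed at L = 10):
for sigma_H in [0.5, 1.0, 2.0, 10.0]:
    print(sigma_H, np.geomspace(sigma_H, sigma_L, num=10))
```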
We keep the same hyperparameters for our auxiliary models as in the main experiment.\nFor the number of scales L, we test the values 1, 3, 10, 15, and 20, with σH fixed at the default value (σH = 1). Recall that we follow the original NCSN schema of Song & Ermon (2019), which utilizes a geometric sequence of sigma scales from σH to σL with L steps (endpoint inclusive). Thus, changing L changes the intermediate noise scales. Our results in Figure 5a show that MSMA is optimal near the default (L = 10). Increasing L does not significantly vary the performance, while small values such as L = 3 are not adequate at providing enough intermediate scales. Note that L = 1 is the degenerate case where only the largest noise scale is used. This highlights the need for a range of scales as argued in Section 2.1 and empirically shows that simply using one large scale is not enough. Figure 5b plots the effect of varying the largest noise scale σH. We test the values 0.5, 1.0, 2.0 and 10, with the default number of scales L = 10. Again, we observe that our default σH = 1 performs the best and we do not notice any improvement from varying it. Considering how images are rescaled to [0,1] before they are passed to the network, we posit that σH = 1.0 already introduces large noise and increasing it further seems to degrade results to varying degrees.\nLastly, we would like to emphasize that all our main out-of-distribution experiments in Section 5 were performed with the same default hyperparameters, without any tuning. Despite this disadvantage, MSMA still outperforms its competitors ODIN (Liang et al. (2017)), Confidence Thresholding (DeVries & Taylor (2018)), and Likelihood Ratios (Ren et al. (2019)), all of which need some fine-tuning of hyperparameters. Recognize that tuning requires a priori knowledge of the type of out-of-distribution samples the user expects. From the analysis in this section and our main experiment, we can confidently advocate the use of our defaults as they seem to generalize well across datasets and do not require such a priori knowledge. Admittedly, if the form of anomalies is known at training time then it would indeed be possible to tune MSMA's hyperparameters to fit a particular need. However, we leave such an analysis for future work as it starts to encroach on the domain of anomaly detection, whereas the work presented in this paper is mainly concerned with introducing MSMA as a performant, easy to implement, and generalizable out-of-distribution detector." }, { "heading": "6 DISCUSSION AND CONCLUSION", "text": "We introduced a methodology based on multiscale score matching and showed that it outperformed state-of-the-art methods, with minimal hyperparameter tuning. Our methodology is easy to implement, completely unsupervised and generalizable to many OOD tasks. Even though we only reported two metrics in the main comparison, we emphasize that we outperform the previous state-of-the-art in every metric for all benchmark experiments. Next, it is noteworthy that in our real-world experiment, the brain MR images are unlabeled. Since our model is trained completely unsupervised, we impose very few inductive biases pertaining to the data.\nNote that we plan to apply our method to high resolution 3D MRIs, which can reach up to 256x256x256. Consider how likelihood methods such as Glow (Kingma & Dhariwal (2018)) already struggle to identify out-of-distribution samples for lower resolution 2D images (Nalisnick et al. (2018)).
It is unclear whether they would perform adequately at this task when they are scaled up to the dimensionality of 3D MRIs. Furthermore, there is evidence that deep likelihood models are difficult to train in such high resolution regimes (Papamakarios (2019)), especially given low sample sizes. Our model's objective function is based on denoising (autoencoding) and can be easily extended to any network architecture. For example, our preliminary results show that MSMA was able to achieve similar performance to that in Section 5.4 when applied to the full 3D MRI counterparts by utilizing a publicly available 3D U-Net (Zhou et al. (2019)). We plan to explore these high resolution scenarios as our next future direction.\nOur excellent results highlight the possibility of using MSMA as a fast, general purpose anomaly detector which could be used for tasks ranging from detection of medical pathologies to data cleansing and fault detection. From an application perspective, we plan to apply this methodology to the task of detecting images of atypically maturing children from a database of typical inliers. Lastly, our observations have uncovered a peculiar phenomenon exhibited by multiscale score estimates, warranting a closer look to understand the theoretical underpinnings of the relationship between low density points and their gradient estimates." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank the UNC Early Brain Development Study (PI John Gilmore) and the Adolescent Brain Cognitive Development (ABCD) Study (https://abcdstudy.org), held in the NIMH Data Archive (NDA). The ABCD data repository grows and changes over time. The ABCD data used in this report came from DOI 10.15154/1503209 (v2.1). Other funding was provided through NIH grants HD053000, MH070890, MH111944, HD079124, EB021391, and MH115046." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DATASET DETAILS", "text": "All the datasets considered are described below.\nCIFAR-10: The CIFAR-10 dataset (Krizhevsky et al. (2009)) consists of 60,000 32x32 colour images in 10 classes, such as horse, automobile, cat etc. There are 50,000 training images and 10,000 test images.\nSVHN: The Street View House Numbers (SVHN) dataset (Netzer et al. (2011)) consists of 32x32 images depicting house numbers ranging from 0 through 9. We use the official splits: 73,257 digits for training, 26,032 digits for testing.\nTinyImageNet: This dataset (https://tiny-imagenet.herokuapp.com/) is a subset of the ImageNet dataset (Deng et al. (2009)). The test set has 10,000 images with 50 samples for each of the 200 classes. Liang et al. (2017) created two 32x32 pixel versions of this dataset: TinyImageNet (crop), which contains random crops of the original test samples, and TinyImageNet (resize), which contains downscaled test images.\nLSUN: The Large-scale Scene UNderstanding (LSUN) dataset produced by Yu et al. (2015) consists of 10,000 test images belonging to one of 10 different scene classes, such as bedroom, kitchen etc. Liang et al. (2017) created two 32x32 pixel versions of this dataset as well: a randomly cropped LSUN (crop) and a downsampled LSUN (resize).\niSUN: This dataset was procured by Xu et al. (2015) and is a subsample of the SUN image database.
We use 32x32 pixel downscaled versions of the original 8,925 test images.\nUniform: This dataset consists of 10,000 synthetically generated 32x32 RGB images produced by sampling each pixel from an i.i.d. uniform distribution in the range [0,1].\nGaussian: These are 10,000 synthetic 32x32 RGB images where each pixel is sampled from an i.i.d. Gaussian distribution centered at 0.5 with a standard deviation of 1. The pixel values are clipped to be within [0, 1] to keep the values within the expected range of (normalized) images.\nAll Images: Following DeVries & Taylor (2018), this dataset is a combination of all non-synthetic OOD datasets outlined above: TinyImageNet (crop), TinyImageNet (resize), LSUN (crop), LSUN (resize) and iSUN. Therefore this contains 48,925 images from a variety of data distributions. Note that this collection effectively requires a single threshold for all datasets, thus arguably reflecting a real world out-of-distribution setting." }, { "heading": "A.2 EXPERIMENTAL DETAILS", "text": "We use the NCSN model provided by Song & Ermon (2019). In particular, we use the Tensorflow implementation provided through a NeurIPS reproducibility challenge submission, Matosevic et al. (2019). The model architecture used is a RefineNet with 128 filters. The batch size is also fixed to 128. We train for 200k iterations using the Adam optimizer. Following Song & Ermon (2019), we use L = 10 standard deviations for our Gaussian noise perturbation such that {σi}Li=1 is a geometric sequence with σ1 = 1 and σ10 = 0.01. We use the same hyperparameters for training on both CIFAR and SVHN. For our experiment on brain MRI images (Section 5.4), we trained our model with 64 filters and a batch size of 32 due to memory constraints caused by the higher resolution images. For the baseline network in Section 5.4, we used the standard ResNet50 available in Keras and added a global average pooling layer followed by a dense layer of 1024 nodes followed by a softmax layer with binary output. We trained for 10 epochs using Adam with default learning rate and momentum. For unseen classes, we follow Hendrycks et al. (2018) and pick the maximum prediction probability as our outlier score. Since the network was trained to classify one year olds, we report the 1 year metrics using the logits corresponding to the outlier class only in lieu of taking the max across both classes.\nWe train our auxiliary models on the same training set that was used to train the NCSN model, thereby circumventing the need for a separate held out tuning set. For our Gaussian Mixture Models, we mean normalize the data and perform a grid search over the number of components (ranging from 2 to 20), using 10-fold cross validation. Our normalizing flow model is constructed with a MAF using two hidden layers with 128 units each, and a Standard Normal as the base distribution. It is trained for 1000 epochs with a batch size of 128. Finally, for our nearest neighbor model, we train a KD Tree to store (k=5)-nearest neighbor distances of the in-distribution training set. We keep the same hyperparameter settings for all experiments.\nNote that in Section 5.2, as ODIN and Confidence Thresholding were trained with a number of different architectures, we report the ones that performed the best for each respective method. Specifically, we use the results of VGG13 for Confidence Thresholding and DenseNet-BC for ODIN."
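A minimal sketch of the GMM selection procedure described above, using scikit-learn; the feature matrix is a random placeholder, and relying on GridSearchCV with GaussianMixture's built-in log-likelihood score is an assumption about the setup, not code from the paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import GridSearchCV

X = np.random.default_rng(0).normal(size=(500, 10))  # placeholder (N, L) features
X = X - X.mean(axis=0)                               # mean normalization

# 10-fold CV grid search over the number of mixture components (2 to 20);
# the default score for GaussianMixture is the average log-likelihood.
search = GridSearchCV(GaussianMixture(), {"n_components": range(2, 21)}, cv=10)
search.fit(X)
print(search.best_params_)
```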
}, { "heading": "A.3 EVALUATION METRIC DETAILS", "text": "To measure thresholding performance we use the metrics established by previous baselines (Hendrycks & Gimpel (2017), Liang et al. (2017)). These include:\nFPR at 95% TPR: This is the False Positive Rate (FPR) when the True Positive Rate (TPR) is 95%. This metric can be interpreted as the probability of misclassifying an outlier sample to be indistribution when the TPR is as high as 95%. Let TP, FP, TN, and FN represent true positives, false positives, true negatives and false negatives respectively. FPR=FP/(FP+TN), TPR=TP/(FN+TP).\nDetection Error: This measures the minimum possible misclassification probability over all thresholds. Practically this can be calculated as minδ 0.5(1 − TPR) + 0.5FPR, where it is assumed that we have an equal probability of seeing both positive and negative examples in the test set.\nAUROC: This measures area under (AU) the Receiver Operating Curve (ROC) which plots the relationship between FPR and TPR. It is commonly interpreted as the probability of a positive sample (in-distribution) having a higher score than a negative sample (out-of-distribution). It is a threshold independent, summary metric.\nAUPR: Area Under the Precision Recall Curve (AUPR) is another threshold independent metric that considers the PR curve, which plots Precision(= TP/(TP+FP) ) versus Recall(= TP/(TP+FN) ). AUPR-In and AUPR-Out consider the in-distribution samples and out-of-distribution samples as the positive class, respectively. This helps take mismatch in sample sizes into account." }, { "heading": "A.4 COMPLETE RESULTS FOR EXPERIMENTS IN SECTION5.2", "text": "" }, { "heading": "A.5 PERFORMANCE ON BRAIN MRI", "text": "" }, { "heading": "A.6 SEPARATING FASHION-MNIST FROM MNIST", "text": "Table 14 summarizes the results for our FashionMNIST (inlier) and MNIST outlier experiments. We report the same metrics as the original We observe that our results (with default hyperparameters) are not as good as their colored image counter parts. However, we are able to beat ODIN and raw likelihoods. While it may be possible to tune the hyperprameters like Likelihood Ratios, we leave that analysis for future work. Furthermore, we tried to run Likelihood Ratios using the provided code but were unable to reproduce the results presented in the original paper. Thus, we compare against their reported metrics for completeness but cannot make conclusions about their validity." }, { "heading": "A.7 PERFORMANCE OF F-ANOGAN ON BRAIN MRI", "text": "We also compared our methodology to f-AnoGAN due to its promising results as an anomaly detector and its popularity in the medical community. We use the hyperparameters suggested in the original paper and trained till convergence. Our results show that f-AnoGAN does not outperform MSMA in any experiment, and is unable to match the classifier baseline reported in Section 5.4. However, we acknowledge that it is a fast method, both in terms of training and inference, and may be useful for some cases where pixel-wise anomalies are required (a current limitation of MSMA)." } ]
2021
MULTISCALE SCORE MATCHING FOR OUT-OF-DISTRIBUTION DETECTION
SP:9f5792697be57f9be662cebfb28e46f123d96682
[ "This paper aims at proposing Dynamic Recurrent Network to understand the underlying system properties of RNNs. By first showing five basic linear transfer functions in dynamic systems theory, the paper formulates DYRNN units. To solve the increasing number of layers issue, they concatenate inputs and intermediate results before passing into an FC layer. It is interesting to see how adjusting" ]
While dynamic systems can be modeled as sequence-to-sequence tasks by deep learning using different network architectures like DNN, CNN, RNNs or neural ODEs, the resulting models often provide poor understanding of the underlying system properties. We propose a new recurrent network architecture, the Dynamic Recurrent Network (DYRNN), where the computation function is based on the discrete difference equations of basic linear system transfer functions known from dynamic system identification. This results in a more explainable model, since the learnt weights can provide insight on a system’s time dependent behaviour. It also introduces the sequences’ sampling rate as an additional model parameter, which can be leveraged, for example, for time series data augmentation and model robustness checks. The network is trained using traditional gradient descent optimization and can be used in combination with other state of the art neural network layers. We show that our new layer type yields results comparable to or better than other recurrent layer types on several system identification tasks.
[]
[ { "authors": [ "Ricky T.Q. Chen", "Yulia Rubanova", "Jesse Bettencourt", "David Duvenaud" ], "title": "Neural Ordinary Differential Equations, 2018", "venue": "URL http://arxiv.org/pdf/1806.07366v5", "year": 2018 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation: GRU", "venue": "URL http://arxiv.org/pdf/1406", "year": 2014 }, { "authors": [ "Otto Föllinger", "Ulrich Konigorski", "Boris Lohmann", "Günter Roppenecker", "Ansgar Trächtler" ], "title": "Regelungstechnik: Einführung in die Methoden und ihre Anwendung", "venue": "Lehrbuch Studium. VDE Verlag GmbH, Berlin and Offenbach,", "year": 2016 }, { "authors": [ "S. Hochreiter", "J. Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Rolf Isermann", "Marco Münchhof" ], "title": "Identification of Dynamic Systems", "venue": "ISBN 978-3-540-78878-2", "year": 2011 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization, 2014. URL http://arxiv.org/pdf/1412.6980v9", "venue": null, "year": 2014 }, { "authors": [ "Viktoriya Krakovna", "Finale Doshi-Velez" ], "title": "Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models, 2016", "venue": "URL http://arxiv.org/pdf/1611", "year": 2016 }, { "authors": [ "Paul Tucker", "Vincent Vanhoucke", "Vijay Vasudevan", "Fernanda Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: Large-Scale Machine", "venue": "Learning on Heterogeneous Systems,", "year": 2015 }, { "authors": [ "M. Raissi", "P. Perdikaris", "G.E. Karniadakis" ], "title": "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations", "venue": "Journal of Computational Physics,", "year": 2199 }, { "authors": [ "Yu Wang" ], "title": "A new concept using LSTM Neural Networks for dynamic system identification", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Dynamic systems occur in many different areas of life (Isermann & Münchhof (2011)). From biology, engineering, medicine to economics and more: Often, if a system changes its state based on a external input, this system can be viewed as a dynamic system. Dynamic system identification is the process of modelling the system’s properties. Such models can be used, for example, for anomaly detection, controller design or outcome prediction. For linear systems, this identification task is already well understood and state of the art methods exist.\nHowever, if a system exhibits non-linear behaviour, for example slip-stick-effects due to mechanical friction, the applicability of these methods is limited. In this case different approaches implemented in the state of the art range from white-box to black-box models. Generally, increasing system complexity raises the need for more powerful and often less understandable model architectures in order to produce satisfactory results: White box (based on differential equations or numerical simulations of the physical system components), black box systems (like Gaussian processes, deep neural networks, Support Vector Machines) and grey box models, which often employ a mix of linear and non-linear building blocks.\nOne example of a tool used in engineering are Hammerstein-Wiener models which are a combination of linear and (prior known) non-linear equations (shown in Figure 1). The linear model parameters are determined based on the training data. The non-linear behaviour of models is modeled using lookup tables or user defined non-linear functions.\nIn this work we present a new type of recurrent neural network layer called the Dynamic Recurrent Neural Network (DYRNN). It is designed for data based modelling of dynamic systems in a sequence-to-sequence manner based on input (x(t)) and output (y(t)) data. With it, we intend to bridge the gap between dynamic systems theory and recurrent neural networks. The layer’s internal computation is based on elemental transfer blocks from linear system identification. By combining it with non-linear neural networks, a Hammerstein-Wiener style model is emulated. This way, the\nmodel can offer additional knowledge about the examined system’s internal properties. Furthermore, while the model is trained on sampled data of one sampling rate it can be applied to data of the same system at a different sampling rate. This can be used to check the robustness of the model or to save time during training. We show that our network produces results which are better than or comparable to other recurrent networks (RNN, LSTM, GRU) on three different problem datasets. Since the layer can be implemented to be compatible to current deep learning frameworks, it can be combined with state of the art neural network layers (like convolutional or fully connected layers) and training techniques." }, { "heading": "2 RELATED WORK", "text": "Dynamic system identification can be viewed as a sequence-to-sequence task of the modelling of a systems’ output based on certain inputs. Isermann & Münchhof (2011), for example, list several different tools like ARIMA processes for linear systems and multiple neural network architectures for non-linear systems. Examples for the latter are locally recurrent locally feedforward networks (LRGF), Multi Layer Perceptrons (MLP) and Radial Basis Function (RBF) networks of different types of dynamics. 
These model structures are generalized, however, and as we will show, further theoretical background on linear systems theory could be leveraged.\nGenerally, deep learning offers multiple neural network layer types that can be employed when dealing with sequence-to-sequence problems, like fully connected (FC) networks, convolutional networks (CNN) or recurrent networks. Recurrent networks are also known as sequential models (like RNN, LSTM by Hochreiter & Schmidhuber (1997) and GRU by Cho et al. (2014)) and have been used successfully for text based sequence-to-sequence problems like machine translation or text processing. Wang (2017) demonstrates a concept of LSTM for dynamic system identification by using several parallel LSTM layers which predict the system's behaviour based on its input and prior predictions and their derivatives. A different approach to modelling dynamic systems is neural ordinary differential equations (ODEs) by Chen et al. (2018). These networks learn dy/dt of a function f with y(t) = f(x(t)), and the resulting ODE model is used with a numerical integrator/solver (like Runge-Kutta) to compute y(t). This has the advantage of a varying sampling step size which is determined by the solver, but these methods are agnostic of dynamic systems theory knowledge. Similarly, Raissi et al. (2019) use deep learning to learn Partial Differential Equations (PDE) of physical systems in an FC model combined with a numerical integrator. Furthermore, since the evaluation of ODE/PDE models is done using a numerical integrator, the model is difficult to apply in combination with other neural network layers like, for example, convolutional or recurrent layers. In terms of the sampling frequency of the measurement data, recurrent network architectures can only be trained on one specific data frequency, and do not provide the functionality to generalize to other sampling rates of the same system. In such a case one would have to resample the new data to the frequency of the training set.\nExplainability approaches for sequential models in text processing deduce which parts of the sentences are relevant for the model's prediction based on the activation of the internal gates (as shown by e.g. Krakovna & Doshi-Velez (2016)). Interpretability of RNN/LSTM or GRU models for continuous measurement data has not been explored yet to our knowledge." }, { "heading": "3 DYNAMIC RECURRENT NETWORK", "text": "The complexity of the modelling of dynamic systems does not only result from the potential non-linearity, but also from the fact that the model has to keep track of the system's current and past states in order to predict the output based on new input. We intend to model a dynamic system in a sequence-to-sequence task, which is why we chose a recurrent network architecture. Recurrent neural networks can be seen as units which iterate over a given sequence and predict an output. During this computation, a hidden state is computed which is leveraged for the prediction of the next time step. Our network differs from other state of the art recurrent networks in its computation function, which is derived from linear dynamic systems theory.\nIn the following we explain the theoretical background used in this work. Then we describe the structure of our DYRNN. Finally, we show different advantages that result from this structure, like training and prediction at different signal sampling rates and interpretability of the models."
}, { "heading": "3.1 LAYER STRUCTURE", "text": "The background knowledge in this section is covered by Föllinger et al. (2016) and Yarlagadda (2010). In dynamic systems theory a linear system with the external input u(t) and resulting system output y(t) is expressed as the differential equation\ny(t) + a1 · ẏ(t) + . . .+ an · y(n)(t) = b0 · u(t) + b1 · u̇(t) + . . .+ bn · u(n)(t). (1)\nTherefore, a linear system acts as transformation of the input u(t) based to a convolution (∗) with the system’s linear transfer function g(t) to produce the system output y(t) with\ny(t) = u(t) ∗ g(t) = ∫ u(τ)g(t− τ)dτ. (2)\nThe transfer function g(t) is used in engineering for, amongst others, controller design, system stability analysis or frequency response estimation. Larger, more complicated transfer functions can be seen as several basic functions which are interconnected in a circuit type fashion in parallel or in series (see Figure 3).\nDynamic systems theory identifies five basic linear transfer functions from which all linear dynamic systems can be modeled, which we use in the construction of the DYRNN: P, I, D, PT1 and PD. For further information and and visualization see Appendix Section A). They have the following functionalities:\n• P: Proportional gain of the input, implemented as a multiplication with a constant • I: Integrating component, meaning the step-wise summation of an input over time. • D: Differential component acting as a high-pass filter\n• PT1: Proportional multiplication of input with time delay, used to model e.g. voltage in capacitors in RC circuits. This function also acts as a low-pass filter\n• PD: Proportional increase with a differential component\nIn the following equations K stands for a constant with influence on the output amplitude, while T is a time constant which influences the speed of the system’s reaction. K and T act as the trainable weights in a DYRNN layer. For usage in a recurrent layer, these differential equations are discretized with – in our case – first degree forward Euler. This is common in many engineering applications, and means replacing ẏ(t) with\nẏ(t) = y(k)− y(k − 1)\n∆t . (3)\nThis results in discrete recurrence equations, with the sample number k and the time distance between samples ∆t. The respective equations of the basic transfer functions are as follows:\np(k) = KP · x(k) (4)\ni(k) = i(k − 1) + ∆t KI · x(k) (5)\nd(k) = KD ∆t · (x(k)− x(k − 1)) (6)\npt1(k) = pt1(k − 1) + (KPT1 · x(k)− pt1(k − 1)) · ∆t\n∆t+ TPT1 (7)\npd(k) = KPD · (x(k) + TPD ∆t · (x(k)− x(k − 1))), (8)\nwith the input x(k) and all K,T > 0.\nThe equations above are implemented as the computation function of a recurrent network layer as described in Appendix Section A.1. K and T in the equations become trainable weights of the layer, while the hidden state consists of x(k − 1), i(k − 1) and pt(k − 1). In the following we refer to these internal computations as subcomponents. We explore two different network variants in our work: The DYRNN5 with all five known subcomponents and the DYRNN3 with just P, PD and PT1. The reason for this is, that a D subcomponent can be approximated by PD and I by a PT. Since integrators can also cause model instabilites, we explore both variants in our experiments.\nWe formulate one unit’s computations based on the number of input channels nic and the number of outputs per subcomponent type noc. 
The number of output channels per layer, nlayer, amounts to (subcomponent count) · noc, i.e. 3 · noc for DYRNN3 and 5 · noc for DYRNN5 (see Figure 4a).\nThe fact that DYRNN can be implemented with an interface compatible with other recurrent networks allows the modelling of a system's properties by training in a sequence-to-sequence fashion using out of the box backpropagation through time, gradient descent and optimizer functions as implemented in state of the art frameworks like Pytorch (by Paszke et al. (2019)) and Tensorflow (by Martín Abadi et al. (2015)).\nSince each input of the layer is connected to five basic components, stacking DYRNN layers results in an increasing number of output channels, as shown in Figure 4b. In our experiments, we achieved good results with two cascading DYRNN layers followed by a per-time step linear FC layer with one neuron. This final layer can be enhanced using non-linear activation functions and more layers/neurons, which would result in a similar structure to Hammerstein-Wiener models for the modelling of static non-linearities in the data. In case of noisy input data, for example due to sensor noise, additional CNN layers could be used as well." }, { "heading": "3.2 GENERALIZATION FOR DIFFERENT SIGNAL SAMPLING RATES", "text": "The new parameter ∆t means that a model can be used on datasets with different sampling rates by adjusting the ∆t when interacting with the model. This can be leveraged as a new means to perform time series data augmentation. Training on different rates and testing on another sampling rate can also provide insight about a model's robustness. Additionally, training time can be reduced by subsampling the dataset during training, and using the original sampling rate while inferencing for greater prediction accuracy (see Figure 5)." }, { "heading": "3.3 INTERPRETABILITY OF RESULTING MODELS", "text": "Based on our layer, it is possible to interpret the significance of a subcomponent towards the final dataset as well as the properties of its influence. While the DYRNN layers learn the time-dependent system behaviour, a per-timestep fully connected network selects the relevant dynamics of the model. The higher the FC's learnt weight connected to a specific subunit, the more significant it is for the modelling of the dynamic system (see Figure 6). As an example, assume a network of one DYRNN5 layer and a fully connected (FC) layer as shown in Figure 6. Since the subcomponents P, I and PT1 are weighted with 0.5, 1.5 and 0.8, as opposed to the D and PD with 0.1 and 0.2, the modeled system mostly displays P, I and PT1 properties.\nAdditionally, we can analyse the model by evaluating its equation in the Laplace and the Fourier domain. This means replacing the discrete subcomponents with their respective Laplace formulas as described in the Appendix Section A, while keeping the structure of the network as well as the trained values for K and T. This yields a new equation depending on the parameter s, which for the example network above results in:\n$$Y(s) = U(s)\left(0.5 + \frac{1.5 K_I}{s} + 0.1 s K_D + \frac{0.8 K_{PT}}{1 + s T_{PT}} + 0.2 K_{PD}(1 + s T_{PD})\right) \quad (9)$$\n$$Y(s) = U(s)\,\frac{0.1 s^3 K_D T_{PT} + s^2(0.1 K_D + \ldots) + s(1.5 K_I T_{PT} + \ldots) + 1.5 K_I}{1.0\, s^2 T_{PT} + 1.0\, s} \quad (10)$$
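The symbolic reduction from Equation (9) to Equation (10) can be reproduced with sympy, for instance; this is a sketch using the example's numeric FC weights, not the paper's own tooling code:

```python
import sympy as sp

s = sp.symbols('s')
K_I, K_D, K_PT, T_PT, K_PD, T_PD = sp.symbols(
    'K_I K_D K_PT T_PT K_PD T_PD', positive=True)

# Example network of Figure 6: FC weights times the subcomponents' Laplace blocks.
G = (0.5 + 1.5 * K_I / s + 0.1 * s * K_D
     + 0.8 * K_PT / (1 + s * T_PT) + 0.2 * K_PD * (1 + s * T_PD))

print(sp.together(G))   # single rational function, as in Eq. (10)
```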
The transfer function G(s) can be examined for system stability by replacing s with iω (ω = 2π ∗ f , with f as the signal frequency) yields the Fourier from of the transfer function. This can be used for frequency response and root locus analysis (Föllinger et al. (2016)), as we show in our experiments.\nThe transformation of G(s) for our example network into the time domain yields the following differential equation:\ny′′(t)TPT + y ′(t) = u′′′(t)0.1KDTPT + u ′′(t)(0.1KD + · · · ) + u′(t)(1.5KITPT + · · · ) + u(t)1.5KI (11)\nTransforming larger networks – for example the one used in our experiments – works along the same lines. Such extracted differential equations can further be leveraged to analyze the model." }, { "heading": "4 EXPERIMENTS", "text": "We evaluated our network on three different datasets. The first is an electrical RC-circuit simulated using Matlab Simulink, as shown in the Appendix in Figure 13. The other two datasets are from the Database for Identification of Systems ”DaISy”, namely the Heating System De Moor B.L.R. (b) and the Thermic Resistance De Moor B.L.R. (a)) datasets. In the following we will refer to the datasets as Circuit, Heating and Thermic. In these experiments we focus on learning the correct dynamics independent from the initial value of y(t). Therefore, we subtract the mean time offset of the prediction sequence towards the label before the final evaluation with Mean Squared Error.\nThe first goal of the experiments is to evaluate our models (DYRNN3 and DYRNN5) compared to RNN, LSTM and GRU as implemented in Tensorflow. The network architecture is shown in Figure 7. The first part of the network is built in a cascaded fashion of two recurrent layers which are concatenated with the original input. Its results are passed to a linear fully-connected layer with one unit. The number of units in the other recurrent layers were chosen to result in a similar parameter distribution to DYRNN5, as shown in Section A.2. For experiment a similar count of parameters in the recurrent part of the network and the same layer constellations were used (see Table 4,5 and 6). Due to the design of the DYRNN5 network, its FC part contains more parameters than the other networks.\nAdditionally, a DYRNN5 model was trained for different sampling rates for the Circuit dataset to examine its ability to generalize to other sampling rates. For this, the dataset was subsampled at different frequencies using linear interpolation. The constellation of sampling rates in these three runs are described in Table 1. All networks were trained with the Adam Optimizer by Kingma & Ba (2014) and the Huber Loss function with standard parameters defined in Tensorflow. The amount of training iterations was 3000 epochs. Evaluation of results is done on a separate testing dataset.\nAfter training the DYRNN models are transformed into Laplace transfer functions G(s) using Sympy (Meurer et al. (2017)) and three different views of this function are generated: G(s) over s, the frequency response |G(iω)| over ω and the root locus curve ofG(iω) over ω to compare the different model dynamics per dataset." }, { "heading": "5 RESULTS", "text": "After 10 runs each, the different recurrent networks’ performance on the testing set is shown in Figure 8. Since an activation function in the FC layer was omitted, it amounts to a linear combination of all channels from the cascaded part. Therefore, it can be assumed that a better performance of the DYRNN stems from its computation functions. 
In this case, a different network configuration and more parameters may enable the other layer types to achieve better results.

Section A.3 in the Appendix shows the best and worst runs on the testing set for all datasets and layer types, as well as the transfer functions learnt by DYRNN3, rounded to two significant digits, in Section A.4. The transfer function analysis plots in Figures 10, 11 and 12 were generated for the runs which performed best on the testing datasets.

On the Circuit dataset, both DYRNN5 and DYRNN3 perform consistently better than each of the other recurrent networks. This is to be expected, since an idealized capacitor acts as a PT1 element, so the DYRNN is predestined for this modelling task. Nonetheless, all layer types perform well on the dataset if training was successful. The RNN network, for example, was unable to fit the data in a meaningful way on some runs (Appendix Figure 14). Figure 10 shows that G(s) for DYRNN5 has more poles. This is to be expected because of the structure of the network, but can be an indicator for instability in some frequency regions. G(s) of DYRNN3 follows the given plots better. The frequency responses of both DYRNN networks are similar to the actual system, but the root locus curve of DYRNN5 differs significantly from the actual one: its starting point is far outside of the plotted region.

On the Heating dataset, the GRU network performed slightly better than DYRNN3 and DYRNN5. It is noticeable that, while the dataset is visually similar to a PT1 element, the DYRNN3 model displays a frequency response plot and root locus curve in Figure 11 b) and c) similar to those of the actual PT1 system in the Circuit dataset (Figure 10).

On the Thermic dataset, DYRNN3 performed similarly to GRU. The dataset consists of two inputs and one output, yielding two transfer functions G1(s) and G2(s). These can be analysed separately, as shown in Figure 12 for DYRNN3. While G(s) over s is similar to the one in Circuit, the frequency response and root locus of both G1(s) and G2(s) are very different from Circuit and Heating.

In total, DYRNN5 performed worse than DYRNN3 on the non-simulated datasets. We assume that the reason for this is that Heating and Thermic do not have a strict zero level at the start and the end of the datasets' input and output, which might otherwise enforce a more stable model.

Our results concerning resampling to different sampling rates (Figure 9) show that training on 0.7 ∆t and predicting at 1.0 ∆t performs worse than the other two variants. From this, it can be concluded that the total amount of time covered by the training data is more important than training at different sampling rates for this dataset. It also shows that the model is able to extrapolate from each of the sampling rates to another. Our implementation, which allows a per-batch sampling rate during training, is computationally more expensive than the one which relies upon a fixed sampling rate, so the inexpensive version is a possible alternative for larger datasets.

We have shown a new type of recurrent layer designed for the modelling of dynamic systems. Based on this, we see several potential areas for future work. One point that was not part of this work is the in-depth analysis of the learnt transfer functions, and how the transfer function is to be interpreted in case of non-linear activation functions in the network. The plots shown of the transfer functions are accessible and interpretable mainly for engineers.
Another area of research would be how to make these results interpretable for scientists without an engineering background. Our experiments showed competitive results on three datasets of system identification. A different area of application can be model-based reinforcement learning tasks, since the layer's computation blocks are also commonly used in control engineering." }, { "heading": "7 CONCLUDING REMARKS", "text": "In this work we present a new recurrent network architecture which is based on dynamic systems theory. We show that the learnt system dynamics can produce models which extrapolate to different sampling rates in the data. Due to the specific meaning of the cells' computation functions, the learnt model can be leveraged to gain insight into the underlying system dynamics, and a differential equation model of the system can be extracted." }, { "heading": "A APPENDIX", "text": "BASE FUNCTIONS IN DYNAMICAL SYSTEMS THEORY

In dynamic systems theory, the basic elements are visualized using their response towards an input unit step function σ(t) starting at t = 0 with an amplitude of 1. In Table 3, we show a short summary of the blocks used in our network layer. Each block displays a specific behaviour, which is visualized by its response towards the unit step function.

A.1 IMPLEMENTATION DETAILS

Sequential models keep a hidden state while iterating over an input sequence, to encode information necessary for the computation of the next step's result. There are two ways to implement the model: one where the sampling rate is kept constant between batches, and one where the sampling rate can be set per batch. While the first version can be implemented using trivial matrix multiplication, the per-batch version is implemented as follows:

p(k) = u(k) ◦ |KP| (22)
i(k) = i(k−1) + (u(k) • (1/∆t)) ◦ (1/|KI|) (23)
d(k) = ((u(k) − u(k−1)) • (1/∆t)) ◦ |KD| (24)
pt1(k) = pt1(k−1) + (u(k) ◦ |KPT1| − pt1(k−1)) ∗ (1 / ((1/∆t) ◦ |TPT1| + 1)) (25)
pd(k) = (u(k) + ((u(k) − u(k−1)) • (1/∆t)) ∗ |TPD|) ◦ |KPD| (26)

With u(t)'s dimensionality of [batch × sample × input channels] and ∆t(t) as [batch × sample × 1] or ∆t as [batch × 1 × 1], the multiplications, expressed in Einstein summation notation (which can be implemented as described in e.g. https://www.tensorflow.org/api_docs/python/tf/einsum), are: ◦ as 'bi,ij→bij', • as 'bi,b→bi' and ∗ as 'bij,bij→bij', i.e. the Hadamard product operator. The dimensions of the trainable parameters result in

KP ∈ R^(ic×oc)
KI ∈ R^(ic×oc)
KD ∈ R^(ic×oc)
KPT1 ∈ R^(ic×oc), TPT1 ∈ R^(ic×oc)
KPD ∈ R^(ic×oc), TPD ∈ R^(ic)

given this setup. The weights of P, D, PT1 and PD were initialized with a random uniform distribution in the range [0.1, 0.2]. It's important to initialize I such that the first iteration does not become unstable. For datasets with around 500 elements, the initialization used in the experiments was 1/(10.0·∆t).
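As a concrete reading of Eqs. (22)–(26), the following is a minimal NumPy sketch of one per-batch DYRNN5 time step. It is our own illustration, not the authors' code: the function and parameter names are assumptions, we read the i(k) on the right-hand side of Eq. (23) as u(k), and the broadcasting of 1/∆t against |TPT1| in Eq. (25) follows the '•'/'◦' conventions above.

import numpy as np

def dyrnn5_step(u_k, u_prev, i_prev, pt1_prev, dt, P):
    # One DYRNN5 time step per Eqs. (22)-(26).
    # u_k, u_prev: [batch, ic]; i_prev, pt1_prev: [batch, ic, oc]; dt: [batch].
    inv_dt = 1.0 / dt
    du = np.einsum('bi,b->bi', u_k - u_prev, inv_dt)        # discrete derivative, the '•' product
    p = np.einsum('bi,ij->bij', u_k, np.abs(P['KP']))       # P,   Eq. (22), the 'o' product
    i = i_prev + np.einsum('bi,ij->bij',
                           np.einsum('bi,b->bi', u_k, inv_dt),
                           1.0 / np.abs(P['KI']))           # I,   Eq. (23)
    d = np.einsum('bi,ij->bij', du, np.abs(P['KD']))        # D,   Eq. (24)
    gain = 1.0 / (np.einsum('b,ij->bij', inv_dt, np.abs(P['TPT1'])) + 1.0)
    pt1 = pt1_prev + (np.einsum('bi,ij->bij', u_k, np.abs(P['KPT1'])) - pt1_prev) * gain  # PT1, Eq. (25)
    pd = np.einsum('bi,ij->bij', u_k + du * np.abs(P['TPD']), np.abs(P['KPD']))           # PD,  Eq. (26)
    out = np.concatenate([p, i, d, pt1, pd], axis=-1)       # 5 * oc output channels per input channel
    return out, i, pt1

Iterating this step over the sample axis, with i and pt1 carried as hidden state, gives the layer's sequence-to-sequence behaviour; the per-batch dt argument is what enables the sampling rate generalization of Section 3.2.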
A.2 FURTHER EXPERIMENT DOCUMENTATION

We evaluated our network on three different datasets. One is our own dataset from a Matlab (MATLAB (2018)) simulation, as shown in Figure 13. The input of our model is the input voltage of the circuit, while the output value is the voltage measured over the capacitor. The simulation's sampling rate is 0.005s. We simulated three time series for training, validation and testing with a length of 2 seconds each. The input consists of a square signal with randomly changing amplitudes from 0.5s to 1.5s. The remainder of the time, the input voltage is 0V.

The other two are the ”Heat flow density through a two layer wall” (De Moor B.L.R. (a)) (in short Thermic) and the ”Heating system” (De Moor B.L.R. (b)) (in short Heating) datasets from the DaISy system identification benchmarks:

• Heating System Benchmark: Prediction of halogen lamp temperature based on input voltage

• Two Walls Benchmark: Prediction of heat flow through two walls based on measurements before and between the walls

After training, the models were evaluated on a testing set split from the complete dataset. The DaISy datasets do not have a zero-level starting interval like our circuit dataset. For simplicity's sake, our chosen network architecture does not take the initial value of the output time series into account. Therefore, an offset could be seen in predictions on the testing set for all layer types. In a post-processing step, this offset was computed using the median difference between prediction and label data, and subtracted from the prediction prior to the final MSE evaluation.

The numbers of units and parameters for all experiments are shown in Tables 4, 5 and 6.

A.3 BEST AND WORST RESULTS ON DATASETS

Figures 14, 15 and 16 show the best and worst testing results of all 10 runs per network type.

A.4 RESULTING TRANSFER FUNCTIONS

Below, the transfer functions of the best runs are documented. The transformation to differential equations is trivial. Notice that, due to the rounding to two significant digits, the differential equations listed here are unlikely to yield accurate results. Additionally, simplification of these transfer functions is possible if poles and zeros of the function match, but this is outside of the scope of this work. The transfer functions of the best runs of DYRNN3 are as follows:

Circuit:

G(s) = (9.2E−10·s^6 − 3.2E−7·s^5 + 3.6E−5·s^4 − 0.0021·s^3 + 0.75·s^2 + 3.9·s + 5.0) / (1.6E−5·s^4 + 0.03·s^3 + 0.3·s^2 + 0.98·s + 1.0) (28)

Heating:

G(s) = (−2.9E−9·s^7 − 0.04·s^6 + 0.85·s^5 + 3.8·s^4 + 4.5·s^3 + 2.5·s^2 + 0.68·s + 0.073) / (3.6·s^5 + 14.0·s^4 + 14.0·s^3 + 6.2·s^2 + 1.3·s + 0.099) (29)

Thermic Resistance:

G(s) = G1(s) + G2(s) (30)
Y1(s) = X1(s) · G1(s) (31)
Y2(s) = X2(s) · G2(s) (32)

G1(s) = (−0.0025·s^6 + 0.12·s^5 − 2.1·s^4 + 31.0·s^3 + 40.0·s^2 + 7.9·s − 1.8) / (2.3·s^4 + 13.0·s^3 + 16.0·s^2 + 7.3·s + 1.0) (33)

G2(s) = (2.2E−8·s^6 + 0.0013·s^5 + 0.01·s^4 + 0.15·s^3 − 0.4·s^2 − 2.0·s − 1.5) / (7.6E−7·s^4 + 0.045·s^3 + 0.06·s^2 − 0.69·s − 1.0) (34)

This sum of transfer functions can be used to simulate y(t) by first simulating y1(t) and y2(t) separately and then summing the results." } ]
2020
RECURRENT NEURAL NETWORK ARCHITECTURE
SP:e2d78e6eba2bc0e6273a6ce65549866bc3a29fe7
[ "This paper considers the problem of learning how to control restless bandits. When all parameters of the system are known, Whittle index policy usually offers a good performance. The main contribution of the paper is to propose an algorithm, NeurWIN, that uses a neural network architecture to learn the Whittle indices. Most of the paper is devoted to the description and the derivation of the algorithm. At the end of the paper, the authors present four illustrations of how the algorithm works. The algorithm is compared to an index-based policy and an (old?) RL algorithm REINFORCE for the small systems. The learning behavior of NeurWIN is very good in all tested cases.", "The authors develop a reinforcement learning based method for computing the Whittle Index heuristic policy for solving Restless Bandit problems. This is a novel methodological contribution in the space of restless bandits. Several experimental results are provided demonstrating the good performance and general applicability of the method.", " Whittle index are used to construct a powerful heuristics for restless bandit problem. This paper proposes to estimate Whittle Index in discounted Restless bandit problem via a deep reinforcement learning algorithm. The authors define a notion of strong indexability that they use to construct a deep RL algorithm that learns Whittle indices. The authors argue that when a problem is strongly indexability and the neural network is precise enough, the deep RL algorithm will learn Whittle index (although their theoretical result is not fully convincing to me). The authors present extensive experimental results that show that their algorithm compute Whittle indices and provide a very good performance compared to existing work. ", "The paper proposes a method to automatically learn the Whittle Indices for a Restless Multi-Armed Bandit (RMAB) problem. Contributions: The Algorithm: The algorithm tries to learn a neural network that takes as input the state of a given arm and provides as output the whittle index of that state. It does this by showing a connection between the whittle indices for an arm and the indices of an optimal index policy for a family of MDPs (Env(𝝀)) based on that arm’s MDP (Env). It then learns a good index policy for this family of MDPs using a REINFORCE-type method. Theoretical Justification: Towards substantiating the claim that learning a good policy for Env(𝝀) is equivalent to learning a good whittle index, the paper provides a proof about an epsilon-delta relationship between learning a good policy and a good whittle index. Empirical Justification: The paper shows that the proposed algorithm works well on 3 previously published RMAB instances, when compared with 1 similar ‘learning a whittle index’-type baseline and 2 reinforcement learning baselines. " ]
Whittle index policy is a powerful tool to obtain asymptotically optimal solutions for the notoriously intractable problem of restless bandits. However, finding the Whittle indices remains a difficult problem for many practical restless bandits with convoluted transition kernels. This paper proposes NeurWIN, a neural Whittle index network that seeks to learn the Whittle indices for any restless bandits by leveraging mathematical properties of the Whittle indices. We show that a neural network that produces the Whittle index is also one that produces the optimal control for a set of Markov decision problems. This property motivates using deep reinforcement learning for the training of NeurWIN. We demonstrate the utility of NeurWIN by evaluating its performance for three recently studied restless bandit problems. Our experiment results show that the performance of NeurWIN is either better than, or as good as, state-of-the-art policies for all three problems.
[ { "affiliations": [], "name": "RESTLESS BANDITS" } ]
[ { "authors": [ "Samuli Aalto", "Pasi Lassila", "Prajwal Osti" ], "title": "Whittle index approach to size-aware scheduling with time-varying channels", "venue": "In Proceedings of the 2015 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems,", "year": 2015 }, { "authors": [ "K. Avrachenkov", "V.S. Borkar" ], "title": "A learning algorithm for the whittle index policy for scheduling web crawlers", "venue": "In 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton),", "year": 2019 }, { "authors": [ "V.S. Borkar", "K. Chadha" ], "title": "A reinforcement learning algorithm for restless bandits", "venue": "Indian Control Conference (ICC),", "year": 2018 }, { "authors": [ "Christopher R Dance", "Tomi Silander" ], "title": "When are kalman-filter restless bandits indexable", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Christoph Dann", "Tor Lattimore", "Emma Brunskill" ], "title": "Unifying pac and regret: Uniform pac bounds for episodic reinforcement learning", "venue": "In Proceedings of the 31st International Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "J. Fu", "Y. Nazarathy", "S. Moka", "P.G. Taylor" ], "title": "Towards q-learning the whittle index for restless bandits", "venue": "Australian New Zealand Control Conference (ANZCC),", "year": 2019 }, { "authors": [ "Nan Jiang", "Akshay Krishnamurthy", "Alekh Agarwal", "John Langford", "Robert E. Schapire" ], "title": "Contextual decision processes with low Bellman rank are PAC-learnable", "venue": "Proceedings of Machine Learning Research,", "year": 2017 }, { "authors": [ "Diederick P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "J. Le Ny", "M. Dahleh", "E. Feron" ], "title": "Multi-uav dynamic routing with partial observations using restless bandit allocation indices", "venue": "American Control Conference,", "year": 2008 }, { "authors": [ "Alex S. Leong", "Arunselvan Ramaswamy", "Daniel E. Quevedo", "Holger Karl", "Ling Shi" ], "title": "Deep reinforcement learning for wireless sensor scheduling in cyber–physical", "venue": "systems. Automatica,", "year": 2020 }, { "authors": [ "R. Meshram", "D. Manjunath", "A. Gopalan" ], "title": "On the whittle index for restless multiarmed hidden markov bandits", "venue": "IEEE Transactions on Automatic Control,", "year": 2018 }, { "authors": [ "José Niño-Mora" ], "title": "Dynamic priority allocation via restless bandit marginal productivity", "venue": "indices. Top,", "year": 2007 }, { "authors": [ "Christos H. Papadimitriou", "John N. 
Tsitsiklis" ], "title": "The complexity of optimal queuing network control", "venue": "Mathematics of Operations Research,", "year": 1999 }, { "authors": [ "Ciara Pike-Burke", "Steffen Grunewalder" ], "title": "Recovering bandits", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Carlos Riquelme", "George Tucker", "Jasper Snoek" ], "title": "Deep bayesian bandits showdown: An empirical comparison of bayesian deep networks for thompson sampling", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Julien Seznec", "Pierre Menard", "Alessandro Lazaric", "Michal Valko" ], "title": "A single algorithm for both restless and rested rotting bandits", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Aleksandrs Slivkins", "Eli Upfal" ], "title": "Adapting to a changing environment: the brownian restless bandits", "venue": "In COLT, pp", "year": 2008 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018", "venue": "URL http://incompleteideas.net/book/the-book2nd.html", "year": 2018 }, { "authors": [ "V. Tripathi", "E. Modiano" ], "title": "A whittle index approach to minimizing functions of age of information", "venue": "57th Annual Allerton Conference on Communication, Control, and Computing (Allerton),", "year": 2019 }, { "authors": [ "S. Wang", "H. Liu", "P.H. Gomes", "B. Krishnamachari" ], "title": "Deep reinforcement learning for dynamic multichannel access in wireless networks", "venue": "IEEE Transactions on Cognitive Communications and Networking,", "year": 2018 }, { "authors": [ "T. Wei", "Yanzhi Wang", "Q. Zhu" ], "title": "Deep reinforcement learning for building hvac control", "venue": "54th ACM/EDAC/IEEE Design Automation Conference (DAC),", "year": 2017 }, { "authors": [ "Peter Whittle" ], "title": "Restless bandits: Activity allocation in a changing world", "venue": "Journal of applied probability,", "year": 1988 }, { "authors": [ "Ronald J. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Mach. Learn.,", "year": 1992 }, { "authors": [ "Huizhen Yu", "Dimitri P Bertsekas" ], "title": "Convergence results for some temporal difference methods based on least squares", "venue": "IEEE Transactions on Automatic Control,", "year": 2009 }, { "authors": [ "Z. Yu", "Y. Xu", "L. Tong" ], "title": "Deadline scheduling as restless bandits", "venue": "IEEE Transactions on Automatic Control,", "year": 2018 }, { "authors": [ "Andrea Zanette", "Emma Brunskill" ], "title": "Problem dependent reinforcement learning bounds which can identify bandit structure in MDPs", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Qing Zhao" ], "title": "Multi-armed bandits: Theory and applications to online learning in networks", "venue": "Synthesis Lectures on Communication Networks,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many sequential decision problems can be modeled as multi-armed bandit problems. A bandit problem models each potential decision as an arm. In each round, we play M arms out of a total of N arms by choosing the corresponding decisions. We then receive a reward from the played arms. The goal is to maximize the long-term total discounted reward. Consider, for example, displaying advertisements on an online platform with the goal to maximize the long-term discounted clickthrough rates. This can be modeled as a bandit problem where each arm is a piece of advertisement and we choose which advertisements to be displayed every time a particular user visits the platform. It should be noted that the reward, i.e., click-through rate, of an arm is not stationary, but depends on our actions in the past. For example, a user that just clicked on a particular advertisement may be much less likely to click on the same advertisement in the near future. Such a problem is a classic case of the restless bandit problem, where the reward distribution of an arm depends on its state, which changes over time based on our past actions.\nThe restless bandit problem is notoriously intractable (Papadimitriou & Tsitsiklis, 1999). Most recent efforts, such as recovering bandits (Pike-Burke & Grunewalder, 2019), rotting bandits (Seznec et al., 2020), and Brownian bandits (Slivkins & Upfal, 2008), only study some special instances of the restless bandit problem. The fundamental challenge of the restless bandit problem lies in the explosion of state space, as the state of the entire system is the Cartesian product of the states of individual arms. A powerful tool to address the explosion of state space is the Whittle index policy (Whittle, 1988). In a nutshell, the Whittle index policy calculates a Whittle index for each arm based on the arm’s current state, where the index loosely corresponds to the amount of cost that we are willing to pay to play the arm, and then plays the arm with the highest index. It has been shown that the Whittle index policy is either optimal or asymptotically optimal in many settings.\nIn this paper, we present Neural Whittle Index Network (NeurWIN), a principled machine learning approach that finds the Whittle indices for virtually all restless bandit problems. We note that the Whittle index is an artificial construct that cannot be directly measured. Finding the Whittle index is typically intractable. As a result, the Whittle indices of many practical problems remain unknown except for a few special cases.\nWe are able to circumvent the challenges of finding the Whittle indices by leveraging an important mathematical property of the Whittle index: Consider an alternative problem where there is only one arm and we decide whether to play the arm in each time instance. In this problem, we need to pay a\nconstant cost of λ every time we play the arm. The goal is to maximize the long-term discounted net reward, defined as the difference between the rewards we obtain from the arm and the costs we pay to play it. Then, the optimal policy is to play the arm whenever the Whittle index becomes larger than λ. Based on this property, a neural network that produces the Whittle index can be viewed as one that finds the optimal policy for the alternative problem for any λ.\nUsing this observation, we propose a deep reinforcement learning method to train NeurWIN. 
To demonstrate the power of NeurWIN, we employ NeurWIN for three recently studied restless bandit problems, namely, recovering bandits (Pike-Burke & Grunewalder, 2019), wireless scheduling (Aalto et al., 2015), and stochastic deadline scheduling (Yu et al., 2018). There is no known Whittle index for the first problem, and there is only an approximation of the Whittle index under some relaxations for the second problem. Only the third problem has a precise characterization of the Whittle index. For the first two problems, the index policy using our NeurWIN achieves better performance than existing studies. For the third problem, the index policy using our NeurWIN has virtually the same performance as the Whittle index policy.

The rest of the paper is organized as follows: Section 2 reviews related literature. Section 3 provides formal definitions of the Whittle index and our problem statement. Section 4 introduces our training algorithm for NeurWIN. Section 5 demonstrates the utility of NeurWIN by evaluating its performance under three recently studied restless bandit problems. Finally, Section 6 concludes the paper." }, { "heading": "2 RELATED WORK", "text": "Restless bandit problems were first introduced in (Whittle, 1988). They are known to be intractable, and are in general PSPACE-hard (Papadimitriou & Tsitsiklis, 1999). As a result, many studies focus on finding the Whittle index policy for restless bandit problems, such as in (Le Ny et al., 2008; Meshram et al., 2018; Tripathi & Modiano, 2019; Dance & Silander, 2015). However, these studies are only able to find the Whittle indices under various specific assumptions about the bandit problems.

There have been many studies on applying RL methods to bandit problems. (Dann et al., 2017) proposed a tool called Uniform-PAC for contextual bandits. (Zanette & Brunskill, 2018) described a framework-agnostic approach towards guaranteeing RL algorithms' performance. (Jiang et al., 2017) introduced contextual decision processes (CDPs) that encompass contextual bandits for RL exploration with function approximation. (Riquelme et al., 2018) compared deep neural networks with Bayesian linear regression against other posterior sampling methods. However, none of these studies are applicable to restless bandits, where the state of an arm can change over time.

Deep RL algorithms have been utilized in problems that resemble restless bandit problems, including HVAC control (Wei et al., 2017), cyber-physical systems (Leong et al., 2020), and dynamic multichannel access (Wang et al., 2018). In all these cases, a major limitation for deep RL is scalability. As the state space grows exponentially with the number of arms, these studies can only be applied to small-scale systems, and their evaluations are limited to cases with at most 5 zones, 6 sensors, and 8 channels, respectively.

An emerging research direction is applying machine learning algorithms to learn Whittle indices. (Borkar & Chadha, 2018) proposed employing the LSPE(0) algorithm (Yu & Bertsekas, 2009) coupled with a polynomial function approximator. The approach was applied in (Avrachenkov & Borkar, 2019) for scheduling web crawlers. However, this work can only be applied to restless bandits whose states can be represented by a single number, and it only uses a polynomial function approximator, which may have low representational power (Sutton & Barto, 2018). (Fu et al., 2019) proposed a Q-learning-based heuristic to find Whittle indices.
However, as shown in its experimental results, the heuristic may not produce Whittle indices even when the training converges." }, { "heading": "3 PROBLEM SETTING", "text": "In this section, we provide a brief overview of restless bandit problems and the Whittle index. We then formally define the problem statement." }, { "heading": "3.1 RESTLESS BANDIT PROBLEMS", "text": "A restless bandit problem consists of N restless arms. In each round t, a control policy observes the state of each arm i, denoted by si[t], and selects M arms to activate. We call the selected arms active and the others passive. We use ai[t] to denote the policy's decision on each arm i, where ai[t] = 1 if the arm is active and ai[t] = 0 if it is passive in round t. Each arm i generates a stochastic reward ri[t] with distribution Ri,act(si[t]) if it is active, and with distribution Ri,pass(si[t]) if it is passive. The state of each arm i in the next round evolves by the transition kernel of either Pi,act(si[t]) or Pi,pass(si[t]), depending on whether the arm is active. The goal of the control policy is to maximize the total discounted reward, which can be expressed as ∑_{t=1}^{∞} ∑_{i=1}^{N} β^t ri[t], with β being the discount factor.

A control policy is effectively a function that takes the vector (s1[t], s2[t], . . . , sN[t]) as the input and produces the vector (a1[t], a2[t], . . . , aN[t]) as the output. It should be noted that the space of inputs is exponential in N. If each arm can be in one of K possible states, then the number of possible inputs is K^N. This feature, which is usually referred to as the curse of dimensionality, makes finding the optimal control policy intractable." }, { "heading": "3.2 THE WHITTLE INDEX", "text": "An index policy seeks to address the curse of dimensionality through decomposition. In each round, it calculates an index, denoted by Wi(si[t]), for each arm i based on its current state. The index policy then selects the M arms with the highest indices to activate, as sketched below. It should be noted that the index of an arm i is independent from the states of any other arms.
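The following is a minimal Python sketch of this generic index-policy step (our own illustration; the index functions are placeholders for whatever index, e.g. the Whittle index, is used):

import numpy as np

def index_policy_step(index_fns, states, M):
    # Compute each arm's index from its own state and activate
    # the M arms with the highest indices.
    indices = np.array([W(s) for W, s in zip(index_fns, states)])
    active = set(np.argsort(indices)[-M:])
    return [1 if i in active else 0 for i in range(len(states))]  # a_i[t] in {0, 1}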
Obviously, the performance of an index policy depends on the design of the index function Wi(·). A popular index with a solid theoretical foundation is the Whittle index, which is defined below. Since we only consider one arm at a time, we drop the subscript i for the rest of the paper.

Consider a system with only one arm, and a control policy that determines whether to activate the arm in each round t. Suppose that the policy needs to pay an activation cost of λ every time it chooses to activate the arm. The goal of the control policy is to maximize the total discounted net reward, ∑_{t=1}^{∞} β^t (r[t] − λa[t]). The optimal control policy can be expressed by the set of states in which it would activate this arm for a particular λ, and we denote this set by A(λ). Intuitively, the higher the cost, the less likely the optimal control policy would activate the arm in a given state, and hence the set A(λ) should decrease monotonically. When an arm satisfies this intuition, we say that the arm is indexable.

Definition 1 (Indexability). An arm is said to be indexable if A(λ) decreases monotonically from the set of all states to the empty set as λ increases from −∞ to ∞. A restless bandit problem is said to be indexable if all arms are indexable.

Definition 2 (The Whittle Index). If an arm is indexable, then its Whittle index of each state s is defined as W(s) := sup_λ {λ : s ∈ A(λ)}.

Even when an arm is indexable, finding its Whittle index can still be intractable, especially when the transition kernel of the arm is convoluted.¹ Our NeurWIN finds the Whittle index by leveraging the following property of the Whittle index: Consider the single-armed bandit problem. Suppose the initial state of an indexable arm is s at round one. Consider two possibilities: the first is that the control policy activates the arm at round one, and then uses the optimal policy starting from round two; the second is that the control policy does not activate the arm at round one, and then uses the optimal policy starting from round two. Let Qλ,act(s) and Qλ,pass(s) be the expected discounted net rewards of these two possibilities, respectively, and let Ds(λ) := Qλ,act(s) − Qλ,pass(s) be their difference. Clearly, the optimal policy should activate an arm under state s and activation cost λ if Ds(λ) ≥ 0. We then have the following:

Theorem 1. (Zhao, 2019, Thm 3.14) If an arm is indexable, then, for every state s, Ds(λ) ≥ 0 if and only if λ ≤ W(s).

¹ Niño-Mora (2007) described a generic approach for finding the Whittle index. The complexity of this approach is at least exponential in the number of states.

Our NeurWIN uses Thm. 1 to train neural networks that predict the Whittle index for any indexable arm. From Def. 1, a sufficient condition for indexability is that Ds(λ) is a decreasing function. Thus, we define the concept of strong indexability as follows:

Definition 3 (Strong Indexability). An arm is said to be strongly indexable if Ds(λ) is strictly decreasing in λ for every state s." }, { "heading": "3.3 PROBLEM STATEMENT", "text": "We now formally describe the objective of this paper. We assume that we are given a simulator of one single restless arm as a black box. The simulator provides two functionalities: First, it allows us to set the initial state of the arm to any arbitrary state s. Second, in each round t, the simulator takes a[t], the indicator that the arm is activated, as the input and produces the next state s[t+1] and the reward r[t] as the outputs.

Our goal is to derive low-complexity index algorithms for restless bandit problems by training a neural network that approximates the Whittle index of each restless arm using its simulator. A neural network takes the state s as the input and produces a real number fθ(s) as the output, where θ is the vector containing all weights and biases of the neural network. Recall that W(s) is the Whittle index of the arm. We aim to find appropriate θ that makes |fθ(s) − W(s)| small for all s. Such a neural network is said to be Whittle-accurate.

Definition 4 (Whittle-accurate). A neural network with parameters θ is said to be γ-Whittle-accurate if |fθ(s) − W(s)| ≤ γ for all s." }, { "heading": "4 NEURWIN ALGORITHM: NEURAL WHITTLE INDEX NETWORK", "text": "In this section, we present NeurWIN, a deep RL algorithm that trains neural networks to predict the Whittle indices. Since the Whittle index of an arm is independent from other arms, NeurWIN trains one neural network for each arm independently. In this section, we discuss how NeurWIN trains the Whittle index for one single arm.

4.1 CONDITIONS FOR WHITTLE-ACCURATE

Before presenting NeurWIN, we first discuss the conditions for a neural network to be γ-Whittle-accurate.

Suppose we are given a simulator of an arm and a neural network with parameters θ.
We can then construct an environment of the arm along with an activation cost λ, as shown in Fig. 1. In each round t, the environment takes the real number fθ(s[t]) as the input. The input is first fed into a step function to produce a[t] = 1(fθ(s[t]) ≥ λ), where 1(·) is the indicator function. Then, a[t] is fed into the simulator of the arm to produce r[t] and s[t+1]. Finally, the environment outputs the net reward r[t] − λa[t] and the next state s[t+1]. We call this environment Env(λ). Thus, the neural network can be viewed as a controller for Env(λ). The following corollary is a direct result of Thm. 1.

Corollary 1. If fθ(s) = W(s) for all s, then the neural network with parameters θ is the optimal controller for Env(λ), for any λ and initial state s[1]. Moreover, given λ and s[1], the optimal discounted net reward is max{Qλ,act(s[1]), Qλ,pass(s[1])}.

Corollary 1 can be viewed as a necessary condition for a neural network to be 0-Whittle-accurate. Below, we establish a sufficient condition for γ-Whittle-accuracy.

Theorem 2. If the arm is strongly indexable, then for any γ > 0 and an arbitrarily small positive constant δ, there exists a positive ε such that the following statement holds: if, for any states s0, s1 and any activation cost λ ∈ [fθ(s0) − δ, fθ(s0) + δ], the discounted net reward of applying the neural network to Env(λ) with initial state s1 is at least max{Qλ,act(s1), Qλ,pass(s1)} − ε, then the neural network is γ-Whittle-accurate.

Proof. For a given γ, let ε = min_s min{QW(s)+γ,pass(s) − QW(s)+γ,act(s), QW(s)−γ,act(s) − QW(s)−γ,pass(s)}/2. Since the arm is strongly indexable and W(s) is its Whittle index, we have ε > 0.

We prove the theorem by establishing the following equivalent statement: if the neural network is not γ-Whittle-accurate, then there exist states s0, s1 and an activation cost λ ∈ [fθ(s0) − δ, fθ(s0) + δ] such that the discounted net reward of applying the neural network to Env(λ) with initial state s1 is strictly less than max{Qλ,act(s1), Qλ,pass(s1)} − ε.

Suppose the neural network is not γ-Whittle-accurate. Then there exists a state s′ such that |fθ(s′) − W(s′)| > γ. We set s0 = s1 = s′. For the case fθ(s′) > W(s′) + γ, we set λ = fθ(s′) − δ. Since λ > W(s′) + γ for δ small enough, we have max{Qλ,act(s′), Qλ,pass(s′)} = Qλ,pass(s′) and Qλ,pass(s′) − Qλ,act(s′) ≥ 2ε. On the other hand, since fθ(s′) > λ, the neural network would activate the arm in the first round, and its discounted net reward is at most

Qλ,act(s′) < Qλ,pass(s′) − 2ε < max{Qλ,act(s′), Qλ,pass(s′)} − ε.

For the case fθ(s′) < W(s′) − γ, a similar argument shows that the discounted net reward of the neural network when λ = fθ(s′) + δ is smaller than max{Qλ,act(s′), Qλ,pass(s′)} − ε. This completes the proof." }, { "heading": "4.2 TRAINING PROCEDURES FOR NEURWIN", "text": "Thm. 2 states that a neural network that yields near-optimal net reward for any environment Env(λ) is also Whittle-accurate. This observation motivates the usage of deep reinforcement learning to find Whittle-accurate neural networks. To make the output of the environment differentiable with respect to the input fθ(s[t]), we replace the step function in Fig. 1 with a sigmoid function σm(fθ(s[t]) − λ) := (1 + exp(−m(fθ(s[t]) − λ)))^(−1), where m is a sensitivity parameter. The environment then chooses a[t] = 1 with probability σm(fθ(s[t]) − λ), and a[t] = 0 with probability 1 − σm(fθ(s[t]) − λ). We call this differentiable environment Env∗(λ).
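The following is a minimal PyTorch-style sketch of Env∗(λ), our own illustration rather than the released code. It assumes a black-box arm simulator with a step(a) method returning the reward and next state, as described in Section 3.3; f_s is the network's index estimate fθ(s[t]) as a scalar tensor.

import torch

class EnvStar:
    # Differentiable environment Env*(lambda): wraps a single-arm simulator
    # and samples the action from a sigmoid of (index estimate - activation cost).
    def __init__(self, arm, lam, m=5.0):
        self.arm, self.lam, self.m = arm, lam, m

    def step(self, f_s):
        p_act = torch.sigmoid(self.m * (f_s - self.lam))    # P(a[t] = 1)
        a = int(torch.bernoulli(p_act).item())
        r, s_next = self.arm.step(a)                        # simulator: reward and next state
        log_prob = torch.log(p_act) if a == 1 else torch.log(1.0 - p_act)
        return s_next, r - self.lam * a, log_prob           # net reward r[t] - lambda * a[t]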
Our training procedure consists of multiple mini-batches, where each mini-batch is composed of a fixed number of episodes. At the beginning of each mini-batch, we randomly select two states s0 and s1. Motivated by the condition in Thm. 2, we consider the environment Env∗(fθ(s0)) with initial state s1 and aim to improve the empirical discounted net reward of applying the neural network to such an environment.

Our approach is based on the REINFORCE algorithm (Williams, 1992). In each episode e, we set λ = fθ(s0) and the initial state to s1. We then apply the neural network with parameters θ to Env∗(λ) and observe the sequences of actions (a[1], a[2], . . .) and states (s[1], s[2], . . .). We can use these sequences to calculate their gradients with respect to θ through backward propagation, which we denote by he. We also observe the discounted net reward and denote it by Ge. After all episodes in the mini-batch finish, we calculate the average of all Ge as a bootstrapped baseline and denote it by Ḡb. Finally, we perform a weighted gradient ascent with the weight for episode e being its offset net reward, Ge − Ḡb. When the step size is chosen appropriately, the neural network will be more likely to follow the sequences of actions of episodes with larger Ge after the weighted gradient ascent, and thus will have a better empirical discounted net reward. The complete algorithm is described in Alg. 1.

Obviously, the choice of s0 and s1 can have a significant impact on the convergence speed of Alg. 1. In our implementation, we choose s0 uniformly at random in each mini-batch. The choice of s1 depends on the bandit problem. Some bandit problems naturally visit certain states far less frequently than other states. For such problems, we choose s1 to be those less-frequently-visited states with higher probabilities, so as to ensure that Alg. 1 is able to learn the optimal control for these states. For other problems, we simply choose s1 = s0.

Algorithm 1: NeurWIN Training
Input: Parameters θ, discount factor β ∈ (0, 1), learning rate L, sigmoid parameter m
Output: Trained neural network parameters θ+
for each mini-batch b do
  Randomly choose s0 and s1, and set λ ← fθ(s0);
  for each episode e in the mini-batch do
    Set the arm to state s1, and set he ← 0;
    for each round t in the episode do
      Choose a[t] = 1 w.p. σm(fθ(s[t]) − λ), and a[t] = 0 w.p. 1 − σm(fθ(s[t]) − λ);
      if a[t] = 1 then
        he ← he + ∇θ ln(σm(fθ(s[t]) − λ));
      else
        he ← he + ∇θ ln(1 − σm(fθ(s[t]) − λ));
      end
    end
    Ge ← empirical discounted net reward in episode e;
  end
  Lb ← learning rate in mini-batch b;
  Ḡb ← the average of Ge over all episodes in the mini-batch;
  Update parameters through gradient ascent: θ ← θ + Lb ∑e (Ge − Ḡb) he;
end
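As a compact reading of Alg. 1, the following PyTorch sketch performs one NeurWIN mini-batch, reusing the EnvStar sketch above. It is our own illustration, not the released code: helper names such as sample_states are assumptions, f_theta is assumed to map a state to a scalar tensor, and arm.reset(s1) is assumed to set and return the initial state.

import torch

def neurwin_minibatch(f_theta, arm, optimizer, sample_states,
                      episodes=5, horizon=100, beta=0.999, m=5.0):
    # One mini-batch of Alg. 1: REINFORCE on Env*(lambda) with lambda = f_theta(s0).
    s0, s1 = sample_states()                       # random states, cf. Alg. 1
    lam = f_theta(s0).detach()                     # activation cost for this mini-batch
    log_probs, returns = [], []
    for _ in range(episodes):
        env, s = EnvStar(arm, lam, m), arm.reset(s1)
        lp_sum, G = 0.0, 0.0
        for t in range(horizon):
            s, net_r, lp = env.step(f_theta(s))    # sample a[t], observe net reward
            lp_sum = lp_sum + lp                   # h_e: accumulated log-prob gradient path
            G += (beta ** t) * float(net_r)        # discounted net reward G_e
        log_probs.append(lp_sum)
        returns.append(G)
    baseline = sum(returns) / len(returns)         # bootstrapped baseline G_bar
    loss = -sum((G - baseline) * lp                # ascent on sum_e (G_e - G_bar) h_e
                for G, lp in zip(returns, log_probs))
    optimizer.zero_grad(); loss.backward(); optimizer.step()

Calling loss.backward() on the negated weighted sum of log-probabilities reproduces the weighted gradient ascent step of Alg. 1 through PyTorch's autograd, so no explicit ∇θ bookkeeping is needed.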
" }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 OVERVIEW", "text": "In this section, we demonstrate NeurWIN's utility by evaluating it under three recently studied applications of restless bandit problems. In each application, we consider that there are N arms and a controller can play M of them in each round. We evaluate three different pairs of (N, M): (4, 1), (100, 10), and (100, 25), and average the results of 200 independent runs when the problems are stochastic. Some applications consider that different arms can have different behaviors. For such scenarios, we consider that there are multiple types of arms and train a separate NeurWIN for each type. During testing, the controller calculates the index of each arm based on the arm's state and schedules the M arms with the highest indices.

The performance of NeurWIN is compared against the policies proposed in the respective recent studies. In addition, we also implement and evaluate the REINFORCE algorithm (Williams, 1992) and the QWIC algorithm (Fu et al., 2019). The REINFORCE algorithm aims to find the optimal control by viewing a restless bandit problem as a Markov decision problem. Under this view, the number of states is exponential in N and the number of possible actions is (N choose M). Thus, we are only able to evaluate REINFORCE for the case N = 4 and M = 1. The QWIC algorithm aims to find the Whittle index through Q-learning. It is a tabular method and does not scale well as the state space increases. Thus, we only evaluate QWIC when the size of the state space is small.

We use the same neural network architecture for NeurWIN in all three applications. The neural network is fully connected and consists of one input layer, one output layer, and two hidden layers. There are 16 and 32 neurons in the two hidden layers. The output layer has one neuron, and the input layer size is the same as the dimension of the state of one single arm. As for the REINFORCE algorithm, we choose the neural network architecture so that the total number of parameters is slightly more than N times the number of parameters in NeurWIN, to make a fair comparison. The ReLU activation function is used for the two hidden layers. An initial learning rate L = 0.001 is set for all cases, with the Adam optimizer (Kingma & Ba, 2015) employed for the gradient ascent step. The discount factor is β = 0.999 and each mini-batch consists of five episodes.

For all cases, we implement the NeurWIN algorithm using PyTorch (Paszke et al., 2019), and train the agent on a single arm modelled after OpenAI's Gym API (Brockman et al., 2016). We provide a brief overview of each application and the experiment setting in the following sections. We refer readers to the appendices for detailed discussions of the experiment settings." }, { "heading": "5.2 RECOVERING BANDITS", "text": "The recovering bandits (Pike-Burke & Grunewalder, 2019) aim to model the time-varying behaviors of consumers. In particular, it is considered that a consumer who has just bought a certain product, say, a television, would be much less interested in advertisements of the same product in the near future. However, the consumer's interest in these advertisements may recover over time. Thus, the recovering bandit models the reward of playing an arm, i.e., displaying an advertisement, by a function f(min{z, zmax}), where z is the time since the arm was last played and zmax is a constant specified by the arm. There is no known Whittle index or optimal control policy for this problem.

The recent study (Pike-Burke & Grunewalder, 2019) on recovering bandits focuses on learning the function f(·) for each arm. Once it obtains an estimate of f(·), it uses a heuristic called d-lookahead to determine which arms to play. The d-lookahead policy enumerates all possible actions in the next d rounds, and then picks the sequence of actions that yields the highest reward. Since the controller can choose M arms out of N arms to activate, with (N choose M) different possibilities in each round, the complexity of the heuristic is O((N choose M)^d) when d > 1. Thus, we are only able to evaluate 1-lookahead when N = 100.
When N = 4 and M = 1, we evaluate 1-lookahead and 3-lookahead.

In our experiment, we consider that there are four types of arms, with N/4 arms of each type. Different types of arms have different functions f(·). The state of each arm is its value of min{z, zmax}, and we set zmax = 20 for all arms. Experiment results are shown in Fig. 2. It can be observed that NeurWIN is able to outperform 1-lookahead in all settings with just a few thousand training episodes. In contrast, for the case N = 4 and M = 1, REINFORCE only sees a slight performance improvement over 50,000 training episodes and remains far worse than NeurWIN. This may be due to the explosion of the state space. Even though N is only 4, the total number of possible states is 20^4 = 160,000, making it difficult for REINFORCE to learn the optimal control in just 50,000 episodes. In contrast, since NeurWIN learns the Whittle index of each arm separately, its state space size is only 20. QWIC performs poorly. This suggests that it does not learn a good approximation to the Whittle index." }, { "heading": "5.3 WIRELESS SCHEDULING", "text": "A recent paper (Aalto et al., 2015) studies the problem of wireless scheduling over fading channels. In this problem, each arm corresponds to a wireless client. Each wireless client has some data to be transmitted, and it suffers a holding cost of 1 unit per round until it has finished transmitting all its data. The channel quality of a wireless client, which determines the amount of data that can be transmitted if the wireless client is scheduled, changes over time. The goal is to minimize the sum of holding costs of all wireless clients. Equivalently, we view the reward of the system as the negative of the total holding cost.

Finding the Whittle index through theoretical analysis is difficult. Even for the simplified case where the channel quality is i.i.d. over time and can only be in one of two possible states, the recent paper (Aalto et al., 2015) can only derive the Whittle index under some approximations. It then proposes a size-aware index policy using its approximated index.

In the experiment, we adopt the channel quality settings of the recent paper. The channel of a wireless client can be in either a good state or a bad state. The amount of data that can be transmitted in a round is 33.6 kb in the good state, and 8.4 kb in the bad state. Initially, the amount of load is uniformly distributed between 0 and 1 Mb. The state of each arm is its channel state and the amount of remaining load. The size of the state space is 2 × 10^6 for each arm. We consider that there are two types of arms, and different types of arms have different probabilities of being in the good state. We train a NeurWIN for each type. During testing, there are N/2 arms of each type.

Experiment results are shown in Fig. 3. It can be observed that NeurWIN is able to outperform the size-aware index policy with about 100,000 training episodes. This result is significant when one considers the fact that the size-aware index is itself an approximation to the Whittle index. The experiment results thus suggest that NeurWIN is able to find a more accurate approximation to the Whittle index than the best known theoretical result. It can also be observed that REINFORCE performs poorly." }, { "heading": "5.4 DEADLINE SCHEDULING", "text": "A recent study (Yu et al., 2018) proposes a deadline scheduling problem for the scheduling of electrical vehicle charging stations.
In this problem, a charging station has N charging spots and enough power to charge M vehicles in each round. When a charging spot is available, a new vehicle may join the system and occupy the spot. Upon occupying the spot, the vehicle announces the time at which it will leave the station and the amount of electricity that it needs. The charging station obtains a reward for each unit of electricity that it provides to a vehicle. However, if the station cannot fully charge the vehicle by the time it leaves, then the station needs to pay a penalty. The goal of the station is to maximize its net reward, defined as the difference between the amount of reward and the amount of penalty. Under an i.i.d. arrival assumption, the recent study has derived a precise characterization of the Whittle index, which we refer to as the deadline Whittle index. We further prove that this problem is strongly indexable in the appendix.

We use exactly the same setting as the recent study (Yu et al., 2018) in our experiment. In this problem, the state of an arm is denoted by a pair of integers (D, B), where B is the amount of electricity that the vehicle still needs and D is the time until the vehicle leaves the station. When a charging spot is available, its state is (0, 0). B is upper-bounded by 9 and D is upper-bounded by 12. Hence, the size of the state space is 109 for each arm.

The experiment results are shown in Fig. 4. It can be observed that the performance of NeurWIN converges to that of the deadline Whittle index in less than 500 training episodes." }, { "heading": "5.5 EXPERIMENT RESULTS WITH NOISY SIMULATORS", "text": "The training of NeurWIN requires a simulator for each arm. In this section, we evaluate the performance of NeurWIN when the simulator is not perfectly precise. In particular, let Ract(s) and Rpass(s) be the rewards of an arm in state s when it is activated and not activated, respectively. The simulator then estimates the rewards as R′act(s) = (1 + Gact,s)Ract(s) and R′pass(s) = (1 + Gpass,s)Rpass(s), respectively, where Gact,s and Gpass,s are independent Gaussian random variables with mean 0 and variance 0.05. In other words, the simulator has an average 5% error in its reward estimation.

We train NeurWIN using the noisy simulators for the recovering bandits problem and the deadline scheduling problem. For each problem, we compare the performance of NeurWIN against the respective baseline policies. Unlike NeurWIN, the baseline policies make decisions based on the true reward functions rather than the estimated ones. The results for the case N = 100 and M = 25 are shown in Fig. 5. It can be observed that NeurWIN is still able to achieve superior performance.
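As a concrete reading of this perturbation, a minimal sketch (our own illustration of the stated noise model):

import numpy as np

def noisy_reward(r_true, rng):
    # R'(s) = (1 + G) * R(s), with G ~ N(0, variance 0.05).
    return (1.0 + rng.normal(0.0, np.sqrt(0.05))) * r_true

Here rng is, e.g., np.random.default_rng(0); a fresh G is drawn for each state and action, matching the independence of Gact,s and Gpass,s.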
}, { "heading": "A RECOVERING BANDITS’ TRAINING AND INFERENCE DETAILS", "text": "" }, { "heading": "A.1 FORMULATED RESTLESS BANDIT FOR THE RECOVERING BANDITS’ CASE", "text": "We list here the terms that describes one restless arm in the recovering bandits’ case:\nState s[t]: The state is a single value s[t] = z[t] called the waiting time. The waiting time z[t] indicates the time since the arm was last played. The arm state space is determined by the maximum allowed waiting time zmax, giving a state space S := [1, zmax]. Action a[t]: As with all other considered cases, the agent can either activate the arm a[t] = 1, or not select it a[t] = 0. The action space is then A := {0, 1}. Reward r[t]: The reward is provided by the recovering function f(z[t]), where z[t] is the time since the arm was last played at time t. If the arm is activated, the function value at z[t] is the earned reward. A reward of zero if given if the arm is left passive a[t] = 0. Figure 6 shows the four recovering functions used in this work. The recovering functions are generated from,\nf(z[t]) = θ0(1− e−θ1·z[t]) (1)\nWhere the Θ = [θ0, θ1] values specify the recovering function. The Θ values for each class are given in table 1." }, { "heading": "Class θ0 Value θ1 Value", "text": "Next state s[t + 1]: The state evolves based on the selected action. If a[t] = 1, the state is reset to s[t + 1] = 1, meaning that bandit’s reward decayed to the initial waiting time z[t + 1] = 1. If the arm is left passive a[t] = 0, the next state becomes s[t+ 1] = min{z[t] + 1, zmax}." }, { "heading": "A.2 TRAINING SETTING", "text": "The general training procedure for the NeurWIN algorithm is outlined in its pseudo code in section 4. Here we discuss the parameter selection and details specific to the recovering bandits’ case. We train the neural network using NeurWIN for 50, 000 episode, and save the trained parameters at an episode interval of 100 episodes. The purpose of saving the parameters is to infer their control policies, and compare it with the 1-lookahead policy. In total, for 50, 000 training episodes, we end up with 500 models for inference. The selected neural network has 609 trainable parameters given as {1, 16, 32, 1} layer neurons. For training parameters, we select the sigmoid value m = 5, the episode’s time horizon T = 100 timesteps, the mini-batch size to 5 episodes, and the discount factor β = 0.999. As with all other cases, each mini-batch of episodes has the same initial state s[t = 1] which is provided by the arm. To ensure the agent experiences as many states in [1, zmax] as possible, we set an initial state sampling distribution given as Pr{s[t = 1] = z} = 2 z\n21+22+...+2zmax . Hence, the probability of selecting the initial state to be s[t = 1] = zmax is 0.5. This initialization distribution allows the agent to experience the recovery function’s awards at higher z values.\nAt the agent side, we set the activation cost λ at the beginning of each mini-batch. λ is chosen to be the estimate index value fθ(s ′ ) of a randomly selected state in s\n′ ∈ [1, zmax]. The training continues as described in NeurWIN’s pseudo code: the agent receives the state, and selects an action a[t]. If the agent activates the arm a[t] = 1, it receives a reward equal to the recovery function’s value at z, and subtracts λ from it. Otherwise, the reward r[t] is kept the same for a[t] = 0. We note that no noise was added with the clean simulator, and the agent discounts the original reward value βtr[t] = βtf(z[t]). 
A.3 INFERENCE SETTING

The inference setup measures NeurWIN's control policy for several (N, M) settings. We test, for a single run, the control policies of NeurWIN and 1-lookahead over a time horizon of T = 3000 timesteps. We set the N arms such that a quarter of them have each recovering function class from Table 1. For example, when N = 100, 25 arms would have recovering function A generating their rewards.

At each timestep, the 1-lookahead policy ranks the recovering functions' reward values, and selects the M arms with the highest reward values for activation. The incurred discounted reward at time t is the discounted sum of all activated arms' rewards. The total discounted reward is then the sum of the discounted rewards over the time horizon T = 3000. For inferring NeurWIN's control policy, we record the total discounted reward for each of the 500 models. An example testing procedure is as follows: we instantiate N arms, each having a neural network trained up to 10,000 episodes. At each timestep t, the neural networks provide the estimated indices fi,θ(si[t]) for i = 1, 2, . . . , N. The control policy activates the M arms with the highest index values. The incurred discounted reward at time t is the discounted sum of all activated arms' rewards, β^t R[t] = β^t ∑_{j=1}^{M} fj(z[t]). The same process continues for all timesteps in the horizon T = 3000. We then load the model parameters trained on 10,100 episodes, and repeat the aforementioned testing process using the same seed values." }, { "heading": "A.4 REINFORCE TRAINING AND INFERENCE SETTING ON RECOVERING BANDITS", "text": "The REINFORCE algorithm was applied only to the (N, M) case where N = 4 and M = 1. For training, REINFORCE had four arms, each with one of the recovering functions detailed in Table 1. The training parameters are: initial learning rate L = 0.001, a mini-batch size of 5 episodes, and a training episode time horizon of T = 100 timesteps. Training was done up to 50,000 episodes, with the trained parameters saved at an interval of 100 episodes. The selected neural network had 2504 trainable parameters. This neural network is larger than the 609 × 4 = 2436 parameters of four NeurWIN neural networks.

For testing, the same procedure is followed as in A.3. The trained REINFORCE models were loaded, and each was tested on the same arms as the NeurWIN and 1-lookahead policies. The testing was made for all 500 trained models (each trained up to a different episode count).
We note the set selection was based on NeurWIN’s learned index values, which provides an advantage to QWIC training.
The exploration-exploitation trade-off is steered by the parameter ε. ε is initialized to εmax = 1, and decays with factor α = 0.01 to εmin = 0.01. ε is updated at each timestep during training until it settles at εmin.
Other training parameters were selected as: initial learning rate L = 0.001, training episode time horizon of T = 100 timesteps, and discount factor β = 0.999. Training was done up to 50,000 episodes, where the Q-learned indices Λ̄ were saved at an interval of 100 episodes.
For testing, we use the same testing setting as in NeurWIN and REINFORCE. The learned indices are loaded for each training interval. In total, 500 estimated index mappings (each trained up to a certain episode limit) were tested for 200 independent runs." }, { "heading": "B WIRELESS SCHEDULING TRAINING AND INFERENCE DETAILS", "text": "" }, { "heading": "B.1 RESTLESS ARM DEFINITION FOR THE WIRELESS SCHEDULING CASE", "text": "As with the recovering bandits’ case, we first list the state s[t], action a[t], reward r[t], and next state s[t + 1] that form one restless arm:
State s[t]: The state is a vector (y[t], v[t]), where y[t] is the arm’s remaining load in bits, and v[t] is the wireless channel’s state indicator. v[t] = 1 means a good channel state and a higher transmission rate r2, while v[t] = 0 is a bad channel state with a lower transmission rate r1.
Action a[t]: The agent either activates the arm a[t] = 1, or keeps it passive a[t] = 0. The reward and next state depend on the chosen action.
Reward r[t]: The arm’s reward is the negative of the holding cost c, which is incurred at each timestep for not completing the job. If the selected action is a[t] = 1, then the reward at time t is r[t] = −c − λ. Otherwise, the reward is just r[t] = −c.
Next state s[t + 1]: The next state evolves differently as given below,
s[t + 1] = (y[t] − r2, 1) if q(v[t]) = 1 and a[t] = 1; (y[t] − r1, 0) if q(v[t]) = 0 and a[t] = 1; (y[t], q(v[t])) otherwise, (3)
where q(v[t]) denotes the realized channel state for the next timestep, which is good (q(v[t]) = 1) with the good channel probability." }, { "heading": "B.2 TRAINING SETTING", "text": "We again emphasize that NeurWIN training happens only on one restless arm. The general training procedure was described in NeurWIN’s pseudo code. This discussion pertains only to the wireless scheduling case.
The neural network has 625 trainable parameters given as {2, 16, 32, 1} neuron layers. The training happens for 1,000,000 episodes, and we save the model parameters every 1000 episodes. Hence, the training results in 1000 models trained up to different episode limits.
For the wireless scheduling case, we set the sigmoid value m = 0.01, the mini-batch size to 5 episodes, and the discount factor to β = 0.999. The episode time horizon is dependent on the remaining job size y[t]. The episode terminates either if y[t] = 0 or t = 3000. The holding cost is set to c = 1, which is incurred for each timestep the job is not completed. We also set the good transmission rate r2 = 33.6 kb, and the bad channel transmission rate r1 = 8.4 kb. During training, the good channel probability is q(v[t]) = 0.5.
The episode defines one job size sampled uniformly from the range y[t = 1] ∼ (0, 1 Mb]. All episodes in one mini-batch have the same initial state, as well as the same sequence of good channel states
[v[t = 1], v[t = 2], . . . , v[t = T]].
At the agent side, NeurWIN receives the initial state s[t = 1], and sets the activation cost λ = fθ(s[t = 1]) for all timesteps of all mini-batch episodes. As mentioned before, we save the trained model at an interval of 1000 episodes. For 1,000,000 episodes, this results in 1000 models trained up to their respective episode limits.
B.3 INFERENCE SETTING
For testing, the aim is to measure the trained models’ control performance against the size-aware index. We instantiate N arms and activate M arms at each timestep t until all users’ jobs terminate. We average the total discounted reward for all control policies over 200 independent inference runs. Half of the arms have a good channel probability q(v[t]) = 0.75. The other half have a good channel probability q(v[t]) = 0.1.
We compare NeurWIN’s control policy at different training episode limits with the size-aware index policy. The size-aware index is defined as follows: at each timestep, the policy prioritizes arms in the good channel state, and calculates their secondary index. The secondary index v̂i of arm i in state (yi[t], vi[t]) is defined as
v̂i(yi[t], vi[t]) = ci ri,2 / yi[t]. (4)
The size-aware policy then activates the M arms with the highest indices. In case the number of good channel arms is below M, the policy also calculates the primary index of all remaining arms. The primary index vi of arm i in state (yi[t], vi[t]) is defined as
vi(yi[t], vi[t]) = ci / (qi[t](ri,2/ri,1) − 1). (5)
Rewards received from all arms are summed, and discounted using β = 0.999. The inference phase proceeds until all jobs have been completed.
For NeurWIN’s control policy, we record the total discounted reward for the offline-trained models. For example, we set N arms, each coupled with a model trained on 10,000 episodes. The models output their arms’ indices, and the top M indexed arms are activated. In case fewer than M arms remain, we activate all remaining arms at timestep t. The timestep reward β^t R[t] = β^t Σ_{i=1}^{N} ri[t] is the discounted sum of all arms’ rewards. Once testing for the current model is finished, we load the next model, trained on 11,000 episodes, for each arm, and repeat the process. We note that the arms’ initial loads are the same across runs, and that the sequence of good channel states is random." }, { "heading": "B.4 REINFORCE TRAINING AND INFERENCE SETTING ON WIRELESS SCHEDULING", "text": "The REINFORCE algorithm was applied only to the (N = 4, M = 1) case. The four arms have the same training setting as described in section B.2. The training parameters are: initial learning rate L = 0.001, mini-batch size of 5 episodes, and good channel probability q(v[t]) = 0.5 for all four arms. The episode time horizon has a hard limit of T̄ = 3000 timesteps. However, an episode can terminate earlier if all arms’ loads were fully processed (i.e., Σ_{i=1}^{4} yi[t] = 0). Training was done up to 100,000 episodes, where the trained parameters were saved at an interval of 1000 episodes. The selected neural network had 2532 trainable parameters, so as to have slightly more parameters than four NeurWIN neural networks.
For testing, the same procedure is followed as in B.3. The trained REINFORCE models were loaded, and each was tested on the same arms as the NeurWIN and size-aware index policies. The final control policy result was plotted along with the NeurWIN and size-aware index policies for the (N = 4, M = 1) testing setup."
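For reference, a compact sketch of the size-aware ranking rule of Equations (4) and (5) is given below; the vectorized layout, the tie-handling, and all names are our own illustrative choices.
```python
import numpy as np

def size_aware_activations(y, v, q, c, r2, r1, M):
    """Pick the M arms to activate under the size-aware index policy.

    y: remaining loads, v: channel states (1 = good), q: good-channel
    probabilities, c: holding costs, r2/r1: good/bad rates (per arm).
    """
    # Secondary index (Eq. 4), only for arms currently in a good channel.
    secondary = np.where(v == 1, c * r2 / np.maximum(y, 1e-9), -np.inf)
    chosen = [i for i in np.argsort(secondary)[::-1][:M]
              if secondary[i] > -np.inf]
    if len(chosen) < M:
        # Fewer than M good-channel arms: fall back to the primary
        # index (Eq. 5) for the remaining arms.
        primary = c / (q * (r2 / r1) - 1.0)
        rest = [i for i in np.argsort(primary)[::-1] if i not in chosen]
        chosen += rest[: M - len(chosen)]
    return chosen
```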
}, { "heading": "C DEADLINE SCHEDULING TRAINING AND INFERENCE DETAILS", "text": "" }, { "heading": "C.1 FORMULATED RESTLESS BANDIT FOR THE DEADLINE SCHEDULING CASE", "text": "The state s[t], action a[t], reward r[t], and next state s[t+ 1] of one arm are listed below:\nState s[t]: The state is a vector (D,B). B denotes the job size (i.e. amount of electricity needed for an electric vehicle), and D is the job’s time until the hard drop deadline d is reached (i.e. time until an electric vehicle leaves).\nAction a[t]: The agent can either activate the arm a[t] = 1, or leave it passive a[t] = 0. The next state changes based on two different transition kernels depending on the selected action. The reward is also dependent on the action at time t.\nReward r[t]: The agent, at time t, receives a reward r[t] from the arm,\nr[t] = (1− c)a[t] if B[t] > 0, D[t] > 1 (1− c)a[t]− F (B[t]− a[t]) if B[t] > 0, D[t] = 1\n0 otherwise\n(6)\nWhere c is a constant processing cost incurred when activating the arm, F (B[t]−a[t]) is the penalty function for failing to complete the job before D = 1. The penalty function was chosen to be F (B[t]− a[t]) = 0.2(B[t]− a[t])2. Next state s[t + 1]: The next state D[t + 1] decreases by one, while the job size B depends on the selected action as,\ns[t+ 1] = (D[t]− 1, B[t]− a[t]) if D[t] > 1\n(D,B) with prob. Q(D,B) if D[t] ≤ 1 (7)\nWhere Q(D,B) is the arrival probability of a new job (i.e. a new electric vehicle arriving at a charging station) if the position is empty. For training and inference, we set Q(D,B) = 0.7." }, { "heading": "C.2 STRONG INDEXABILITY PROOF FOR THE DEADLINE SCHEDULING CASE", "text": "It has been shown that the Whittle index for this problem is,\nv(D,B) := 0 if B = 0 1− c if 1 ≤ B ≤ D − 1\nβD−1F (B −D + 1) −βD−1F (B −D) + 1− c if D ≤ B\n(8)\nWe further demonstrate that this problem is strongly indexable.\nTheorem 3. The restless bandit for the deadline scheduling problem is strongly indexable.\nProof. Fix a state s = (D,B), the function Ds(λ) := (Qλ,act(s) − Qλ,pass(s)) is a continuous and piece-wise linear function since the number of states is finite. Thus, it is sufficient to prove that Ds(λ) is strictly decreasing at all points of λ where Ds(λ) is differentiable. Let Lλ,act(s) be the sequence of actions taken by a policy that activates the arm at round 1, and then uses the optimal policy starting from round 2. Let Lλ,pass(s) be the sequence of actions taken by a policy that does not activate the arm at round 1, and then uses the optimal policy starting from round 2. We prove this theorem by comparing Lλ,act(s) and Lλ,pass(s) on every sample path. We consider the following two scenarios:\nIn the first scenario, Lλ,act(s) and Lλ,pass(s) are the same starting from round 2. Let b be the remaining job size when the current deadline expires under Lλ,act(s). Since Lλ,pass(s) is the same as Lλ,act(s) starting from round 2, its remaining job size when the current deadline expires is b+ 1. Thus, Ds(λ) = 1 − c − λ + βD−1(F (b + 1) − F (b)), which is strictly decreasing in λ whenever Ds(λ) is differentiable.\nIn the second scenario, Lλ,act(s) and Lλ,pass(s) are not the same after round 2. Let τ be the first time after round 2 that they are different. Since they are the same between round 2 and round τ , the remaining job size under Lλ,act(s) is no larger than that under Lλ,pass(s). Moreover, the Whittle index is increasing in job size. Hence, we can conclude that, on round τ , Lλ,pass(s) activates the arm and Lλ,act(s) does not activate the arm. 
After round τ, Lλ,act(s) and Lλ,pass(s) are in the same state and will choose the same actions for all following rounds. Thus, the two sequences only see different rewards on round 1 and round τ, and we have Ds(λ) = (1 − c − λ)(1 − β^{τ−1}), which is strictly decreasing in λ whenever Ds(λ) is differentiable.
Combining the two scenarios, the proof is complete." }, { "heading": "C.3 TRAINING SETTING", "text": "NeurWIN training is run for 1000 episodes on the deadline scheduling case. We save the trained model parameters at an interval of 5 episodes for inferring the control policy after training. Hence, the training produces 200 different sets of parameters that output the estimated index given their respective training limits. The neural network had 625 trainable parameters given as {2, 16, 32, 1}, where the input layer matches the state size.
For the deadline scheduling training, we set the sigmoid value m = 1, the episode’s time horizon T = 3000 timesteps, the mini-batch size to 5 episodes, and the discount factor β = 0.999. The processing cost is c = 0.5, with the job arrival rate Q(D, B) = 0.7. The training procedure follows section 4.2 from the main text. The arm randomly picks an initial state s[t = 1] = (D, B), with a maximum D̄ = 12 and maximum B̄ = 9. The arm fixes the initial states across episodes in the same mini-batch for proper return comparison. The sequence of job arrivals in an episode’s horizon is also fixed across a mini-batch. For example, if one episode in mini-batch 1 has the sequence [(11, 5), (6, 2), (8, 4), . . . , (3, 5)], then all other episodes in the same mini-batch would see the same sequence. This way, the actions taken by the agent are the critical factor in comparing mini-batch returns, and ultimately in tuning the estimated index value fθ(·). At the agent side, NeurWIN receives the initial state s[t = 1] and sets the activation cost λ = fθ(s[t = 1]). This activation cost selection method hence depends on the current network parameters θ, which are modified after every gradient ascent step. Training follows as described in NeurWIN’s pseudo code.
In figure 7, we plot the trained NeurWIN index for all possible state enumerations with B̄ = 9 and D ∈ {1, 2, 3}. The output index from the untrained neural network is also plotted for convergence comparison.
In figure 8, the trained restless bandit indices for the noisy reward function are given for all possible states with B̄ = 9 and D ∈ {1, 2, 3}. For N(0, 0.05) noise added per timestep, the learned indices still match the state ordering found when trained with the true reward function.
C.4 INFERENCE SETTING
In order to infer the resultant control policy, we are required to test the performance of models saved at different episode intervals. In other words, the trained models’ parameters are tested at an interval of episodes, and their discounted rewards are plotted for comparison.
From the trained models described in C.3, we instantiate N arms, and activate M arms at each timestep. The inference step compares the resultant control policy with the deadline Whittle index v(D, B).
The testing is done for a time horizon of T = 3000 timesteps. The queue, modelled as N restless arms, has M positions activated at each timestep. Each arm has a unique sequence of job arrivals that differentiates its index value from other arms. For the deadline Whittle index, we calculate the indices according to Equation (8), and activate the M arms with the highest indices.
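Since Equation (8) is in closed form, the deadline Whittle index comparison policy takes only a few lines of Python; the helper names below are ours, while the penalty F and the constants c and β follow the values stated in this appendix.
```python
def penalty(b):
    """Penalty function from Section C.1: F(x) = 0.2 * x^2."""
    return 0.2 * b ** 2

def deadline_whittle_index(D, B, c=0.5, beta=0.999):
    """Closed-form Whittle index v(D, B) of Equation (8)."""
    if B == 0:
        return 0.0
    if 1 <= B <= D - 1:
        return 1.0 - c
    # Remaining case: D <= B.
    return beta ** (D - 1) * (penalty(B - D + 1) - penalty(B - D)) + 1.0 - c

# At each timestep, the policy activates the M arms with the largest v(D, B).
print(deadline_whittle_index(D=3, B=5))
```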
The accumulated reward from all arms (activated and passive) is then discounted with β.
For NeurWIN’s control policy, we instantiate N arms, and test the trained models up to a given episode. For example, we load a NeurWIN model trained for 100 episodes on one arm, and set N arms each with its own agent trained for 100 episodes. Once the testing is complete, we load the next model, trained for 105 episodes, and repeat the process. The final result is NeurWIN’s control policy’s performance on N arms given the models’ training.
We perform the testing over 200 independent runs for models trained up to 1000 episodes, where in each run the arms are seeded differently. We stress that both the deadline Whittle index and NeurWIN policies were applied on identically seeded arms across the 200 runs, meaning the sequence of arrivals and rewards experienced was fixed for each arm in each run. Results were provided in the main text for this setting." }, { "heading": "C.5 REINFORCE TRAINING AND INFERENCE SETTING ON DEADLINE SCHEDULING", "text": "The REINFORCE algorithm was applied to the (N = 4, M = 1) testing case. For training, REINFORCE used the same training setting as described in C.3, with the same parameters where appropriate.
The four restless arms were seeded differently to give unique job sequences. Training was run for 1000 episodes, where the trained parameters were saved at an interval of 5 episodes. The selected neural network had 2532 trainable parameters. The REINFORCE parameter count is purposefully slightly larger than the 625 × 4 = 2500 parameters of four NeurWIN neural networks.
For testing, the same procedure is followed as explained in C.4. The trained REINFORCE models were loaded, and each was tested on the same arms as the NeurWIN and deadline Whittle index policies. The testing was made for all 200 trained models (each trained up to a different episode count). The final control policy result was plotted along with the NeurWIN and Whittle index policies for (N = 4, M = 1) arms." }, { "heading": "C.6 QWIC TRAINING AND INFERENCE SETTING ON DEADLINE SCHEDULING", "text": "QWIC was trained in an offline setting for the settings (N = 4, M = 1), (N = 100, M = 10), and (N = 100, M = 25). We select the same candidate set Λ as in the recovering bandits case, which is 100 values evenly spaced in the interval [0, 10]. ε was initialized to εmax = 1, and decays with factor α = 0.01 to εmin = 0.01. ε is updated at each timestep during training until it decays to εmin.
Other training parameters: initial learning rate L = 0.001, training episode time horizon of T = 3000 timesteps, and discount factor β = 0.999. Training was done up to 1,000 episodes, where the Q-learned indices Λ̄ were saved at an interval of 5 episodes. We test the Q-learned indices using the same setting as NeurWIN and REINFORCE. The estimated index mappings were tested for 200 independent runs.
We refer the reader to the code for further implementation details." } ]
2020
NEURWIN: NEURAL WHITTLE INDEX NETWORK FOR RESTLESS BANDITS VIA DEEP RL
SP:14f1bc469eb56dec5dd691c4a4865aa607fa344e
[ "In this paper, a neural control method is proposed with stability guarantees. The control is assumed to be from a neural network that takes in the state. Stability is guaranteed by projecting the control to the set that satisfies the Lyapunov stability condition for the LQR problem. In particular, minimizing the cost of LQR cost subject to stability constraints can be cast as an SDP for norm-bouned linear differential inclusions. Through making use of the convex optimization layers proposed in Agrawal et al. (2019), the SDP can be added as a layer after the neural policy and efficient projections can be derived such that implicit function theorem can be utilized to differentiate through the fixed point (the optimal conditions of the SDP), such that end to end learning is possible. The proposed approach is compared with the unconstrained method on various tasks. Both model-based and model-free RL algorithms are used as the neural policy for comparison. The stability-guaranteed approach is able to remain stable even under bounded adversarial dynamics. In comparison, the non-robust methods fail to maintain stability." ]
When designing controllers for safety-critical systems, practitioners often face a challenging tradeoff between robustness and performance. While robust control methods provide rigorous guarantees on system stability under certain worst-case disturbances, they often yield simple controllers that perform poorly in the average (non-worst) case. In contrast, nonlinear control methods trained using deep learning have achieved state-of-the-art performance on many control tasks, but often lack robustness guarantees. In this paper, we propose a technique that combines the strengths of these two approaches: constructing a generic nonlinear control policy class, parameterized by neural networks, that nonetheless enforces the same provable robustness criteria as robust control. Specifically, our approach entails integrating custom convex-optimization-based projection layers into a neural network-based policy. We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
[ { "affiliations": [], "name": "Priya L. Donti" }, { "affiliations": [], "name": "Melrose Roderick" }, { "affiliations": [], "name": "Mahyar Fazlyab" }, { "affiliations": [], "name": "J. Zico Kolter" } ]
[ { "authors": [ "Kemin Zhou", "John Comstock Doyle" ], "title": "Essentials of Robust Control, volume 104", "venue": "Prentice hall Upper Saddle River, NJ,", "year": 1998 }, { "authors": [ "Tamer Başar", "Pierre Bernhard" ], "title": "H∞-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Stephen Boyd", "Laurent El Ghaoui", "Eric Feron", "Venkataramanan Balakrishnan" ], "title": "Linear Matrix Inequalities in System and Control Theory, volume", "venue": null, "year": 1994 }, { "authors": [ "Mayuresh V Kothare", "Venkataramanan Balakrishnan", "Manfred Morari" ], "title": "Robust constrained model predictive control using linear matrix inequalities", "venue": null, "year": 1996 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Ilge Akkaya", "Marcin Andrychowicz", "Maciek Chociej", "Mateusz Litwin", "Bob McGrew", "Arthur Petron", "Alex Paino", "Matthias Plappert", "Glenn Powell", "Raphael Ribas" ], "title": "Solving Rubik’s Cube with a Robot Hand", "venue": null, "year": 1910 }, { "authors": [ "Lucian Buşoniu", "Tim de Bruin", "Domagoj Tolić", "Jens Kober", "Ivana Palunko" ], "title": "Reinforcement learning for control: Performance, stability, and deep approximators", "venue": "Annual Reviews in Control,", "year": 2018 }, { "authors": [ "Jun Morimoto", "Kenji Doya" ], "title": "Robust Reinforcement Learning", "venue": "Neural Computation,", "year": 2005 }, { "authors": [ "Murad Abu-Khalaf", "Frank L Lewis", "Jie Huang" ], "title": "Policy Iterations on the Hamilton–Jacobi–Isaacs Equation forH∞ State Feedback Control With Input Saturation", "venue": "IEEE Transactions on Automatic Control,", "year": 1989 }, { "authors": [ "Yantao Feng", "Brian DO Anderson", "Michael Rotkowitz" ], "title": "A game theoretic algorithm to compute local stabilizing solutions to HJBI equations in nonlinear H∞ control", "venue": null, "year": 2009 }, { "authors": [ "Derong Liu", "Hongliang Li", "Ding Wang" ], "title": "Neural-network-based zero-sum game for discrete-time nonlinear systems via iterative adaptive dynamic programming", "venue": "algorithm. 
Neurocomputing,", "year": 2013 }, { "authors": [ "Huai-Ning Wu", "Biao Luo" ], "title": "Simultaneous policy update algorithms for learning the solution of linear continuous-time H∞ state feedback control", "venue": "Information Sciences,", "year": 2013 }, { "authors": [ "Stefan R Friedrich", "Martin Buss" ], "title": "A robust stability approach to robot reinforcement learning based on a parameterization of stabilizing controllers", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2017 }, { "authors": [ "Lerrel Pinto", "James Davidson", "Rahul Sukthankar", "Abhinav Gupta" ], "title": "Robust Adversarial Reinforcement Learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Ming Jin", "Javad Lavaei" ], "title": "Stability-certified reinforcement learning: A control-theoretic perspective", "venue": "arXiv preprint arXiv:1810.11505,", "year": 2018 }, { "authors": [ "Ya-Chien Chang", "Nima Roohi", "Sicun Gao" ], "title": "Neural Lyapunov Control", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Minghao Han", "Yuan Tian", "Lixian Zhang", "Jun Wang", "Wei Pan" ], "title": "H∞ Model-free Reinforcement Learning with Robust Stability Guarantee", "venue": null, "year": 2019 }, { "authors": [ "Matteo Turchetta", "Felix Berkenkamp", "Andreas Krause" ], "title": "Safe Exploration in Finite Markov Decision Processes with Gaussian Processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Anayo K. Akametalu", "Shahab Kaynama", "Jaime F. Fisac", "Melanie Nicole Zeilinger", "Jeremy H. Gillula", "Claire J. Tomlin" ], "title": "Reachability-based safe learning with Gaussian processes", "venue": "In 53rd IEEE Conference on Decision and Control,", "year": 2014 }, { "authors": [ "Felix Berkenkamp", "Matteo Turchetta", "Angela P. Schoellig", "Andreas Krause" ], "title": "Safe Model-based Reinforcement Learning with Stability Guarantees", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Akifumi Wachi", "Yanan Sui", "Yisong Yue", "Masahiro Ono" ], "title": "Safe Exploration and Optimization of Constrained MDPs Using Gaussian Processes", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Eitan Altman" ], "title": "Constrained Markov Decision Processes, volume 7", "venue": "CRC Press,", "year": 1999 }, { "authors": [ "Joshua Achiam", "David Held", "Aviv Tamar", "Pieter Abbeel" ], "title": "Constrained Policy Optimization", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Majid Alkaee Taleghan", "Thomas G. 
Dietterich" ], "title": "Efficient Exploration for Constrained MDPs", "venue": "AAAI Spring Symposia,", "year": 2018 }, { "authors": [ "Tsung-Yen Yang", "Justinian Rosca", "Karthik Narasimhan", "Peter J Ramadge" ], "title": "Projection-Based Constrained Policy Optimization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Brandon Amos", "J Zico Kolter" ], "title": "OptNet: Differentiable Optimization as a Layer in Neural Networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Po-Wei Wang", "Priya Donti", "Bryan Wilder", "Zico Kolter" ], "title": "SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Josip Djolonga", "Andreas Krause" ], "title": "Differentiable Learning of Submodular Models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Sebastian Tschiatschek", "Aytunc Sahin", "Andreas Krause" ], "title": "Differentiable Submodular Maximization", "venue": "In Proceedings of the 27th International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Akshay Agrawal", "Brandon Amos", "Shane Barratt", "Stephen Boyd", "Steven Diamond", "J Zico Kolter" ], "title": "Differentiable Convex Optimization Layers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Stephen Gould", "Richard Hartley", "Dylan Campbell" ], "title": "Deep Declarative Networks: A New Hope", "venue": "arXiv preprint arXiv:1909.04866,", "year": 2019 }, { "authors": [ "Wassim M Haddad", "VijaySekhar Chellaboina" ], "title": "Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach", "venue": null, "year": 2011 }, { "authors": [ "David D Yao", "Shuzhong Zhang", "Xun Yu Zhou" ], "title": "A primal-dual semi-definite programming approach to linear quadratic control", "venue": "IEEE Transactions on Automatic Control,", "year": 2001 }, { "authors": [ "Quang Linh Lam", "Antoneta Iuliana Bratcu", "Delphine Riu" ], "title": "Frequency Robust Control in Standalone Microgrids with PV Sources: Design and Sensitivity Analysis. 2016", "venue": null, "year": 2016 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal Policy Optimization Algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Hassan K Khalil", "Jessy W Grizzle" ], "title": "Nonlinear Systems, volume", "venue": null, "year": 2002 }, { "authors": [ "Stephen Boyd", "Lieven Vandenberghe" ], "title": "Convex Optimization", "venue": null, "year": 2004 }, { "authors": [ "Yurii Nesterov" ], "title": "Introductory Lectures on Convex Optimization: A Basic Course, volume 87", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Heinz H Bauschke" ], "title": "Projection algorithms and monotone operators", "venue": "PhD thesis, Dept. 
of Mathematics and Statistics, Simon Fraser University,", "year": 1996 }, { "authors": [ "Russ Tedrake" ], "title": "Underactuated robotics: Learning, planning, and control for efficient and agile machines course notes for MIT 6.832", "venue": "Working draft edition,", "year": 2009 }, { "authors": [ "Sumeet Singh", "Spencer M Richards", "Vikas Sindhwani", "Jean-Jacques E Slotine", "Marco Pavone" ], "title": "Learning stabilizable nonlinear dynamics with contraction-based regularization", "venue": "The International Journal of Robotics Research,", "year": 2020 }, { "authors": [ "Roman Kuiava", "Rodrigo A Ramos", "Hemanshu R Pota" ], "title": "A New Method to Design Robust Power Oscillation Dampers for Distributed Synchronous Generation Systems", "venue": "Journal of Dynamic Systems, Measurement, and Control,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "The field of robust control, dating back many decades, has been able to provide rigorous guarantees on when controllers will succeed or fail in controlling a system of interest. In particular, if the uncertainties in the underlying dynamics can be bounded in specific ways, these techniques can produce controllers that are provably robust even under worst-case conditions. However, as the resulting policies tend to be simple (i.e., often linear), this can limit their performance in typical (rather than worst-case) scenarios. In contrast, recent high-profile advances in deep reinforcement learning have yielded state-of-the-art performance on many control tasks, due to their ability to capture complex, nonlinear policies. However, due to a lack of robustness guarantees, these techniques have still found limited application in safety-critical domains where an incorrect action (either during training or at runtime) can substantially impact the controlled system.\nIn this paper, we propose a method that combines the guarantees of robust control with the flexibility of deep reinforcement learning (RL). Specifically, we consider the setting of nonlinear, time-varying systems with unknown dynamics, but where (as common in robust control) the uncertainty on these dynamics can be bounded in ways amenable to obtaining provable performance guarantees. Building upon specifications provided by traditional robust control methods in these settings, we construct a new class of nonlinear policies that are parameterized by neural networks, but that are nonetheless provably robust. In particular, we project the outputs of a nominal (deep neural network-based) controller onto a space of stabilizing actions characterized by the robust control specifications. The resulting nonlinear control policies are trainable using standard approaches in deep RL, yet are guaranteed to be stable under the same worst-case conditions as the original robust controller.\nWe describe our proposed deep nonlinear control policy class and derive efficient, differentiable projections for this class under various models of system uncertainty common in robust control. We demonstrate our approach on several different domains, including synthetic linear differential inclusion (LDI) settings, the cart-pole task, a quadrotor domain, and a microgrid domain. Although these domains are simple by modern RL standards, we show that purely RL-based methods often produce unstable policies in the presence of system disturbances, both during and after training. In contrast, we show that our method remains stable even when worst-case disturbances are present, while improving upon the performance of traditional robust control methods." }, { "heading": "2 RELATED WORK", "text": "We employ techniques from robust control, (deep) RL, and differentiable optimization to learn provably robust nonlinear controllers. We discuss these areas of work in connection to our approach.\nRobust control. Robust control is concerned with the design of feedback controllers for dynamical systems with modeling uncertainties and/or external disturbances (Zhou and Doyle, 1998; Başar and Bernhard, 2008), specifically controllers with guaranteed performance under worst-case conditions. 
Many classes of robust control problems in both the time and frequency domains can be formulated using linear matrix inequalities (LMIs) (Boyd et al., 1994; Kothare et al., 1996); for reasonably-sized problems, these LMIs can be solved using off-the-shelf numerical solvers based on interior-point or first-order (gradient-based) methods. However, providing stability guarantees often requires the use of simple (linear) controllers, which greatly limits average-case performance. Our work seeks to improve performance via nonlinear controllers that nonetheless retain the same stability guarantees.\nReinforcement learning (RL). In contrast, RL (and specifically, deep RL) is not restricted to simple controllers or problems with uncertainty bounds on the dynamics. Instead, deep RL seeks to learn an optimal control policy, represented by a neural network, by directly interacting with an unknown environment. These methods have shown impressive results in a variety of complex control tasks (e.g., Mnih et al. (2015); Akkaya et al. (2019)); see Buşoniu et al. (2018) for a survey. However, due to its lack of safety guarantees, deep RL has been predominantly applied to simulated environments or highly-controlled real-world problems, where system failures are either not costly or not possible.\nEfforts to address the lack of safety and stability in RL fall into several main categories. The first tries to combine control-theoretic ideas, predominantly robust control, with the nonlinear control policy benefits of RL (e.g., Morimoto and Doya (2005); Abu-Khalaf et al. (2006); Feng et al. (2009); Liu et al. (2013); Wu and Luo (2013); Luo et al. (2014); Friedrich and Buss (2017); Pinto et al. (2017); Jin and Lavaei (2018); Chang et al. (2019); Han et al. (2019); Zhang et al. (2020)). For example, RL has been used to address stochastic stability in H∞ control synthesis settings by jointly learning Lyapunov functions and policies in these settings (Han et al., 2019). As another example, RL has been used to address H∞ control for continuous-time systems via min-max differential games, in which the controller and disturbance are the “minimizer” and “maximizer” (Morimoto and Doya, 2005). We view our approach as thematically aligned with this previous work, though our method is able to capture not only H∞ settings, but also a much broader class of robust control settings.\nAnother category of methods addressing this challenge is safe RL, which aims to learn control policies while maintaining some notion of safety during or after learning. Typically, these methods attempt to restrict the RL algorithm to a safe region of the state space by making strong assumptions about the smoothness of the underlying dynamics, e.g., that the dynamics can be modeled as a Gaussian process (GP) (Turchetta et al., 2016; Akametalu et al., 2014) or are Lipschitz continuous (Berkenkamp et al., 2017; Wachi et al., 2018). This framework is in theory more general than our approach, which requires using stringent uncertainty bounds (e.g. state-control norm bounds) from robust control. However, there are two key benefits to our approach. First, norm bounds or polytopic uncertainty can accommodate sharp discontinuities in the continuous-time dynamics. Second, convex projections (as used in our method) scale polynomially with the state-action size, whereas GPs in particular scale exponentially (and are therefore difficult to extend to high-dimensional problems).\nA third category of methods uses Constrained Markov Decision Processes (C-MDPs). 
These methods seek to maximize a discounted reward while bounding some discounted cost function (Altman, 1999; Achiam et al., 2017; Taleghan and Dietterich, 2018; Yang et al., 2020). While these methods do not require knowledge of the cost functions a-priori, they only guarantee the cost constraints hold during test time. Additionally, using C-MDPs can yield other complications, such as optimal policies being stochastic and the constraints only holding for a subset of states.
Differentiable optimization layers. A great deal of recent work has studied differentiable optimization layers for neural networks: e.g., layers for quadratic programming (Amos and Kolter, 2017), SAT solving (Wang et al., 2019), submodular optimization (Djolonga and Krause, 2017; Tschiatschek et al., 2018), cone programs (Agrawal et al., 2019), and other classes of optimization problems (Gould et al., 2019). These layers can be used to construct neural networks with useful inductive bias for particular domains or to enforce that networks obey hard constraints dictated by the settings in which they are used. We create fast, custom differentiable optimization layers for the latter purpose, namely, to project neural network outputs into a set of certifiably stabilizing actions." }, { "heading": "3 BACKGROUND ON LQR AND ROBUST CONTROL SPECIFICATIONS", "text": "In this paper, our aim is to control nonlinear (continuous-time) dynamical systems of the form
ẋ(t) ∈ A(t)x(t) + B(t)u(t) + G(t)w(t), (1)
where x(t) ∈ R^s denotes the state at time t; u(t) ∈ R^a is the control input; w(t) ∈ R^d captures both external (possibly stochastic) disturbances and any modeling discrepancies; ẋ(t) denotes the time derivative of the state x at time t; and A(t) ∈ R^{s×s}, B(t) ∈ R^{s×a}, G(t) ∈ R^{s×d}. This class of models is referred to as linear differential inclusions (LDIs); however, we note that despite the name, this class does indeed characterize nonlinear systems, as, e.g., w(t) can depend arbitrarily on x(t) and u(t) (though we omit this dependence in the notation for brevity). Within this class of models, it is often possible to construct robust control specifications certifying system stability. Given such specifications, our proposal is to learn nonlinear (deep neural network-based) policies that provably satisfy these specifications while optimizing some objective of interest. We start by giving background on the robust control specifications and objectives considered in this work." }, { "heading": "3.1 ROBUST CONTROL SPECIFICATIONS", "text": "In the continuous-time, infinite-horizon settings we consider here, the goal of robust control is often to construct a time-invariant control policy u(t) = π(x(t)), alongside some certification that guarantees that the controlled system will be stable (i.e., that trajectories of the system will converge to an equilibrium state, usually x = 0 by convention; see Haddad and Chellaboina (2011) for a more formal definition). For many classes of systems,1 this certification is typically in the form of a positive definite Lyapunov function V : R^s → R, with V(0) = 0 and V(x) > 0 for all x ≠ 0, such that the function is decreasing along trajectories – for instance,
V̇(x(t)) ≤ −αV(x(t)) (2)
for some design parameter α > 0. (This particular condition implies exponential stability with a rate of convergence α.2)
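For a quadratic Lyapunov function V(x) = x^T P x, condition (2) is straightforward to check numerically at a given state; the small helper below is our own illustration, not part of the proposed method.
```python
import numpy as np

def satisfies_condition_2(x, xdot, P, alpha):
    """Check Vdot(x) = xdot^T P x + x^T P xdot <= -alpha * x^T P x."""
    V = x @ P @ x
    Vdot = xdot @ P @ x + x @ P @ xdot
    return Vdot <= -alpha * V

# Example: the stable scalar system xdot = -x with V(x) = x^2, alpha = 1.
print(satisfies_condition_2(np.array([2.0]), np.array([-2.0]),
                            np.eye(1), alpha=1.0))  # True
```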
For certain classes of bounded dynamical systems, time-invariant linear control policies u(t) = Kx(t), and quadratic Lyapunov functions V(x) = x^T P x, it is possible to construct such guarantees using semidefinite programming. For instance, consider the class of norm-bounded LDIs (NLDIs)
ẋ = Ax(t) + Bu(t) + Gw(t), ‖w(t)‖2 ≤ ‖Cx(t) + Du(t)‖2, (3)
where A ∈ R^{s×s}, B ∈ R^{s×a}, G ∈ R^{s×d}, C ∈ R^{k×s}, and D ∈ R^{k×a} are time-invariant and known, and the disturbance w(t) is arbitrary (and unknown) but obeys the norm bounds above.3 For these systems, it is possible to specify a set of stabilizing policies via a set of linear matrix inequalities (LMIs, Boyd et al. (1994)):
[ AS + SA^T + µGG^T + BY + Y^T B^T + αS    SC^T + Y^T D^T ]
[ CS + DY                                  −µI            ] ⪯ 0,   S ≻ 0,   µ > 0, (4)
where S ∈ R^{s×s} and Y ∈ R^{a×s}. For matrices S and Y satisfying (4), K = Y S^{−1} and P = S^{−1} are then a stabilizing linear controller gain and Lyapunov matrix, respectively. While the LMI above is specific to NLDI systems, this general paradigm of constructing stability specifications using LMIs applies to many settings commonly considered in robust control (e.g., settings with norm-bounded disturbances or polytopic uncertainty, or H∞ control settings). More details about these types of formulations are given in, e.g., Boyd et al. (1994); in addition, we provide the relevant LMI constraints for the settings we consider in this work in Appendix A.
1In this work, we consider sub-classes of system (1) that may indeed be stochastic (e.g., due to a stochastic external disturbance w(t)), but that can be bounded so as to be amenable to deterministic stability analysis. However, other settings may require stochastic stability analysis; please see Astrom (1971).
2See, e.g., Haddad and Chellaboina (2011) for a more rigorous definition of (local and global) exponential stability. Condition (2) comes from Lyapunov’s Theorem, which characterizes various notions of stability using Lyapunov functions.
3A slightly more complex formulation involves an additional term in the norm bound, i.e., Cx(t) + Du(t) + Hw(t), which creates a quadratic inequality in w. The mechanics of obtaining robustness specifications in this setting are largely the same as presented here, though with some additional terms in the equations. As such, as is often done, we assume that H = 0 for simplicity." }, { "heading": "3.2 LQR CONTROL OBJECTIVES", "text": "In addition to designing for stability, it is often desirable to optimize some objective characterizing controller performance. While our method can optimize performance with respect to any arbitrary cost or reward function, to make comparisons with existing methods easier, for this paper we consider the well-known infinite-horizon “linear-quadratic regulator” (LQR) cost, defined as
∫_0^∞ ( x(t)^T Q x(t) + u(t)^T R u(t) ) dt, (5)
for some Q ∈ S^{s×s}, Q ⪰ 0 and R ∈ S^{a×a}, R ≻ 0. If the control policy is assumed to be time-invariant and linear as described above (i.e., u(t) = Kx(t)), minimizing the LQR cost subject to stability constraints can be cast as an SDP (see, e.g., Yao et al. (2001)) and solved using off-the-shelf numerical solvers – a fact that we exploit in our work. For example, to obtain an optimal linear time-invariant controller for the NLDI systems described above, we can solve
minimize_{S,Y}   tr(QS) + tr(R^{1/2} Y S^{−1} Y^T R^{1/2})   s.t. Equation (4) holds. (6)
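To show how problem (6) can be set up in practice, here is a CVXPY sketch. The epigraph variable X, used with a Schur complement to represent the tr(R^{1/2} Y S^{−1} Y^T R^{1/2}) term, the explicit symmetrization, and the small slack constants are our own modeling choices; this is a sketch of the standard SDP reformulation, not the authors' released solver code.
```python
import cvxpy as cp
import numpy as np
from scipy.linalg import sqrtm

def robust_lqr(A, B, G, C, D, Q, R, alpha=0.1):
    s, a = B.shape
    S = cp.Variable((s, s), symmetric=True)
    Y = cp.Variable((a, s))
    X = cp.Variable((a, a), symmetric=True)  # epigraph of the second trace term
    mu = cp.Variable(nonneg=True)
    Rh = np.real(sqrtm(R))
    lmi = cp.bmat([
        [A @ S + S @ A.T + mu * (G @ G.T) + B @ Y + Y.T @ B.T + alpha * S,
         S @ C.T + Y.T @ D.T],
        [C @ S + D @ Y, -mu * np.eye(C.shape[0])],
    ])
    # X >= R^{1/2} Y S^{-1} Y^T R^{1/2} via a Schur complement (valid for S > 0).
    schur = cp.bmat([[X, Rh @ Y], [Y.T @ Rh, S]])
    cons = [
        0.5 * (lmi + lmi.T) << 0,        # LMI (4), symmetrized explicitly
        S >> 1e-6 * np.eye(s),           # S strictly positive definite
        0.5 * (schur + schur.T) >> 0,
        mu >= 1e-6,
    ]
    prob = cp.Problem(cp.Minimize(cp.trace(Q @ S) + cp.trace(X)), cons)
    prob.solve(solver=cp.SCS)
    K = Y.value @ np.linalg.inv(S.value)  # stabilizing gain K = Y S^{-1}
    P = np.linalg.inv(S.value)            # Lyapunov matrix P = S^{-1}
    return K, P
```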
(6)" }, { "heading": "4 ENFORCING ROBUST CONTROL GUARANTEES WITHIN NEURAL NETWORKS", "text": "We now present the main contribution of our paper: A class of nonlinear control policies, potentially parameterized by deep neural networks, that is guaranteed to obey the same stability conditions enforced by the robustness specifications described above. The key insight of our approach is as follows: While it is difficult to derive specifications that globally characterize the stability of a generic nonlinear controller, if we are given known robustness specifications, we can create a sufficient condition for stability by simply enforcing that our policy satisfies these specifications at all t. For instance, given a known Lyapunov function, we can enforce exponential stability by ensuring that our policy sufficiently decreases this function (e.g., satisfies Equation (2)) at any given x(t).\nIn the following sections, we present our nonlinear policy class, as well as our general framework for learning provably robust policies using this policy class. We then derive the instantiation of this framework for various settings of interest. In particular, this involves constructing (custom) differentiable projections that can be used to adjust the output of a nominal neural network to satisfy desired robustness criteria. For simplicity of notation, we will often suppress the t-dependence of x, u, and w, but we note that these are continuous-time quantities as before." }, { "heading": "4.1 A PROVABLY ROBUST NONLINEAR POLICY CLASS", "text": "Given a dynamical system of the form (1) and a quadratic Lyapunov function V (x) = xTPx, let\nC(x) := {u ∈ Ra | V̇ (x) ≤ −αV (x) ∀ẋ ∈ A(t)x+B(t)u+G(t)w} (7) denote a set of actions that, for a fixed state x ∈ Rs, are guaranteed to satisfy the exponential stability condition (2) (even under worst-case realizations of the disturbance w). We note that this “safe” set is non-empty if P satisfies the relevant LMI constraints (e.g., system (4) for NLDIs) characterizing robust linear time-invariant controllers, as there is then some K corresponding to P such that Kx ∈ C(x) for all states x. Using this set of actions, we then construct a robust nonlinear policy class that projects the output of some neural network onto this set. More formally, consider an arbitrary nonlinear (neural networkbased) policy class π̂θ : Rs → Ra parameterized by θ, and let P(·) denote the projection operator for some set (·). We then define our robust policy class as πθ : Rs → Ra, where\nπθ(x) = PC(x)(π̂θ(x)). (8) We note that this policy class is differentiable if the projections can be implemented in a differentiable manner (e.g., using convex optimization layers (Agrawal et al., 2019), though we construct efficient custom solvers for our purposes). Importantly, as all policies in this class satisfy the stability condition (2) for all states x and at all times t, these policies are certifiably robust under the same conditions as the original (linear) controller for which the Lyapunov function V (x) was constructed.\nGiven this policy class and some performance objective ` (e.g., LQR cost), our goal is to then find parameters θ such that the corresponding policy optimizes this objective – i.e., to solve\nminimize θ ∫ ∞ 0 ` (x, πθ(x) ) dt s. t. ẋ ∈ A(t)x+B(t)πθ(x) +G(t)w. 
Algorithm 1 Learning provably robust controllers with deep RL
1: input performance objective ℓ // e.g., LQR cost
2: input stability requirement // e.g., V̇(x) ≤ −αV(x)
3: input policy optimizer A // e.g., a planning or RL algorithm
4: compute P, K satisfying LMI constraints // e.g., by optimizing (6)
5: construct specifications C(x) using P // as defined in Equation (7)
6: construct robust policy class πθ using C // as defined in Equation (8)
7: train πθ via A to optimize Equation (9)
8: return πθ
Since πθ is differentiable, we can solve this problem via a variety of approaches, e.g., a model-based planning algorithm if the true dynamics are known, or virtually any (deep) RL algorithm if the dynamics are unknown.4
This general procedure for constructing stabilizing controllers is summarized in Algorithm 1. While seemingly simple, this formulation presents a powerful paradigm: by simply transforming the output of a neural network, we can employ an expressive policy class to optimize an objective of interest while ensuring the resultant policy will stabilize the system during both training and testing.
We instantiate our framework by constructing “safe” sets C(x) and their associated (differentiable) projections P_{C(x)} for three settings of interest: NLDIs, polytopic linear differential inclusions (PLDIs), and H∞ control settings. As an example, we describe this procedure below for NLDIs, and refer readers to Appendix B for corresponding formulations for the additional settings we consider." }, { "heading": "4.2 EXAMPLE: NLDIS", "text": "In order to apply our framework to the NLDI setting (3), we first compute a quadratic Lyapunov function V(x) = x^T P x by solving the optimization problem (6) for the given system via semidefinite programming. We then use the resultant Lyapunov function to compute the system-specific “safe” set C(x), and then create a fast, custom differentiable solver to project onto this set." }, { "heading": "4.2.1 COMPUTING SETS OF STABILIZING ACTIONS", "text": "Given P, we compute C_NLDI(x) as the set of actions u ∈ R^a that, for each state x ∈ R^s, satisfy the stability condition (2) at that state under even a worst-case realization of the dynamics (i.e., in this case, even under a worst-case disturbance w). The form of the resultant set is given below.
Theorem 1. Consider the NLDI system (3), some stability parameter α > 0, and a Lyapunov function V(x) = x^T P x with P satisfying Equation (4). Assuming P exists, define
C_NLDI(x) := { u ∈ R^a | ‖Cx + Du‖2 ≤ −(x^T P B / ‖G^T P x‖2) u − (x^T (2PA + αP) x) / (2‖G^T P x‖2) }
for all states x ∈ R^s. For all x, C_NLDI(x) is a non-empty set of actions that satisfy the exponential stability condition (2). Further, C_NLDI(x) is a convex set in u.
Proof. We seek to find a set of actions such that the condition (2) is satisfied along all possible trajectories of (3). A set of actions satisfying this condition at a given x is given by
C_NLDI(x) := { u ∈ R^a | sup_{w:‖w‖2≤‖Cx+Du‖2} V̇(x) ≤ −αV(x) }.
Let S := {w : ‖w‖2 ≤ ‖Cx + Du‖2}. We can then rewrite the left side of the above inequality as
sup_{w∈S} V̇(x) = sup_{w∈S} (ẋ^T P x + x^T P ẋ) = 2x^T P(Ax + Bu) + sup_{w∈S} 2x^T P G w = 2x^T P(Ax + Bu) + 2‖G^T P x‖2 ‖Cx + Du‖2,
by the definition of the NLDI dynamics and the closed-form maximization of a linear term over an L2 ball. Rearranging yields an inequality of the desired form.
We note that by definition of the specifications (4), there is some K corresponding to P such that the policy u = Kx satisfies the exponential stability condition (2); thus, Kx ∈ C_NLDI, and C_NLDI is non-empty. Further, as the above inequality represents a second-order cone constraint in u, this set is convex in u.
4While this problem is infinite-horizon and continuous in time, in practice, one would optimize it in discrete time over a large finite time horizon.
We further consider the special case where D = 0, i.e., the norm bound on w does not depend on the control action. This form of NLDI arises in many common settings (e.g., where w characterizes linearization error in a nonlinear system but the dynamics depend only linearly on the action), and is one for which we can compute the relevant projection in closed form (as described shortly).
Corollary 1.1. Consider the NLDI system (3) with D = 0, some stability parameter α > 0, and Lyapunov function V(x) = x^T P x with P satisfying Equation (4). Assuming P exists, define
C_NLDI-0(x) := { u ∈ R^a | 2x^T P B u ≤ −x^T (2PA + αP) x − 2‖G^T P x‖2 ‖Cx‖2 }
for all states x ∈ R^s. For all x, C_NLDI-0(x) is a non-empty set of actions that satisfy the exponential stability condition (2). Further, C_NLDI-0(x) is a convex set in u.
Proof. The result follows by setting D = 0 in Theorem 1 and rearranging terms. As the above inequality represents a linear constraint in u, this set is convex in u." }, { "heading": "4.2.2 DERIVING EFFICIENT, DIFFERENTIABLE PROJECTIONS", "text": "For the general NLDI setting (3), we note that the relevant projection P_{C_NLDI(x)} (see Theorem 1) represents a projection onto a second-order cone constraint. As this projection does not necessarily have a closed form, we must implement it using a differentiable optimization solver (e.g., Agrawal et al. (2019)). For computational efficiency purposes, we implement a custom solver that employs an accelerated projected dual gradient method for the forward pass, and employs implicit differentiation through the fixed point equations of this solution method to compute relevant gradients for the backward pass. Derivations and additional details are provided in Appendix C.
In the case where D = 0 (see Corollary 1.1), we note that the projection operation P_{C_NLDI-0(x)} does have a closed form, and can in fact be implemented via a single ReLU operation. Specifically, defining η^T := 2x^T P B and ζ := −x^T (2PA + αP) x − 2‖G^T P x‖2 ‖Cx‖2, we see that
P_{C_NLDI-0(x)}(π̂(x)) = π̂(x) if η^T π̂(x) ≤ ζ, and π̂(x) − ((η^T π̂(x) − ζ) / (η^T η)) η otherwise; equivalently, P_{C_NLDI-0(x)}(π̂(x)) = π̂(x) − ReLU((η^T π̂(x) − ζ) / (η^T η)) η. (10)" }, { "heading": "5 EXPERIMENTS", "text": "Having instantiated our general framework, we demonstrate the power of our approach on a variety of simulated control domains.5 In particular, we evaluate performance on the following metrics:
• Average-case performance: How well does the method optimize the performance objective (i.e., LQR cost) under average (non-worst case) dynamics?
• Worst-case stability: Does the method remain stable even when subjected to adversarial (worst-case) dynamics?
In all cases, we show that our method is able to improve performance over traditional robust controllers under average conditions, while still guaranteeing stability under worst-case conditions." }, { "heading": "5.1 DESCRIPTION OF DYNAMICS SETTINGS", "text": "We evaluate our approach on five NLDI settings: two synthetic NLDI domains, the cart-pole task, a quadrotor domain, and a microgrid domain.
(Additional experiments for PLDI and H∞ control settings are described in Appendix I.) For each setting, we choose a time discretization based on the speed at which the system evolves, and run each episode for 200 steps over this discretization. In all cases except the microgrid setting, we use a randomly generated LQR objective where the matrices Q^{1/2} and R^{1/2} are drawn i.i.d. from a standard normal distribution.
5Code for all experiments is available at https://github.com/locuslab/robust-nn-control
[Figure 1: Test performance over training epochs for all learning methods employed in our experiments (panels: loss vs. training epochs for the NLDI (D = 0), NLDI (D ≠ 0), cart-pole, quadrotor, and microgrid settings, under both original and adversarial dynamics; methods: MBP, PPO, RARL, Robust MBP*, Robust PPO*). For each training epoch (10 updates for the MBP model and 18 for PPO), we report average quadratic loss over 50 episodes, and use “X” to indicate cases where the relevant method became unstable. (Lower loss is better.) Our robust methods (denoted by ∗), unlike the non-robust methods and RARL, remain stable under adversarial dynamics throughout training.]
Synthetic NLDI settings. We generate NLDIs of the form (3) with s = 5, a = 3, and d = k = 2 by generating matrices A, B, G, C and D i.i.d. from normal distributions, and producing the disturbance w(t) using a randomly-initialized neural network (with its output scaled to satisfy the norm-bound on the disturbance). We investigate settings both where D ≠ 0 and where D = 0. In both cases, episodes are run for 2 seconds at a discretization of 0.01 seconds.
Cart-pole. In the cart-pole task, our goal is to balance an inverted pendulum resting on top of a cart by exerting horizontal forces on the cart. For our experiments, we linearize this system as an NLDI with D ≠ 0 (see Appendix D), and add a small additional randomized disturbance satisfying the NLDI bounds. Episodes are run for 10 seconds at a discretization of 0.05 seconds.
Planar quadrotor. In this setting, our goal is to stabilize a quadcopter in the two-dimensional plane by controlling the amount of force provided by the quadcopter’s right and left thrusters. We linearize this system as an NLDI with D = 0 (see Appendix E), and add a small disturbance as in the cart-pole setting. Episodes are run for 4 seconds at a discretization of 0.02 seconds.
Microgrid. In this final setting, we aim to stabilize a microgrid by controlling a storage device and a solar inverter. We augment the system given in Lam et al. (2016) with LQR matrices and NLDI bounds (see Appendix F). Episodes are run for 2 seconds at a discretization of 0.01 seconds." }, { "heading": "5.2 EXPERIMENTAL SETUP", "text": "We demonstrate our approach by constructing a robust policy class (8) for each of these settings, and optimizing this policy class via different approaches. Specifically, we construct a nominal nonlinear control policy class as π̂θ(x) = Kx + π̃θ(x), where K is obtained via robust LQR optimization (6), and where π̃θ(x) is a feedforward neural network. To construct the projections P_C, we employ the value of P obtained when solving for K.
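For the D = 0 settings above (the quadrotor and the synthetic NLDI with D = 0), the projection P_C reduces to the single ReLU step of Equation (10); a batched PyTorch sketch of that formula (the batching layout and names are ours) could look like:
```python
import torch

def project_nldi0(x, u_hat, P, A, B, C, G, alpha):
    """Closed-form projection of Equation (10) for NLDIs with D = 0.

    x: (batch, s) states; u_hat: (batch, a) nominal actions.
    """
    eta = 2.0 * x @ P @ B                                  # eta^T = 2 x^T P B
    M = 2.0 * P @ A + alpha * P
    zeta = (-torch.einsum("bi,ij,bj->b", x, M, x)
            - 2.0 * torch.norm(x @ P @ G, dim=1) * torch.norm(x @ C.T, dim=1))
    viol = (eta * u_hat).sum(dim=1) - zeta                 # eta^T u - zeta
    scale = torch.relu(viol) / (eta * eta).sum(dim=1).clamp_min(1e-12)
    return u_hat - scale.unsqueeze(1) * eta
```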
For the purposes of demonstration, we then optimize our robust policy class πθ(x) = PC(π̂θ(x)) using two different methods:\n• Robust MBP (ours): A model-based planner that assumes the true dynamics are known.\n• Robust PPO (ours): An RL approach based on PPO (Schulman et al., 2017) that does not assume known dynamics (beyond the bounds used to construct the robust policy class).\nRobust MBP is optimized using gradient descent for 1,000 updates, where each update samples 20 roll-outs. Robust PPO is trained for 50,000 updates, where each update samples 8 roll-outs; we choose the model that performs best on a hold-out set of initial conditions during training. We note that while we use PPO for our demonstration, our approach is agnostic to the particular method of training, and can be deployed with many different (deep) RL paradigms.\nWe compare our robust neural network-based method against the following baselines:\n• Robust LQR: Robust (linear) LQR controller obtained via Equation (6).\n• Robust MPC: A robust model-predictive control algorithm (Kothare et al., 1996) based on state-dependent LMIs. (As the relevant LMIs are not always guaranteed to solve, our implementation temporarily reverts to the Robust LQR policy when that occurs.)\n• RARL: The robust adversarial reinforcement learning algorithm (Pinto et al., 2017), which trains an RL agent in the presence of an adversary. (We note that unlike the other robust methods considered here, this method is not provably robust.)\n• LQR: A standard non-robust (linear) LQR controller.\n• MBP and PPO: The non-robust neural network policy class π̂θ(x) optimized via a modelbased planner and the PPO algorithm, respectively.\nIn order to evaluate performance, we train all methods on the dynamical settings described in Section 5.1, and evaluate them on two different variations of the dynamics:\n• Original dynamics: The dynamical settings described above (“average case”).\n• Adversarial dynamics: Modified dynamics with an adversarial test-time disturbance w(t) generated to maximize loss (“worst case”). We generate this disturbance separately for each method described above (see Appendix G for more details).\nInitialization states are randomly generated for all experiments. For the synthetic NLDI and microgrid settings, these are generated from a standard normal distribution. For both cart-pole and quadrotor, because our NLDI bounds model linearization error, we must generate initial points within a region where this linearization holds. In particular, the linearization bounds only hold for a specified L∞ ball, BNLDI, around the equilibrium. We use a simple heuristic to construct this ball and jointly find a smaller L∞ ball, Binit, such that there exists a level set L of the Robust LQR Lyapunov function with Binit ⊆ L ⊆ BNLDI (details in Appendix H). Since Robust LQR (and by extension our methods) are guaranteed to decrease the relevant Lyapunov function, this guarantees that these methods will never leave BNLDI when initialized starting from any point inside Binit – i.e., that our NLDI bounds will always hold throughout the trajectories produced by these methods." }, { "heading": "5.3 RESULTS", "text": "Table 1 shows the performance of the above methods. We report the integral of the quadratic loss over the prescribed time horizon on a test set of states, or indicate cases where the relevant method became unstable (i.e., the loss became orders of magnitude larger than for other approaches). 
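For reference, this evaluation metric can be sketched as follows; the forward-Euler integration and the particular blow-up threshold used to flag instability are our own simplifications of the protocol above.

```python
import numpy as np

def rollout_quadratic_loss(policy, f, x0, Q, R, dt, steps, blowup=1e6):
    """Integrate the quadratic LQR loss x^T Q x + u^T R u along one rollout.

    policy maps a state to an action vector and f gives xdot = f(x, u);
    both are stand-ins for the environment-specific implementations.
    """
    x, loss = np.asarray(x0, dtype=float), 0.0
    for _ in range(steps):
        u = policy(x)
        loss += dt * (x @ Q @ x + u @ R @ u)
        x = x + dt * f(x, u)  # forward-Euler step at the chosen discretization
        if not np.isfinite(x).all() or loss > blowup:
            return np.inf  # flag the rollout as unstable
    return loss
```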
(Sample trajectories for these methods are also provided in Appendix H.)\nThese results illustrate the basic advantage of our approach. In particular, both our Robust MBP and Robust PPO methods show improved “average-case” performance over the other provably robust methods (namely, Robust LQR and Robust MPC). As expected, however, the non-robust\nLQR, MBP, and PPO methods often perform better within the original nominal dynamics, as they are optimizing for expected performance but do not need to consider robustness under worst-case scenarios. However, when we apply allowable adversarial perturbations (that still respect our disturbance bounds), the non-robust LQR, MBP, and PPO approaches diverge or perform very poorly. Similarly, the RARL agent performs well under the original dynamics, but diverges under adversarial perturbations in the generic NLDI settings. In contrast, both of our provably robust approaches (as well as Robust LQR) remain stable under even “worst-case” adversarial dynamics. (We note that the baseline Robust MPC method goes unstable in one instance, though this is due to numerical instability issues, rather than issues with theoretical guarantees.)\nFigure 1 additionally shows the performance of all neural network-based methods on the test set over training epochs. While the robust and non-robust MBP and PPO approaches both converge quickly to their final performance levels, both non-robust versions become unstable under the adversarial dynamics very early in the process. The RARL method also frequently destabilizes during training. Our Robust MBP and PPO policies, on the other hand, remain stable throughout the entire optimization process, i.e., do not destabilize during either training or testing. Overall, these results show that our method is able to learn policies that are more expressive than traditional robust methods, while guaranteeing these policies will be stable under the same conditions as Robust LQR." }, { "heading": "6 CONCLUSION", "text": "In this paper, we have presented a class of nonlinear control policies that combines the expressiveness of neural networks with the provable stability guarantees of traditional robust control. This policy class entails projecting the output of a neural network onto a set of stabilizing actions, parameterized via robustness specifications from the robust control literature, and can be optimized using a model-based planning algorithm if the dynamics are known or virtually any RL algorithm if the dynamics are unknown. We instantiate our general framework for dynamical systems characterized by several classes of linear differential inclusions that capture many common robust control settings. In particular, this entails deriving efficient, differentiable projections for each setting, via implicit differentiation techniques. We show over a variety of simulated domains that our method improves upon traditional robust LQR techniques while, unlike non-robust LQR and neural network methods, remaining stable even under worst-case allowable perturbations of the underlying dynamics.\nWe believe that our approach highlights the possible connections between traditional control methods and (deep) RL methods. Specifically, by enforcing more structure in the classes of deep networks we consider, it is possible to produce networks that provably satisfy many of the constraints that have typically been thought of as outside the realm of RL. 
We hope that this work paves the way for future approaches that can combine more structured uncertainty or robustness guarantees with RL, in order to improve performance in settings traditionally dominated by classical robust control." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by the Department of Energy Computational Science Graduate Fellowship (DE-FG02-97ER25308), the Center for Climate and Energy Decision Making through a cooperative agreement between the National Science Foundation and Carnegie Mellon University (SES00949710), the Computational Sustainability Network, and the Bosch Center for AI. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE1745016. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.\nWe thank Vaishnavh Nagarajan, Filipe de Avila Belbute Peres, Anit Sahu, Asher Trockman, Eric Wong, and anonymous reviewers for their feedback on this work." }, { "heading": "A DETAILS ON ROBUST CONTROL SPECIFICATIONS", "text": "As described in Section 3.1, for many dynamical systems of the form (1), it is possible to specify a set of linear, time-invariant policies guaranteeing infinite-horizon exponential stability via a set of LMIs. Here, we derive the LMI (4) provided in the main text for the NLDI system (3), and additionally describe relevant LMI systems for systems characterized by polytopic linear differential inclusions (PLDIs) and for H∞ control settings.\nA.1 EXPONENTIAL STABILITY IN NLDIS Consider the general NLDI system (3). We seek to design a time-invariant control policy u(t) = Kx(t) and a quadratic Lyapunov function V (x) = xTPx with P 0 for this system that satisfy the exponential stability criterion V̇ (x) ≤ −αV (x), ∀t. We derive an LMI characterizing such a controller and Lyapunov function, closely following and expanding upon the derivation provided in Boyd et al. (1994).\nSpecifically, consider the NLDI system (3), reproduced below:\nẋ = Ax+Bu+Gw, ‖w‖2 ≤ ‖Cx+Du‖2. (A.1)\nThe time derivative of this Lyapunov function along the trajectories of the closed-loop system is\nV̇ (x) = ẋTPx+ xTPẋ\n= (Ax+Bu+Gw)TPx+ xTP (Ax+Bu+Gw)\n= ((A+BK)x+Gw)TPx+ xTP ((A+BK)x+Gw)\n= [ x w ]T [ (A+BK)TP + P (A+BK) PG GTP 0 ] [ x w ] .\n(A.2)\nThe exponential stability condition V̇ (x) ≤ −αV (x) is thus implied by inequality[ x w ]T M1 [ x w ] := [ x w ]T [ (A+BK)TP + P (A+BK) + αP PG GTP 0 ] [ x w ] ≤ 0. (A.3)\nAdditionally, the norm bound on w can be equivalently expressed as[ x w ]T M2 [ x w ] := [ x w ]T [ (C +DK)T (C +DK) 0 0 −I ] [ x w ] ≥ 0. (A.4)\nUsing the S-procedure, it follows that for some λ ≥ 0, the following matrix inequality is a sufficient condition for exponential stability:\nM1 + λM2 0. (A.5) Using Schur Complements, this matrix inequality is equivalent to\n(A+BK)TP + P (A+BK) + αP + λ(C +DK)T (C +DK) + 1\nλ PGGTP 0. (A.6)\nLeft- and right-multiplying both sides by P−1, and making the change of variables S = P−1, Y = KS, and µ = 1/λ, we obtain\nSAT +AS + Y TBT +BY + αS + 1\nµ\n( SCT + Y TDT ) (CS +DY ) + µGGT 0. (A.7)\nUsing Schur Complements again on this inequality, we obtain our final system of linear matrix inequalities as[\nAS + SAT + µGGT +BY + Y TBT + αS SCT + Y TDT\nCS +DY −µI\n] 0, S 0, µ > 0, (A.8)\nwhere then K = Y S−1 and P = S−1. 
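As an illustration of how the specification (A.8) can be solved in practice, here is a minimal CVXPY sketch (our own, not the paper's code); it fixes µ = 1, which is justified by the homogeneity observation noted next, and uses a small eps to enforce the strict inequalities numerically.

```python
import cvxpy as cp
import numpy as np

def robust_lqr_nldi(A, B, G, C, D, alpha, eps=1e-6):
    """Solve the LMI system (A.8) for the NLDI (3), returning (K, P)."""
    s, a = B.shape
    k = C.shape[0]
    S = cp.Variable((s, s), symmetric=True)
    Y = cp.Variable((a, s))
    # Block LMI from (A.8), with mu = 1 (w.l.o.g. by homogeneity; see below).
    M = cp.bmat([
        [A @ S + S @ A.T + G @ G.T + B @ Y + Y.T @ B.T + alpha * S,
         S @ C.T + Y.T @ D.T],
        [C @ S + D @ Y, -np.eye(k)],
    ])  # symmetric by construction
    prob = cp.Problem(cp.Minimize(0),
                      [M << -eps * np.eye(s + k), S >> eps * np.eye(s)])
    prob.solve(solver=cp.SCS)
    P = np.linalg.inv(S.value)
    K = Y.value @ P  # K = Y S^{-1}
    return K, P
```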
Note that the first matrix inequality is homogeneous; we can therefore assume µ = 1 (and therefore, λ = 1), without loss of generality.\nA.2 EXPONENTIAL STABILITY IN PLDIS Consider the setting of polytopic linear differential inclusions (PLDIs), where the dynamics are of the form\nẋ(t) = A(t)x(t) +B(t)u(t), (A(t), B(t)) ∈ Conv{(A1, B1), . . . , (AL, BL)}. (A.9)\nHere, A(t) ∈ Rs×s andB(t) ∈ Rs×a can vary arbitrarily over time, as long as they lie in the convex hull (denoted Conv) of the set of points above, where Ai ∈ Rs×s, Bi ∈ Rs×a for i = 1, . . . , L. We seek to design a time-invariant control policy u(t) = Kx(t) and quadratic Lyapunov function V (x) = xTPx with P 0 for this system that satisfy the exponential stability criterion V̇ (x) ≤ −αV (x), ∀t. Such a controller and Lyapunov function exist if there exist S ∈ Rs×s 0 and Y ∈ Ra×s such that\nAiS +BiY + SA T i + Y TBTi + αS 0, ∀i = 1, . . . , L, (A.10)\nwhere then K = Y S−1 and P = S−1. The derivation of this LMI follows similarly to that for exponential stability in NLDIs, and is well-described in Boyd et al. (1994).\nA.3 H∞ CONTROL Consider the following H∞ control setting with linear time-invariant dynamics\nẋ(t) = Ax(t) +Bu(t) +Gw(t), w ∈ L2, (A.11)\nwhere A, B, and G are time-invariant as for the NLDI case, and where we define L2 as the set of time-dependent signals with finite L2 norm.6\nIn cases such as these with larger or more unstructured disturbances, it may not be possible to guarantee asymptotic convergence to an equilibrium. In these cases, our goal is to construct a robust controller with bounds on the extent to which disturbances affect some performance output (e.g., LQR cost), as characterized by the L2 gain of the disturbance-to-output map. Specifically, we consider the stability requirement that this L2 gain be bounded by some parameter γ > 0 when disturbances are present, and that the system be exponentially stable in the disturbance-free case. This requirement can be characterized via the condition that for all t and some σ ≥ 0,\nE(x, ẋ, u) := V̇ (x) + αV (x) + σ ( xTQx+ uTRu− γ2‖w‖22 ) ≤ 0. (A.12)\nWe note that when E(x(t), ẋ(t), u(t)) ≤ 0 for all t, both of our stability criteria are met. To see this, note that integrating both sides of (A.12) from 0 to∞ and ignoring the non-negative terms on the left hand side after integration yields∫ ∞\n0 (x(t)TQx(t)+u(t)TRu(t))dt ≤ γ2 ∫ ∞ 0 ‖w(t)‖22dt+ (1/σ)V (x(0)). (A.13)\nThis is precisely the desired bound on the L2 gain of the disturbance-to-output map (see Khalil and Grizzle (2002)). We also note that in the disturbance-free case, substituting w = 0 into (A.12) yields\nV̇ (x) ≤ −αV (x)− σ ( xTQx+ uTRu ) ≤ −αV (x), (A.14)\nwhere the last inequality follows from the non-negativity of the LQR cost; this is precisely our condition for exponential stability.\nWe now seek to design a time-invariant control policy u(t) = Kx(t) and quadratic Lyapunov function V (x) = xTPx with P 0 that satisfies the above condition. In particular, we can write\nE (x(t), (A+BK)x(t) +Gw(t),Kx(t))= [ x(t) w(t) ]T M1 [ x(t) w(t) ] , (A.15)\nwhere\nM1 :=\n[ (A+BK)TP + P (A+BK) + αP + σ(Q+KTRK) PG\nGTP −γ2σI\n] . (A.16)\n6The L2 norm of a time-dependent signal w(t) : [0,∞)→ Rd is defined as √∫∞\n0 ‖w(t)‖22dt.\nTherefore, we seek to find a P ∈ Rs×s 0 and K ∈ Rs×a that satisfy M1 0, for some design parameters α > 0 and σ > 0. Using Schur complements, the matrix inequalityM1 0 is equivalent to\n(A+BK)TP + P (A+BK) + αP + σ(Q+KTRK) + PGGTP/(γ2σ) 0. 
(A.17) As in Appendix A.1, we left- and right-multiply both sides by P−1, and make the change of variables S = P−1, Y = KS, and µ = 1/σ to obtain\nSAT +AS+Y TBT +BY +αS+ 1\nµ\n( (SQ1/2)(Q1/2S) + (Y TR1/2)(R1/2Y ) ) +µGGT /γ2 0.\nUsing Schur Complements again, we obtain the LMISAT +AS + Y TBT +BY + αS + µGGT /γ2 [SQ1/2 Y TR1/2][Q1/2S R1/2Y ] −µI 0, S 0, µ > 0, (A.18) where then K = Y S−1, P = S−1, and σ = 1/µ." }, { "heading": "B DERIVATION OF SETS OF STABILIZING POLICIES AND ASSOCIATED PROJECTIONS", "text": "We describe the construction of the set of actions C(x), defined in Equation (7), for PLDI systems (A.9) and H∞ control settings (A.11). (The relevant formulations for the NLDI system (3) are described in the main text.)\nB.1 EXPONENTIAL STABILITY IN PLDIS For the general PLDI system (A.9), relevant sets of exponentially stabilizing actions CPLDI are given by the following theorem.\nTheorem B.1. Consider the PLDI system (A.9), some stability parameter α > 0, and a Lyapunov function V (x) = xTPx with P satisfying (A.10). Assuming P exists, define\nCPLDI(x) := u ∈ R a | 2xTPB1 2xTPB2\n... 2xTPBL\nu ≤ − xT (αP + 2PA1)x xT (αP + 2PA2)x\n... xT (αP + 2PAL)x\n \nfor all states x ∈ Rs. For all x, CPLDI(x) is a non-empty set of actions that satisfy the exponential stability condition (2). Further, CPLDI(x) is a convex set in u.\nProof. We seek to find a set of actions such that the condition (2) is satisfied along all possible trajectories of (A.9), i.e., for any allowable instantiation of (A(t), B(t)). A set of actions satisfying this condition at a given x is given by\nCPLDI(x) := {u ∈ Ra | V̇ (x) ≤ −αV (x) ∀(A(t), B(t)) ∈ Conv{(A1, B1), . . . , (AL, BL)}.\nExpanding the left side of the inequality above, we see that for some coefficients γi ∈ R ≥ 0, i = 1, . . . , L satisfying ∑L i=1 γi(t) = 1,\nV̇ (x) = ẋTPx+ xTPẋ = 2xTP (A(t)x+B(t)u)\n= 2xTP ( L∑ i=1 γi(t)Aix+ γi(t)Biu ) = L∑ i=1 γi ( 2xTP (Aix+Biu) ) by definition of the PLDI dynamics and of the convex hull. Thus, if we can ensure\n2xTP (Aix+Biu) ≤ −αV (x) = −αxTPx, ∀i = 1, . . . , L, then we can ensure that exponential stability holds. Rearranging this condition and writing it in matrix form yields an inequality of the desired form. We note that by definition of the specifications (A.10), there is some K corresponding to P such that the policy u = Kx satisfies all of the above inequalities; thus, Kx ∈ CPLDI(x), and CPLDI(x) is non-empty. Further, as the above inequality represents a linear constraint in u, this set is convex in u.\nWe note that the relevant projection PCPLDI(x) represents a projection onto an intersection of halfspaces, and can thus be implemented via differentiable quadratic programming (Amos and Kolter, 2017).\nB.2 H∞ CONTROL For the H∞ control system (A.11), relevant sets of actions satisfying the condition (A.12) are given by the following theorem.\nTheorem B.2. Consider the system (A.11), some stability parameter α > 0, and a Lyapunov function V (x) = xTPx with P satisfying Equation (A.18). Assuming P exists, define\nCH∞(x) := { u ∈ Ra | uTRu+ (2BTPx)Tu+ xT ( PA+ATP+αP+Q+γ−2PGGTP ) x ≤ 0 } for all states x ∈ Rs. For all x, CH∞(x) is a non-empty set of actions that guarantee condition (A.12), i.e., that the L2 gain of the disturbance-to-output map is bounded by γ and that the system is exponentially stable in the disturbance-free case. Further, CH∞(x) is convex in u.\nProof. 
We seek to find a set of actions such that the condition E(x, ẋ, u) ≤ 0 is satisfied along all possible trajectories of (A.11), where E is defined as in (A.12). A set of actions satisfying this condition at a given x is given by\nCH∞(x) := {u ∈ Ra | sup w∈L2 E(x, ẋ, u) ≤ 0, ẋ = Ax+Bu+Gw}.\nTo begin, we note that\nE(x,Ax+Bu+Gw, u) = xTP (Ax+Bu+Gw) + (Ax+Bu+Gw)TPx+ αxTPx + σ ( xTQx+ uTRu− γ2‖w‖22 ) We then maximize E over w:\nw? = arg max w E(x,Ax+Bu+Gw, u) = GTPx/(σγ2). (B.1)\nTherefore,\nCH∞(x) = {u | E(x,Ax+Bu+Gw?, u, w?) ≤ 0}. (B.2)\nExpanding and rearranging terms, this becomes CH∞(x)={u | uT (σR)u+ (2BTPx)Tu+ xT ( PA+ATP+αP+σQ+PGGTP/(σγ2) ) x ≤ 0}.\n(B.3)\nWe note that by definition of the specifications (A.18), there is some K corresponding to P such that the policy u = Kx satisifies the conditions above (see (A.17)); thus, Kx ∈ CH∞ , and CH∞ is non-empty. We note further that CH∞ is an ellipsoid in the control action space, and is thus convex in u.\nWe rewrite the set CH∞(x) such that the projection PCH∞ (x) can be viewed as a second-order cone projection, in order to leverage our fast custom solver (Appendix C). In particular, defining P̃ = σR, q̃ = BTPx, and r̃ = xT ( PA+ATP+αP+σQ+PGGTP/(σγ2) ) x, we can rewrite the ellipsoid above as\nCH∞(x) = {u | u>P̃ u+ 2q̃>u+ r̃ ≤ 0}. (B.4)\nWe note that as P̃ 0 and r̃ − q̃>P̃−1q̃ < 0, this ellipsoid is non-empty (see, e.g., section B.1 in Boyd and Vandenberghe (2004)). We can then rewrite the ellipsoid as\nCH∞(x) = {u | ‖Ãu+ b̃‖2 ≤ 1} (B.5) where à = √\nP̃ q̃>P̃−1q̃−r̃ and b̃ =\n√ P\nq>P−1q−rP −1q. The constraint ‖Ãu + b̃‖2 ≤ 1 is then a\nsecond-order cone constraint in u." }, { "heading": "C A FAST, DIFFERENTIABLE SOLVER FOR SECOND-ORDER CONE PROJECTION", "text": "In order to construct the robust policy class described in Section 4 for the general NLDI system (3) and the H∞ setting (A.11), we must project a nominal (neural network-based) policy onto the second-order cone constraints described in Theorem 1 and Appendix B.2, respectively. As this projection operation does not necessarily have a closed form, we implement it via a custom differentiable optimization solver.\nMore generally, consider a set of the form\nC = {x ∈ Rn | ‖Ax+ b‖2 ≤ cTx+ d} (C.1)\nfor some A ∈ Rm×n, b ∈ Rm, c ∈ Rn, and d ∈ R. Given some input y ∈ Rn, we seek to compute the second-order cone projection PC(y) by solving the problem\nminimize x∈Rn\n1 2 ‖x− y‖22\nsubject to ‖Ax+ b‖2 ≤ cTx+ d. (C.2)\nLet F denote the `2 norm cone, i.e., F := {(w, t) | ‖w‖2 ≤ t}. Introducing the auxiliary variable z ∈ Rm+1, we can then rewrite the above optimization problem equivalently as\nminimize x∈Rn, z∈Rm+1\n1 2 ‖x− y‖22 + 1F (z)\nsubject to z = [ Ax+ b cTx+ d ] =: Gx+ h,\n(C.3)\nwhere for brevity we define G = [ A cT ] and h = [ b d ] , and where 1F denotes the indicator function for membership in the set F . We describe our fast solution technique for computing this projection, as well as our method for obtaining gradients through the solution.\nC.1 COMPUTING THE PROJECTION We construct a fast solver for problem (C.3) using an accelerated projected dual gradient method. Specifically, define µ = Rm+1 as the dual variable on the equality constraint in Equation (C.3). The Lagrangian for this problem can then be written as\nL (x, z, µ) = 1\n2 ‖x− y‖22 + 1F (z) + µT (z −Gx− h), (C.4)\nand the dual problem is given by maxµ minx,z L (x, z, µ). 
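Before carrying out the dual derivation, note that problem (C.2) can also be posed directly in an off-the-shelf solver; the sketch below (CVXPY, with our own naming) is useful purely as a slow numerical reference against which the fast custom solver derived in the remainder of this appendix can be checked.

```python
import cvxpy as cp

def soc_project_reference(y, A, b, c, d):
    """Reference solution of the projection (C.2):
    min 0.5 ||x - y||^2  s.t.  ||A x + b||_2 <= c^T x + d."""
    x = cp.Variable(y.shape[0])
    prob = cp.Problem(
        cp.Minimize(0.5 * cp.sum_squares(x - y)),
        [cp.norm(A @ x + b, 2) <= c @ x + d],
    )
    prob.solve(solver=cp.ECOS)
    return x.value
```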
To form the dual problem, we minimize the Lagrangian with respect to x and z as\ninf x,z L (x, z, µ) = inf x\n1\n2\n{ ‖x− y‖22 − µTGx } + inf\nz {µT z + 1F (z)} − µTh. (C.5)\nWe note that the first term on the right side is minimized at x?(µ) = y +GTµ. Thus, we see that\ninf x\n1 2 {‖x− y‖22 − µTGx} = − 1 2 µTGGTµ− µTGy. (C.6)\nFor the second term, denote µ = (µ̃, s) and z = (z̃, t). We can then rewrite this term as\ninf z {µT z + 1F (z)}} = inf t≥0 inf z̃ {t · s+ µ̃T z̃ | ‖z̃‖2 ≤ t}. (C.7)\nFor a fixed t ≥ 0, the above objective is minimized at z̃ = −tµ̃/‖µ̃‖2. (The problem is infeasible for t < 0.) Substituting this minimizer into (C.7) and minimizing the result over t ≥ 0 yields\ninf z {µT z + 1F (z)} = inf t≥0 t(s− ‖µ̃‖2) = −1F (µ) (C.8)\nwhere the last identity follows from definition of the second-order cone F . Hence the negative dual problem becomes\nminimize µ\n1 2 µTGGTµ+ µT (Gy + h) + 1F (µ). (C.9)\nWe now solve this problem via Nesterov’s accelerated projected dual gradient method (Nesterov, 2013). For notational brevity, define f(µ) := 12µ\nTGGTµ + µT (Gy + h). Then, starting from arbitrary µ(−1), µ(0) ∈ Rm+1 we perform the iterative updates\nν(k) = µ(k) + β(k)(µ(k) − µ(k−1)) µ(k+1) = PF ( ν(k) − 1\nLf ∇f(ν(k))\n) ,\n(C.10)\nwhere Lf = λmax(GGT ) is the Lipschitz constant of f , and PF is the projection operator onto F (which has a closed form solution; see Bauschke (1996)). Letting mf = λmin(GGT ) denote the strong convexity constant of f , the momentum parameter is then scheduled as (Nesterov, 2013)\nβk = k − 1 k + 2 if mf = 0√ Lf − √ mf√\nLf + √ mf\nif mf > 0. (C.11)\nAfter computing the optimal dual variable µ?, i.e., the fixed point of (C.10), the optimal primal variable can be recovered via the equation x? = y +GTµ? (as can be observed from the first-order conditions of the Lagrangian (C.4)).\nC.2 OBTAINING GRADIENTS In order to incorporate the above projection into our neural network, we need to compute the gradients of all problem variables (i.e., G, h, and y) through the solution x?. In particular, we note that x? has a direct dependence on both G and y, and an indirect dependence on all of G, h, and y through µ?.\nTo compute the relevant gradients through µ?, we apply the implicit function theorem to the fixed point of the update equations (C.10). Specifically, as these updates imply that µ? = ν?, their fixed point can be written as\nµ? = PF ( µ? − 1\nLf ∇f(µ?)\n) . (C.12)\nDefineM := ∂PF (·)∂(·) ∣∣ (·)=µ?− 1Lf ∇f(µ ?) , and note that∇f(µ?) = GGTµ?+Gy+h. The differential of the above fixed-point equation is then given by\ndµ? = M × (\ndµ? − 1 Lf\n( dGGTµ? +GdGTµ? +GGTdµ? + dGy +Gdy + dh )) . (C.13)\nRearranging terms to separate the differentials of problem outputs from problem variables, we see that(\nI −M + 1 Lf\nMGGT )\ndµ? = − 1 Lf\nM ( dGGTµ? +GdGTµ? + dGy +Gdy + dh ) , (C.14)\nwhere I is the identity matrix of appropriate size.\nAs described in e.g. Amos and Kolter (2017), we can then use these equations to form the Jacobian of µ? with respect to any of the problem variables by setting the differential of the relevant problem variable to I and of all other problem variables to 0; solving the resulting equation for dµ? then yields the value of the desired Jacobian. However, as these Jacobians can be large depending on problem size, we rarely want to form them explicitly. Instead, given some backward pass vector ∂` ∂µ? 
∈ R 1×(m+1) with respect to the optimal dual variable, we want to directly compute the gradient\nof the loss with respect to the problem variables: e.g., for y, we want to directly form the result of the product ∂`∂µ? ∂µ? ∂y ∈ R 1×n. We do this via a similar method as presented in Amos and Kolter (2017), and refer the reader there for a more in-depth explanation of the method described below.\nDefine J := I − M + 1Lf MGG T to represent the coefficient of dµ? on the left side of Equation (C.14). Given ∂`∂µ? , we then compute the intermediate term\ndµ := −J−T ( ∂`\n∂µ?\n)T . (C.15)\nWe can then form the relevant gradient terms directly as( ∂`\n∂µ? ∂µ? ∂G\n)T = 1 Lf M ( dµ(G Tµ?)T + µ?(GTdµ) T + dµy T )\n( ∂`\n∂µ? ∂µ? ∂h\n)T = 1\nLf Mdµ\n( ∂`\n∂µ? ∂µ? ∂y\n)T = 1\nLf GTMdµ.\n(C.16)\nIn these computations, we note that as our solver returns x?, the backward pass vector we are given is actually ∂`∂x? ∈ R 1×n; thus, we compute ∂`∂µ? = ∂` ∂x? ∂x? ∂µ? = ∂` ∂x?G T for use in Equation (C.15).\nAccounting additionally for the direct dependence of some of the problem variables on x? (recalling that x? = y +GTu?), the desired gradients are then given by( ∂`\n∂G\n)T = ( ∂`\n∂x? ∂x? ∂G + ∂` ∂x? ∂x? ∂u? ∂u? ∂G\n)T = µ? ∂`\n∂x? +\n1 Lf M ( dµ(G Tµ?)T + µ?(GTdµ) T + dµy T )\n( ∂`\n∂h\n)T = \n* 0\n∂` ∂x? ∂x? ∂h + ∂` ∂x? ∂x? ∂u? ∂u? ∂h T = 1 Lf Mdµ\n( ∂`\n∂y\n)T = ( ∂`\n∂x? ∂x? ∂y + ∂` ∂x? ∂x? ∂u? ∂u? ∂y\n)T = ( ∂`\n∂x?\n)T + 1\nLf GTMdµ.\n(C.17)" }, { "heading": "D WRITING THE CART-POLE PROBLEM AS AN NLDI", "text": "In the cart-pole task, our goal is to balance an inverted pendulum resting on top of a cart by exerting horizontal forces on the cart. Specifically, the state of this system is defined as x = [px, ṗx, ϕ, ϕ̇]\nT , where px is the cart position and ϕ is the angular displacement of the pendulum from its vertical position; we seek to stabilize the system at x = ~0 by exerting horizontal forces u ∈ R on the cart. For a pendulum of length ` and mass mp, and for a cart of mass mc, the dynamics of the system are (as described in Tedrake (2009)):\nẋ = ṗx u+mp sinϕ(`ϕ̇ 2−g cosϕ) mc+mp sin2 ϕ ϕ̇\n(mc+mp)g sinϕ−u cosϕ−mp`ϕ̇2 cosϕ sinϕ\nl(mc+mp sin2 ϕ)\n , (D.1)\nwhere g = 9.81 m/s2 is the acceleration due to gravity. We rewrite this system as an NLDI by defining ẋ = f(x, u) and then linearizing the system about its equilibrium point as\nẋ = Jf (0, 0) [ x u ] + Inw, ‖w‖ ≤ ‖Cx+Du‖, (D.2)\nwhere Jf is the Jacobian of the dynamics, w = f(x, u)− Jf (0, 0) [x u] T is the linearization error, and In is the n × n identity matrix. We bound this linearization error by numerically obtaining the matrices C and D, assuming that x and u are within a neighborhood of the origin. We describe this process in more detail below. 
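A compact numerical sketch of this linearization step (D.2): we implement the dynamics (D.1), take the Jacobian at the origin by central finite differences (a stand-in for the closed-form Jf(0, 0) derived below), and compute the residual w to be bounded entry-wise as in (D.5). The mass and length values are placeholders of our own.

```python
import numpy as np

def cartpole_f(x, u, mc=1.0, mp=0.1, l=0.5, g=9.81):
    """Cart-pole dynamics (D.1); parameter values here are illustrative."""
    px, pxd, phi, phid = x
    s_, c_ = np.sin(phi), np.cos(phi)
    den = mc + mp * s_ ** 2
    return np.array([
        pxd,
        (u + mp * s_ * (l * phid ** 2 - g * c_)) / den,
        phid,
        ((mc + mp) * g * s_ - u * c_ - mp * l * phid ** 2 * c_ * s_) / (l * den),
    ])

def jacobian_at_origin(f, s=4, h=1e-6):
    """Central finite-difference J_f(0, 0), split into (A, B) as in (D.2)."""
    J = np.zeros((s, s + 1))
    for i in range(s + 1):
        e = np.zeros(s + 1); e[i] = h
        J[:, i] = (f(e[:s], e[s]) - f(-e[:s], -e[s])) / (2 * h)
    return J[:, :s], J[:, s:]

A, B = jacobian_at_origin(cartpole_f)
# Linearization residual at a sample (x, u), to be bounded via C, D as in (D.5):
x, u = np.array([0.1, 0.0, 0.05, 0.0]), 2.0
w = cartpole_f(x, u) - (A @ x + B[:, 0] * u)
```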
As a note, while we employ an NLDI here to characterize the linearization error, it is also possible to characterize this error via polytopic uncertainty (see Appendix J); we choose to use an NLDI here as it yields a much smaller problem description than a PLDI in this case.\nD.1 DERIVING Jf (0, 0) For ẋ = f(x, u), we see that\nJf (x, u) = 0 1 0 0 00 0 ∂p̈x/∂ϕ ∂p̈x/∂ϕ̇ ∂p̈x/∂u0 0 0 1 0 0 0 ∂ϕ̈/∂ϕ ∂ϕ̈/∂ϕ̇ ∂ϕ̈/∂u, , (D.3) where\n∂p̈x ∂ϕ = mp cosϕ\n( ϕ̇2l − g cosϕ ) + gmp sin 2 ϕ\nmc +mp sin 2 ϕ\n− 2mp sinϕ cosϕ\n( mp sinϕ ( ϕ̇2l − g cosϕ ) + u )(\nmc +mp sin 2 ϕ )2 ,\n∂p̈x ∂ϕ̇ = 2ϕ̇lmp sinϕ mc +mp sin 2 ϕ ,\n∂p̈x ∂u = 1 mc +mp sin 2 ϕ ,\n∂ϕ̈ ∂ϕ =\ng(mc +mp) cosϕ+ ϕ̇ 2lmp sin 2 ϕ −ϕ̇2lmp cos2 ϕ+ u sinϕ l ( mc +mp sin 2 ϕ ) − 2mp sinϕ cosϕ(g(mc +mp) sinϕ −ϕ̇2lmp sinϕ cosϕ− u cosϕ l ( mc +mp sin 2 ϕ )2 ,\n∂ϕ̈ ∂ϕ̇ = −2ϕ̇mp sinϕ cosϕ mc +mp sin 2 ϕ ,\n∂ϕ̈ ∂u = − cosϕ l(mc +mp sin 2 ϕ) .\nWe thus see that\nJf (0, 0) = 0 1 0 0 00 0 −mpg/mc 0 1/mc0 0 0 1 0 0 0 g(mc+mp)/lmc 0 −1/mc . (D.4) D.2 OBTAINING C AND D We then seek to construct matrices C and D that bound the linearization error w between the true\ndynamics ẋ and our first-order linear approximation Jf (0, 0) [ x u ] . To do so, we bound the error of\nthis approximation entry-wise: that is, for each entry i = 1, . . . , s, we want to find Fi such that for all x in some region x ≤ x ≤ x̄, and all u in some region u ≤ u ≤ ū,\nw2i = ( ∇fi(0) [ x u ] − ẋi )2 ≤ [ x u ]T Fi [ x u ] . (D.5)\nThen, given the matrix M = [ F T/2 1 F T/2 2 F T/2 3 F T/2 4 F T/2 5 F T/2 6 ]T (D.6)\nwe can then obtain C = M1:s and D = Ms:s+m (where the subscripts indicate column-wise indexing).\nWe solve separately for each Fi to minimize the difference between the right and left sides of Equation (D.5) (while enforcing that the right side is larger than the left side) over a discrete grid of points within x ≤ x ≤ x̄ and u ≤ u ≤ ū. By assuming that Fi is symmetric, we are able to cast this as a linear program in the upper triangular entries of Fi.\nTo obtain the matrices C and D used for the cart-pole experiments in the main paper, we let x̄ = [1.5 2 0.2 1.5]\nT , ū = 10, x = −x̄, and u = −ū. As each entry-wise difference in Equation (D.5) contained exactly three variables (i.e., a total of three entries from x and u), we solved each entry-wise linear program over a mesh grid of 50 points per variable." }, { "heading": "E WRITING QUADROTOR AS AN NLDI", "text": "In the planar quadrotor setting, our goal is to stabilize a quadcopter in the two-dimensional plane by controlling the amount of force provided by the quadcopter’s right and left thrusters. Specifically, the state of this system is defined as x = [px pz ϕ ṗx ṗz ϕ̇]\nT , where (px, pz) is the position of the quadcopter in the vertical plane and ϕ is its roll (i.e., angle from the horizontal position); we seek to stabilize the system at x = ~0 by controlling the amount of force u = [ur, ul]\nT from right and left thrusters. We assume that our action u is additional to a baseline force of [mg/2 mg/2]T provided by the thrusters by default to prevent the quadcopter from falling. For a quadrotor with mass m, moment-arm ` for the thrusters, and moment of inertia J about the roll axis, the dynamics of this system are then given by (as modified from Singh et al. (2020)):\nẋ = ṗx cosϕ− ṗz sinϕ ṗx sinϕ+ ṗz cosϕ ϕ̇ ṗzϕ̇− g sinϕ\n−ṗxϕ̇− g cosϕ+ g 0\n+ 0 0 0 0 0 0 0 0\n1/m 1/m `/J −`/J u, (E.1) where g = 9.81 m/s2. We linearize this system via a similar method as for the cart-pole setting, i.e., as in Equation (D.2). 
We describe this process in more detail below. We note that since the dependence of the dynamics on u is linear, we have that D = 0 for our resultant NLDI. As for cart-pole, while we employ an NLDI here to characterize the linearization error, it is also possible to characterize this error via polytopic uncertainty (see Appendix J); we choose to use an NLDI here as it yields a much smaller problem description than a PLDI in this case.\nE.1 DERIVING Jf (0, 0) For ẋ = f(x, u), we see that\nJf (x, u) = 0 0 −ṗx sinϕ− ṗz cosϕ cosϕ − sinϕ 0 0 0 0 ṗx cosϕ− ṗz sinϕ sinϕ cosϕ 0 0 0 0 0 0 0 1 0 0 0 −g cosϕ 0 ϕ̇ ṗz 0 0 0 g sinϕ −ϕ̇ 0 −ṗx 0 0 0 0 0 0 0 0 , (E.2) and thus\nJf (0, 0) = 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 −g 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 . (E.3)\nE.2 OBTAINING C AND D We obtain the matricesC andD via a similar method as described in Appendix D, though in practice we only consider the linearization error with respect to x (i.e., since the dynamics are linear with respect to u, we have D = 0). We let x̄ = [1 1 0.15 0.6 0.6 1.3] and x = −x̄. As for cart-pole, each entry wise difference in the equivalent of Equation (D.5) contained exactly three variables (i.e., a total of three entries from x and u), and each entry-wise linear program was solved over a mesh grid of 50 points per variable." }, { "heading": "F DETAILS ON THE MICROGRID SETTING", "text": "For our experiments, we build upon the microgrid setting given in Lam et al. (2016). In this system, the state x ∈ R3 captures voltage deviations, frequency deviations, and the amount of power generated by a diesel generator connected to the grid; the action u ∈ R2 describes the current associated with a storage device and a solar PV inverter; and the disturbance w ∈ R describes the difference between the amount of power demanded and the amount of power produced by solar panels on the grid. The authors also define a performance index y ∈ R2 which captures voltage and frequency deviations (i.e., two of the entries of the state x).\nTo construct an NLDI of the form (3) for this system, we directly use the A, B, and G matrices given in Lam et al. (2016). We generate C i.i.d. from a normal distribution and let D = 0, to represent the fact that the disturbance w and the entries of the state x are correlated, but that w is likely not correlated with the actions u. Finally, we let Q and R be diagonal matrices with 1 in the entries corresponding to quantities represented in the performance index y, and with 0.1 in the rest of the diagonal entries, to emphasize that the variables in y are the most important in describing the performance of the system." }, { "heading": "G GENERATING AN ADVERSARIAL DISTURBANCE", "text": "In the NLDI settings explored in our experiments, we seek to construct an “adversarial” disturbance w(t) that obeys the relevant norm bounds ‖w(t)‖2 ≤ ‖Cx(t)+Du(t)‖2 while maximizing the loss. To do this, we use a model predictive control method where the actions taken are w(t). Specifically, for each policy π, we model w(t) as a neural network specific to that policy. Every 10 steps of a roll-out, we optimize w(t) through gradient descent to maximize the loss over a horizon of 40 steps, subject to the constraint ‖w(t)‖2 ≤ ‖Cx(t) +Du(t)‖2." }, { "heading": "H ADDITIONAL EXPERIMENTAL DETAILS", "text": "Initial states. To pick initial states in our experiments, for the synthetic settings, we sample each attribute of the state i.i.d. from a standard Gaussian distribution. 
For cart-pole and planar quadrotor, we sample uniformly from bounds chosen such that the non-robust LQR algorithm (under the original dynamics) did not go unstable. For cart-pole, these bounds were chosen to be px ∈ [−1, 1], ϕ ∈ [−0.1, 0.1], ṗx = ϕ̇ = 0. For planar quadrotor, these bounds were px, pz ∈ [−1, 1], ϕ ∈ [−0.05, 0.05], ṗx = ṗz = ϕ̇ = 0.
Constructing NLDI bounds. Given these initial states, for the cart-pole and quadrotor settings, we needed to construct our NLDI disturbance bounds such that they would hold over the entire trajectory of the robust policy; if not, the robustness specification (A.8) would not hold, and our agent might in fact increase the Lyapunov function. To ensure this approximately, we used a simple heuristic: we ran the (non-robust) LQR agent for a full episode with 50 different starting conditions, and constructed an L∞ ball around all states reached in any of these trajectories. We then used these L∞ balls on the states to construct the matrices C and D for our disturbance bounds, using the procedure described in Appendices D and E.
Computing infrastructure and runtime. All experiments were run on an XPS 13 laptop with an Intel i7 processor. The planar quadrotor and synthetic NLDI experiment with D = 0 took about 1 day to run (since the projections were simple half-space projections), while all the other synthetic domains and cart-pole took about 3 days to run. The majority of the run-time was in computing the adversarial disturbances for test-time evaluations.
Hyperparameter selection. For our experiments, we did not perform large parameter searches. The learning rate we chose for our model-based planner (both robust and non-robust) remained constant across the different domains; we tried learning rates of 1 × 10−3, 1 × 10−4, and 1 × 10−5, and found 1 × 10−3 worked best for the non-robust version and 1 × 10−4 worked best for the robust version. For our PPO hyperparameters, we simply used those used in the original PPO paper.
One parameter we had to tune for each environment was the time step. In particular, we had to pick a time step high enough that we could run episodes for a reasonable total length of time (within which the non-robust agents would go unstable), but low enough to reasonably approximate a continuous-time setting (since, for our robustness guarantees, we assume the agent’s actions evolve in continuous time). Our search space was small, however, consisting of 0.05, 0.02, 0.01, and 0.005 seconds.
[Figure H.1: six panels of cart-pole state trajectories x(t) over t ∈ [0, 10] seconds, one per method: (a) LQR, (b) Robust LQR, (c) MBP, (d) Robust MBP, (e) PPO, (f) Robust PPO.]
Figure H.1: Trajectories of 6 different methods on the cart-pole domain under adversarial dynamics.
Trajectory plots. Figure H.1 shows sample trajectories of different methods in the cart-pole domain under adversarial dynamics. The non-robust LQR and model-based planning approaches both diverge; the non-robust PPO does not diverge, but also does not clearly converge after 10 seconds. The robust methods, on the other hand, all clearly converge after 10 seconds.
Runtime comparison.
Tables H.1 and H.2 show the evaluation and training time of our methods and the baselines over 50 episodes run in parallel. In the NLDI cases where D = 0, i.e., Generic NLDI (D = 0) and Quadrotor, our projection adds only a very small computational cost. In the other cases, the additional computational cost is more significant, but our method is still far less expensive than the Robust MPC method.
Environment | LQR | MBP | PPO | Robust LQR | Robust MPC | RARL | Robust MBP∗ | Robust PPO∗
Generic NLDI (D = 0) | 0.63 | 0.61 | 0.84 | 0.57 | 718.06 | 0.71 | 0.73 | 0.94
Generic NLDI (D ≠ 0) | 0.64 | 0.62 | 0.83 | 0.58 | 824.86 | 0.81 | 15.13 | 25.38
Cart-pole | 0.55 | 0.67 | 0.84 | 0.53 | 646.90 | 0.84 | 10.12 | 13.37
Quadrotor | 0.95 | 0.98 | 1.19 | 0.88 | 3348.68 | 1.14 | 1.15 | 1.30
Microgrid | 0.58 | 0.61 | 0.79 | 0.57 | 601.90 | 0.74 | 8.14 | 10.25
Generic PLDI | 0.57 | 0.54 | 0.76 | 0.51 | 819.24 | 0.73 | 69.35 | 64.03
Generic H∞ | 0.84 | 0.80 | 1.03 | 0.76 | N/A | 1.00 | 47.81 | 63.67
Table H.1: Time (in seconds) taken to run each method on the test set of every environment for 50 episodes run in parallel.
Environment | MBP | PPO | RARL | Robust MBP∗ | Robust PPO∗
Generic NLDI (D = 0) | 26.36 | 101.77 | 102.37 | 30.78 | 114.60
Generic NLDI (D ≠ 0) | 26.46 | 100.79 | 82.53 | 221.35 | 1158.28
Cart-pole | 25.49 | 87.04 | 98.90 | 146.34 | 689.93
Quadrotor | 41.24 | 131.48 | 112.95 | 46.13 | 159.06
Microgrid | 23.03 | 112.52 | 87.71 | 113.61 | 436.64
Table H.2: Time (in minutes) taken to train each method in every environment." }, { "heading": "I EXPERIMENTS FOR PLDIS AND H∞ CONTROL SETTINGS", "text": "In addition to the NLDI settings explored in the main text, we test the performance of our method on PLDI and H∞ control settings. As for the experiments in the main text, we choose a time discretization based on the speed at which the system evolves, and run each episode for 200 steps over this discretization. In both cases, we use a randomly generated LQR objective where the matrices Q1/2 and R1/2 are drawn i.i.d. from a standard normal distribution.
Synthetic PLDI setting. We generate PLDI instances (A.9) with s = 5, a = 3, and L = 3. Specifically, we generate convex hull matrices (A1, B1), . . . , (A3, B3) i.i.d. from normal distributions, and generate (A(t), B(t)) by using a randomly-initialized neural network with softmax output to weight the convex hull matrices. Episodes were run for 2 seconds at a discretization of 0.01 seconds.
Synthetic H∞ setting. We generate H∞ control instances (A.11) with s = 5, a = 3, and d = 2 by generating matrices A, B and G i.i.d. from normal distributions. The disturbance w(t) was produced using a randomly-initialized neural network, with its output scaled to satisfy the L2 bound on the disturbance. Specifically, we scaled the output of the neural network to satisfy an attenuating norm-bound on the disturbance; at time t, the norm-bound was given by 20 × f(2 × t/T), where T is the time horizon and f is the standard normal PDF function. Episodes were run for T = 2 seconds at a discretization of 0.01 seconds.
Results are given in Figure I.1 and Table I.1." }, { "heading": "J NOTES ON LINEARIZATION VIA PLDIS AND NLDIS", "text": "While we linearize the cart-pole and quadrotor dynamics via NLDIs in our experiments, we note that these dynamics can also be characterized via PLDIs. More generally, in this section, we show how we can use the framework of PLDIs to model linearization errors arising in the analysis of nonlinear systems.
Consider the nonlinear dynamical system
ẋ = f(x, u) with f(0, 0) = 0, (J.1)
for x ∈ Rs and u ∈ Ra. Define ξ = (x, u).
We would like to represent the above system as a PLDI in the region R := {ξ | ξ ≤ ξ ≤ ξ̄} including the origin. The mean value theorem states that for each component of f, we can write
fi(ξ) = fi(0) + ∇fi(z)T ξ, (J.2)
for some z = tξ, where t ∈ [0, 1]. Now, let p = s + a. Defining the Jacobian of f as
Jf(z) = [∇f1(z)T ; . . . ; ∇fp(z)T ], (J.3)
and recalling that f(0) = 0, we can rewrite (J.2) as
f(ξ) = Jf(z)ξ. (J.4)
Now, suppose we can find component-wise bounds on the matrix Jf(z) over R, i.e.,
M ≤ Jf(z) ≤ M̄ (entry-wise) for all z ∈ R. (J.5)
We can then write
Jf(z) = ∑_{1≤i,j≤p} mij(t) Eij with mij(t) ∈ [mij, m̄ij], (J.6)
where Eij = ei ejT and ei is the i-th unit vector in Rp.
We now seek to bound the Jacobian using polytopic bounds. To do this, note that we can write
Jf(z) = ∑_{κ=1}^{2^(p^2)} γκ Aκ, with γκ ≥ 0 and ∑κ γκ = 1, (J.7)
where the Aκ's are the vertices of the polytope in (J.6), i.e.,
Aκ ∈ V = { ∑_{1≤i,j≤p} mij Eij | mij ∈ {mij, m̄ij} }. (J.8)
Together, Equations (J.2), (J.4), (J.7), and (J.8) characterize the original nonlinear dynamics as a PLDI.
We note that this PLDI description is potentially very large; in particular, the size of V is exponential in the square of the number of non-constant entries in the Jacobian Jf(z), which could be as large as 2^(p^2) = 2^((s+a)^2). This problem size may therefore become intractable for larger control settings.
We note, however, that we can in fact express this PLDI more concisely as an NLDI. More precisely, we would like to find matrices A, B, C parameterizing the form of NLDI below, which is equivalent to that presented in Equation (3) (see Chapter 4 of Boyd et al. (1994)):
Df(z) ∈ {A + B∆C | ‖∆‖2 ≤ 1} for all z ∈ R. (J.9)
It can be shown that the solution to the SDP
minimize tr(V + W) subject to W ⪰ 0 and [ V, (Aκ − A)T ; Aκ − A, W ] ⪰ 0 for all Aκ ∈ V, (J.10)
yields the matrices A, B, and C with V = CTC and W = BBT, which can be used to construct NLDI (J.9). While the NLDI here is more concise than the PLDI, the trade-off is that the NLDI norm bounds obtained via this method may be rather loose. As such, for our settings, we obtain NLDI bounds numerically (see Appendices D and E), as these are tighter than NLDI specifications obtained via the above method (though they are potentially slightly inexact). An alternative approach would be to examine how to tighten the conversion from PLDIs to NLDIs, which has been explored in other work (e.g., Kuiava et al. (2013))." } ]
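To complement Appendix J above, here is a small CVXPY sketch of the SDP (J.10); note that (J.10) as stated leaves the choice of the nominal A open, so fixing it to the vertex average below is our own (common, but not canonical) choice, and B, C are recovered from W and V via symmetric matrix square roots.

```python
import cvxpy as cp
import numpy as np
from scipy.linalg import sqrtm

def pldi_to_nldi(vertices, eps=1e-8):
    """Solve the SDP (J.10) given polytope vertices A_kappa; return (A, B, C)
    for the NLDI form (J.9), with V = C^T C and W = B B^T."""
    A = sum(vertices) / len(vertices)  # nominal matrix (our choice: vertex mean)
    p = A.shape[0]
    V = cp.Variable((p, p), symmetric=True)
    W = cp.Variable((p, p), symmetric=True)
    constraints = [W >> 0]
    for Ak in vertices:
        Dk = Ak - A
        constraints.append(cp.bmat([[V, Dk.T], [Dk, W]]) >> 0)
    cp.Problem(cp.Minimize(cp.trace(V + W)), constraints).solve(solver=cp.SCS)
    B = np.real(sqrtm(W.value + eps * np.eye(p)))  # symmetric root: B B^T = W
    C = np.real(sqrtm(V.value + eps * np.eye(p)))  # symmetric root: C^T C = V
    return A, B, C
```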
2021
ENFORCING ROBUST CONTROL GUARANTEES WITHIN NEURAL NETWORK POLICIES
SP:bf9538a602859eaf9e0c3138c5e46c782863a054
[ "This paper proposed the combination of two techniques for improved learning with unlabelled data: 1) Positive-Unlabelled (PU) classifier, and 2) class-conditional GAN (cGAN). The idea is that the PU classifier can help produce more accurate pseudo labels for training of a cGAN, and with the improved cGAN, the generated images can be used in turn to further improve the PU classifier. The idea looks interesting and the empirical results verified its effectiveness. ", "This paper targets at relieving the massive labeled data consumption of deep learning through the framework of semi-supervised learning. In particular, it finds out that two training approaches, Positive-Unlabeled classification and the conditional generation, can benefit each other. Jointly conducting these two approaches can push better performance on both tasks, thus eventually achieving better performance with a limited amount of labeled data. The authors combined the two tasks with a new type of GAN network. They further gave the corresponding theoretical proof for this new GAN model and verified its performance on the benchmark datasets." ]
The scarcity of class-labeled data is a ubiquitous bottleneck in a wide range of machine learning problems. While abundant unlabeled data normally exist and provide a potential solution, it is extremely challenging to exploit them. In this paper, we address this problem by leveraging Positive-Unlabeled (PU) classification and the conditional generation with extra unlabeled data simultaneously, both of which aim to make full use of agnostic unlabeled data to improve classification and generation performance. In particular, we present a novel training framework to jointly target both PU classification and conditional generation when exposing to extra data, especially out-of-distribution unlabeled data, by exploring the interplay between them: 1) enhancing the performance of PU classifiers with the assistance of a novel Conditional Generative Adversarial Network (CGAN) that is robust to noisy labels, 2) leveraging extra data with predicted labels from a PU classifier to help the generation. Our key contribution is a Classifier-Noise-Invariant Conditional GAN (CNI-CGAN) that can learn the clean data distribution from noisy labels predicted by a PU classifier. Theoretically, we proved the optimal condition of CNI-CGAN and experimentally, we conducted extensive evaluations on diverse datasets, verifying the simultaneous improvements on both classification and generation.
[]
[ { "authors": [ "Jessa Bekker", "Jesse Davis" ], "title": "Learning from positive and unlabeled data: A survey", "venue": "Machine Learning,", "year": 2020 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin A Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Grigorios G Chrysos", "Jean Kossaifi", "Stefanos Zafeiriou" ], "title": "Robust conditional generative adversarial networks", "venue": "arXiv preprint arXiv:1805.08657,", "year": 2018 }, { "authors": [ "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale adversarial representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Marthinus Du Plessis", "Gang Niu", "Masashi Sugiyama" ], "title": "Convex formulation for learning from positive and unlabeled data", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Marthinus C Du Plessis", "Gang Niu", "Masashi Sugiyama" ], "title": "Analysis of learning from positive and unlabeled data", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Charles Elkan" ], "title": "The foundations of cost-sensitive learning", "venue": "In International joint conference on artificial intelligence,", "year": 2001 }, { "authors": [ "Yixiao Ge", "Dapeng Chen", "Hongsheng Li" ], "title": "Mutual mean-teaching: Pseudo label refinery for unsupervised domain adaptation on person re-identification", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Jie Gui", "Zhenan Sun", "Yonggang Wen", "Dacheng Tao", "Jieping Ye" ], "title": "A review on generative adversarial networks: Algorithms, theory, and applications", "venue": "arXiv preprint arXiv:2001.06937,", "year": 2020 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Tianyu Guo", "Chang Xu", "Boxin Shi", "Chao Xu", "Dacheng Tao" ], "title": "Learning from bad data via generation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Shohei Hido", "Yuta Tsuboi", "Hisashi Kashima", "Masashi Sugiyama", "Takafumi Kanamori" ], "title": "Inlierbased outlier detection via direct density ratio estimation", "venue": "Eighth IEEE International Conference on Data Mining,", "year": 2008 }, { "authors": [ "Ming Hou", "Brahim Chaib-draa", "Chao Li", "Qibin Zhao" ], "title": "Generative adversarial positive-unlabelled learning", "venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Takuhiro Kaneko", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Label-noise robust 
generative adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Masahiro Kato", "Takeshi Teshima", "Junya Honda" ], "title": "Learning from positive and unlabeled data with a selection", "venue": null, "year": 2018 }, { "authors": [ "Ryuichi Kiryo", "Gang Niu", "Marthinus C du Plessis", "Masashi Sugiyama" ], "title": "Positive-unlabeled learning with non-negative risk estimator", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Kiran Koshy Thekumparampil", "Sewoong Oh", "Ashish Khetan" ], "title": "Robust conditional gans under missing or uncertain labels", "venue": "arXiv preprint arXiv:1906.03579,", "year": 2019 }, { "authors": [ "Mario Lucic", "Michael Tschannen", "Marvin Ritter", "Xiaohua Zhai", "Olivier Bachem", "Sylvain Gelly" ], "title": "High-fidelity image generation with fewer labels", "venue": "nternational Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Atsuhiro Noguchi", "Tatsuya Harada" ], "title": "Image generation from small datasets via batch statistics adaptation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Alex Smola", "Le Song", "Choon Hui Teo" ], "title": "Relative novelty detection", "venue": "In Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Ke Sun", "Bing Yu", "Zhouchen Lin", "Zhanxing Zhu" ], "title": "Patch-level neighborhood interpolation: A general and effective graph-based regularization strategy", "venue": null, "year": 1911 }, { "authors": [ "Kiran K Thekumparampil", "Ashish Khetan", "Zinan Lin", "Sewoong Oh" ], "title": "Robustness of conditional gans to noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Jun Zhu Tsung Wei Tsai", "Tsung Wei Tsai" ], "title": "Countering noisy labels by learning from auxiliary clean labels", "venue": "arXiv preprint arXiv:1905.13305,", "year": 2019 }, { "authors": [ "Qin Wang", "Wen Li", "Luc Van Gool" ], "title": "Semi-supervised learning by augmented distribution alignment", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Yanwu Xu", "Mingming Gong", "Junxiang Chen", "Tongliang Liu", "Kun Zhang", "Kayhan Batmanghelich" ], "title": "Generative-discriminative complementary learning", "venue": null, "year": 2020 }, { "authors": [ "Yixing Xu", "Chang Xu", "Chao Xu", "Dacheng Tao" ], "title": "Multi-positive and unlabeled learning", "venue": "In IJCAI,", "year": 2017 }, { "authors": [ "Shin’ya Yamaguchi", "Sekitoshi Kanai", "Takeharu Eda" ], "title": "Effective data augmentation with multidomain learning gans", "venue": "arXiv preprint arXiv:1912.11597,", 
"year": 2019 }, { "authors": [ "Wei Li Shaogang Gong Yanbei Chen", "Xiatian Zhu" ], "title": "Semi-supervised learning under class distribution mismatch", "venue": null, "year": 2020 }, { "authors": [ "Bing Yu", "Jingfeng Wu", "Jinwen Ma", "Zhanxing Zhu" ], "title": "Tangent-normal adversarial regularization for semi-supervised learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Kato" ], "title": "Kato et al., 2018) focused on remedying the selection bias in the PU learning. Besides, Multi-Positive and Unlabeled Learning (Xu et al., 2017) extended the binary PU setting to the multi-class version, therefore adapting to more practical applications. By contrast, our multipositive unlabeled method absorbs the advantages of previous approaches, and in the meanwhile intuitively extends them to fit the differential deep neural networks optimization", "venue": null, "year": 2017 }, { "authors": [ "Self-Distillation (Yanbei Chen" ], "title": "under-studied problem, which can guarantee the effectiveness of learning. In contrast, our approach leverages the PU learning to construct the “open world” classification. Out-Of-Distribution (OOD) Detection OOD Detection is one classical but always vibrant machine learning problem. PU learning can be used for the detection of outliers in an unlabeled dataset with knowledge", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Existing machine learning methods, particularly deep learning models, typically require big data to pursue remarkable performance. For instance, conditional deep generative models are able to generate high-fidelity and diverse images, but they have to rely on vast amounts of labeled data (Lucic et al., 2019). Nevertheless, it is often laborious or impractical to collect large-scale accurate class-labeled data in real-world scenarios, and thus the label scarcity is ubiquitous. Under such circumstances, the performance of classification and conditional generation (Mirza & Osindero, 2014) drops significantly (Lucic et al., 2019). At the same time, diverse unlabeled data are available in enormous quantities, and therefore a key issue is how to take advantage of the extra data to enhance the conditional generation or classification.\nWithin the unlabeled data, both in-distribution and out-of-distribution data exist, where indistribution data conform to the distribution of the labeled data while out-of-distribution data do not. Our key insight is to harness the out-of-distribution data. In the generation with extra data, most related works focused on the in-distribution data (Lucic et al., 2019; Gui et al., 2020; Donahue & Simonyan, 2019). When it comes to the out-of-distribution data, the majority of existing methods (Noguchi & Harada, 2019; Yamaguchi et al., 2019; Zhao et al., 2020) attempted to forcibly train generative models on a large amount of unlabeled data, and then transferred the learned knowledge of the pre-trained generator to the in-distribution data. In classification, a common setting to utilize unlabeled data is semi-supervised learning (Miyato et al., 2018; Sun et al., 2019; Berthelot et al., 2019), which usually assumes that the unlabeled and labeled data come from the same distribution, ignoring their distributional mismatch. In contrast, Positive and Unlabeled (PU) Learning (Bekker & Davis, 2020; Kiryo et al., 2017) is an elegant way of handling this under-studied problem, where a model has the only access to positive samples and unlabeled data. Therefore, it is possible to utilize pseudo labels predicted by a PU classifier on unlabeled data to guide the conditional gen-\neration. However, the predicted signals from the classifier tend to be noisy. Although there are a flurry of papers about learning from noisy labels for classification (Tsung Wei Tsai, 2019; Ge et al., 2020; Guo et al., 2019), to our best knowledge, no work has considered to leverage the noisy labels seamlessly in the joint classification and generation. Additionally, another work (Hou et al., 2018) leveraged GANs to recover both positive and negative data distribution to step away from overfitting, but they never considered the noise-invariant generation or their mutual improvement. The generative-discriminative complementary learning (Xu et al., 2019) was investigated in weakly supervised learning, but we are the first attempt to tackle the (Multi-) Positive and Unlabeled learning setting while developing the method of noise-invariant generation from noisy labels. Please refer to Section 5 for the discussion about more related works.\nIn this paper, we focus on the mutual benefits of conditional generation and PU classification, when we are only accessible to little class-labeled data, but extra unlabeled data, including outof-distribution data, can be available. 
Firstly, a parallel non-negative multi-class PU estimator is derived to classify both the positive data of all classes and the negative data. Then we design a Classifier-Noise-Invariant Conditional Generative Adversarial Network (CNI-CGAN) that is able to learn the clean data distribution on all unlabeled data with noisy labels provided by the PU classifier. Simultaneously, we also leverage our CNI-CGAN to enhance the performance of the PU classification through data augmentation, demonstrating a reciprocal benefit for both generation and classification. We provide a theoretical analysis of the optimal condition of our CNI-CGAN and conduct extensive experiments to verify the superiority of our approach." }, { "heading": "2 OUR METHOD", "text": "" }, { "heading": "2.1 POSITIVE-UNLABELED LEARNING", "text": "Traditional Binary Positive-Unlabeled Problem Setting Let $X \in \mathbb{R}^d$ and $Y \in \{\pm 1\}$ be the input and output variables and $p(x, y)$ the joint distribution with marginals $p_p(x) = p(x \mid Y = +1)$ and $p_n(x) = p(x \mid Y = -1)$. In particular, we denote $p(x)$ as the distribution of unlabeled data. $n_p$, $n_n$ and $n_u$ are the amounts of positive, negative and unlabeled data, respectively.

Parallel Non-Negative PU Estimator Vanilla PU learning (Bekker & Davis, 2020; Kiryo et al., 2017; Du Plessis et al., 2014; 2015) employs an unbiased and consistent estimator. Denote $g_\theta : \mathbb{R}^d \to \mathbb{R}$ as the score function parameterized by $\theta$, and $\ell : \mathbb{R} \times \{\pm 1\} \to \mathbb{R}$ as the loss function. The risk of $g_\theta$ can be approximated by its empirical version, denoted $\hat{R}_{pn}(g_\theta)$:
$$\hat{R}_{pn}(g_\theta) = \pi_p \hat{R}_p^+(g_\theta) + \pi_n \hat{R}_n^-(g_\theta), \qquad (1)$$
where $\pi_p$ represents the class-prior probability, i.e., $\pi_p = P(Y = +1)$ with $\pi_p + \pi_n = 1$. In addition, $\hat{R}_p^+(g_\theta) = \frac{1}{n_p}\sum_{i=1}^{n_p} \ell(g_\theta(x_i^p), +1)$ and $\hat{R}_n^-(g_\theta) = \frac{1}{n_n}\sum_{i=1}^{n_n} \ell(g_\theta(x_i^n), -1)$.

As negative data $x^n$ are unavailable, a common strategy is to offset $R_n^-(g_\theta)$. We also know that $\pi_n p_n(x) = p(x) - \pi_p p_p(x)$, and hence $\pi_n \hat{R}_n^-(g_\theta) = \hat{R}_u^-(g_\theta) - \pi_p \hat{R}_p^-(g_\theta)$. The resulting unbiased risk estimator $\hat{R}_{pu}(g_\theta)$ can then be formulated as:
$$\hat{R}_{pu}(g_\theta) = \pi_p \hat{R}_p^+(g_\theta) - \pi_p \hat{R}_p^-(g_\theta) + \hat{R}_u^-(g_\theta), \qquad (2)$$
where $\hat{R}_p^-(g_\theta) = \frac{1}{n_p}\sum_{i=1}^{n_p} \ell(g_\theta(x_i^p), -1)$ and $\hat{R}_u^-(g_\theta) = \frac{1}{n_u}\sum_{i=1}^{n_u} \ell(g_\theta(x_i^u), -1)$. The advantage of this unbiased risk minimizer is that the optimal solution can be easily obtained if $g$ is linear in $\theta$. However, in real scenarios we tend to leverage more flexible models $g_\theta$, e.g., deep neural networks. This strategy will push the estimator to a point where it starts to suffer from overfitting. Hence, we utilize the non-negative risk (Kiryo et al., 2017) for our PU learning, which has been verified in (Kiryo et al., 2017) to allow deep neural networks to mitigate overfitting. The non-negative PU estimator is formulated as:
$$\hat{R}_{pu}(g_\theta) = \pi_p \hat{R}_p^+(g_\theta) + \max\left\{0,\; \hat{R}_u^-(g_\theta) - \pi_p \hat{R}_p^-(g_\theta)\right\}. \qquad (3)$$

In pursuit of a parallel implementation of $\hat{R}_{pu}(g_\theta)$, we replace $\max\{0, \hat{R}_u^-(g_\theta) - \pi_p \hat{R}_p^-(g_\theta)\}$ with its lower bound $\frac{1}{N}\sum_{i=1}^{N}\max\{0, \hat{R}_u^-(g_\theta; \mathcal{X}_u^i) - \pi_p \hat{R}_p^-(g_\theta; \mathcal{X}_p^i)\}$, where $\mathcal{X}_u^i$ and $\mathcal{X}_p^i$ denote the unlabeled and positive data in the $i$-th mini-batch, and $N$ is the number of batches.
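To make the estimator concrete, the following is a minimal PyTorch-style sketch of the per-mini-batch non-negative PU risk of Eq. 3, using the sigmoid loss from Appendix C; the helper names (`sigmoid_loss`, `nn_pu_risk`) are ours for illustration and not taken from the paper's released implementation.

```python
import torch

def sigmoid_loss(scores, label):
    # l_sig(t, y) = 1 / (1 + exp(t * y)) = sigmoid(-t * y), averaged over the batch
    return torch.sigmoid(-scores * label).mean()

def nn_pu_risk(g, x_p, x_u, pi_p):
    """Non-negative PU risk of Eq. 3 on one mini-batch (X_p^i, X_u^i)."""
    r_p_plus = sigmoid_loss(g(x_p), +1.0)   # positives treated as positive
    r_p_minus = sigmoid_loss(g(x_p), -1.0)  # positives treated as negative
    r_u_minus = sigmoid_loss(g(x_u), -1.0)  # unlabeled treated as negative
    # max{0, R_u^- - pi_p * R_p^-} keeps the estimated negative risk non-negative
    return pi_p * r_p_plus + torch.clamp(r_u_minus - pi_p * r_p_minus, min=0.0)
```

Averaging this quantity over the $N$ mini-batches recovers the parallel lower bound described above.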
From Binary PU to Multi-PU Learning Previous PU learning focuses on learning a classifier from positive and unlabeled data, and cannot easily be adapted to $K+1$ multi-classification tasks, where $K$ represents the number of classes in the positive data. Multi-Positive and Unlabeled learning (Xu et al., 2017) was developed for this purpose, but the proposed algorithm may not allow deep neural networks. Instead, we extend binary PU learning to the multi-class version in a straightforward way by additionally incorporating a cross-entropy loss on all the positive data with labels for different classes. More precisely, we consider the $(K+1)$-class classifier $f_\theta$ as a score function $f_\theta = (f_\theta^1(x), \ldots, f_\theta^{K+1}(x))$. After the softmax function, we use the first $K$ positive classes to construct the cross-entropy loss $\ell_{CE}$, i.e., $\ell_{CE}(f_\theta(x), y) = \log\sum_{j=1}^{K+1}\exp(f_\theta^j(x)) - f_\theta^y(x)$, where $y \in [K]$. For the PU loss, we consider the composite function $h(f_\theta(x)) : \mathbb{R}^d \to \mathbb{R}$, where $h(\cdot)$ conducts a logit transformation of the accumulated probability of the first $K$ classes, i.e., $h(f_\theta(x)) = \ln\frac{p}{1-p}$ with $p = \sum_{j=1}^{K}\exp(f_\theta^j(x)) \,/\, \sum_{j=1}^{K+1}\exp(f_\theta^j(x))$. The final mini-batch risk of our PU learning can be presented as:
$$\tilde{R}_{pu}(f_\theta; \mathcal{X}^i) = \pi_p \hat{R}_p^+(h(f_\theta); \mathcal{X}_p^i) + \max\left\{0,\; \hat{R}_u^-(h(f_\theta); \mathcal{X}_u^i) - \pi_p \hat{R}_p^-(h(f_\theta); \mathcal{X}_p^i)\right\} + \hat{R}_p^{CE}(f_\theta; \mathcal{X}_p^i), \qquad (4)$$
where $\hat{R}_p^{CE}(f_\theta; \mathcal{X}_p^i) = \frac{1}{n_p}\sum_{i=1}^{n_p}\ell_{CE}(f_\theta(x_i^p), y)$.
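For illustration, here is a sketch of the multi-class risk of Eq. 4, assuming a $(K+1)$-way classifier `f` that returns raw logits; `h_logit` implements the logit transformation $h(\cdot)$, and all names are our own rather than the paper's code.

```python
import torch
import torch.nn.functional as F

def h_logit(logits):
    # h(f(x)) = ln(p / (1 - p)), with p the accumulated softmax mass of the first K classes
    K = logits.shape[1] - 1
    log_p = torch.logsumexp(logits[:, :K], dim=1) - torch.logsumexp(logits, dim=1)
    log_1mp = F.log_softmax(logits, dim=1)[:, K]  # 1 - p is the (K+1)-th class probability
    return log_p - log_1mp

def multi_pu_risk(f, x_p, y_p, x_u, pi_p):
    """Mini-batch risk of Eq. 4: binary nnPU on h(f(x)) plus CE on labeled positives."""
    t_p, t_u = h_logit(f(x_p)), h_logit(f(x_u))
    r_p_plus, r_p_minus = torch.sigmoid(-t_p).mean(), torch.sigmoid(t_p).mean()
    r_u_minus = torch.sigmoid(t_u).mean()
    pu = pi_p * r_p_plus + torch.clamp(r_u_minus - pi_p * r_p_minus, min=0.0)
    return pu + F.cross_entropy(f(x_p), y_p)
```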
" }, { "heading": "2.2 CLASSIFIER-NOISE-INVARIANT CONDITIONAL GENERATIVE ADVERSARIAL NETWORK (CNI-CGAN)", "text": "To leverage extra data, i.e., all unlabeled data, to benefit the generation, we deploy our conditional generative model on all data with pseudo labels predicted by our PU classifier. However, these predicted labels tend to be noisy, reducing the reliability of the supervision signals and thus worsening the performance of the conditional generative model. Besides, the noise depends on the accuracy of the given PU classifier. To address this issue, we focus on developing a novel noise-invariant conditional GAN that is robust to the noisy labels provided by a specified classifier, e.g., a PU classifier. We call our method Classifier-Noise-Invariant Conditional Generative Adversarial Network (CNI-CGAN); the architecture is depicted in Figure 1. In the following, we elaborate on each part of it.

Principle of the Design of CNI-CGAN Albeit noisy, the pseudo labels given by the PU classifier still provide rich information that we can exploit. The key is to take the noise generation mechanism into consideration during the generation. We denote the real data as $x_r$ and the hard label predicted by the PU classifier as $PU_\theta(x_r)$, i.e., $PU_\theta(x_r) = \arg\max_i f_\theta^i(x_r)$, as displayed in Figure 1. We let the generator “imitate” the noise generation mechanism that produces pseudo labels for the labeled data. With both pseudo and real labels, we can leverage the PU classifier $f_\theta$ to estimate a confusion matrix $\tilde{C}$ that models the label noise from the classifier. During generation, a real label $y$, while being fed into the generator $G$, is also polluted by $\tilde{C}$ to compute a noisy label $\tilde{y}$, which is then combined with the generated fake sample $x_g$ for the following discrimination. Finally, the discriminator $D$ distinguishes the real pairs $[x_r, PU_\theta(x_r)]$ from the fake pairs $[x_g, \tilde{y}]$. Overall, the noise “generation” mechanisms on the two sides can be balanced.

Estimation of $\tilde{C}$ The key in the design of $\tilde{C}$ is to estimate the label noise of the pre-trained PU classifier by considering all the samples of each class. More specifically, the confusion matrix $\tilde{C}$ is $(K+1) \times (K+1)$ and each entry $\tilde{C}_{ij}$ represents the probability of a generated sample $x_g$, given a label $i$, being classified as class $j$ by the PU classifier. Mathematically, we define $\tilde{C}_{ij}$ as:
$$\tilde{C}_{ij} = P(PU_\theta(x_g) = j \mid y = i) = \mathbb{E}_z\big[\mathbb{I}_{\{PU_\theta(x_g) = j \mid y = i\}}\big], \qquad (5)$$
where $x_g = G(z, y = i)$ and $\mathbb{I}$ is the indicator function. Owing to the stochastic nature of optimization when training deep neural networks, we incorporate the estimation of $\tilde{C}$ into the training process via the Exponential Moving Average (EMA) method. This choice balances the utilization of information from previous training samples and from the updated PU classifier. We formulate the update of $\tilde{C}^{(l+1)}$ in the $l$-th mini-batch as follows:
$$\tilde{C}^{(l+1)} = \lambda \tilde{C}^{(l)} + (1-\lambda)\,\Delta\tilde{C}_{X_l}, \qquad (6)$$
where $\Delta\tilde{C}_{X_l}$ denotes the incremental change of $\tilde{C}$ on the current $l$-th mini-batch data $X_l$ via Eq. 5, and $\lambda$ is the averaging coefficient in EMA.

Theoretical Guarantee of Clean Data Distribution Firstly, we denote $O(x)$ as the oracle class of sample $x$ from an oracle classifier $O(\cdot)$. Let $\pi_i$, $i = 1, \ldots, K+1$, be the class-prior probability of class $i$ in the multi-positive unlabeled setting. Theorem 1 establishes the optimal condition under which CNI-CGAN is guaranteed to converge to the clean data distribution. The proof is provided in Appendix A. Theorem 1. (Optimal Condition of CNI-CGAN) Let $P^g$ be a probabilistic transition matrix where $P^g_{ij} = P(O(x_g) = j \mid y = i)$ indicates the probability that a sample $x_g$ generated by $G$ from initial label $i$ has oracle label $j$. We assume that the conditional sample spaces of the classes are pairwise disjoint. Then:

(1) $P^g$ is a permutation matrix if the generator $G$ in CNI-CGAN is optimal, with the permutation, compared with an identity matrix, only happening on rows $\mathbf{r}$ whose corresponding priors $\pi_r$, $r \in \mathbf{r}$, are equal.

(2) If $P^g$ is an identity matrix and the generator $G$ in CNI-CGAN is optimal, then $p_r(x, y) = p_g(x, y)$, where $p_r(x, y)$ and $p_g(x, y)$ are the real and generated joint distributions, respectively.

Briefly speaking, CNI-CGAN can learn the clean data distribution if $P^g$ is an identity matrix. More importantly, the method described so far already guarantees that $P^g$ is a permutation matrix, which is very close to an identity one; an additional constraint is needed only because the permutation can occur when identical class-prior probabilities exist.

The Auxiliary Loss The optimal $G$ in CNI-CGAN can only guarantee that $p_g(x, y)$ is close to $p_r(x, y)$, since the optimal permutation matrix $P^g$ is close to the identity matrix. Hence, in practice, to ensure that we exactly learn an identity matrix for $P^g$ and thus achieve the clean data distribution, we introduce an auxiliary loss that encourages a larger trace of $P^g$, i.e., $\sum_{i=1}^{K+1} P(O(x_g) = i \mid y = i)$. As $O(\cdot)$ is intractable, we approximate it by the current PU classifier $PU_\theta(x_g)$. We then obtain the auxiliary loss $\ell_{aux}$:
$$\ell_{aux}(z, y) = \max\left\{\kappa - \frac{1}{K+1}\sum_{i=1}^{K+1}\mathbb{E}_z\big[\mathbb{I}_{\{PU_\theta(x_g) = i \mid y = i\}}\big],\; 0\right\}, \qquad (7)$$
where $\kappa \in (0, 1)$ is a hyper-parameter. With the support of the auxiliary loss, $P^g$ tends to converge to the identity matrix, so that CNI-CGAN can learn the clean data distribution even in the presence of noisy labels.
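The EMA update of Eq. 6 and the auxiliary loss of Eq. 7 can be sketched as follows, assuming hypothetical callables `G` (the generator) and `pu` (the PU classifier returning logits); this is our own illustration, not the released code. Note that the hard indicator in Eq. 7 has zero gradient with respect to $G$, so the sketch relaxes it to the softmax probability of the fed class.

```python
import torch

@torch.no_grad()
def confusion_increment(pu, G, z, y, num_classes):
    """Delta C on one mini-batch via Eq. 5: rows indexed by the label y fed to G."""
    y_hat = pu(G(z, y)).argmax(dim=1)  # noisy predictions PU_theta(x_g)
    dC = torch.zeros(num_classes, num_classes)
    for i in range(num_classes):
        mask = (y == i)
        if mask.any():
            dC[i] = torch.bincount(y_hat[mask], minlength=num_classes).float() / mask.sum()
    return dC

def ema_update(C, dC, lam=0.99):
    # Eq. 6: C^(l+1) = lam * C^(l) + (1 - lam) * Delta C
    return lam * C + (1.0 - lam) * dC

def aux_loss(pu, G, z, y, kappa=0.75):
    """Eq. 7 with the indicator relaxed to a differentiable softmax probability."""
    probs = torch.softmax(pu(G(z, y)), dim=1)
    diag_mass = probs.gather(1, y.unsqueeze(1)).mean()  # average diagonal mass of P^g
    return torch.clamp(kappa - diag_mass, min=0.0)
```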
Comparison with RCGAN (Thekumparampil et al., 2018; Kaneko et al., 2019) The theoretical property of CNI-CGAN has a major advantage over the existing Robust CGAN (RCGAN) (Thekumparampil et al., 2018; Kaneko et al., 2019), for which the optimal condition can only be achieved when the label confusion matrix is known a priori. Although heuristics such as RCGAN-U (Thekumparampil et al., 2018) can be employed to handle the unknown-label-noise setting, these approaches still lack a theoretical guarantee of convergence to the clean data distribution.

To guarantee the efficacy of our approach, one implicit and mild assumption is that our PU classifier will not overfit on the training data; our non-negative estimator helps to ensure this, as explained in Section 2.1. To further clarify the optimization process of CNI-CGAN, we elaborate the training steps of $D$ and $G$, respectively.

D-Step: We train $D$ on an adversarial loss from both the real data and the generated pairs $(x_g, \tilde{y})$, where $\tilde{y}$ is corrupted by $\tilde{C}$, and $\tilde{C}_y$ denotes the $y$-th row of $\tilde{C}$. We formulate the loss of $D$ as:
$$\max_{D \in \mathcal{F}} \; \mathbb{E}_{x \sim p(x)}\big[\phi(D(x, PU_\theta(x)))\big] + \mathbb{E}_{z \sim P_Z,\, y \sim P_Y,\, \tilde{y}\mid y \sim \tilde{C}_y}\big[\phi(1 - D(G(z, y), \tilde{y}))\big], \qquad (8)$$
where $\mathcal{F}$ is a family of discriminators and $P_Z$ is the distribution of the latent vector $z$, e.g., a Normal distribution. $P_Y$ is a discrete uniform distribution on $[K+1]$ and $\phi$ is the measuring function.

G-Step: We train $G$ additionally on the auxiliary loss $\ell_{aux}(z, y)$ as follows:
$$\min_{G \in \mathcal{G}} \; \mathbb{E}_{z \sim P_Z,\, y \sim P_Y,\, \tilde{y}\mid y \sim \tilde{C}_y}\big[\phi(1 - D(G(z, y), \tilde{y})) + \beta\,\ell_{aux}(z, y)\big], \qquad (9)$$
where $\beta$ controls the strength of the auxiliary loss and $\mathcal{G}$ is a family of generators. In summary, our CNI-CGAN conducts $(K+1)$-class generation, which can be further leveraged to benefit the $(K+1)$-class PU classification via data augmentation.

Algorithm 1 Alternating Minimization for PU Learning and Classifier-Noise-Invariant Generation.
Input: Training data $(X_p, X_u)$. Batch size $M$ and hyper-parameters $\beta > 0$, $\lambda, \kappa \in (0, 1)$. $L_0, L \in \mathbb{N}^+$. Initialize $\tilde{C}^{(1)}$ as the identity matrix. Number of batches $N$ during the training.
Output: Model parameters for the generator $G$, and $\theta$ for the PU classifier $f_\theta$.
1: / * Pre-train PU classifier $f_\theta$ * /
2: for $i = 1$ to $N$ do
3:   Update $f_\theta$ by descending its stochastic gradient of $\tilde{R}_{pu}(f_\theta; \mathcal{X}^i)$ via Eq. 4.
4: end for
5: repeat
6:   / * Update CNI-CGAN * /
7:   for $l = 1$ to $L$ do
8:     Sample $\{z_1, \ldots, z_M\}$, $\{y_1, \ldots, y_M\}$ and $\{x_1, \ldots, x_M\}$ from $P_Z$, $P_Y$ and all training data, respectively, and then sample $\{\tilde{y}_1, \ldots, \tilde{y}_M\}$ through the current $\tilde{C}^{(l)}$. Then, update the discriminator $D$ by ascending its stochastic gradient of
$$\frac{1}{M}\sum_{i=1}^{M}\big[\phi(D(x_i, PU_\theta(x_i))) + \phi(1 - D(G(z_i, y_i), \tilde{y}_i))\big].$$
9:     Sample $\{z_1, \ldots, z_M\}$ and $\{y_1, \ldots, y_M\}$ from $P_Z$ and $P_Y$, and then sample $\{\tilde{y}_1, \ldots, \tilde{y}_M\}$ through the current $\tilde{C}^{(l)}$. Update the generator $G$ by descending its stochastic gradient of
$$\frac{1}{M}\sum_{i=1}^{M}\big[\phi(1 - D(G(z_i, y_i), \tilde{y}_i)) + \beta\,\ell_{aux}(y_i, z_i)\big].$$
10:    if $l \geq L_0$ then
11:      Compute $\Delta\tilde{C}_{X_l} = \frac{1}{M}\sum_{i=1}^{M}\mathbb{I}_{\{PU_\theta(G(z_i, y_i)) \mid y_i\}}$ via Eq. 5, and then estimate $\tilde{C}$ by $\tilde{C}^{(l+1)} = \lambda\tilde{C}^{(l)} + (1-\lambda)\Delta\tilde{C}_{X_l}$.
12:    end if
13:  end for
14:  / * Update PU classifier via Data Augmentation * /
15:  Sample $\{z_1, \ldots, z_M\}$ and $\{y_1, \ldots, y_M\}$ from $P_Z$ and $P_Y$, respectively, and then update the PU classifier $f_\theta$ by descending its stochastic gradient of
$$\frac{1}{M}\sum_{i=1}^{M}\ell_{CE}\big(f_\theta(G(z_i, y_i)), y_i\big).$$
16: until convergence" }, { "heading": "3 ALGORITHM", "text": "Firstly, we obtain a PU classifier $f_\theta$ trained on the multi-positive and unlabeled dataset with the parallel non-negative estimator derived in Section 2.1. Then we train our CNI-CGAN, described in Section 2.2, on all data with pseudo labels predicted by the pre-trained PU classifier. As our CNI-CGAN is robust to noisy labels, we leverage the data generated by CNI-CGAN for data augmentation to improve the PU classifier. Finally, we implement the joint optimization of the CNI-CGAN training and the data augmentation of the PU classifier. We summarize the procedure in Algorithm 1 and provide more details in Appendix C.
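The inner loop of Algorithm 1 then amounts to the following skeleton, given as a hedged sketch: the helpers `sample_latent`, `sample_labels`, `sample_real`, `sample_noisy`, `adv_loss_d` and `adv_loss_g` are hypothetical placeholders (as are the optimizers), while `confusion_increment`, `ema_update` and `aux_loss` refer to the sketches given earlier.

```python
import torch.nn.functional as F

for l in range(L):                                   # Algorithm 1, lines 7-13
    z, y, x_r = sample_latent(M), sample_labels(M), sample_real(M)
    y_tilde = sample_noisy(C, y)                     # y~ | y drawn from row C_y

    # D-step (Eq. 8): real pairs (x_r, PU(x_r)) vs. fake pairs (G(z, y), y~)
    d_loss = adv_loss_d(D, x_r, pu(x_r).argmax(1), G(z, y).detach(), y_tilde)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # G-step (Eq. 9): adversarial term plus the beta-weighted auxiliary loss
    g_loss = adv_loss_g(D, G(z, y), y_tilde) + beta * aux_loss(pu, G, z, y, kappa)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    if l >= L0:                                      # Algorithm 1, lines 10-12
        C = ema_update(C, confusion_increment(pu, G, z, y, K + 1), lam)

# Data augmentation for the PU classifier (Algorithm 1, line 15)
z, y = sample_latent(M), sample_labels(M)
pu_loss = F.cross_entropy(pu(G(z, y).detach()), y)
opt_pu.zero_grad(); pu_loss.backward(); opt_pu.step()
```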
Computational Cost Analysis In the implementation of our CNI-CGAN, we need to additionally estimate $\tilde{C}$, a $(K+1) \times (K+1)$ matrix. The computational cost of this small matrix is negligible compared with the updates of the discriminator and generator networks, although the estimation of $\tilde{C}$ is crucial.

Simultaneous Improvement on PU Learning and Generation with Extra Data From the perspective of PU classification, due to the theoretical guarantee of Theorem 1, CNI-CGAN is capable of learning a clean data distribution out of the noisy pseudo labels predicted by the pre-trained PU classifier. Hence, the subsequent data augmentation has the potential to improve the generalization of PU classification regardless of the specific form of the PU estimator. From the perspective of generation with extra data, the labels predicted on unlabeled data by the PU classifier provide CNI-CGAN with more supervised signals, further improving the quality of generation. Due to the joint optimization, both the PU classifier and the conditional generative model are able to improve each other reciprocally, as demonstrated in the following experiments." }, { "heading": "4 EXPERIMENT", "text": "Experimental Setup We evaluate our approach and several baselines on MNIST, Fashion-MNIST and CIFAR-10. We select the first 5 classes of MNIST and 5 non-clothes classes of Fashion-MNIST, respectively, for $K+1$ classification ($K = 5$). To verify the consistent effectiveness of our method in the standard binary PU setting, we pick the 4 categories of transportation tools in CIFAR-10 as the one-class positive dataset. As for the baselines, the first is CGAN-P, where a vanilla CGAN (Mirza & Osindero, 2014) is trained only on the limited positive data. Another natural baseline is CGAN-A, where a vanilla CGAN is trained on all data with labels given by the PU classifier.

The last baseline is RCGAN-U (Thekumparampil et al., 2018), where the confusion matrix is fully learnable during training. For fair comparisons, we choose the same GAN architecture for all methods. Through a line search over hyper-parameters, we choose $\kappa = 0.75$, $\beta = 5.0$ and $\lambda = 0.99$ across all the datasets, and we set $L_0 = 5$ in Algorithm 1. More details about hyper-parameters can be found in Appendix D.

Evaluation Metrics For MNIST and Fashion-MNIST, we mainly use Generator Label Accuracy (Thekumparampil et al., 2018) and PU Accuracy to evaluate the quality of the generated images. Generator Label Accuracy compares the label $y$ specified to the CGAN with the true class of the generated examples, as judged by a pre-trained (almost) oracle classifier $f$. In the experiments, we pre-trained two $K+1$-class classifiers with 99.28% and 98.23% accuracy on the two datasets, respectively. Additionally, the increase in PU Accuracy measures the closeness between the generated data distribution and the test (almost real) data distribution for the PU classification, serving as a key indicator of the quality of the generated images. For CIFAR-10, we use both the Inception Score (Salimans et al., 2016) to evaluate the quality of the generated samples, and the increase in PU Accuracy to quantify the improvement the generated samples bring to the PU classification." }, { "heading": "4.1 GENERATION AND CLASSIFICATION PERFORMANCE", "text": "We set the whole training dataset as the unlabeled data and select a certain amount of positive data according to the Positive Rate.
Figure 2 presents the trends of Generator Label Accuracy, Inception Score and PU Accuracy as the Positive Rate increases. It turns out that CNI-CGAN outperforms CGAN-P and CGAN-A consistently, especially when the Positive Rate is small, i.e., when little positive data is available. Remarkably, our approach greatly enhances the PU accuracy when exposed to low positive rates, while CGAN-A sometimes even worsens the original PU classifier in this scenario due to the large amount of label noise produced by a less accurate PU classifier. Meanwhile, when more supervised positive data are given, the PU classifier generalizes better and provides more accurate labels, leading in turn to more consistent and better performance for all methods. Besides, note that even though CGAN-P achieves comparable generator label accuracy on MNIST, it results in a lower Inception Score; we demonstrate this in Appendix D.

To verify the advantage of the theoretical property of our CNI-CGAN, we further compare it with RCGAN-U (Thekumparampil et al., 2018; Kaneko et al., 2019), the heuristic version of robust generation against unknown label noise that lacks a theoretical guarantee on the optimal condition. As observed in Table 1, our method outperforms RCGAN-U especially when the positive rate is low. When the amount of positive labeled data is relatively large, e.g., 10.0%, both our approach and RCGAN-U obtain comparable performance.

Visualization To further demonstrate the superiority of CNI-CGAN compared with the other baselines, we present some generated images within the $K+1$ classes from CGAN-A, RCGAN-U and CNI-CGAN on MNIST, and high-quality images from CNI-CGAN on Fashion-MNIST and CIFAR-10, in Figure 3. In particular, we choose the positive rate as 0.2% on MNIST, yielding an initial PU classifier with 69.14% accuracy. Given the noisy labels on all data, our CNI-CGAN can generate visually more accurate images of each class compared with CGAN-A and RCGAN-U. Results on Fashion-MNIST and the comparison with CGAN-P on CIFAR-10 can be found in Appendix E." }, { "heading": "4.2 ROBUSTNESS OF OUR APPROACH", "text": "Robustness against the Initial PU accuracy The auxiliary loss helps CNI-CGAN learn the clean data distribution regardless of the initial accuracy of the PU classifier. To verify this, we select distinct positive rates, yielding pre-trained PU classifiers with different initial accuracies, and then run our method on top of these classifiers. Figure 4 suggests that our approach still attains similar generation quality under different initial PU accuracies after sufficient training, although a better initial PU accuracy can be beneficial to the generation performance in the early phase.

Robustness against the Unlabeled data In real scenarios, we are likely to have little knowledge about the extra data at hand. To further verify the robustness of CNI-CGAN against the unknown distribution of the extra data, we test the different approaches across different amounts and distributions of the unlabeled data. In particular, we consider two different types of distributions for the unlabeled data: type 1 is $[\frac{1}{K+1}, \ldots, \frac{1}{K+1}, \frac{1}{K+1}]$, where the number of data in each class, including the negative one, is even, while type 2 is $[\frac{1}{2K}, \ldots, \frac{1}{2K}, \frac{1}{2}]$, where the negative data make up half of all unlabeled data. In the experiments, we focus on PU Accuracy to evaluate both the generation quality and the improvement of PU learning.
For MNIST, we choose 1% and 0.5% as the positive rates for the two settings, while we opt for 0.5% and 0.2% on both Fashion-MNIST and CIFAR-10.

Figure 5 shows that the accuracy of the PU classifier exhibits a slight ascending tendency as the number of unlabeled data increases. More importantly, our CNI-CGAN almost consistently outperforms the other baselines across different amounts of unlabeled data as well as distinct distributions of unlabeled data. This suggests that the robustness of our proposal to the distribution of extra data can be maintained. We leave the investigation of robustness against more imbalanced situations as future work." }, { "heading": "5 RELATED WORKS", "text": "Positive-Unlabeled (PU) Learning. Positive and Unlabeled (PU) Learning is the setting where a learner has access only to positive examples and unlabeled data (Bekker & Davis, 2020; Kiryo et al., 2017). One related work (Hou et al., 2018) employed GANs (Goodfellow et al., 2014) to recover both the positive and negative data distributions to step away from overfitting. Kato et al. (Kato et al., 2018) focused on remedying the selection bias in PU learning. Besides, Multi-Positive and Unlabeled Learning (Xu et al., 2017) extended the binary PU setting to the multi-class version, therefore adapting to more practical applications. By contrast, our multi-positive unlabeled method absorbs the advantages of previous approaches, and in the meanwhile intuitively extends them to fit differentiable deep neural network optimization.

Conditional GANs on Few Labels Data. To attain high-quality images with both fidelity and diversity, the training of generative models requires a large dataset. To reduce the need for huge amounts of data, the vast majority of methods (Noguchi & Harada, 2019; Yamaguchi et al., 2019; Zhao et al., 2020) attempted to transfer the prior knowledge of a pre-trained generator. Another branch (Lucic et al., 2019) leverages self- and semi-supervised learning to add pseudo labels to the in-distribution unlabeled data in order to expand the labeled dataset. Compared with this approach, our strategy can be viewed as automatically “picking” useful in-distribution data from entirely unknown unlabeled data via the PU learning framework, and then constructing robust conditional GANs that generate the clean data distribution out of the predicted label noise. Please refer to more related works in Appendix B." }, { "heading": "6 DISCUSSION AND CONCLUSION", "text": "In this paper, we proposed a new method, CNI-CGAN, to jointly exploit PU classification and conditional generation. It is, to the best of our knowledge, the first method of its kind to break the ceiling of class-label scarcity by combining two promising yet separate methodologies to gain massive mutual improvements. CNI-CGAN can learn the clean data distribution from noisy labels given by a PU classifier, and then enhance the performance of PU classification through data augmentation in various settings. We have demonstrated, both theoretically and experimentally, the superiority of our proposal on diverse benchmark datasets in an exhaustive and comprehensive manner. In the future, it will be promising to investigate learning strategies on imbalanced data, e.g., cost-sensitive learning (Elkan, 2001), to extend our approach to broader settings, which will further cater to real-world scenarios where highly unbalanced data are commonly available. In addition, leveraging soft labels in the design of CNI-CGAN is also promising.

Ethics Statement.
Our designed CNI-CGAN framework interplays between PU classification and robust generation, which can mitigate the scarcity of class-labeled data. Leveraging extra data may raise privacy concerns, as privacy issues persist in generative models. Thus, a privacy-guaranteed version of our algorithm could be proposed in the future to handle this potential issue.

Reproducibility Statement. For the theoretical part, we clearly state the related assumptions and the detailed proof in Appendix A. In terms of the algorithm, our implementation is directly adapted from public implementations of generative models and PU learning." }, { "heading": "A APPENDIX: PROOF OF THEOREM 1", "text": "Firstly, we recall some definitions. Denote $x_r$, $x_g$ as the real training and generated samples, respectively, $x$ as the population of all data, with $x_r$ sampled from $p(x)$. $y_g$ represents the initial label fed to the generator $G$, while $\tilde{y}$ indicates the label perturbed from $y_g$ by $\tilde{C}$. The class prior $\pi_i$ satisfies $\pi_i = P(y_g = i) = P(O(x_r) = i)$. For a rigorous proof of Theorem 1, we restate it in the appendix.

Theorem 1 We assume that the following three mild assumptions hold: (a) the PU classifier does not overfit on the training data, (b) $P(PU_\theta(x_g) \mid O(x_g), y_g) = P(PU_\theta(x_g) \mid O(x_g))$, (c) the conditional sample spaces of the classes are pairwise disjoint. Then,

(1) $P^g$ is a permutation matrix if the generator $G$ in CNI-CGAN is optimal, with the permutation, compared with an identity matrix, only happening on rows $\mathbf{r}$ whose corresponding priors $\pi_r$, $r \in \mathbf{r}$, are equal.

(2) If $P^g$ is an identity matrix and the generator $G$ in CNI-CGAN is optimal, then $p_r(x, y) = p_g(x, y)$, where $p_r(x, y)$ and $p_g(x, y)$ are the real and generated joint distributions, respectively.

A.1 PROOF OF (1)

Proof. In the general setting, the oracle class of $x_g$ given label $y_g$ is not necessarily equal to $PU_\theta(x_g)$. Thus, we consider the oracle class of $x_g$, i.e., $O(x_g)$, in the proof.

Optimal G. In CNI-CGAN, $G$ is optimal if and only if
$$p_r(x_r, PU_\theta(x_r)) = p_g(x_g, \tilde{y}). \qquad (10)$$
The equivalence of the joint distributions further implies the equivalence of the marginals, i.e., $p_r(x_r) = p_g(x_g)$. We define a probability matrix $C$ with $C_{ij} = P(PU_\theta(x) = j \mid O(x) = i)$, where $x$ are the population data. According to (c), we can apply $O(\cdot)$ to both $x_r$ and $x_g$ in Eq. 10. Then we have:
$$\begin{aligned} P(O(x_r) = i,\, PU_\theta(x_r) = j) &\overset{(c)}{=} P(O(x_g) = i,\, \tilde{y} = j) \\ P(O(x_r) = i)\,P(PU_\theta(x_r) = j \mid O(x_r) = i) &= \sum_{k=1}^{K+1} P(y_g = k,\, O(x_g) = i)\,P(\tilde{y} = j \mid y_g = k,\, O(x_g) = i) \\ \pi_i C_{ij} &\overset{(a)}{=} \sum_{k=1}^{K+1} P(O(x_g) = i \mid y_g = k)\,P(y_g = k)\,P(\tilde{y} = j \mid y_g = k) \\ \pi_i C_{ij} &= \sum_{k=1}^{K+1} P^{g\top}_{ik}\,\pi_k\,\tilde{C}_{kj}, \end{aligned} \qquad (11)$$
where assumption (a) indicates that $PU_\theta(x_r)$ is close to $PU_\theta(x)$, so that $P(PU_\theta(x_r) = j \mid O(x_r) = i) = P(PU_\theta(x) = j \mid O(x) = i)$. The corresponding matrix form then follows as
$$\Pi C = P^{g\top}\Pi\tilde{C}. \qquad (12)$$

Definition. According to the definition of $\tilde{C}$ and the Law of Total Probability, we have:
$$\begin{aligned} P(y_g = i)\,P(PU_\theta(x_g) = j \mid y_g = i) &= \pi_i \sum_{k=1}^{K+1} P(O(x_g) = k \mid y_g = i)\,P(PU_\theta(x_g) = j \mid O(x_g) = k,\, y_g = i) \\ \pi_i \tilde{C}_{ij} &\overset{(b)}{=} \pi_i \sum_{k=1}^{K+1} P^g_{ik}\,P(PU_\theta(x_g) = j \mid O(x_g) = k) \\ \pi_i \tilde{C}_{ij} &= \pi_i \sum_{k=1}^{K+1} P^g_{ik} C_{kj}, \end{aligned} \qquad (13)$$
where the last equation holds because $p(x_g)$ is close to $p(x)$ when $G$ is optimal, and thus $P(PU_\theta(x_g) = j \mid O(x_g) = k) = P(PU_\theta(x) = j \mid O(x) = k)$. The corresponding matrix form is
$$\Pi\tilde{C} = \Pi P^g C, \qquad (14)$$
where $\Pi$ is the diagonal matrix of the prior vector $\pi$. Combining Eq. 14 and Eq. 12, we obtain $P^{g\top}\Pi P^g = \Pi$, which indicates that $P^g$ is a generalized orthogonal matrix.
In addition, the elements of $P^g$ are non-negative and each row sums to 1. Therefore, $P^g$ is a permutation matrix, with the permutation, compared with the identity matrix, only happening on rows $\mathbf{r}$ whose corresponding priors $\pi_r$, $r \in \mathbf{r}$, are equal. In particular, if all $\pi_i$ are distinct, no permutation can occur, indicating that the optimal condition forces $P^g$ to be the identity matrix.

A.2 PROOF OF (2)

We additionally denote $y_r$ as the real label of the real sample $x_r$, i.e., $y_r = O(x_r)$. According to the optimal condition of $G$ in Eq. 10, we have $p_r(x_r) = p_g(x_g)$. Since $P^g$ is an identity matrix, $O(x_g) = y_g$ a.e. Thus, $p_g(x_g \mid y_g = i) = p_g(x_g \mid O(x_g) = i)$, $\forall i = 1, \ldots, K+1$. According to assumption (c) and Eq. 10, we have $p_r(x_r \mid O(x_r) = i) = p_g(x_g \mid O(x_g) = i)$. In addition, we know that $p_r(x_r \mid O(x_r) = i) = p_r(x_r \mid y_r = i)$; thus $p_r(x_r \mid y_r = i) = p_g(x_g \mid y_g = i)$. Further, we consider the identical class prior $\pi_i$. Finally, we have
$$\begin{aligned} p_r(x_r \mid y_r = i)\,\pi_i &= p_g(x_g \mid y_g = i)\,\pi_i \\ p_r(x_r \mid y_r = i)\,p(O(x_r) = i) &= p_g(x_g \mid y_g = i)\,p(y_g = i) \\ p_r(x_r \mid y_r = i)\,p(y_r = i) &= p_g(x_g \mid y_g = i)\,p(y_g = i) \\ p_r(x_r, y_r) &= p_g(x_g, y_g). \end{aligned} \qquad (15)$$
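As a quick numerical illustration of part (1), one can verify with a few lines of NumPy that $P^{g\top}\Pi P^g = \Pi$ holds for permutations that only exchange classes with equal priors; the example below is ours and is not part of the proof.

```python
import numpy as np

# If P only permutes classes with equal priors, then P^T diag(pi) P = diag(pi);
# permuting classes with unequal priors breaks the identity.
Pi = np.diag([0.3, 0.3, 0.4])
P_ok = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])   # swaps the two 0.3-classes
P_bad = np.array([[0., 0., 1.], [0., 1., 0.], [1., 0., 0.]])  # swaps a 0.3 and the 0.4 class
print(np.allclose(P_ok.T @ Pi @ P_ok, Pi))    # True
print(np.allclose(P_bad.T @ Pi @ P_bad, Pi))  # False
```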
" }, { "heading": "B APPENDIX: MORE RELATED WORKS", "text": "Positive-Unlabeled (PU) Learning. Positive and Unlabeled (PU) Learning is the setting where a learner has access only to positive examples and unlabeled data (Bekker & Davis, 2020; Kiryo et al., 2017). One related work (Hou et al., 2018) employed GANs (Goodfellow et al., 2014) to recover both the positive and negative data distributions to step away from overfitting. Kato et al. (Kato et al., 2018) focused on remedying the selection bias in PU learning. Besides, Multi-Positive and Unlabeled Learning (Xu et al., 2017) extended the binary PU setting to the multi-class version, therefore adapting to more practical applications. By contrast, our multi-positive unlabeled method absorbs the advantages of previous approaches, and in the meanwhile intuitively extends them to fit differentiable deep neural network optimization.

Conditional GANs on Few Labels Data. To attain high-quality images with both fidelity and diversity, the training of generative models requires a large dataset. To reduce the need for huge amounts of data, the vast majority of methods (Noguchi & Harada, 2019; Yamaguchi et al., 2019; Zhao et al., 2020) attempted to transfer the prior knowledge of a pre-trained generator. Another branch (Lucic et al., 2019) leverages self- and semi-supervised learning to add pseudo labels to the in-distribution unlabeled data in order to expand the labeled dataset. Compared with this approach, our strategy can be viewed as automatically “picking” useful in-distribution data from entirely unknown unlabeled data via the PU learning framework, and then constructing robust conditional GANs that generate the clean data distribution out of the predicted label noise.

Robust GANs. Robust Conditional GANs (Thekumparampil et al., 2018; Kaneko et al., 2019) were proposed to defend against class-dependent noisy labels. The main idea of these methods is to corrupt the labels of generated samples before feeding them to the adversarial discriminator, forcing the generator to produce samples with clean labels. A supplementary investigation (Koshy Thekumparampil et al., 2019) explored the scenario where CGANs are exposed to missing or ambiguous labels, while another work (Chrysos et al., 2018) leveraged the structure of the model in the target space to address this issue. In contrast, the noise in our model stems from the prediction error of a given classifier. We employ this imperfect classifier to estimate the label confusion noise, yielding a new branch of Robust CGANs against “classifier” label noise.

Semi-Supervised Learning (SSL). One crucial issue in SSL (Miyato et al., 2018; Yu et al., 2019; Sun et al., 2019) is how to tackle the mismatch between unlabeled and labeled data. Augmented Distribution Alignment (Wang et al., 2019) was proposed to leverage adversarial training to alleviate the bias, but it focused on the empirical distribution mismatch caused by the limited number of labeled data. Further, Uncertainty Aware Self-Distillation (Yanbei Chen, 2019) was proposed to concentrate on this under-studied problem, which can guarantee the effectiveness of learning. In contrast, our approach leverages PU learning to construct the “open world” classification.

Out-Of-Distribution (OOD) Detection. OOD detection is a classical but always vibrant machine learning problem. PU learning can be used for the detection of outliers in an unlabeled dataset with knowledge only from a collection of inlier data (Hido et al., 2008; Smola et al., 2009). Another interesting and related work is Outlier Exposure (Hendrycks et al., 2018), an approach that leveraged an auxiliary dataset to enhance an anomaly detector built from limited data. This problem is similar to our generation task, the goal of which is to take better advantage of extra data, especially out-of-distribution data, to boost the generation.

Learning from Noisy Labels. Rotational-Decoupling Consistency Regularization (RDCR) (Tsung Wei Tsai, 2019) was designed to integrate consistency-based methods with the self-supervised rotation task to learn noise-tolerant representations. Mutual Mean-Teaching (Ge et al., 2020) was proposed to refine soft labels on the person re-identification task by averaging the parameters of two neural networks. In addition, data with noisy labels can also be viewed as bad data. Another work (Guo et al., 2019) provided a worst-case learning formulation from bad data, and designed a data-generation scheme in an adversarial manner, augmenting data to improve the current classifier." }, { "heading": "C APPENDIX: DETAILS ABOUT ALGORITHM 1", "text": "Similar to (Kiryo et al., 2017), we utilize the sigmoid loss $\ell_{sig}(t, y) = 1/(1 + \exp(ty))$ in the implementation of PU learning. Besides, we denote $r_i = \hat{R}_u^-(g; \mathcal{X}_u^i) - \pi_p \hat{R}_p^-(g; \mathcal{X}_p^i)$ in the $i$-th mini-batch. Following the algorithm in (Kiryo et al., 2017), if $r_i < 0$ we instead optimize $-\nabla_\theta r_i$ in order to make the classifier less prone to overfitting on this mini-batch, which is slightly different from Eq. 4.
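A sketch of this gradient correction (our own illustration of the mini-batch rule above; `g`, `x_p`, `x_u`, `pi_p` and the optimizer are assumed to be defined elsewhere):

```python
import torch

r_p_plus = torch.sigmoid(-g(x_p)).mean()   # positives treated as positive
r_p_minus = torch.sigmoid(g(x_p)).mean()   # positives treated as negative
r_u_minus = torch.sigmoid(g(x_u)).mean()   # unlabeled treated as negative
r_i = r_u_minus - pi_p * r_p_minus
# When r_i < 0 the classifier is overfitting this mini-batch: descend on -r_i instead
loss = pi_p * r_p_plus + r_i if r_i.item() >= 0 else -r_i
optimizer.zero_grad(); loss.backward(); optimizer.step()
```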
" }, { "heading": "D APPENDIX: DETAILS ABOUT EXPERIMENTS", "text": "PU classifier and GAN architecture For the PU classifier, we employ 6 convolutional layers with different numbers of filters on MNIST, Fashion-MNIST and CIFAR-10, respectively. For the GAN architecture, we leverage the generator and discriminator architectures of the traditional conditional GAN (Mirza & Osindero, 2014). To guarantee the convergence of RCGAN-U, we replace Batch Normalization with Instance Normalization. The latent space dimensions of the generator are 128, 128 and 256 for the three datasets, respectively. As for the optimization of the GAN, we adopt the same approach as WGAN-GP (Gulrajani et al., 2017) to pursue desirable generation quality. Specifically, we set the update step of the discriminator to 1.

Choice of Hyper-parameters We choose $\kappa = 0.75$, $\beta = 5.0$ and $\lambda = 0.99$ across all the approaches. The learning rates of the PU classifier and the CGAN are 0.001 and 0.0001, respectively. In the alternating minimization process, we set the update step of the PU classifier to 1 after updating the CGAN, and $L_0 = 5$ in Algorithm 1. We used the same and sufficient number of epochs for all settings (180 epochs for the joint optimization) to guarantee convergence as well as fair comparisons.

Further Evaluation of CGAN-P and Ours from the Aspect of Inception Score To better verify that our approach generates more pleasant images than CGAN-P, we additionally compare the Inception Scores the two methods attain. Specifically, we trained (almost) perfect classifiers with 99.21% and 91.33% accuracy for MNIST and Fashion-MNIST, respectively. Then we generated 50,000 samples from each approach to compute the Inception Score, with the results shown in Table 2. It turns out that our method attains a consistent superiority over CGAN-P on the Inception Score for MNIST, even though the generator label accuracies of the two approaches are comparable. Note that the two methods obtain similar Inception Scores on Fashion-MNIST, but our strategy outperforms CGAN-P significantly from the perspective of generator label accuracy. Overall, we can claim that our method is better than CGAN-P." }, { "heading": "E APPENDIX: MORE IMAGES", "text": "We additionally show some images generated by the baselines and CNI-CGAN on the other datasets in Figure 6. Note that we highlight erroneously generated images with red boxes. Specifically, on Fashion-MNIST our approach generates images with more accurate labels compared with CGAN-A and RCGAN-U. Additionally, the quality of the images generated by our approach is much better than that of CGAN-P, which only leverages the limited supervised data, as shown in Figure 7 on CIFAR-10." } ]
2021
null
SP:d262c708016b776be1799df31f4b052c107c2b5b
[ "The paper argues for using differentiable simulators for policy optimization. To avoid back propagation through time, the paper splits the policy optimization problem into two steps: i) find improved action sequence for a set of initial conditions, ii) fit a parametric policy to the set of improved action sequences. First and second order methods are presented and evaluated on a version of payload-on-crane stabilization problem." ]
Current reinforcement learning (RL) methods use simulation models as simple black-box oracles. In this paper, with the goal of improving the performance exhibited by RL algorithms, we explore a systematic way of leveraging the additional information provided by an emerging class of differentiable simulators. Building on concepts established by Deterministic Policy Gradients (DPG) methods, the neural network policies learned with our approach represent deterministic actions. In a departure from standard methodologies, however, learning these policies does not hinge on approximations of the value function that must be learned concurrently in an actor-critic fashion. Instead, we exploit differentiable simulators to directly compute the analytic gradient of a policy's value function with respect to the actions it outputs. This, in turn, allows us to efficiently perform locally optimal policy improvement iterations. Compared against other state-of-the-art RL methods, we show that with minimal hyper-parameter tuning our approach consistently leads to better asymptotic behavior across a set of payload manipulation tasks that demand a high degree of accuracy and precision.
[]
[ { "authors": [ "Peter Anderson", "Angel Chang", "Devendra Singh Chaplot", "Alexey Dosovitskiy", "Saurabh Gupta", "Vladlen Koltun", "Jana Kosecka", "Jitendra Malik", "Roozbeh Mottaghi", "Manolis Savva" ], "title": "On evaluation of embodied navigation agents", "venue": "arXiv preprint arXiv:1807.06757,", "year": 2018 }, { "authors": [ "Y. Bengio", "P. Simard", "P. Frasconi" ], "title": "Learning long-term dependencies with gradient descent is difficult", "venue": "IEEE Transactions on Neural Networks,", "year": 1994 }, { "authors": [ "James Bern", "Pol Banzet", "Roi Poranne", "Stelian Coros" ], "title": "Trajectory optimization for cable-driven soft robot locomotion", "venue": "In Proc. Robot. Sci. Syst.,", "year": 2019 }, { "authors": [ "Rinu Boney", "Norman Di Palo", "Mathias Berglund", "Alexander Ilin", "Juho Kannala", "Antti Rasmus", "Harri Valpola" ], "title": "Regularizing trajectory optimization with denoising autoencoders", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Greg Brockman", "Vicki Cheung", "Ludwig Pettersson", "Jonas Schneider", "John Schulman", "Jie Tang", "Wojciech Zaremba" ], "title": "URL http://arxiv.org/ abs/1606.01540", "venue": "OpenAI Gym. CoRR,", "year": 2016 }, { "authors": [ "Ignasi Clavera", "Jonas Rothfuss", "John Schulman", "Yasuhiro Fujita", "Tamim Asfour", "Pieter Abbeel" ], "title": "Model-Based Reinforcement Learning via Meta-Policy Optimization", "venue": "Proceedings of The 2nd Conference on Robot Learning,", "year": 2018 }, { "authors": [ "Ignasi Clavera", "Yao Fu", "Pieter Abbeel" ], "title": "Model-augmented actor-critic: Backpropagating through paths", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Erwin Coumans", "Yunfei Bai" ], "title": "Pybullet, a python module for physics simulation for games, robotics and machine learning", "venue": null, "year": 2019 }, { "authors": [ "Filipe de Avila Belbute-Peres", "Kevin A. Smith", "Kelsey R. Allen", "Josh Tenenbaum", "J. Zico Kolter" ], "title": "End-to-end differentiable physics for learning and control", "venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Jonas Degrave", "Michiel Hermans", "Joni Dambre", "Francis wyffels" ], "title": "A differentiable physics engine for deep learning in robotics", "venue": "Frontiers in Neurorobotics,", "year": 2019 }, { "authors": [ "Marc Peter Deisenroth", "Carl Edward Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In Proceedings of the 28th International Conference on International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Evan Drumwright", "John Hsu", "Nathan P. Koenig", "Dylan A. Shell" ], "title": "Extending open dynamics engine for robotics simulation", "venue": "Second International Conference,", "year": 2010 }, { "authors": [ "Alexis Duburcq", "Yann Chevaleyre", "Nicolas Bredech", "Guilhem Boéris" ], "title": "Online trajectory planning through combined trajectory optimization and function approximation: Application to the exoskeleton atalante", "venue": null, "year": 1910 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing Function Approximation Error in Actor-Critic", "venue": "Methods. 
CoRR,", "year": 2018 }, { "authors": [ "Radek Grzeszczuk", "Demetri Terzopoulos", "Geoffrey Hinton" ], "title": "Neuroanimator: Fast neural network emulation and control of physics-based models", "venue": "In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques,", "year": 1998 }, { "authors": [ "Shixiang Gu", "Ethan Holly", "Timothy P. Lillicrap", "Sergey Levine" ], "title": "Deep Reinforcement Learning for Robotic Manipulation", "venue": "CoRR, abs/1610.00633,", "year": 2016 }, { "authors": [ "Tuomas Haarnoja", "Sehoon Ha", "Aurick Zhou", "Jie Tan", "George Tucker", "Sergey Levine" ], "title": "Learning to Walk Via Deep Reinforcement Learning", "venue": "In Proceedings of Robotics: Science and Systems, FreiburgimBreisgau,", "year": 2019 }, { "authors": [ "Jemin Hwangbo", "Joonho Lee", "Alexey Dosovitskiy", "Dario Bellicoso", "Vassilios Tsounis", "Vladlen Koltun", "Marco Hutter" ], "title": "Learning agile and dynamic motor skills for legged robots", "venue": "Science Robotics,", "year": 2019 }, { "authors": [ "Dmitry Kalashnikov", "Alex Irpan", "Peter Pastor", "Julian Ibarz", "Alexander Herzog", "Eric Jang", "Deirdre Quillen", "Ethan Holly", "Mrinal Kalakrishnan", "Vincent Vanhoucke", "Sergey Levine" ], "title": "QTOpt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation", "venue": "CoRR, abs/1806.10293,", "year": 2018 }, { "authors": [ "Michał Kempka", "Marek Wydmuch", "Grzegorz Runc", "Jakub Toczek", "Wojciech Jaśkowski" ], "title": "Vizdoom: A doom-based ai research platform for visual reinforcement learning", "venue": "IEEE Conference on Computational Intelligence and Games (CIG),", "year": 2016 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Jens Kober", "J. Andrew Bagnell", "Jan Peters" ], "title": "Reinforcement learning in robotics: A survey", "venue": "The International Journal of Robotics Research,", "year": 2013 }, { "authors": [ "Thanard Kurutach", "Ignasi Clavera", "Yan Duan", "Aviv Tamar", "Pieter Abbeel" ], "title": "Model-Ensemble Trust-Region Policy Optimization", "venue": "CoRR, abs/1802.10592,", "year": 2018 }, { "authors": [ "Sergey Levine", "Vladlen Koltun" ], "title": "Guided policy search", "venue": "In International Conference on Machine Learning, pp", "year": 2013 }, { "authors": [ "Sergey Levine", "Vladlen Koltun" ], "title": "Variational policy search via trajectory optimization", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Yuxi Li" ], "title": "Deep Reinforcement Learning", "venue": "CoRR, abs/1810.06339,", "year": 2018 }, { "authors": [ "Junbang Liang", "Ming C. Lin", "Vladlen Koltun" ], "title": "Differentiable cloth simulation for inverse problems", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. 
Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Piotr Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andrew J Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu" ], "title": "Learning to navigate in complex environments", "venue": "arXiv preprint arXiv:1611.03673,", "year": 2016 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous Methods for Deep Reinforcement Learning", "venue": "Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Igor Mordatch", "Emo Todorov" ], "title": "Combining the benefits of function approximation and trajectory optimization", "venue": null, "year": 2015 }, { "authors": [ "Anusha Nagabandi", "Gregory Kahn", "Ronald S. Fearing", "Sergey Levine" ], "title": "Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning", "venue": "CoRR, abs/1708.02596,", "year": 2017 }, { "authors": [ "Anusha Nagabandi", "Kurt Konoglie", "Sergey Levine", "Vikash Kumar" ], "title": "Deep Dynamics Models for Learning Dexterous Manipulation", "venue": "In Conference on Robot Learning (CoRL),", "year": 2019 }, { "authors": [ "OpenAI", "Marcin Andrychowicz", "Bowen Baker", "Maciek Chociej", "Rafal Józefowicz", "Bob McGrew", "Jakub W. Pachocki", "Jakub Pachocki", "Arthur Petron", "Matthias Plappert", "Glenn Powell", "Alex Ray", "Jonas Schneider", "Szymon Sidor", "Josh Tobin", "Peter Welinder", "Lilian Weng", "Wojciech Zaremba" ], "title": "Learning Dexterous In-Hand Manipulation", "venue": "CoRR, abs/1808.00177,", "year": 2018 }, { "authors": [ "Paavo Parmas" ], "title": "Total stochastic gradient algorithms and applications in reinforcement learning", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Xue Bin Peng", "Aviral Kumar", "Grace Zhang", "Sergey Levine" ], "title": "Advantage-weighted regression: Simple and scalable off-policy reinforcement learning", "venue": null, "year": 1910 }, { "authors": [ "Stephane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning", "venue": "Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics,", "year": 2011 }, { "authors": [ "Connor Schenck", "Dieter Fox" ], "title": "Guided policy search with delayed sensor measurements", "venue": "arXiv preprint arXiv:1609.03076,", "year": 2016 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust Region Policy Optimization", "venue": "Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael I. 
Jordan", "Pieter Abbeel" ], "title": "HighDimensional Continuous Control Using Generalized Advantage Estimation", "venue": "In 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Jie Tan", "Tingnan Zhang", "Erwin Coumans", "Atil Iscen", "Yunfei Bai", "Danijar Hafner", "Steven Bohez", "Vincent Vanhoucke" ], "title": "Sim-to-real: Learning agile locomotion for quadruped robots", "venue": "arXiv preprint arXiv:1804.10332,", "year": 2018 }, { "authors": [ "Yuval Tassa", "Yotam Doron", "Alistair Muldal", "Tom Erez", "Yazhe Li", "Diego de Las Casas", "David Budden", "Abbas Abdolmaleki", "Josh Merel", "Andrew Lefrancq", "Timothy Lillicrap", "Martin Riedmiller" ], "title": "DeepMind Control Suite", "venue": "URL https://arxiv.org/abs/1801.00690", "year": 2018 }, { "authors": [ "Yuhui Wang", "Hao He", "Xiaoyang Tan", "Yaozhong Gan" ], "title": "Trust region-guided proximal policy optimization", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Yaofeng Desmond Zhong", "Biswadip Dey", "Amit Chakraborty" ], "title": "Symplectic ode-net: Learning hamiltonian dynamics with control", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "H. Zhu", "A. Gupta", "A. Rajeswaran", "S. Levine", "V. Kumar" ], "title": "Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low-Cost", "venue": "In 2019 International Conference on Robotics and Automation (ICRA),", "year": 2019 }, { "authors": [ "Simon Zimmermann", "Roi Poranne", "Stelian Coros" ], "title": "Optimal control via second order sensitivity analysis", "venue": "CoRR, abs/1905.08534,", "year": 2018 }, { "authors": [ "Simon Zimmermann", "Roi Poranne", "James M. Bern", "Stelian Coros" ], "title": "PuppetMaster: Robotic animation of marionettes", "venue": "ACM Trans. Graph.,", "year": 2019 }, { "authors": [ "Zimmermann" ], "title": "Under review as a conference paper at ICLR 2021 A.5 DIFFERANTIABLE SIMULATOR Following the approach", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The main goal in RL is to formalize principled algorithmic approaches to solving sequential decision-making problems. As a defining characteristic of RL methodologies, agents gain experience by acting in their environments in order to learn how to achieve specific goals. While learning directly in the real world (Haarnoja et al., 2019; Kalashnikov et al., 2018) is perhaps the holy grail in the field, this remains a fundamental challenge: RL is notoriously data hungry, and gathering real-world experience is slow, tedious and potentially unsafe. Fortunately, recent years have seen exciting progress in simulation technologies that create realistic virtual training grounds, and sim-2real efforts (Tan et al., 2018; Hwangbo et al., 2019) are beginning to produce impressive results.\nA new class of differentiable simulators (Zimmermann et al., 2019; Liang et al., 2019; de Avila Belbute-Peres et al., 2018; Degrave et al., 2019) is currently emerging. These simulators not only predict the outcome of a particular action, but they also provide derivatives that capture the way in which the outcome will change due to infinitesimal changes in the action. Rather than using simulators as simple black box oracles, we therefore ask the following question: how can the additional information provided by differentiable simulators be exploited to improve RL algorithms?\nTo provide an answer to this question, we propose a novel method to efficiently learn control policies for finite horizon problems. The policies learned with our approach use neural networks to model deterministic actions. In a departure from established methodologies, learning these policies does not hinge on learned approximations of the system dynamics or of the value function. Instead, we leverage differentiable simulators to directly compute the analytic gradient of a policy’s value function with respect to the actions it outputs for a specific set of points sampled in state space. We show how to use this gradient information to compute first and second order update rules for locally optimal policy improvement iterations. Through a simple line search procedure, the process of updating a policy avoids instabilities and guarantees monotonic improvement of its value function.\nTo evaluate the policy optimization scheme that we propose, we apply it to a set of control problems that require payloads to be manipulated via stiff or elastic cables. We have chosen to focus our attention on this class of high-precision dynamic manipulation tasks for the following reasons:\n• they are inspired by real-world applications ranging from cable-driven parallel robots and crane systems to UAV-based transportation to (Figure 1);\n• the systems we need to learn control policies for exhibit rich, highly non-linear dynamics;\n• the specific tasks we consider constitute a challenging benchmark because they require very precise sequences of actions. This is a feature that RL algorithms often struggle with, as the control policies they learn work well on average but tend to output noisy actions. 
Given that sub-optimal control signals can lead to significant oscillations in the motion of the payload, these manipulation tasks therefore make it possible to provide an easy-to-interpret comparison of the quality of the policies generated with different approaches;

• by varying the configuration of the payloads and actuation setups, we can finely control the complexity of the problem to systematically test the way in which our method scales.

Figure 1: Real-world applications that inspire the control problems we focus on in this paper

The results of our experiments confirm our theoretical derivations and show that our method consistently outperforms two state-of-the-art (SOTA) model-free RL algorithms, Proximal Policy Optimization (PPO) (Wang et al., 2019) and Soft Actor-Critic (SAC) (Haarnoja et al., 2018), as well as the model-based approach of Backpropagation Through Time (BPTT). Although our policy optimization scheme (PODS) can be interleaved within the algorithmic framework of most RL methods (e.g., by periodically updating the means of the probability distributions represented by stochastic policies), we focused our efforts on evaluating it in isolation to pinpoint the benefits it brings. This allowed us to show that with minimal hyper-parameter tuning, the second-order update rule that we derive provides an excellent balance between rapid, reliable convergence and computational complexity. In conjunction with the continued evolution of accurate differentiable simulators, our method promises to significantly improve the process of learning control policies using RL." }, { "heading": "2 RELATED WORK", "text": "Deep Reinforcement Learning. Deep RL (DRL) algorithms have been increasingly successful in tackling challenging continuous control problems in robotics (Kober et al., 2013; Li, 2018). Recent notable advances include applications in robotic locomotion (Tan et al., 2018; Haarnoja et al., 2019), manipulation (OpenAI et al., 2018; Zhu et al., 2019; Kalashnikov et al., 2018; Gu et al., 2016), and navigation (Anderson et al., 2018; Kempka et al., 2016; Mirowski et al., 2016), to mention a few. Many model-free DRL algorithms have been proposed over the years, which can be roughly divided into two classes, off-policy methods (Mnih et al., 2016; Lillicrap et al., 2016; Fujimoto et al., 2018; Haarnoja et al., 2018) and on-policy methods (Schulman et al., 2015; 2016; Wang et al., 2019), based on whether the algorithm can learn independently from how the samples were generated. Recently, model-based RL algorithms (Nagabandi et al., 2017; Kurutach et al., 2018; Clavera et al., 2018; Nagabandi et al., 2019) have emerged as a promising alternative for improving sample efficiency. Our method can be considered an on-policy algorithm, as it computes first- or second-order policy improvements given the current policy's experience.

Policy Update as Supervised Learning. Although policy gradient methods are some of the most popular approaches for optimizing a policy (Kurutach et al., 2018; Wang et al., 2019), many DRL algorithms also update the policy in a supervised learning (SL) fashion by explicitly aiming to mimic expert demonstrations (Ross et al., 2011) or optimal trajectories (Levine & Koltun, 2013a;b; Mordatch & Todorov, 2015). Optimal trajectories, in particular, can be computed using numerical methods such as iterative linear-quadratic regulators (Levine & Koltun, 2013a;b) or contact-invariant optimization (Mordatch & Todorov, 2015). The solutions they provide have the potential to improve the sample efficiency of RL methods either by guiding the learning process through meaningful samples (Levine & Koltun, 2013a) or by explicitly matching action distributions (Mordatch & Todorov, 2015). Importantly, these approaches are not only evaluated in simulation but have also been shown
The solutions they provide have the potential to improve the sample efficiency of RL methods either by guiding the learning process through meaningful samples (Levine & Koltun, 2013a) or by explicitly matching action distributions (Mordatch & Todorov, 2015). Importantly, these approaches are not only evaluated in simulation but have also been shown\nto be effective for many real-world robotic platforms, including manipulators (Schenck & Fox, 2016; Levine et al., 2016) and exoskeletons (Duburcq et al., 2019). Recently, Peng et al. (2019) proposed an off-policy RL algorithm that uses SL both to learn the value function and to fit the policy to the advantage-weighted target actions. While our method shares some similarities with this class of approaches that interleave SL and RL, the updates of our policy do not rely on optimal trajectories that must be given as input. Rather, we show how to leverage differentiable simulators to compute locally optimal updates to a policy. These updates are computed by explicitly taking the gradient of the value function with respect to the actions output by the policy. As such, our method also serves to reinforce the bridge between the fields of trajectory optimization and reinforcement learning.\nDifferentiable Models. Our approach does not aim to learn a model of the system dynamics, but rather leverages differentiable simulators that explicitly provide gradients of simulation outcomes with respect to control actions. We note that traditional physics simulators such as ODE Drumwright et al. (2010) or PyBullet Coumans & Bai (2016–2019) are not designed to provide this information. We build, in particular, on a recent class of analytically differentiable simulators that have been shown to effectively solve trajectory optimization problems, with a focus on sim-2-real transfer, for both manipulation (Zimmermann et al., 2019) and locomotion tasks (Bern et al., 2019).\nDegrave et al. (2019) embed a differentiable rigid body simulator within a recurrent neural network to concurrently perform simulation steps while learning policies that minimize a loss corresponding to the control objective. While their goal is related to ours, we show how to leverage explicitlycomputed gradients to formulate second order policy updates that have a significant positive effect on convergence. Furthermore, in contrast to Degrave et al. (2019), we show that PODS consistently outperforms two common RL baselines, PPO (Wang et al., 2019) and SAC (Haarnoja et al., 2018).\nAlso related to our method is the very recent work of Clavera et al. (2020). Their observation is that while most model-based RL algorithms use models simply as a source of data augmentation or as a black-box oracle to sample from (Nagabandi et al., 2017), the differentiability of learned dynamics models can and should be exploited further. In an approach that is related to ours, they propose a policy optimization algorithm based on derivatives of the learned model. In contrast, we directly use differentiable simulators for policy optimization, bypassing altogether the need to learn the dynamics – including all the hyperparameters that are involved in the process, as well as the additional strategies required to account for the inaccuracies introduced by the learned dynamics (Boney et al., 2019). Thanks to the second order update rule that we derive, our method consistently outperforms SOTA model-free RL algorithms in the tasks we proposed. 
In contrast, their method only matches the asymptotic performance of model-free RL (which is a feat for model-based RL). It is also worth pointing out that while model-based approaches hold the promise of enabling learning directly in the real world, with continued progress in sim-2-real transfer, methods such as ours that rely on accurate simulation technologies will continue to be indispensable in the field of RL.\nA common approach to leveraging differentiable models is backpropagation through time (BPTT), the main focus of Grzeszczuk et al. (1998), Deisenroth & Rasmussen (2011), Parmas (2018), Degrave et al. (2019), and Clavera et al. (2020), where a policy π_θ parametrized by θ is optimized directly in parameter space (PS), coupling the actions at each time step through the policy parameters. In contrast, our approach alternates between optimizing in trajectory space (TS), following gradient information of the value function for an independent set of actions a_t = π_θ(s)|_{s=s_t}, and in parameter space (PS), by performing imitation learning of the monotonically improved actions ā_t with π_θ. Alternating between TS and PS allows PODS to avoid the well-known problems of BPTT (vanishing and exploding gradients), which have been reported for a long time (Bengio et al., 1994)." }, { "heading": "3 POLICY OPTIMIZATION ON DIFFERENTIABLE SIMULATORS", "text": "Following the formulation employed by DPG methods, for a deterministic neural network policy π_θ parameterized by weights θ, the RL objective J(π_θ) and its gradient ∇_θ J(π_θ) are defined as:\n$$J(\pi_\theta) = \int_{\mathcal{S}} p(s_0)\, V^{\pi_\theta}(s_0)\, ds_0, \quad (1)$$\n$$\nabla_\theta J(\pi_\theta) = \int_{\mathcal{S}} p(s_0)\, \nabla_\theta V^{\pi_\theta}(s_0)\, ds_0 \approx \frac{1}{k} \sum_{i}^{k} \nabla_\theta V^{\pi_\theta}(s_{0,i}), \quad (2)$$\nwhere p(s_0) is the initial probability distribution over states, V^{π_θ} is the value function of π_θ, and the second expression in Eq. 2 approximates the integral with a sum over a batch of k initial states sampled from S, as is standard.\nRestricting our attention to an episodic problem setup with fixed time horizon N and deterministic state dynamics s_{t+1} = f(s_t, a_t), the value function gradient simplifies to:\n$$\nabla_\theta V^{\pi_\theta}(s_0) = \nabla_\theta \Big( r(s_0, \pi_\theta(s_0)) + \sum_{t=1}^{N} r(s_t, \pi_\theta(s_t)) \Big). \quad (3)$$\nNoting that the state s_t can be specified as a recursive function s_t = f(s_{t-1}, π_θ(s_{t-1})), the computation of the gradient in Eq. 3 is equivalent to backpropagating through time (BPTT) into the policy parameters. However, BPTT can be challenging due to the well-known problems of vanishing or exploding gradients (Degrave et al., 2019). We therefore turn our focus to the task of performing policy improvement iterations. In particular, our goal is to find a new policy ā, in trajectory space, such that V^{π_θ}(s_0) < V^{ā}(s_0) for a batch of initial states sampled according to s_0 ∼ p(s_0)." }, { "heading": "3.1 FIRST ORDER POLICY IMPROVEMENT", "text": "While the parametrization of π_θ is given in terms of θ (the weights of the neural network), we choose the TS policy ā to directly have as parameters the actions that are executed at each time step. By representing the actions independently of each other, rather than having them coupled through θ, BPTT is not required. Moreover, at the start of each policy improvement step, we initialize the TS policy ā = [a_0, a_1, …, a_{N-1}] to match the output of π_θ, where the individual terms a_t are the actions executed during a rollout of π_θ(s)|_{s=s_{t-1}}. Thus, V^{π_θ}(s_0) = V^{ā}(s_0) initially. The value function gradient of policy ā is then:\n$$\nabla_{\bar{a}} V^{\bar{a}}(s_0) = \nabla_{\bar{a}} V^{\bar{a}}(s(\bar{a}), \bar{a}) = \nabla_{\bar{a}} \Big( r(s_0, a_0) + \sum_{t=1}^{N} r(s_t(a_{t-1}), a_t) \Big), \quad (4)$$\nwhere s(ā) = [s_0, s_1(a_0), …, s_N(a_{N-1})] is the vector of the state trajectory associated with the policy rollout.
For the sake of clarity, we now switch notation from ∇_ā to d(·)/dā:\n$$\frac{dV^{\bar{a}}(s_0)}{d\bar{a}} = \frac{\partial V^{\bar{a}}}{\partial \bar{a}} + \frac{\partial V^{\bar{a}}}{\partial s} \frac{ds}{d\bar{a}}. \quad (5)$$\nFor a known, differentiable reward, the terms ∂V^{ā}/∂ā and ∂V^{ā}/∂s can be easily computed analytically. In contrast, the Jacobian ds/dā, which represents the way in which the state trajectory changes as the policy ā changes, is the first piece of information that we require from a differentiable simulator. Furthermore, notice that even though we are not backpropagating through time, the lower triangular structure of ds/dā encodes the dependency of a particular point in state space on all the previous actions during a rollout (see Appendix A.5 for more details on the Jacobian structure).\nThe first order update rule for policy ā is then computed as:\n$$\bar{a} = \pi_\theta + \alpha_a \frac{dV^{\bar{a}}(s_0)}{d\bar{a}}. \quad (6)$$\nSince this update rule uses the policy gradient (i.e. the direction of local steepest ascent), there exists a value α_a > 0 such that V^{π_θ}(s_0) < V^{ā}(s_0). In practice, we use the simulator to run a standard line search on α_a to ensure the inequality holds. We note, however, that if desired, α_a can also be treated as a hyperparameter that is tuned to a sufficiently small value.\nOnce the policy ā has been improved, we can use the corresponding state trajectories s(ā) to update the parameters of the neural network policy π_θ by running gradient descent on the following loss:\n$$L_\theta = \frac{1}{k} \sum_{i}^{k} \sum_{t}^{N} \frac{1}{2} \| \pi_\theta(s_{t,i}) - a_{t,i} \|^2, \quad (7)$$\nwhere the gradient and update rule are given by:\n$$\nabla_\theta L_\theta = \frac{1}{k} \sum_{i}^{k} \sum_{t}^{N} \nabla_\theta \pi_\theta(s_{t,i}) (\pi_\theta(s_{t,i}) - a_{t,i}), \quad (8)$$\n$$\theta = \theta - \alpha_\theta \nabla_\theta L_\theta. \quad (9)$$\nHere, i indexes the batch of initial states used to approximate the integral in Eq. 2. Notice that the gradients ∇_θ J(π_θ) and ∇_θ L_θ are closely related for the first iteration of the policy improvement operation, where:\n$$\nabla_\theta L_\theta = -\alpha_\theta \alpha_a \frac{1}{k} \sum_{i}^{k} \nabla_\theta \pi_\theta(s_{0,i}) \frac{dV^{\bar{a}}(s_{0,i})}{d\bar{a}}, \quad (10)$$\nwhich explains why minimizing Eq. 7 improves the value function formulated in Eq. 1. It is also worth noting that the stability of the policy improvement process is guaranteed by the parameter α_a, which is found through a line search procedure such that V^{π_θ}(s_0) < V^{ā}(s_0), as well as through the intermediate targets of Eq. 7, which eliminate potential overshooting problems that might occur if the gradient direction in Eq. 10 was followed too aggressively." }, { "heading": "3.2 SECOND ORDER POLICY IMPROVEMENT", "text": "For a second order policy update rule, the Hessian d²V^{ā}(s_0)/dā² is required. A brief derivation of this expression can be found in the Appendix and is summarized as follows:\n$$\frac{d^2 V^{\bar{a}}(s_0)}{d\bar{a}^2} = \frac{d}{d\bar{a}} \Big[ \frac{\partial V^{\bar{a}}}{\partial \bar{a}} + \frac{\partial V^{\bar{a}}}{\partial s} \frac{ds}{d\bar{a}} \Big] \quad (11)$$\n$$= \frac{\partial V^{\bar{a}}}{\partial s} \Big( \frac{ds}{d\bar{a}}^{T} \frac{\partial}{\partial s}\frac{ds}{d\bar{a}} + \frac{\partial}{\partial \bar{a}}\frac{ds}{d\bar{a}} \Big) + \frac{ds}{d\bar{a}}^{T} \Big( \frac{\partial^2 V^{\bar{a}}}{\partial s^2} \frac{ds}{d\bar{a}} + 2\frac{\partial^2 V^{\bar{a}}}{\partial s\, \partial \bar{a}} \Big) + \frac{\partial^2 V^{\bar{a}}}{\partial \bar{a}^2}. \quad (12)$$\nThe second order tensors ∂/∂s (ds/dā) and ∂/∂ā (ds/dā) are additional terms that a differentiable simulator must provide. As described in Zimmermann et al. (2019), these terms can be computed analytically. However, they are computationally expensive, and they often lead to the Hessian becoming indefinite. As a consequence, ignoring these terms in the equation above results in a Gauss-Newton approximation of the Hessian:\n$$\frac{d^2 V^{\bar{a}}(s_0)}{d\bar{a}^2} \approx \hat{H} = \frac{ds}{d\bar{a}}^{T} \frac{\partial^2 V^{\bar{a}}}{\partial s^2} \frac{ds}{d\bar{a}} + \frac{\partial^2 V^{\bar{a}}}{\partial \bar{a}^2}. \quad (13)$$\nIn the expression above we assume that the rewards do not couple s and a.
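Concretely, the quantities in Eqs. 5 and 13 can be assembled directly from the derivatives a differentiable simulator exposes. The following NumPy sketch is illustrative only; all names, such as ds_da, are our own assumptions and not part of any released PODS implementation:

import numpy as np

def value_grad_and_gauss_newton(dV_da, dV_ds, ds_da, d2V_ds2, d2V_da2):
    # dV_da, dV_ds: first derivatives of the value function w.r.t. the
    # flattened action and state trajectories (the terms in Eq. 5).
    # ds_da: simulator-provided Jacobian of the state trajectory w.r.t.
    # the actions (lower block-triangular, cf. Appendix A.5).
    grad = dV_da + ds_da.T @ dV_ds                   # Eq. 5
    H_hat = ds_da.T @ d2V_ds2 @ ds_da + d2V_da2      # Eq. 13
    return grad, H_hat

Note that the reward derivatives are cheap to evaluate analytically; ds_da is the only quantity that must come from the simulator.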
As long as the second derivatives of the rewards with respect to states and actions are positive definite, which is almost always the case, the Gauss-Newton approximation Ĥ is also guaranteed to be positive semi-definite. A second order update rule for ā can therefore be computed as:\n$$\bar{a} = \pi_\theta + \alpha_a \hat{H}^{-1} \frac{dV^{\bar{a}}(s_0)}{d\bar{a}}. \quad (14)$$\nAnalogous to the first order improvements discussed in the previous section, the same loss L_θ can be used to perform a policy update on π_θ to strictly improve its value function. In this case, L_θ incorporates the second order policy updates of Eq. 14 without the need to compute the Hessian of the neural network policy, and with the additional benefit of allowing the use of well-defined acceleration methods such as Adam (Kingma & Ba, 2015)." }, { "heading": "3.3 MONOTONIC POLICY IMPROVEMENT", "text": "The combination of a simple line search on α_a together with the use of L_θ to update π_θ provides a simple and very effective way of preventing overshooting as θ is updated. PODS therefore features monotonic increases in performance, as shown through our experiments. As summarized in Figure 2 for the task of controlling a 2D pendulum such that it goes to stop as quickly as possible (see the experiments section for a detailed description of the task), both the first and second order policy improvement methods are well-behaved. Nevertheless, there is a drastic difference in convergence rates, with the second order method winning by a significant margin.\nAlgorithm 1: PODS: Policy Optimization via Differentiable Simulators\nfor epoch = 1, M do\n  for sample i = 1, k do\n    Sample initial condition s_{0,i}\n    Collect a rollout of π_θ starting from s_{0,i}\n    Compute improved policy ā_i (Eq. 6 or Eq. 14)\n  end\n  Run gradient descent on L_θ (Eq. 7) such that the output of π_θ matches ā_i for the entire sequence of states s(ā_i)\nend\nIn contrast to other approaches such as PPO (Wang et al., 2019) and SAC (Haarnoja et al., 2018), our policy update scheme does not need to be regularized by a KL-divergence metric, demonstrating its numerical robustness. Our method is only limited by the expressive power of the policy π_θ, as it needs to approximate ā well. For reasonable network architectures, this is not a problem, especially since ā corresponds to local improvements. The overall PODS algorithm is summarized above. For the experiments we present in the next section, we collected k = 4000 rollouts for each epoch, and we performed 50 gradient descent steps on L_θ for each policy optimization iteration.
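As a compact rendering of Algorithm 1, one epoch might look as follows; env.rollout, env.value_derivatives, env.states_of, line_search and fit are hypothetical helpers standing in for the simulator interface and the gradient descent on L_θ, not part of any released implementation:

import numpy as np

def pods_epoch(policy, env, init_states, second_order=True):
    # One policy improvement step of Algorithm 1 (illustrative sketch).
    batch_states, batch_actions = [], []
    for s0 in init_states:
        states, actions = env.rollout(policy, s0)             # rollout of pi_theta
        grad, H_hat = env.value_derivatives(states, actions)  # Eqs. 5 and 13
        step = np.linalg.solve(H_hat, grad) if second_order else grad
        alpha = line_search(env, s0, actions, step)   # ensures V strictly improves
        a_bar = actions + alpha * step                # Eq. 6 or Eq. 14
        batch_states.append(env.states_of(s0, a_bar)) # s(a_bar)
        batch_actions.append(a_bar)
    # Imitation step: gradient descent on L_theta (Eq. 7) so that
    # pi_theta(s_t) matches the improved actions along s(a_bar).
    fit(policy, batch_states, batch_actions)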
" }, { "heading": "4 EXPERIMENTS", "text": "Environments: The environments used in our experiments set up cable-driven payload manipulation control problems that are inspired by the types of applications visualized in Figure 1. For all these examples, as illustrated in Figure 3, the action space is defined by the velocity of one or more handles, which are assumed to be directly controlled by a robot, and the state space is defined by the position of the handle as well as the position and velocity of the payload. We model our dynamical systems as mass-spring networks by connecting payloads to handles or to each other via stiff bilateral or unilateral springs. Using a simulation engine that follows closely the description in Zimmermann et al. (2019), we use a BDF2 integration scheme, as it exhibits very little numerical damping and is stable even under large time steps. Although this is not a common choice for RL environments, the use of higher order integration schemes also improves simulation quality and accuracy, as pointed out by Zhong et al. (2020). The Jacobian ds/dā, which is used for both the first order and second order policy updates, is computed analytically via sensitivity analysis, as described in detail in Zimmermann et al. (2018). The computational cost of computing this Jacobian is significantly lower than that of performing the sequence of simulation steps needed for a policy rollout.\nThe control problems we study here are deceptively simple. All the environments fall in the category of underactuated systems and, in consequence, policies for such environments must fully leverage the system’s dynamics to successfully achieve a task. The lack of numerical damping in the payload’s motion, in particular, necessitates control policies that are very precise, as even small errors lead to noticeable oscillations. These environments also enable us to incrementally increase the complexity of the tasks in order to study the scalability of our method, as well as that of the RL algorithms we compare against. For comparison purposes, in particular, we use three different types of dynamical systems: 2D Simple Pendulum, 3D Simple Pendulum, and 3D Double Pendulum. A detailed description of these environments is presented in Appendix A.2.\nFor all the environments, the action space describes instantaneous velocities of the handles, which are restricted to remain within physically reasonable limits.\nTasks: In order to encode our tasks, we used continuous rewards that are a function of the following state variables: the position of the handle (p), the position of the mass points representing the payloads relative to a target position (x), and their global velocities (v). The reward also contains a term that is a function of the actions that are taken; this term takes the form of a simple regularizer that aims to discourage large control actions:\n$$r(s_t, a_t) = \frac{1}{\frac{1}{2} w_p \|p_t\|^2 + \frac{1}{2} w_x \|x_t\|^2 + \frac{1}{2} w_v \|v_t\|^2 + \frac{1}{2} w_a \|a_t\|^2}, \quad (15)$$\nwhere the coefficients w_p, w_x, w_v, w_a allow each sub-objective to be weighted independently, as is commonly done. This very general reward formulation allows us to define two different tasks that we apply to each of the three systems described above:\n• Go to stop: Starting from an initial state with non-zero velocity, the pendulum must go to stop as quickly as possible in a downward configuration. For this task the weights w_p = w_x = 0.\n• Go to stop at the origin: In addition to stopping as fast as possible, the system must come to rest at a target location, which, without loss of generality, is chosen to be the origin.\nThe architecture of the neural network policies that we used is detailed in Appendix A.3. For a fair comparison, the neural network policies for PODS, PPO and SAC were initialized with the same set of initial weights. We fine-tuned the hyper-parameters of PPO and SAC to get the best performance we could, and otherwise ran the standard implementations provided in Achiam (2018).
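For concreteness, the reward of Eq. 15 above translates directly into code; the default weight values and the small epsilon guarding against division by zero are illustrative assumptions of ours:

import numpy as np

def reward(p, x, v, a, w_p=0.0, w_x=0.0, w_v=1.0, w_a=0.1, eps=1e-8):
    # p, x, v, a: NumPy vectors as defined in the Tasks paragraph above.
    # Eq. 15: reciprocal of a weighted sum of squared norms, so the reward
    # grows as the payload slows down (v), reaches the target (x, p), and
    # the control effort (a) shrinks. w_p = w_x = 0 recovers "Go to stop".
    cost = 0.5 * (w_p * p @ p + w_x * x @ x + w_v * v @ v + w_a * a @ a)
    return 1.0 / (cost + eps)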
\n4.1 RESULTS\nThe monotonically improving behaviour of PODS can be seen in Figure 5. The reward reported is the result of averaging the reward of 1000 rollouts started from a test bed of unseen initial states. Even if the initial progress of PODS is not always as fast as PPO or SAC, it consistently leads to a higher reward after a small number of epochs. We note that the standard deviations visualized in this figure are indicative of a large variation in problem difficulty for the different state-space points that seed the test rollouts (e.g. a double pendulum that has little momentum is easier to bring to a stop than one that is swinging wildly). As can be seen, the tasks that demand the payloads to be brought to a stop at a specific location are considerably more challenging. The supplementary video illustrates the result of the rollouts to provide an intuition into the quality of the control policies learned with our method. Furthermore, Appendix A.6 presents convergence plots for the cable-driven payload 2D and the discretized 3D rope environments.\nPODS vs BPTT: To further explore the benefits of the PODS second order update rule, we compared against BPTT, which naturally leverages the differentiability of the model. We found BPTT to be highly sensitive to the weight initialization of the policy. In Figure 4, we report results using the weight initialization that we found to favor BPTT the most. When training neural network policies, doing BPTT for a 100-step rollout is effectively equivalent to backpropagating through a network that is 100 times deeper than the actual network policy; this is in itself a feat, considering that despite introducing a terminal cost function to stabilize BPTT, Clavera et al. (2020) only report results of effectively BPTT for a maximum of 10 steps. Nonetheless, BPTT is able to outperform PODS with the 1st order update rule. However, PODS with the 2nd order update rule significantly outperforms BPTT both in terms of convergence rates and final performance. Even though a second order formulation of BPTT could be derived, its deployment would involve the Hessian of the neural network policy, which is computationally expensive. In contrast, the first order and second order formulations of PODS are equally easy to deploy.\nPODS, SAC, and PPO: To better understand the relative performance of the control policies learned with PODS, SAC and PPO, we report the terminal kinetic energy (KE) of the payload (Figure 6), the average magnitude of the control actions (Figure 8 – Appendix), and the average distance to the target location for the Stop At Origin tasks (Figure 7) – note, lower is better, and upon convergence, control policies learned with PODS adequately solve each individual problem in our randomized test bed. The shaded areas represent half the standard deviation of each metric. For figures with a logarithmic scale only the upper side of the standard deviation is presented.\nFor the task of stopping as fast as possible, PODS leads to a terminal kinetic energy that is typically orders of magnitude better than the other approaches (top row of Figure 6). For the task of stopping at the origin, SAC achieves very good terminal KE. The policies SAC learns, however, output large, high-frequency handle motions, as seen in the high control cost in Figure 8. These actions end up counteracting the natural dynamic oscillations of the payload. For the 3D double pendulum, however, the same strategy is unsuccessful. In contrast, PODS learns control policies that use less effort to solve the control tasks than both SAC and PPO. This indicates that our policies learn to leverage the dynamics of the payload much more effectively, a characteristic that we attribute to the local improvement steps which, by design, monotonically improve the value functions of the control policies.
Furthermore, it should also be noted that the class of fine manipulation tasks that we are dealing with represents a challenge for policies that output noisy actions." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "In this paper, we presented a highly effective strategy for policy optimization. As a core idea behind our approach, we exploit differentiable simulators to directly compute the analytic gradient of a policy’s value function with respect to the actions it outputs. Through specialized update rules, this gradient information is used to monotonically improve the policy’s value function. We demonstrated the efficacy of our approach by applying it to a series of increasingly challenging payload manipulation problems, and we showed that it outperforms two SOTA RL methods both in terms of convergence rates and in terms of quality of the learned policies.\nOur work opens up exciting avenues for future investigations. For example, although we evaluated PODS in isolation in order to best understand its strengths, it would be interesting to interleave it with existing RL methods. This will require extensions of our formulation to stochastic policies, and it would allow the relative strengths of different approaches to be effectively combined (e.g. exploration vs exploitation, with PODS excelling in the latter but not being designed for the former). We are also excited about the prospect of applying PODS to other types of control problems, particularly ones that include contacts (e.g. locomotion, grasping, etc.). Although the need for a specialized simulator makes the application to standard RL benchmark suites (Brockman et al., 2016; Tassa et al., 2018) challenging, we note that sim-2-real success with a differentiable simulator has been recently reported in the context of soft locomoting robots (Bern et al., 2019). With continued evolution of such simulation technologies, we are excited about the prospect of creating a new benchmark suite applicable to approaches such as PODS that use differentiable simulators at their core." }, { "heading": "A APPENDIX", "text": "A.1 VALUE FUNCTION HESSIAN\n$$\frac{d^2 V^{\bar{a}}(s_0)}{d\bar{a}^2} = \frac{d}{d\bar{a}} \Big[ \frac{\partial V^{\bar{a}}}{\partial \bar{a}} + \frac{\partial V^{\bar{a}}}{\partial s} \frac{ds}{d\bar{a}} \Big]$$\n$$= \frac{d}{d\bar{a}} \Big[ \frac{\partial V^{\bar{a}}}{\partial \bar{a}} \Big] + \frac{d}{d\bar{a}} \Big[ \frac{\partial V^{\bar{a}}}{\partial s} \frac{ds}{d\bar{a}} \Big]$$\n$$= \Big[ \frac{ds}{d\bar{a}}^{T} \frac{\partial^2 V^{\bar{a}}}{\partial s\, \partial \bar{a}} + \frac{\partial^2 V^{\bar{a}}}{\partial \bar{a}^2} \Big] + \frac{d}{d\bar{a}} \Big[ \frac{\partial V^{\bar{a}}}{\partial s} \Big] \frac{ds}{d\bar{a}} + \frac{\partial V^{\bar{a}}}{\partial s} \frac{d}{d\bar{a}} \Big[ \frac{ds}{d\bar{a}} \Big]$$\n$$= \Big[ \frac{ds}{d\bar{a}}^{T} \frac{\partial^2 V^{\bar{a}}}{\partial s\, \partial \bar{a}} + \frac{\partial^2 V^{\bar{a}}}{\partial \bar{a}^2} \Big] + \Big[ \frac{ds}{d\bar{a}}^{T} \frac{\partial^2 V^{\bar{a}}}{\partial s^2} + \frac{\partial^2 V^{\bar{a}}}{\partial \bar{a}\, \partial s} \Big] \frac{ds}{d\bar{a}} + \frac{\partial V^{\bar{a}}}{\partial s} \Big[ \frac{ds}{d\bar{a}}^{T} \frac{\partial}{\partial s}\frac{ds}{d\bar{a}} + \frac{\partial}{\partial \bar{a}}\frac{ds}{d\bar{a}} \Big]$$\n$$= \frac{\partial V^{\bar{a}}}{\partial s} \Big( \frac{ds}{d\bar{a}}^{T} \frac{\partial}{\partial s}\frac{ds}{d\bar{a}} + \frac{\partial}{\partial \bar{a}}\frac{ds}{d\bar{a}} \Big) + \frac{ds}{d\bar{a}}^{T} \Big( \frac{\partial^2 V^{\bar{a}}}{\partial s^2} \frac{ds}{d\bar{a}} + 2\frac{\partial^2 V^{\bar{a}}}{\partial s\, \partial \bar{a}} \Big) + \frac{\partial^2 V^{\bar{a}}}{\partial \bar{a}^2}.$$" }, { "heading": "A.2 DETAILED DESCRIPTION OF ENVIRONMENTS", "text": "• 2D Simple Pendulum: This system corresponds to a cable-driven pendulum in 2D (Figure 3, left). The handle of the pendulum is constrained to move only along the horizontal axis in order to test the degree to which a control policy can exploit the natural dynamics of the system.\n• 3D Simple Pendulum: For this system the pendulum is free to move in 3D, but the handle is restricted to moving along a horizontal plane.\n• 3D Double Pendulum: Extending the dynamical system above, the payload for this problem consists of two mass points that are coupled to each other via a stiff bilateral spring.
The dimensionality of the state space doubles, and the system exhibits very rich and dynamic motions.\n• Cable-driven payload 2D: For this environment we have a densely connected network of 4 point masses and two handles that are constrained to move along the horizontal axis.\n• Rope in 3D: For this environment we use 5 point masses to discretize a rope in 3D and one handle that is constrained to move on the horizontal plane." }, { "heading": "A.3 ARCHITECTURE OF NEURAL NETWORK POLICIES", "text": "The neural networks representing the control policies for all our environments share the same architecture: 2 fully connected layers of 256 units each with ReLU activations, and one output layer with Tanh activation to ensure that the policy only outputs commands that are within the velocity limits." }, { "heading": "A.4 PODS: ADDITIONAL FIGURES", "text": "" }, { "heading": "A.5 DIFFERENTIABLE SIMULATOR", "text": "Following the approach in Zimmermann et al. (2018), the sensitivity ds/dā has the structure shown in the figure below:\n$$\frac{ds}{d\bar{a}} = -\Big( \frac{\partial G}{\partial s} \Big)^{-1} \frac{dG}{d\bar{a}}.$$" }, { "heading": "A.6 ADDITIONAL DEMOS", "text": "See the accompanying video for more details of the following environments.\n[1] Figure reproduced with authorization of the authors (http://arxiv.org/abs/1905.08534)" } ]
2020
null
SP:172fcdf24499acfeba5a1593b48135c0e2b5e6b1
[ "The paper proposes an autoencoder-based outlier detection system. The main idea of the paper is to ensure that outlier points are mapped to areas distant from the inliers in the embedding space. To this end, a novel cost function is introduced, which weighs the reconstruction error based on a prior distribution for the embedding space. This cost function hopes to force inliers to be mapped to the high probability region of the prior distribution and push outliers to low probability regions. Combined with a normal/multivariate prior distribution, this then enables the use of simple distance-based outlier detection methods." ]
State-of-the-art deep outlier detection methods map data into a latent space with the aim of having outliers far away from inliers in this space. Unfortunately, this is shown to often fail: the divergence penalty they adopt pushes outliers into the same high-probability regions as inliers. We propose a novel method, OP-DMA, that successfully addresses the above problem. OP-DMA succeeds in mapping outliers to low-probability regions in the latent space by leveraging a novel Prior-Weighted Loss (PWL) that utilizes the insight that outliers are likely to have a higher reconstruction error than inliers. Building on this insight, OP-DMA explicitly encourages outliers to be mapped to low-probability regions of its latent space by weighting the reconstruction error of individual points by a multivariate Gaussian probability density function evaluated at each point’s latent representation. We formally prove that OP-DMA succeeds in mapping outliers to low-probability regions. Our experimental study demonstrates that OP-DMA consistently outperforms state-of-art methods on a rich variety of outlier detection benchmark datasets.
[]
[ { "authors": [ "Jinwon An", "Sungzoon Cho" ], "title": "Variational autoencoder based anomaly detection using reconstruction probability", "venue": "Special Lecture on IE,", "year": 2015 }, { "authors": [ "Laura Beggel", "Michael Pfeiffer", "Bernd Bischl" ], "title": "Robust anomaly detection in images using adversarial autoencoders", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2019 }, { "authors": [ "Markus M Breunig", "Hans-Peter Kriegel", "Raymond T Ng", "Jörg Sander" ], "title": "Lof: identifying densitybased local outliers", "venue": "In Proceedings of the 2000 ACM SIGMOD international conference on Management of data,", "year": 2000 }, { "authors": [ "Raghavendra Chalapathy", "Aditya Krishna Menon", "Sanjay Chawla" ], "title": "Anomaly detection using one-class neural networks", "venue": "arXiv preprint arXiv:1802.06360,", "year": 2018 }, { "authors": [ "Varun Chandola", "Arindam Banerjee", "Vipin Kumar" ], "title": "Anomaly detection: A survey", "venue": "ACM computing surveys (CSUR),", "year": 2009 }, { "authors": [ "Jinghui Chen", "Saket Sathe", "Charu Aggarwal", "Deepak Turaga" ], "title": "Outlier detection with autoencoder ensembles", "venue": "In Proceedings of the 2017 SIAM international conference on data mining,", "year": 2017 }, { "authors": [ "Sarah M Erfani", "Sutharshan Rajasegarar", "Shanika Karunasekera", "Christopher Leckie" ], "title": "Highdimensional and large-scale anomaly detection using a linear one-class svm with deep learning", "venue": "Pattern Recognition,", "year": 2016 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Arthur Gretton", "Karsten M Borgwardt", "Malte J Rasch", "Bernhard Schölkopf", "Alexander Smola" ], "title": "A kernel two-sample test", "venue": "The Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Neda Lazarevic-McManus", "JR Renno", "Dimitrios Makris", "Graeme A Jones" ], "title": "An object-based comparative methodology for motion detection based on the f-measure", "venue": "Computer Vision and Image Understanding,", "year": 2008 }, { "authors": [ "Yezheng Liu", "Zhe Li", "Chong Zhou", "Yuanchun Jiang", "Jianshan Sun", "Meng Wang", "Xiangnan He" ], "title": "Generative adversarial active learning for unsupervised outlier detection", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2019 }, { "authors": [ "Anmol Madan", "Manuel Cebrian", "Sai Moturu", "Katayoun Farrahi" ], "title": "Sensing the” health state” of a community", "venue": "IEEE Pervasive Computing,", "year": 2011 }, { "authors": [ "Pramuditha Perera", "Ramesh Nallapati", "Bing Xiang" ], "title": "Ocgan: One-class novelty detection using gans with constrained latent representations", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Peter J Rousseeuw", "Katrien Van Driessen" ], "title": "A fast algorithm for the minimum covariance determinant", "venue": "estimator. 
Technometrics,", "year": 1999 }, { "authors": [ "Mohammad Sabokrou", "Mohammad Khalooei", "Mahmood Fathy", "Ehsan Adeli" ], "title": "Adversarially learned one-class classifier for novelty detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Mayu Sakurada", "Takehisa Yairi" ], "title": "Anomaly detection using autoencoders with nonlinear dimensionality reduction", "venue": "In Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis,", "year": 2014 }, { "authors": [ "Bernhard Schölkopf", "John C Platt", "John Shawe-Taylor", "Alex J Smola", "Robert C Williamson" ], "title": "Estimating the support of a high-dimensional distribution", "venue": "Neural computation,", "year": 2001 }, { "authors": [ "Karanjit Singh", "Shuchita Upadhyaya" ], "title": "Outlier detection: applications and techniques", "venue": "International Journal of Computer Science Issues (IJCSI),", "year": 2012 }, { "authors": [ "Ha Son Vu", "Daisuke Ueta", "Kiyoshi Hashimoto", "Kazuki Maeno", "Sugiri Pranata", "Sheng Mei Shen" ], "title": "Anomaly detection with adversarial dual autoencoders", "venue": null, "year": 1902 }, { "authors": [ "Yan Xia", "Xudong Cao", "Fang Wen", "Gang Hua", "Jian Sun" ], "title": "Learning discriminative reconstructions for unsupervised outlier removal", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Chong Zhou", "Randy C Paffenroth" ], "title": "Anomaly detection with robust deep autoencoders", "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Background. Outlier detection, the task of discovering abnormal instances in a dataset, is critical for applications from fraud detection, error measurement identification to system fault detection (Singh & Upadhyaya, 2012). Given outliers are by definition rare, it is often infeasible to get enough labeled outlier examples that are represetnative of all the forms the outliers could take. Consequently, unsupervised outlier detection methods that do not require prior labeling of inliers or outliers are frequently adopted (Chandola et al., 2009).\nState-of-Art Deep Learning Methods for Outlier Detection. Deep learning methods for outlier detection commonly utilize the reconstruction error of an autoencoder model as an outlier score for outlier detection (Sakurada & Yairi, 2014; Vu et al., 2019). However, directly using the reconstruction error as the outlier score has a major flaw. As the learning process converges, both outliers and inliers tend to converge to the average reconstruction error (to the same outlier score) – making them indistinguishable (Beggel et al., 2019). This is demonstrated in Figure 1a, which shows that the ratio of average reconstruction error for outliers converges to that of the inliers.\nTo overcome this shortcoming, recent work (Beggel et al., 2019; Perera et al., 2019) utilizes the distribution-mapping capabilities of generative models that encourage data to follow a prior distribution in the latent space. These cutting-edge methods assume that while the mapping of inlier points will follow the target prior distribution, outliers will not due to their anomalous nature. Instead, outliers will be mapped to low-probability regions of the prior distribution, making it easy to detect them as outliers (Beggel et al., 2019; Perera et al., 2019).\nHowever, this widely held assumption has been shown to not hold in practice (Perera et al., 2019). Unfortunately, as shown in Figure 1b, both inliers and outliers are still mapped to the same high probability regions of the target prior distribution, making them difficult to distinguish.\nProblem Definition. Given a given dataset X ∈ RM of multivariate observations, let f : RM → RN , N ≤M , be a function from the multivariate feature space of X to a latent space f(x) ∈ RN such that f(X) ∼ PZ , where PZ is a known and tractable prior probability density function. The dataset X ∈ RM is composed as X = XO + XI , where XO and XI are a set of outlier and inlier points, respectively. During training, it is unknown whether any given point x ∈ X is an outlier\nor an inlier. Intuitively, our goal is to find a function f that maps instances of a dataset X into a latent space S with a known distribution, such that outliers are mapped to low probability regions and inliers to high probability regions. More formally, we define unsupervised distribution-mapping outlier detection as the problem of finding a function f∗ with the aforementioned properties of f such that we maximize the number of outliers xo ∈XO and inliers xi ∈XI for which PZ(f∗(xo)) < PZ(f ∗(xi)) holds.\nChallenges. To address the open problem defined above, the following challenges exist:\n1. Overpowering divergence penalty. Intuitively, distribution mapping methods utilize a divergence penalty to achieve a latent space mapping of input data that has a high probability of following a target prior distribution. 
While the data overall should follow this prior distribution, a solution must be found that instead maps outliers to low-probability regions of the prior. Having the data match the prior overall while mapping outliers to low probability regions of the prior creates a conflict, as the two tasks are diametrically opposed. Achieving such a mapping requires overpowering the divergence penalty in order to map outliers to low probability regions in the latent space.\n2. Unknown outlier status. In unsupervised outlier detection, points carry no labels during training indicating whether they are outliers or inliers. This unsupervised scenario, while common in practice (Singh & Upadhyaya, 2012), makes it challenging to design strategies that explicitly coerce outliers to be mapped to low-probability regions.\nOur OP-DMA Approach. In this work, we propose the Outlier Preserving Distribution Mapping Autoencoder (OP-DMA). Our core idea is a novel Prior Weighted Loss (PWL) function that solves the two conflicting tasks of mapping the input data to a prior distribution while encouraging outliers to be mapped to low probability regions of that prior. This PWL directly addresses the shortcomings of the existing distribution mapping outlier detection methods (Vu et al., 2019; Perera et al., 2019), and to the best of our knowledge is the first unsupervised cost function that explicitly encourages outliers to be mapped to low probability regions.\nWe assume that outliers will have a high reconstruction error during the initial stages of training, which causes the PWL to place them in low-probability (low PDF) regions in the latent space. This way, PWL overcomes the challenge of overpowering the divergence penalty. It succeeds in mapping outliers to low-probability regions (far from the mean of the latent distribution) even though each input point’s outlier status is unknown. Our OP-DMA framework is pluggable, meaning off-the-shelf distance-based outlier methods can be flexibly plugged in post-transformation.\nOur key contributions are as follows:\n1. Propose OP-DMA, a novel distribution-mapping autoencoder that effectively separates outliers from inliers in the latent space without knowing or making assumptions on the original distribution of the data in the feature space.\n2. Design the Prior-Weighted Loss (PWL), which when coupled with a divergence penalty encourages outliers to be mapped to low-probability regions while inliers are mapped to high-probability regions of the latent space of an autoencoder.\n3. Provide rigorous theoretical proof that the optimal solution for OP-DMA places outliers further than inliers from the mean of the distribution of the data in the latent space.\n4. Demonstrate experimentally that OP-DMA consistently outperforms other state-of-art outlier detection methods on a rich variety of real-world benchmark outlier datasets.\nSignificance: OP-DMA is a versatile outlier detection strategy, as it can handle input data that has arbitrary distributions in the feature space while not making any distance or density assumptions on the data. To the best of our knowledge, we are the first to propose a loss function that explicitly encourages outliers to be mapped to low-probability regions while inliers are mapped to high probability regions. Our PWL approach is pluggable and can easily be incorporated into alternate outlier detectors. Our ideas could also spur further research into various prior weighted loss functions." 
}, { "heading": "2 RELATED WORK", "text": "State-of-the-art deep outlier detection methods fall into one of three categories: 1) Autoencoders coupled with classic outlier detectors (Erfani et al., 2016; Chalapathy et al., 2018), 2) Reconstruction error-based outlier detection methods (Zhou & Paffenroth, 2017; Chen et al., 2017; Sabokrou et al., 2018; Xia et al., 2015), or 3) Generative outlier detection methods (Perera et al., 2019; Vu et al., 2019; Liu et al., 2019). 1) Autoencoders coupled with classic outlier detectors project data into a lower dimensional latent space before performing outlier detection on that latent representation. These methods make the strict assumption that outliers in the original space will remain outliers in the latent space. Further, they fail to explicitly encourage this in the mapping function. 2) Reconstruction error-based outlier detection methods utilize the reconstruction error of an autoencoder network to identify outliers. They typically use the reconstruction error directly as the anomaly score (An & Cho, 2015). In more recent work, they try to separate outliers into a separate low-rank matrix analogous to RPCA (Zhou & Paffenroth, 2017) or they introduce a separate discriminator network (Sabokrou et al., 2018). However, as shown in (Beggel et al., 2019), for autoencoders the reconstruction error of outliers often converges to that of inliers. This negatively impacts the performance of such reconstruction error methods. 3) Generative outlier detection methods leverage deep generative models (Goodfellow et al., 2014; Kingma & Welling, 2013) to generate the latent space such that the distribution of the latent space is encouraged to match a known prior so that thereafter an appropriate outlier method for the prior can be applied (Vu et al., 2019) to the latent space, or a discriminator can identify outliers in the latent space (Vu et al., 2019) or both the latent space and reconstructed space (Perera et al., 2019). However, as discussed in Section 1, in practice both inliers and outliers are both mapped to the prior distribution as outliers that are mapped to low-probability regions will generally incur a high cost from the divergence term which matches the latent distribution to the prior. OP-DMA shares characteristics with each of these three categories. However, unlike the other methods in these categories, OP-DMA actively encourages outlier to be mapped to low-probability regions instead of just assuming that this will be the case. OP-DMA is is a generative outlier method that uses the reconstruction error to encourage outliers to be mapped to low-probability regions. Further, it can flexibly be paired with nearly any classic outlier detector after distribution mapping." }, { "heading": "3 PROPOSED APPROACH: OP-DMA", "text": "Overview of approach. OP-DMA consists of three main components:\n1. A distribution mapping autoencoder (DMA) that OP-DMA utilizes to map a datasetX from the feature space RM into a lower dimensional latent space RN , such that the distribution of\nthe encoded data in the lower dimensional latent space has a known probability distribution PZ . This step is crucial as it makes it easy for OP-DMA to easily identify low probability regions of the latent space (outliers should be mapped here). This can be done because after the distribution mapping, we can explicitly calculate the Probability Density Function (PDF) of the latent space so long as we selected a prior distribution with a known PDF.\n2. 
\n2. A novel Prior-Weighted Loss (PWL) function for distribution mapping that encourages outliers to be mapped to low-probability regions of the latent space, solving both the challenge of overpowering the divergence penalty and that of unknown outlier status.\n3. A traditional outlier detection method is used to identify outliers in the transformed latent space. The choice of outlier detection method is flexible as long as it is amenable to the prior distribution P_Z selected in step 1 of OP-DMA. For instance, when a Gaussian distribution is used for the prior, OP-DMA utilizes a classical distance-based outlier detection method in step 3. These steps are described in the following subsections and illustrated in Figure 2." }, { "heading": "3.1 DISTRIBUTION MAPPING AUTOENCODER (DMA)", "text": "In order to use prior-weighting to map outliers to low-probability regions of a known PDF in a latent space, our distribution mapping method must meet two design requirements:\n1. A one-to-one mapping between each original data point, its latent representation and the reconstructed data point must be established, so that each data point’s reconstructed data point is unique and can be determined, and vice versa.\n2. The divergence term must impose a cost based on how well a batch of latent data points matches the prior overall, rather than requiring individual data points to have a high probability of being a draw from the prior.\nTo meet these requirements, we select the Wasserstein AutoEncoder (WAE) (Tolstikhin et al., 2017) as the foundation for our distribution mapping. WAEs are distribution-mapping autoencoders that minimize the Wasserstein distance between the original data and its reconstruction, while mapping the input data to a latent space with a known prior distribution. To see why we base our distribution-mapping technique on this method, consider the WAE objective function for encoder network Q and decoder network G:\n$$W_c^{\lambda}(X, Y) = \underbrace{\inf_Q \mathbb{E}_{P_X} \mathbb{E}_{Q(Z|X)}[c(X, G(Z))]}_{\text{Reconstruction Error}} + \underbrace{\lambda\, D(P_Q, P_Z)}_{\text{Divergence Penalty}}. \quad (1)$$\nThe first term on the right hand side of Equation 1 corresponds to the reconstruction error between the input data and the reconstructed data for cost function c. The second term D is a divergence penalty between the distribution of the latent space and the prior distribution, with λ a constant weight that determines how much that divergence is penalized. Let us deterministically produce the latent representation Q(X) and output G(Q(X)|X) (by using Q(X) = δ_μ(X), where μ is some function mapping the input data set X to Q(X), for instance). It is now clear why Wasserstein autoencoders are an appropriate choice for our distribution mapping method, as the reconstruction error term E_{P_X} E_{Q(Z|X)}[c(X, G(Z))] in Equation 1 represents a one-to-one correspondence between input data, its latent representation and the reconstructed output (meeting requirement 1). Additionally, D is a batch-level cost term that is incurred if the latent representation of a batch doesn’t match the prior distribution, but doesn’t require individual points to be mapped to a high probability region of the prior (meeting requirement 2). However, we note that WAEs unfortunately do not encourage outliers in the feature space to remain outliers in the latent space. Consider D to be a discriminator network. Then D is likely to learn a boundary around the high probability region of the prior distribution.
Thus the encoder network Q will be penalized for mapping an outlier to a low probability region outside of the boundary found by D, as the discriminator D would correctly identify it as a generated point." }, { "heading": "3.2 PRIOR-WEIGHTED LOSS (PWL): NOVEL LOSS FUNCTION FOR OUTLIER COERCION", "text": "We now describe our novel Prior-Weighted Loss (PWL) that tackles the above challenge of WAEs mapping outliers to high probability regions. The key idea is that outliers will initially have a higher reconstruction error than inliers during training. This core idea draws from the area of anomaly detection using reconstruction probability (An & Cho, 2015). We thus propose the Prior-Weighted Loss (PWL), a novel cost term that weights each data point’s reconstruction error term in Equation 1 by the point’s latent likelihood, P_Z(Q(x)). The latent likelihood is the PDF of the latent space’s prior distribution evaluated at the point’s corresponding latent representation.\nThe prior-weighted loss c′ is defined as c′ := c(x, G(Q(x))) · P_Z(Q(x)). As the latent likelihood is by definition large in high probability regions and small in low probability regions, points with a high reconstruction error that are mapped to high-probability regions will be penalized more than those with high reconstruction error that are mapped to low probability regions. Since outliers are assumed to result in a high reconstruction error (at least during early training epochs), reducing the penalty for poorly reconstructed points that have been mapped to low-probability regions of the prior encourages the network to map outliers to these low-probability regions. We now introduce our OP-DMA objective W_{c′}^{λ} as:\n$$W_{c'}^{\lambda} = \underbrace{\inf_{Q: P_Q = P_Z} \mathbb{E}_{P_X} \mathbb{E}_{Q(Z|X)}[c'(X, G(Z))]}_{\text{Prior weighted loss}} + \underbrace{\lambda\, D(P_Q, P_Z)}_{\text{Divergence penalty}} \quad (2)$$\nSince we have significantly modified the reconstruction error term in the Wasserstein autoencoder loss function, a natural question is whether or not OP-DMA still corresponds to an autoencoder. Specifically, will the decoder’s output still match the input data to the encoder? If this does not hold, two issues could arise: 1) The latent features learned by the network might be unrelated to the input, and hence useless in cases where it is desirable to use the latent representation in a downstream task. 2) More importantly for our outlier detection task, if the network is no longer encouraged to reconstruct the input, the crucial property that outliers will have a higher reconstruction error may no longer hold. In such a case, the “reconstruction error” may be meaningless. Fortunately, we can show that our OP-DMA loss function still corresponds to a Wasserstein divergence between the input and reconstructed distributions (Theorem 1). For this, we must demonstrate that the prior-weighted cost c′ meets the requirements of a Wasserstein divergence’s cost function, namely that c′(x_1, x_2) ≥ 0 (∀ x_1, x_2 ∈ supp(P)), c′(x, x) = 0 (∀ x ∈ supp(P)), and E_γ[c′(x_1, x_2)] ≥ 0 (∀ γ ∈ Γ[P, P_Z]).\nTheorem 1. Let W_c be a Wasserstein distance. Then W_{c′} is a Wasserstein distance, with c′ the prior-weighted c.
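To make the PWL concrete, a PyTorch-style sketch of the objective in Eq. 2 is given below; the inverse multiquadratics kernel and MSE choices anticipate Section 3.4, while the simple biased MMD estimator and all helper names are simplifying assumptions of ours:

import math
import torch

def imq_kernel(a, b, C=1.0):
    # Inverse multiquadratics kernel k(a, b) = C / (C + ||a - b||^2).
    return C / (C + torch.cdist(a, b) ** 2)

def op_dma_loss(x, encoder, decoder, lam=1.0):
    z = encoder(x)                              # latent codes Q(x)
    x_hat = decoder(z)                          # reconstructions G(Q(x))
    n = z.shape[1]
    # Density of the standard multivariate normal prior at each code.
    p_z = torch.exp(-0.5 * (z ** 2).sum(dim=1)) / (2 * math.pi) ** (n / 2)
    # Prior-weighted reconstruction error c' = c(x, G(Q(x))) * P_Z(Q(x)).
    recon = ((x - x_hat) ** 2).mean(dim=1)      # MSE per point
    pwl = (recon * p_z).mean()
    # Simple (biased) MMD between prior samples and codes; the paper uses
    # the unbiased estimator of Gretton et al. (2012) instead.
    z_prior = torch.randn_like(z)
    mmd = (imq_kernel(z_prior, z_prior).mean() + imq_kernel(z, z).mean()
           - 2 * imq_kernel(z_prior, z).mean())
    return pwl + lam * mmd

Since p_z shrinks rapidly away from the origin, poorly reconstructed points can lower the first term simply by moving outward in the latent space, which is exactly the behavior the PWL is designed to induce.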
" }, { "heading": "3.3 UNSUPERVISED STATISTICAL OUTLIER DETECTION METHOD", "text": "Intuitively, an ideal mapping would place all inliers within regions where the latent likelihood is greater than some value V, and all outliers into alternate regions where the latent likelihood is less than that value V. The core result fundamental to our work is that this scenario is indeed the optimal solution for the loss function of OP-DMA, as stated in Theorem 2.\nTheorem 2. Let Q be an encoder network such that D(P_Q, P_Z, F) = 0, where D(A, B, F) is the Maximum Mean Discrepancy between A and B, F is the set of bounded continuous functions and P_Z = N(0, Σ). Let us consider our dataset X as a centered random variable, X : Ω → R^n, X ∼ P_X. Let X(A), A ⊂ Ω, be outliers and let H = Ω − A be the inliers, where ∫_{X(A)} p_X(x)dx = α. Further, let c′(a, G(Q(a))) > c′(h, G(Q(h))) ∀ a ∈ X(A), h ∈ X(H). Then, the optimal solution of OP-DMA is to map such that ‖Q(X(A))‖_mahalanobis ≥ δ and ‖Q(X(H))‖_mahalanobis < δ, where\n$$\delta = \sqrt{\int_0^{1-\alpha} \frac{t^{-n/2-1}\, e^{-\frac{1}{2t}}}{2^{n/2}\, \Gamma(\frac{n}{2})}\, dt}. \quad (3)$$\nThis important result implies that after transformation with OP-DMA, outliers can be separated from inliers using a simple distance metric. This lays a solid foundation for a simple yet effective outlier detection scheme. Namely, we first transform the dataset X to a latent representation with a multivariate Gaussian prior distribution, as justified by Theorem 2. Then, as Equation 3 states, outliers can be isolated using a simple distance-based approach. More specifically, any standard outlier detection method that finds outliers in Gaussian distributions (e.g. the EllipticEnvelope method (Rousseeuw & Driessen, 1999)) can be used to find outliers in the latent space." }, { "heading": "3.4 PULLING IT ALL TOGETHER: UNSUPERVISED OUTLIER DETECTION USING OP-DMA", "text": "OP-DMA, our end-to-end outlier detection approach, is now summarized. First, the input data is transformed to match a prior distribution with a distribution mapping autoencoder using our novel Prior-Weighted Loss (PWL) (Equation 2). We chose this prior to be a multivariate Gaussian distribution with zero mean and identity covariance, as justified by Theorem 2. Then, an Elliptic Envelope (Rousseeuw & Driessen, 1999) is used to identify outliers. The outlier detection process is outlined in Appendix A.3. We use the unbiased estimator of Maximum Mean Discrepancy (MMD) from (Gretton et al., 2012) for the divergence term. For the kernel k of MMD, we use the inverse multiquadratics kernel as in (Tolstikhin et al., 2017), and Mean Squared Error (MSE) for c.
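As a sketch of this detection step, the threshold δ of Theorem 2 can be obtained from the chi-squared quantile (per the proof in Appendix A.2, where the squared Mahalanobis norm of the codes is shown to be chi-squared distributed), and an Elliptic Envelope can be fit as in the paper; function and variable names here are our own:

import numpy as np
from scipy.stats import chi2
from sklearn.covariance import EllipticEnvelope

def score_latent_outliers(z, alpha=0.05):
    # z: latent codes of shape (num_points, n) from the trained encoder.
    n = z.shape[1]
    # Theorem 2: inliers lie within Mahalanobis distance delta of the mean,
    # with delta**2 the chi-squared quantile at probability 1 - alpha.
    delta = np.sqrt(chi2.ppf(1.0 - alpha, df=n))
    maha = np.linalg.norm(z - z.mean(axis=0), axis=1)  # identity-covariance prior
    flagged_by_threshold = maha >= delta
    # The paper instead fits an Elliptic Envelope with contamination alpha.
    envelope = EllipticEnvelope(contamination=alpha).fit(z)
    flagged_by_envelope = envelope.predict(z) == -1
    return flagged_by_threshold, flagged_by_envelope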
" }, { "heading": "4 EXPERIMENTAL EVALUATION", "text": "Compared Methods. We compare OP-DMA to state-of-the-art distribution mapping outlier detection methods. These include methods that perform outlier detection on the latent space of a WAE (Tolstikhin et al., 2017), a VAE (Kingma & Welling, 2013), and an Adversarial Autoencoder (AAE) (Makhzani et al., 2015) – all with a Gaussian prior, but without integrating our PWL idea. We test against MO-GAAL (Liu et al., 2019) and ALOCC (Sabokrou et al., 2018), two state-of-the-art deep generative outlier detection models. We also test against LOF (Breunig et al., 2000) and OC-SVM (Schölkopf et al., 2001), two popular state-of-the-art non-deep outlier detection methods.\nData Sets. We evaluated on a rich variety of real-world data sets from the ODDS [1] benchmark data store (Rayana, 2016). These datasets cover a wide range of dimensionality in the feature space, from 6 to 274, as well as different outlier contamination percentages, from 0.2% to 32%. Table 1 breaks down the statistics of each dataset. We evaluate all methods on their ability to detect subjects who have a fever from smartphone-sensed data using the MIT Social Evolution dataset (Madan et al., 2011) (RC Fever) [2] to demonstrate OP-DMA’s effectiveness for mobile healthcare. Finally, we also evaluate on the MNIST dataset [3]. We used all MNIST images of “7”s as inliers, and randomly sampled “0”s as outliers such that “0”s account for ∼1% of the data. Since outlier detection is unsupervised, without any supervised training phase, we perform outlier detection on the entire dataset instead of introducing train/test splits. In each dataset, all points are labeled as either inlier or outlier as ground truth. We emphasize that these ground truth labels are only used for evaluation, not for training any of the methods.\nMetrics. Due to the large class imbalance inherent to outlier detection, we use the F1 score as our performance metric (Lazarevic-McManus et al., 2008), as is commonly done to evaluate outlier detection methods (An & Cho, 2015; Zhou & Paffenroth, 2017; Zong et al., 2018).\nParameter Configurations of Methods. Encoders and decoders of all methods consist of 3-layer neural networks, where the decoder in each pair mirrors the structure of its encoder. The number of nodes in the hidden layer of each network is a hyperparameter from {5, 6, 9, 15, 18, 100}. The number of nodes in the latent layer varies over {2, 3, 6, 9, 15}. The regularization parameter λ is chosen such that the reconstruction error is on the same order of magnitude as the MMD error for the first epoch. We use the standard parameters of MO-GAAL from the authors’ code [4]. We also use the standard configuration of ALOCC from the authors’ code [5], except that we add an additional dense layer at the beginning of each subnetwork. We do this because ALOCC assumes the input to be images of a certain shape; the additional dense layer transforms the input data from its original dimensionality into the required shape. For LOF and OC-SVM, we use the standard parameters from Scikit-Learn.\n[1] http://odds.cs.stonybrook.edu/ [2] http://realitycommons.media.mit.edu/socialevolution4.html [3] http://yann.lecun.com/exdb/mnist/ [4] https://github.com/leibinghe/GAAL-based-outlier-detection [5] https://github.com/khalooei/ALOCC-CVPR2018\nExperiment 1: Versatile Anomaly Detection. We validate the versatility of our OP-DMA method by showing that it consistently outperforms state-of-the-art methods on a rich variety of benchmark datasets. As shown in Table 2, OP-DMA outperforms the other methods on the majority (9/13) of the benchmark datasets. We see that OP-DMA’s superior performance is not limited to datasets with either a high or low percentage of outliers: OP-DMA is the best performing method on the dataset with the largest ratio of outliers (Satellite) as well as on that with the smallest ratio (Cover).\nExperiment 2: Sensitivity to Contamination Parameter. The contamination parameter α is used to fit the standard outlier method, Elliptic Envelope, plugged into our OP-DMA framework on the encoded data after training. Thus, we test the sensitivity of Elliptic Envelope to the value of the contamination parameter by evaluating the F1 score of outlier detection on the Satellite dataset mapped by OP-DMA. The results (Figure 3(a)) show that as long as this parameter is not significantly underestimated, the F1-score is robust to different values of the contamination parameter.\nExperiment 3: Verifying that Outliers are Mapped to Low-Probability Regions.
We transformed data from a multi-modal distribution in R^4, consisting of a mixture of two Gaussians centered at (0,0,0,0) and (5,5,5,5), to a standard normal Gaussian in R^2. Outliers in the original space were drawn from a uniform distribution and made up 2.4% of the total data. As Figure 3(b) shows, outliers are successfully mapped far from the inlier data points. Furthermore, the average value of the prior evaluated at the outlier points is 0.02, while the average for inliers is 0.08, confirming that outliers are mapped to lower-probability regions than inliers." }, { "heading": "5 CONCLUSION", "text": "We have introduced OP-DMA, an autoencoder-based solution that, unlike prior methods, is truly outlier-preserving in its distribution mapping. That is, OP-DMA maps outliers in the feature space to low probability regions in the latent space, in which a multivariate standard normal Gaussian prior distribution is enforced. Outliers are consequently easily identifiable in the latent space. Our experimental study comparing OP-DMA to state-of-the-art methods on a collection of benchmark outlier detection datasets shows that it consistently outperforms these methods on the majority of the datasets. We have also demonstrated that there is no significant increase in running time between our method and state-of-the-art methods." }, { "heading": "A APPENDIX", "text": "A.1 PROOF OF THEOREM 1\nProof. Since c is a Wasserstein divergence, we know that c(x_1, x_2) ≥ 0 (∀ x_1, x_2 ∈ supp(P)), c(x, x) = 0 (∀ x ∈ supp(P)), and E_γ[c(x_1, x_2)] ≥ 0 (∀ γ ∈ Γ[P, P_Z]). Since P_Z(z) ≥ 0 (∀ z), c′ also fulfills the three aforementioned properties of c. Thus, W_{c′} is a Wasserstein divergence.\nA.2 PROOF OF THEOREM 2\nProof. The Mahalanobis distance of Q(X) can itself be expressed as a random variable, δ = √(Q(X) Σ^{-1} Q(X)^T). Let Φ_δ be the CDF of δ. Then, Φ_δ(1 − α) = P(δ ≤ 1 − α) = P(δ² ≤ (1 − α)²) = Φ_{δ²}((1 − α)²). Let Y = Q(X) M^{-1}, where M^T M = Σ is the Cholesky decomposition of the covariance Σ. Since D(P_Q, P_Z, F) = 0, and D(A, B, F) = 0 iff A = B, we know that Q(X) ∼ N(0, Σ). Thus, since Q(X) is normally distributed and centered, Y is normally distributed with identity covariance. Since δ² = Q(X) Σ^{-1} Q(X)^T = Y Y^T, Φ_{δ²} is the CDF of the sum of squares of n normally distributed variables with mean 0 and σ = 1. Thus, Φ_{δ²} is the chi-squared distribution. The inverse chi-squared CDF thus gives us the distance δ such that 1 − α percent of the points are within\n$$\delta = \sqrt{\int_0^{1-\alpha} \frac{t^{-n/2-1}\, e^{-\frac{1}{2t}}}{2^{n/2}\, \Gamma(\frac{n}{2})}\, dt}.$$\nNow, let us assume that for some parameter choice Θ′ for Q, α P(Q(X(A)|Θ′) ≤ δ) = β, β > 0. Consequently, (1 − α) P(Q(X(H)|Θ′) > δ) = β, since P(Q(X) > δ) = α and ∫_{X(A)} p_X(x)dx = α. Conversely, let us assume that there is a parameter configuration Θ such that α P(Q(X(A)|Θ) ≤ δ) = 0, and thus (1 − α) P(Q(X(H)|Θ) > δ) = 0. Since P_Z ∼ N(0, Σ), P_Z(d_1) < P_Z(d_2) for ‖d_1‖_mahalanobis > ‖d_2‖_mahalanobis.
Thus, since we assume c′(a, G(Q(a))) > c′(h, G(Q(h))) ∀ a ∈ X(A), h ∈ X(H), then\n$$\mathbb{E}_{P_X}\mathbb{E}_{Q(Z|X)} c'(x_p, G(Q(x_p|\Theta'))) = \mathbb{E}_{P_X}\mathbb{E}_{Q(Z|X)} c(x_p, G(Q(x_p|\Theta')))\, P_Z(x_p) > \mathbb{E}_{P_X}\mathbb{E}_{Q(Z|X)} c(x_p, G(Q(x_p|\Theta)))\, P_Z(x_p) = \mathbb{E}_{P_X}\mathbb{E}_{Q(Z|X)} c'(x_p, G(Q(x_p|\Theta))).$$\nThus, the optimal solution for OP-DMA’s cost function is one that maps outliers to regions with a larger Mahalanobis distance than that of inliers.\nA.3 OP-DMA ALGORITHM\nAlgorithm 1: Unsupervised Outlier Detection with OP-DMA\nRequire: Regularization coefficient λ; contamination parameter α; initialized encoder network Q_Φ and decoder network G_Θ with random weights Φ and Θ; dataset X\nwhile Θ, Φ not converged do\n  Sample {x_1, x_2, …, x_n} from X, {z_1, z_2, …, z_n} from N(0, I), and {z̃_1, z̃_2, …, z̃_n} from Q_Φ(Z|X)\n  Update weights Φ and Θ by descending\n$$\frac{1}{n}\sum_{i=1}^{n} c(x_i, G_\Theta(\tilde{z}_i)) \cdot P_Z(\tilde{z}_i) + \lambda \Big[ \frac{1}{n^2 - n}\Big(\sum_{h \neq j} k(z_h, z_j) + \sum_{h \neq j} k(\tilde{z}_h, \tilde{z}_j)\Big) - \frac{2}{n^2}\sum_{h,j} k(z_h, \tilde{z}_j) \Big]$$\nend\nFind D_min = {Q_Φ(x_i), Q_Φ(x_j), …, Q_Φ(x_k)}, ‖D_min‖ = (1 − α)‖D‖, with the Minimum Covariance Determinant estimator, inf_{Σ̃} Det{Σ̃}. Find the estimated mean μ̃ from D_min.\nreturn ‖Q_Φ(x_i)‖_mahalanobis = (Q_Φ(x_i) − μ̃)′ Σ̃^{-1} (Q_Φ(x_i) − μ̃) for x_i ∈ D as outlier scores" } ]
2020
null
SP:d19db4a50cde893b283fb305d8ce11ef37f3edfc
[ "The authors apply algebraic geometry to program synthesis, by identifying programs with points of analytic varieties. They construct a smooth relaxation to the synthesis problem by considering the space of probability distributions over codes for universal turning machines (which is a smooth/continuous manifold), and then translate this probability into corresponding probabilities of generating a correct program. This allows them to extend the KL divergence on the discrete distribution of codes to a smooth function, whose zeros correspond to the desired programs. They use MCMC to find (approximate) these zeros." ]
We present a new perspective on program synthesis in which programs may be identified with singularities of analytic functions. As an example, Turing machines are synthesised from input-output examples by propagating uncertainty through a smooth relaxation of a universal Turing machine. The posterior distribution over weights is approximated using Markov chain Monte Carlo, and bounds on the generalisation error of these models are estimated using the real log canonical threshold, a geometric invariant from singular learning theory.
[]
[ { "authors": [ "Shun-ichi Amari", "Tomoko Ozeki", "Hyeyoung Park" ], "title": "Learning and inference in hierarchical models with singularities", "venue": "Systems and Computers in Japan,", "year": 2003 }, { "authors": [ "Michael F Atiyah" ], "title": "Resolution of singularities and division of distributions", "venue": "Communications on Pure and Applied Mathematics,", "year": 1970 }, { "authors": [ "Alan W Biermann" ], "title": "On the inference of Turing machines from sample computations", "venue": "Artificial Intelligence,", "year": 1972 }, { "authors": [ "Rudy R Bunel", "Alban Desmaison", "Pawan K Mudigonda", "Pushmeet Kohli", "Philip Torr" ], "title": "Adaptive neural compilation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Tianqi Chen", "Emily Fox", "Carlos Guestrin" ], "title": "Stochastic gradient Hamiltonian Monte Carlo", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Execution-guided neural program synthesis", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "James Clift", "Daniel Murfet" ], "title": "Derivatives of Turing machines in linear logic", "venue": "arXiv preprint arXiv:1805.11813,", "year": 2018 }, { "authors": [ "Karel De Leeuw", "Edward F Moore", "Claude E Shannon", "Norman Shapiro" ], "title": "Computability by probabilistic machines", "venue": "Automata studies,", "year": 1956 }, { "authors": [ "Nan Ding", "Youhan Fang", "Ryan Babbush", "Changyou Chen", "Robert D Skeel", "Hartmut Neven" ], "title": "Bayesian sampling using stochastic gradient thermostats", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Richard Evans", "Edward Grefenstette" ], "title": "Learning explanatory rules from noisy data", "venue": "Journal of Artificial Intelligence Research,", "year": 2018 }, { "authors": [ "Cameron E Freer", "Daniel M Roy", "Joshua B Tenenbaum" ], "title": "Towards common-sense reasoning via conditional simulation: legacies of Turing in artificial intelligence", "venue": "Turing’s Legacy,", "year": 2014 }, { "authors": [ "Alexander L Gaunt", "Marc Brockschmidt", "Rishabh Singh", "Nate Kushman", "Pushmeet Kohli", "Jonathan Taylor", "Daniel Tarlow" ], "title": "Terpret: A probabilistic programming language for program induction", "venue": "arXiv preprint arXiv:1608.04428,", "year": 2016 }, { "authors": [ "Sumit Gulwani", "Oleksandr Polozov", "Rishabh Singh" ], "title": "Program synthesis", "venue": "Foundations and Trends in Programming Languages,", "year": 2017 }, { "authors": [ "Heisuke Hironaka" ], "title": "Resolution of singularities of an algebraic variety over a field of characteristic zero: I", "venue": "Annals of Mathematics,", "year": 1964 }, { "authors": [ "Matthew D Hoffman", "Andrew Gelman" ], "title": "The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo", "venue": "J. Mach. Learn. 
Res.,", "year": 2014 }, { "authors": [ "Marcus Hutter" ], "title": "Universal artificial intelligence: Sequential decisions based on algorithmic probability", "venue": "Springer Science & Business Media,", "year": 2004 }, { "authors": [ "Łukasz Kaiser", "Ilya Sutskever" ], "title": "Neural GPUs learn algorithms", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Daniel Murfet", "Susan Wei", "Mingming Gong", "Hui Li", "Jesse Gell-Redman", "Thomas Quella" ], "title": "Deep learning is singular, and that’s good", "venue": "arXiv preprint arXiv:2010.11560,", "year": 2020 }, { "authors": [ "Arvind Neelakantan", "Quoc V. Le", "Ilya Sutskever" ], "title": "Neural programmer: Inducing latent programs with gradient descent", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Discovering neural nets with low Kolmogorov complexity and high generalization capability", "venue": "Neural Networks,", "year": 1997 }, { "authors": [ "Ray J Solomonoff" ], "title": "A formal theory of inductive inference", "venue": "Part I. Information and control,", "year": 1964 }, { "authors": [ "Sumio Watanabe" ], "title": "Almost all learning machines are singular", "venue": "IEEE Symposium on Foundations of Computational Intelligence,", "year": 2007 }, { "authors": [ "Sumio Watanabe" ], "title": "Algebraic Geometry and Statistical Learning Theory, volume 25", "venue": null, "year": 2009 }, { "authors": [ "Sumio Watanabe" ], "title": "A widely applicable Bayesian information criterion", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Max Welling", "Yee W Teh" ], "title": "Bayesian learning via stochastic gradient Langevin dynamics", "venue": "In Proceedings of the 28th International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Ruqi Zhang", "Chunyuan Li", "Jianyi Zhang", "Changyou Chen", "Andrew Gordon Wilson" ], "title": "Cyclical stochastic gradient MCMC for Bayesian deep learning", "venue": "In International Conference on Learning Representations,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "The idea of program synthesis dates back to the birth of modern computation itself (Turing, 1948) and is recognised as one of the most important open problems in computer science (Gulwani et al., 2017). However, there appear to be serious obstacles to synthesising programs by gradient descent at scale (Neelakantan et al., 2016; Kaiser & Sutskever, 2016; Bunel et al., 2016; Gaunt et al., 2016; Evans & Grefenstette, 2018; Chen et al., 2018) and these problems suggest that it would be appropriate to make a fundamental study of the geometry of loss surfaces in program synthesis, since this geometry determines the learning process. To that end, in this paper we explain a new point of view on program synthesis using the singular learning theory of Watanabe (2009) and the smooth relaxation of Turing machines from Clift & Murfet (2018).\nIn broad strokes this new geometric point of view on program synthesis says:\n• Programs to be synthesised are singularities of analytic functions. If U ⊆ Rd is open and K : U −→ R is analytic, then x ∈ U is a critical point of K if ∇K(x) = 0 and a singularity of the function K if it is a critical point where K(x) = 0.\n• The Kolmogorov complexity of a program is related to a geometric invariant of the associated singularity called the Real Log Canonical Threshold (RLCT). This invariant controls both the generalisation error and the learning process, and is therefore an appropriate measure of “complexity” in continuous program synthesis. See Section 3.\n• The geometry has concrete practical implications. For example, a MCMC-based approach to program synthesis will find, with high probability, a solution that is of low complexity (if it finds a solution at all). We sketch a novel point of view on the problem of “bad local minima” (Gaunt et al., 2016) based on these ideas. See Section 4.\nWe demonstrate all of these principles in experiments with toy examples of synthesis problems.\nProgram synthesis as inference. We use Turing machines, but mutatis mutandis everything applies to other programming languages. Let T be a Turing machine with tape alphabet Σ and set of states Q and assume that on any input x ∈ Σ∗ the machine eventually halts with output T (x) ∈ Σ∗. Then to the machine T we may associate the set {(x, T (x))}x∈Σ∗ ⊆ Σ∗ × Σ∗. Program synthesis is the study of the inverse problem: given a subset of Σ∗ × Σ∗ we would like to determine (if possible) a Turing machine which computes the given outputs on the given inputs.\nIf we presume given a probability distribution q(x) on Σ∗ then we can formulate this as a problem of statistical inference: given a probability distribution q(x, y) on Σ∗ × Σ∗ determine the most likely machine producing the observed distribution q(x, y) = q(y|x)q(x). If we fix a universal Turing machine U then Turing machines can be parametrised by codes w ∈ W code with U(x,w) = T (x) for all x ∈ Σ∗. We let p(y|x,w) denote the probability of U(x,w) = y (which is either zero or one)\nso that solutions to the synthesis problem are in bijection with the zeros of the Kullback-Leibler divergence between the true distribution and the model\nK(w) = ∫ ∫ q(y|x)q(x) log q(y|x)\np(y|x,w)dxdy . (1)\nSo far this is just a trivial rephrasing of the combinatorial optimisation problem of finding a Turing machine T with T (x) = y for all (x, y) with q(x, y) > 0.\nSmooth relaxation. 
One approach is to seek a smooth relaxation of the synthesis problem consisting of an analytic manifold W ⊇ W^code and an extension of K to an analytic function K : W −→ R, so that we can search for the zeros of K using gradient descent. Perhaps the most natural way to construct such a smooth relaxation is to take W to be a space of probability distributions over W^code and prescribe a model p(y|x, w) for propagating uncertainty about codes to uncertainty about outputs (Gaunt et al., 2016; Evans & Grefenstette, 2018). The particular model we choose is based on the semantics of linear logic (Clift & Murfet, 2018). Supposing that such a smooth relaxation has been chosen together with a prior ϕ(w) over W, smooth program synthesis becomes the study of the statistical learning theory of the triple (p, q, ϕ).

There are perhaps two primary reasons to consider the smooth relaxation. Firstly, one might hope that stochastic gradient descent or techniques like Markov chain Monte Carlo will be effective means of solving the original combinatorial optimisation problem. This is not a new idea (Gulwani et al., 2017, §6) but so far its effectiveness for large programs has not been proven. Independently, one might hope to find powerful new mathematical ideas that apply to the relaxed problem and shed light on the nature of program synthesis. This is the purpose of the present paper.

Singular learning theory. We set W_0 = {w ∈ W | K(w) = 0}, so that

W_0 ∩ W^code ⊆ W_0 ⊆ W   (2)

where W_0 ∩ W^code is the discrete set of solutions to the original synthesis problem. We refer to these as the classical solutions. As the vanishing locus of an analytic function, W_0 is an analytic space over R (Hironaka, 1964, §0.1; Griffiths & Harris, 1978) and it is interesting to study the geometry of this space near the classical solutions. Since K is a Kullback-Leibler divergence it is non-negative, so it not only vanishes on W_0 but ∇K also vanishes there; hence every point of W_0 is a singular point. Beyond this the geometry of W_0 depends on the particular model p(y|x, w) that has been chosen, but some aspects are universal: the nature of program synthesis means that typically W_0 is an extended object (i.e. it contains points other than the classical solutions) and the Hessian matrix of second order partial derivatives of K at a classical solution is not invertible; that is, the classical solutions are degenerate critical points of K. This means that singularity theory is the appropriate branch of mathematics for studying the geometry of W_0 near a classical solution. It also means that the Fisher information matrix

I(w)_{ij} = ∫∫ (∂/∂w_i)[ log p(y|x, w) ] (∂/∂w_j)[ log p(y|x, w) ] q(y|x) q(x) dx dy

is degenerate at a classical solution, so that the appropriate branch of statistical learning theory is singular learning theory (Watanabe, 2007; 2009). For an introduction to singular learning theory in the context of deep learning see (Murfet et al., 2020).

Broadly speaking the contribution of this paper is to realise program synthesis within the framework of singular learning theory, at both a theoretical and an experimental level. In more detail the contents of the paper are:

• We define a staged pseudo-UTM (Appendix E) which is well-suited to experiments with the ideas discussed above. Propagating uncertainty about the code through this UTM using the ideas of (Clift & Murfet, 2018) defines a triple (p, q, ϕ) associated to a synthesis problem. 
This formally embeds program synthesis within singular learning theory.

• We realise this embedding in code by providing an implementation in PyTorch of this propagation of uncertainty through a UTM. Using the No-U-Turn variant of MCMC (Hoffman & Gelman, 2014) we can approximate the Bayesian posterior of any program synthesis problem (of course in practice we are limited by computational constraints in doing so).

• We explain how the real log canonical threshold (a geometric invariant) is related to Kolmogorov complexity (Section 3).

• We give a simple example (Appendix C) in which W_0 contains the set of classical solutions as a proper subset and every point of W_0 is a degenerate critical point of K.

• For two simple synthesis problems, detectA and parityCheck, we demonstrate all of the above, using MCMC to approximate the Bayesian posterior and theorems from Watanabe (2013) to estimate the RLCT (Section 5). We discuss how W_0 is an extended object and how the RLCT relates to the local dimension of W_0 near a classical solution.

RELATED WORK

The idea of synthesising Turing machines can be traced back to the work of Solomonoff on inductive inference (Solomonoff, 1964). A more explicit form of the problem was given in Biermann (1972), who proposed an algorithmic method. Machine learning based approaches appear in Schmidhuber (1997) and Hutter (2004), which pay particular attention to model complexity, and in Gaunt et al. (2016) and Freer et al. (2014), the latter using the notion of a “universal probabilistic Turing machine” (De Leeuw et al., 1956). A different probabilistic extension of a universal Turing machine was introduced in Clift & Murfet (2018) via linear logic. Studies of the singular geometry of learning models go back to Amari et al. (2003) and, notably, the extensive work of Watanabe (2007; 2009)." }, { "heading": "2 TURING MACHINE SYNTHESIS AS SINGULAR LEARNING", "text": "All known approaches to program synthesis can be formulated in terms of a singular learning problem. Singular learning theory is the extension of statistical learning theory to account for the fact that the set of learned parameters W_0 has the structure of an analytic space as opposed to an analytic manifold (Watanabe, 2007; 2009). It is organised around triples (p, q, ϕ) consisting of a class of models {p(y|x, w) : w ∈ W}, a true distribution q(y|x) and a prior ϕ on W. In our approach we fix a Universal Turing Machine (UTM), denoted U, with a description tape (which specifies the code of the Turing machine to be executed), a work tape (simulating the tape of that Turing machine during its operation) and a state tape (simulating the state of that Turing machine). The general statistical learning problem that can be formulated using U is the following: given some initial string x on the work tape, predict the state of the simulated machine and the contents of the work tape after some specified number of steps (Clift & Murfet, 2018, §7.1). For simplicity, in this paper we consider models that only predict the final state; the necessary modifications in the general case are routine. We also assume that W parametrises Turing machines whose tape alphabet Σ and set of states Q have been encoded by individual symbols in the tape alphabet of U. Hence U is actually what we call a pseudo-UTM (see Appendix E). 
Again, treating the general case is routine and for the present purposes only introduces uninteresting complexity.

Let Σ denote the tape alphabet of the simulated machine, Q the set of states and let L, S, R stand for left, stay and right, the possible motions of the Turing machine head. We assume that |Q| > 1 since otherwise the synthesis problem is trivial. The set of ordinary codes W^code for a Turing machine sits inside a compact space of probability distributions W over codes

W^code := ∏_{σ,q} Σ × Q × {L, S, R} ⊆ ∏_{σ,q} ∆Σ × ∆Q × ∆{L, S, R} =: W   (3)

where ∆X denotes the set of probability distributions over a set X, see (8), and the product is over pairs (σ, q) ∈ Σ × Q.¹ For example the point {(σ′, q′, d)}_{σ,q} ∈ W^code encodes the machine which, when it reads σ under the head in state q, writes σ′, transitions into state q′ and moves in direction d. Given w ∈ W^code let step^t(x, w) ∈ Q denote the contents of the state tape of U after t timesteps (of the simulated machine) when the work tape is initialised with x and the description tape with w.

¹The space W of parameters is clearly semi-analytic, that is, it is cut out of R^d for some d by the vanishing f_1(x) = · · · = f_r(x) = 0 of finitely many analytic functions on open subsets of R^d together with finitely many inequalities g_1(x) ≥ 0, . . . , g_s(x) ≥ 0 where the g_j(x) are analytic. In fact W is semi-algebraic, since the f_i and g_j may all be chosen to be polynomial functions.

There is a principled extension of this operation of U to a smooth function

∆step^t : Σ∗ × W −→ ∆Q   (4)

which propagates uncertainty about the symbols on the description tape to uncertainty about the final state, and we refer to this extension as the smooth relaxation of U. The details are given in Appendix F, but at an informal level the idea behind the relaxation is easy to understand: to sample from ∆step^t(x, w) we run U to simulate t timesteps in such a way that whenever the UTM needs to “look at” an entry on the description tape we sample from the corresponding distribution specified by w.² The significance of the particular smooth relaxation that we use is that its derivatives have a logical interpretation (Clift & Murfet, 2018, §7.1). The class of models that we consider is

p(y|x, w) = ∆step^t(x, w)   (5)

where t is fixed for simplicity in this paper. More generally we could also view x as consisting of a sequence and a timeout, as is done in (Clift & Murfet, 2018, §7.1). 
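To make the parametrisation in (3) concrete, the following Python sketch (ours, not the implementation accompanying the paper; all names are illustrative) builds a point of W^code inside W as a family of one-hot probability vectors. The transition table chosen is one plausible classical solution of the problem detectA introduced in Example 2.5 below.

import numpy as np

# Symbols, states and directions of the simulated machine.
SIGMA = ['_', 'A', 'B']             # '_' stands for the blank symbol
STATES = ['reject', 'accept']
DIRS = ['L', 'S', 'R']

def one_hot(choice, options):
    # A vertex of the simplex over `options`: all mass on `choice`.
    return np.array([1.0 if o == choice else 0.0 for o in options])

# A classical code w in W^code: for each pair (symbol, state) a triple
# (symbol to write, next state, direction) of one-hot distributions.
# Relaxing the one-hot vectors to arbitrary probability vectors gives
# an arbitrary point of the compact space W of equation (3).
w = {(s, q): (one_hot(s, SIGMA),    # write the symbol back unchanged
              one_hot('accept' if s == 'A' else q, STATES),
              one_hot('R', DIRS))   # always move right
     for s in SIGMA for q in STATES}

# Every entry of w is a triple of points of the simplices in (3).
for p_sym, p_state, p_dir in w.values():
    for p in (p_sym, p_state, p_dir):
        assert abs(p.sum() - 1.0) < 1e-9 and (p >= 0).all()
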
The construction of this model is summarised in Figure 1.

Figure 1: The state of U is represented by the state of the work tape, state tape and description (code) tape. The work tape is initialised with a sequence x ∈ Σ∗, the code tape with w ∈ W and the state tape with some standard initial state; the smooth relaxation ∆step of the pseudo-UTM is run for t steps and the final probability distribution over states is y.

Definition 2.1 (Synthesis problem). A synthesis problem for U consists of a probability distribution q(x, y) over Σ∗ × Q. We say that the synthesis problem is deterministic if there is f : Σ∗ −→ Q such that q(y = f(x)|x) = 1 for all x ∈ Σ∗.

Definition 2.2. The triple (p, q, ϕ) associated to a synthesis problem is the model p of (5) together with the true distribution q and uniform prior ϕ on the parameter space W. The Kullback-Leibler function K(w) of the synthesis problem is defined by (1) and a solution to the synthesis problem is a point of W_0. 
A classical solution is a point of W_0 ∩ W^code.

As ∆step^t is a polynomial function, K is analytic and so W_0 is a semi-analytic space (it is cut out of the semi-analytic space W by the vanishing of K). If the synthesis problem is deterministic and q(x) is uniform on some finite subset of Σ∗ then W_0 is semi-algebraic (it is cut out of W by polynomial equations) and all solutions lie at the boundary of the parameter space W (Appendix D). However in general W_0 is only semi-analytic and intersects the interior of W (Example C.2). We assume that q(y|x) is realisable, that is, there exists w_0 ∈ W with q(y|x) = p(y|x, w_0). A triple (p, q, ϕ) is regular if the model is identifiable, i.e. for all inputs x ∈ R^n the map sending w to the conditional probability distribution p(y|x, w) is one-to-one, and the Fisher information matrix is non-degenerate. Otherwise, the learning machine is strictly singular (Watanabe, 2009, §1.2.1). Triples arising from synthesis problems are typically singular: in Example 2.5 below we show an explicit example where multiple parameters w determine the same model, and in Example C.2 we give an example where the Hessian of K is degenerate everywhere on W_0 (Watanabe, 2009, §1.1.3).

²Noting that this sampling procedure is repeated every time the UTM looks at a given entry.

Remark 2.3. Non-deterministic synthesis problems arise naturally in various contexts, for example in the fitting of algorithms to the behaviour of deep reinforcement learning agents. Suppose an agent is acting in an environment with starting states encoded by x ∈ Σ∗ and possible episode end states by y ∈ Q. Even if the optimal policy is known to determine a computable function Σ∗ −→ Q, the statistics of the observed behaviour after finite training time will only provide a function Σ∗ −→ ∆Q, and if we wish to fit algorithms to behaviour it makes sense to deal with this uncertainty directly.

Definition 2.4. Let (p, q, ϕ) be the triple associated to a synthesis problem. The Real Log Canonical Threshold (RLCT) λ of the synthesis problem is defined so that −λ is the largest pole of the meromorphic extension (Atiyah, 1970) of the zeta function ζ(z) = ∫ K(w)^z ϕ(w) dw.

The more singular the analytic space W_0 of solutions is, the smaller the RLCT. One way to think of the RLCT is as a count of the effective number of parameters near W_0 (Murfet et al., 2020, §4). In Section 3 we relate the RLCT to Kolmogorov complexity and in Section 5 we estimate the RLCT of the synthesis problem detectA given below, using the method explained in Appendix A.

Example 2.5 (detectA). The deterministic synthesis problem detectA has Σ = {␣, A, B}, Q = {reject, accept} and q(y|x) is determined by the function taking in a string x of A's and B's and returning the state accept if the string contains an A and the state reject otherwise. The conditional true distribution q(y|x) is realisable because this function is computed by a Turing machine. Two solutions are shown in Figure 2. On the left is a parameter w_l ∈ W_0 \ W^code and on the right is w_r ∈ W_0 ∩ W^code. Varying the distributions in w_l that have nonzero entropy we obtain a submanifold V ⊆ W_0 containing w_l of dimension 14. This leads by (Watanabe, 2009, Remark 7.3) to a bound on the RLCT of λ ≤ (1/2)(30 − 14) = 8, which is consistent with the experimental results in Table 1. This highlights that solutions need not lie at vertices of the probability simplex, and W_0 may contain a high-dimensional submanifold around a given classical solution.
}, { "heading": "2.1 THE SYNTHESIS PROCESS", "text": "Synthesis is a problem because we do not assume that the true distribution is known: for example, if q(y|x) is deterministic and the associated function is f : Σ∗ −→ Q, we assume that some example pairs (x, f(x)) are known but no general algorithm for computing f is known (if it were, synthesis would have already been performed). In practice synthesis starts with a sample Dn = {(xi, yi)}ni=1 from q(x, y) with associated empirical Kullback-Leibler distance\nKn(w) = 1\nn n∑ i=1 log q(yi|xi) p(yi|xi, w) . (6)\nIf the synthesis problem is deterministic and u ∈ W code then Kn(u) = 0 if and only if u explains the data in the sense that stept(xi, u) = yi for 1 ≤ i ≤ n. We now review two natural ways of finding such solutions in the context of machine learning.\nSynthesis by stochastic gradient descent (SGD). The first approach is to view the process of program synthesis as stochastic gradient descent for the function K : W −→ R. We view Dn as a large training set and further sample subsets Dm with m n and compute ∇Km to take gradient descent steps wi+1 = wi − η∇Km(wi) for some learning rate η. Stochastic gradient descent has the advantage (in principle) of scaling to high-dimensional parameter spaces W , but in practice it is challenging to use gradient descent to find points of W0 (Gaunt et al., 2016).\nSynthesis by sampling. The second approach is to consider the Bayesian posterior associated to the synthesis problem, which can be viewed as an update on the prior distribution ϕ after seeing Dn\np(w|Dn) = p(Dn|w)p(w)\np(Dn) =\n1\nZn ϕ(w) n∏ i=1 p(yi|xi, w) = 1 Z0n exp{−nKn(w) + logϕ(w)}\nwhere Z0n = ∫ ϕ(w) exp(−nKn(w))dw. If n is large the posterior distribution concentrates around solutionsw ∈W0 and so sampling from the posterior will tend to produce machines that are (nearly) solutions. The gold standard sampling is Markov Chain Monte Carlo (MCMC). Scaling MCMC to where W is high-dimensional is a challenging task with many attempts to bridge the gap with SGD (Welling & Teh, 2011; Chen et al., 2014; Ding et al., 2014; Zhang et al., 2020). Nonetheless in simple cases we demonstrate experimentally in Section 5 that machines may be synthesised by using MCMC to sample from the posterior." }, { "heading": "3 COMPLEXITY OF PROGRAMS", "text": "Every Turing machine is the solution of a deterministic synthesis problem, so Section 2 associates to any Turing machine a singularity of a semi-analytic spaceW0. To indicate that this connection is not vacuous, we sketch how the complexity of a program is related to the real log canonical threshold of a singularity. A more detailed discussion will appear elsewhere.\nLet q(x, y) be a deterministic synthesis problem for U which only involves input sequences in some restricted alphabet Σinput, that is, q(x) = 0 if x /∈ (Σinput)∗. Let Dn be sampled from q(x, y) and let u, v ∈ W code ∩W0 be two explanations for the sample in the sense that Kn(u) = Kn(v) = 0. Which explanation for the data should we prefer? The classical answer based on Occam’s razor (Solomonoff, 1964) is that we should prefer the shorter program, that is, the one using the fewest states and symbols.\nSet N = |Σ| and M = |Q|. Any Turing machine T using N ′ ≤ N symbols and M ′ ≤ M states has a code for U of length cM ′N ′ where c is a constant. 
We assume that Σ_input is included in the tape alphabet of T, so that N′ ≥ |Σ_input|, and define the Kolmogorov complexity of q with respect to U to be the infimum c(q) of M′N′ over Turing machines T that give classical solutions for q. Let λ be the RLCT of the triple (p, q, ϕ) associated to the synthesis problem (Definition 2.4).

Theorem 3.1. λ ≤ (1/2)(M + N) c(q).

Proof. Let u ∈ W^code ∩ W_0 be the code of a Turing machine realising the infimum in the definition of the Kolmogorov complexity and suppose that this machine only uses symbols in Σ′ and states in Q′ with N′ = |Σ′| and M′ = |Q′|. The time evolution of the staged pseudo-UTM U simulating u on x ∈ Σ∗_input is independent of the entries on the description tape that belong to tuples of the form (σ, q, ?, ?, ?) with (σ, q) ∉ Σ′ × Q′. Let V ⊆ W be the submanifold of points which agree with u on all tuples with (σ, q) ∈ Σ′ × Q′ and are otherwise free. Then u ∈ V ⊆ W_0 and codim(V) = M′N′(M + N), and by (Watanabe, 2009, Theorem 7.3) we have λ ≤ (1/2) codim(V).

Remark 3.2. The Kolmogorov complexity depends only on the number of symbols and states used. The RLCT is a more refined invariant since it also depends on how each symbol and state is used (Clift & Murfet, 2018, Remark 7.8), as this affects the polynomials defining W_0 (see Appendix D)." }, { "heading": "4 PRACTICAL IMPLICATIONS", "text": "Using singular learning theory we have explained how programs to be synthesised are singularities of analytic functions, and how the Kolmogorov complexity of a program bounds the RLCT of the associated singularity. We now sketch some practical insights that follow from this point of view.

Synthesis minimises the free energy: the sampling-based approach to synthesis (Section 2.1) aims to approximate, via MCMC, sampling from the Bayesian posterior for the triple (p, q, ϕ) associated to a synthesis problem. To understand the behaviour of these Markov chains we follow the asymptotic analysis of (Watanabe, 2009, Section 7.6). If we cover W by small closed balls V_α around points w_α then we can compute the probability that a sample comes from V_α by

p_α = (1/Z⁰) ∫_{V_α} e^{−n K_n(w)} ϕ(w) dw

and if n is sufficiently large this is proportional to e^{−f_α} where the quantity

f_α = K_α n + λ_α log(n)

is called the free energy. Here K_α is the smallest value of the Kullback-Leibler divergence K on V_α, and λ_α is the RLCT of the set W_{K_α} ∩ V_α, where W_c = {w ∈ W | K(w) = c} is a level set of K. The Markov chains used to generate approximate samples from the posterior are attempting to minimise the free energy, which involves a tradeoff between the energy K_α n and the entropy λ_α log(n).

Why synthesis gets stuck: the local minima of the free energy that we want the synthesis process to find are solutions w_α ∈ W_0 where λ_α is minimal. By Section 3 one may think of these points as the “lowest complexity” solutions. However it is possible that there are other local minima of the free energy. Indeed, there may be local minima where the free energy is lower than the free energy at any solution, since at finite n it is possible to trade off an increase in K_α against a decrease in the RLCT λ_α. In practice, the existence of such “siren minima” of the free energy may manifest itself as regions where the synthesis process gets stuck and fails to converge to a solution. In such a region K_α n + λ_α log(n) < λ log(n), where λ is the RLCT of the synthesis problem. 
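As a toy numeric illustration of this phenomenon (all values of K_α and λ_α below are invented purely for illustration and do not come from any synthesis problem), compare the free energies of three hypothetical regions:

import math

n = 1000  # hypothetical number of input-output examples

# Hypothetical regions of parameter space, each with (K_alpha, lambda_alpha).
# True solutions have K_alpha = 0; the 'siren' region trades a small
# increase in K_alpha for a much smaller RLCT.
regions = {
    'low-complexity solution': (0.0, 8.0),
    'high-complexity solution': (0.0, 15.0),
    'siren region': (0.01, 2.0),
}

for name, (K_alpha, lam_alpha) in regions.items():
    f_alpha = K_alpha * n + lam_alpha * math.log(n)  # free energy
    print(f'{name:25s} f_alpha = {f_alpha:.1f}')

# Prints approximately 55.3, 103.6 and 23.8 respectively: with these
# invented numbers the siren region has strictly lower free energy than
# either solution, so a sampler at this n would tend to get stuck there.
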
In practice it has been observed that program synthesis by gradient descent often fails for complex problems, in the sense that it fails to converge to a solution (Gaunt et al., 2016). While synthesis by SGD and sampling are different, it is a reasonable hypothesis that these siren minima are a significant contributing factor in both cases.

Can we avoid siren minima? If we let λ_c denote the RLCT of the level set W_c, then siren minima of the free energy will be impossible at a given value of n and c as long as λ_c ≥ λ − cn/log(n). Recall that the more singular W_c is, the lower the RLCT, so this lower bound says that the level sets should not become too singular too quickly as c increases. At any given value of n there is a “siren free” region in the range c ≥ λ log(n)/n, since the RLCT is non-negative (Figure 3). Thus the learning process will be more reliable the smaller λ log(n)/n is. This can be arranged either by increasing n (providing more examples) or decreasing λ.

While the RLCT is determined by the synthesis problem, it is possible to change its value by changing the structure of the UTM U. As we have defined it, U is a “simulation type” UTM, but one could for example add special states such that if a code specifies a transition into that state a series of steps is executed by the UTM (i.e. a subroutine). This amounts to specifying codes in a higher level programming language. Hence one of the practical insights that can be derived from the geometric point of view on program synthesis is that varying this language is a natural way to engineer the singularities of the level sets of K, which according to singular learning theory has direct implications for the learning process.
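The RLCT that appears throughout this discussion can itself be estimated from MCMC runs at several inverse temperatures, by fitting a line to E^β_w[nL_n(w)] against 1/β as in Equation (7) and Algorithm 1 of Appendix A; this is the method used in the experiments below. The following is a minimal Python sketch of that regression (with invented inputs, and ordinary least squares standing in for the generalised least squares of Appendix A):

import numpy as np

def estimate_rlct(inv_betas, expected_nLn):
    # Fit E^beta_w[n L_n] ~ c + lambda * (1/beta); by Equation (7) the
    # slope lambda approximates the RLCT. Ordinary least squares is used
    # here for simplicity instead of generalised least squares.
    A = np.stack([np.ones_like(inv_betas), inv_betas], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, expected_nLn, rcond=None)
    return coeffs[1]

# Invented values: five inverse temperatures centred on some 1/T, and the
# posterior averages of n*L_n(w) estimated from MCMC samples at each one.
inv_betas = np.array([0.8, 0.9, 1.0, 1.1, 1.2]) * 20.0
expected_nLn = 5.0 + 7.9 * inv_betas
print(estimate_rlct(inv_betas, expected_nLn))  # recovers the slope 7.9
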
sha1_base64=\"A9isavlJv74fj8B+wrJ3O/Bkfds=\">AAAB83icbVBNS8NAEJ34WetX1aOXxSJ4kJIURY8FLx4r2A9oQtlsJ+3SzSbsboQS+je8eFDEq3/Gm//GbZuDtj4YeLw3w8y8MBVcG9f9dtbWNza3tks75d29/YPDytFxWyeZYthiiUhUN6QaBZfYMtwI7KYKaRwK7ITju5nfeUKleSIfzSTFIKZDySPOqLGS719qrlCSSCH2K1W35s5BVolXkCoUaPYrX/4gYVmM0jBBte55bmqCnCrDmcBp2c80ppSN6RB7lkoaow7y+c1Tcm6VAYkSZUsaMld/T+Q01noSh7Yzpmakl72Z+J/Xy0x0G+RcpplByRaLokwQk5BZAGRgH2ZGTCyhTHF7K2EjqigzNqayDcFbfnmVtOs176p2/VCvNupFHCU4hTO4AA9uoAH30IQWMEjhGV7hzcmcF+fd+Vi0rjnFzAn8gfP5A437kVM=</latexit>\n<latexit sha1_base64=\"PMhVvaslfSLTYIx1aQVXJ67JT24=\">AAACCHicbVDLSgMxFM3UV62vUZcuDLZChVpmiqLLghuXFewDOkPJpJk2NJMMSUYoQ5du/BU3LhRx6ye4829M21lo64ELh3Pu5d57gphRpR3n28qtrK6tb+Q3C1vbO7t79v5BS4lEYtLEggnZCZAijHLS1FQz0oklQVHASDsY3Uz99gORigp+r8cx8SM04DSkGGkj9exjr+JVFBMxgaVzT4cS4ZRPUo+JQZmfTUo9u+hUnRngMnEzUgQZGj37y+sLnESEa8yQUl3XibWfIqkpZmRS8BJFYoRHaEC6hnIUEeWns0cm8NQofRgKaYprOFN/T6QoUmocBaYzQnqoFr2p+J/XTXR47aeUx4kmHM8XhQmDWsBpKrBPJcGajQ1BWFJzK8RDZMLQJruCCcFdfHmZtGpV96J6eVcr1mtZHHlwBE5AGbjgCtTBLWiAJsDgETyDV/BmPVkv1rv1MW/NWdnMIfgD6/MHMdWYxg==</latexit>" }, { "heading": "5 EXPERIMENTS", "text": "We estimate the RLCT for the triples (p, q, ϕ) associated to the synthesis problems detectA (Example 2.5) and parityCheck. Hyperparameters of the various machines are contained in Table 3 of Appendix B. The true distribution q(x) is defined as follows: we fix a minimum and maximum sequence length a ≤ b and to sample x ∼ q(x) we first sample a length l uniformly from [a, b] and then uniformly sample x from {A,B}l. We perform MCMC on the weight vector for the model class {p(y|x,w) : w ∈ W} where w is represented in our PyTorch implementation by three tensors of shape {[L, ni]}1≤i≤3 where L is the number of tuples in the description tape of the TM being simulated and {ni} are the number of symbols, states and directions respectively. A direct simulation of the UTM is used for all experiments to improve computational efficiency (Appendix G). We generate, for each inverse temperature β and dataset Dn, a Markov chain via the No-U-turn sampler from Hoffman & Gelman (2014). We use the standard uniform distribution as our prior ϕ.\nFor the problem detectA given in Example 2.5 the dimension of parameter space is dimW = 30. We use generalized least squares to fit the RLCT λ (with goodness-of-fit measured by R2), the algorithm of which is given in Appendix A. Our results are displayed in Table 1 and Figure 4. Our purpose in these experiments is not to provide high accuracy estimates of the RLCT, as these would require much longer Markov chains. Instead we demonstrate how rough estimates consistent with the theory can be obtained at low computational cost. If this model were regular the RLCT would be dimW/2 = 15.\nThe deterministic synthesis problem parityCheck has\nΣ = { , A,B,X} Q = {reject, accept, getNextAB, getNextA, getNextB, gotoStart}.\nThe distribution q(x) is as discussed in Section 5 and q(y|x) is determined by the function taking in a string of A’s and B’s, and terminating in state accept if the string contains the same number of A’s as B’s, and terminating in state reject otherwise. The string is assumed to contain no blank symbols. 
The true distribution is realisable because there is a Turing machine using Σ and Q which computes this function: the machine works by repeatedly overwriting pairs consisting of a single A and a single B with X's; if there are any A's without a matching B left over (or vice versa) we reject, and otherwise we accept.

In more detail, the starting state getNextAB moves right on the tape until the first A or B is found, and overwrites it with an X. If it is an A (resp. B) we enter state getNextB (resp. getNextA). If no A or B is found, we enter the state accept. The state getNextA (resp. getNextB) moves right until an A (resp. B) is found, overwrites it with an X and enters state gotoStart, which moves left until a blank symbol is found (resetting the machine to the left end of the tape). If no A's (resp. B's) were left on the tape, we enter state reject. The dimension of the parameter space is dim W = 240. If this model were regular, the RLCT would be dim W/2 = 120. Our RLCT estimates are contained in Table 2." }, { "heading": "6 DISCUSSION", "text": "We have developed a theoretical framework in which all programs can in principle be learnt from input-output examples via an existing optimisation procedure. This is done by associating to each program a smooth relaxation which, based on Clift & Murfet (2018), can be argued to be more canonical than existing approaches. This realisation has important implications for the building of intelligent systems.

In approaches to program synthesis based on gradient descent there is a tendency to think of solutions to the synthesis problem as isolated critical points of the loss function K, but this is a false intuition based on regular models. Since neural networks, Bayesian networks, smooth relaxations of UTMs and all other extant approaches to smooth program synthesis are strictly singular models (the map from parameters to functions is not injective), the set W_0 of parameters w with K(w) = 0 is a complex extended object whose geometry is shown by Watanabe's singular learning theory to be deeply related to the learning process. We have examined this geometry in several specific examples and shown how to think about the complexity of programs from a geometric perspective. It is our hope that algebraic geometry can assist in developing the next generation of synthesis machines." }, { "heading": "A ALGORITHM FOR ESTIMATING RLCTS", "text": "Given a sample D_n = {(x_i, y_i)}_{i=1}^n from q(x, y), let L_n(w) := −(1/n) Σ_{i=1}^n log p(y_i|x_i, w) be the negative log likelihood. We would like to estimate

E^β_w[nL_n(w)] := (1/Z^β_n) ∫ nL_n(w) ϕ(w) ∏_{i=1}^n p(y_i|x_i, w)^β dw

where Z^β_n = ∫ ϕ(w) ∏_{i=1}^n p(y_i|x_i, w)^β dw for some inverse temperature β. If β = β_0/log(n) for some constant β_0, then by Theorem 4 of Watanabe (2013),

E^β_w[nL_n(w)] = nL_n(w_0) + λ log(n)/β_0 + U_n √(λ log(n)/(2β_0)) + O_p(1)   (7)

where {U_n} is a sequence of random variables satisfying E[U_n] = 0 and λ is the RLCT. In practice, the last two terms often vary negligibly with 1/β, and so E^β_w[nL_n(w)] approximates a linear function of 1/β with slope λ (Watanabe, 2013, Corollary 3). This is the foundation of the RLCT estimation procedure found in Algorithm 1, which is used in our experiments.

Algorithm 1 RLCT estimation
Input: range of β's, set of training sets T each of size n, approximate samples {w_1, . . . , w_R} from p_β(w|D_n) for each training set D_n and each β
for training set D_n ∈ T do
for β in range of β's do
Approximate E^β_w[nL_n(w)] with (1/R) Σ_{r=1}^R nL_n(w_r), where w_1, . . .
, w_R are approximate samples from p_β(w|D_n)
end for
Perform generalised least squares to fit λ in Equation (7); call the result λ̂(D_n)
end for
Output: (1/|T|) Σ_{D_n∈T} λ̂(D_n)

Each RLCT estimate λ̂(D_n) in Algorithm 1 was performed by linear regression on the pairs {(1/β_i, E^{β_i}_w[nL_n(w)])}_{i=1}^5, where the five inverse temperatures β_i are centered on the inverse temperature 1/T, where T is the temperature reported for each experiment in Table 1 and Table 2.

From a Bayesian perspective, predictions about outputs y should be made using the predictive distribution p∗(y|x, D_n) = ∫ p(y|x, w) p(w|D_n) dw.

The Bayesian generalisation error associated to the Bayesian predictor is defined as the Kullback-Leibler distance to the true conditional distribution B_g(n) := D_KL(q‖p∗) = ∫ q(y|x) q(x) log( q(y|x) / p∗(y|x) ) dy dx.

If some fundamental conditions are satisfied (Definition 6.1 and Definition 6.3 of Watanabe (2009)), then by Theorem 6.8 of loc. cit. there exists a random variable B∗_g such that as n → ∞, E[nB_g(n)] converges to E[B∗_g]. In particular, by Theorem 6.10 of Watanabe (2009), E[B∗_g] = λ." }, { "heading": "B HYPERPARAMETERS", "text": "The hyperparameters for the various synthesis tasks are contained in Table 3. The number of samples is R in Algorithm 1 and the number of datasets is |T|. Samples are taken according to the Dirichlet distribution, a probability distribution over the simplex, which is controlled by the concentration. When the concentration is a constant across all dimensions, as is assumed here, this corresponds to a density which is symmetric about the uniform probability mass function occurring in the centre of the simplex. The value α = 1.0 corresponds to the uniform distribution over the simplex. Finally, the chain temperature controls the default β value, i.e. all inverse temperature values are centered around 1/T where T is the chain temperature." }, { "heading": "C THE SHIFT MACHINE", "text": "The pseudo-UTM U is a complicated Turing machine, and the models p(y|x, w) of Section 2 are therefore not easy to analyse by hand. To illustrate the kind of geometry that appears, we study the simple Turing machine shiftMachine of Clift & Murfet (2018) and formulate an associated statistical learning problem. The tape alphabet is Σ = {␣, A, B, 0, 1, 2} and the input to the machine will be a string of the form n a_1 a_2 a_3 where n is called the counter and a_i ∈ {A, B}. The transition function, given in loc. cit., will move the string of A's and B's leftwards by n steps and fill the right hand end of the string with A's, keeping the string length invariant. For example, if 2BAB is the input to M, the output will be 0BAA.

Set W = ∆{0, 2} × ∆{A, B} and view w = (h, k) ∈ W as representing a probability distribution (1 − h) · 0 + h · 2 for the counter and (1 − k) · B + k · A for a_1. The model is

p(y | x = (a_2, a_3), w) = (1 − h)² k · A + (1 − h)² (1 − k) · B + Σ_{i=2}^{3} (2 choose i−1) h^{i−1} (1 − h)^{3−i} · a_i.

This model is derived by propagating uncertainty through shiftMachine in the same way that p(y|x, w) is derived from ∆step^t in Section 2 by propagating uncertainty through U. We assume that some distribution q(x) over {A, B}² is given.

Example C.1. Suppose q(y|x) = p(y|x, w_0) where w_0 = (1, 1). It is easy to see that

K(w) = −(1/4) Σ_{a_2,a_3} log p(y = a_3 | x = (a_2, a_3), w) = −(1/2) log[ g(h, k) ]

where g(h, k) = ((1 − h)² k + h²)((1 − h)² (1 − k) + h²) is a polynomial in w. Hence

W_0 = {(h, k) ∈ W : g(h, k) = 1} = V(g − 1) ∩ [0, 1]²

is a semi-algebraic variety, that is, it is defined by polynomial equations and inequalities.
Here V(h) denotes the vanishing locus of a function h.

Example C.2. Suppose q(AB) = 1 and q(y|x = AB) = (1/2)A + (1/2)B. Then the Kullback-Leibler divergence is K(h, k) = −(1/2) log(4f(1 − f)) where f = (1 − h)² k + 2h(1 − h). Hence ∇K = (f − 1/2) (1/(f(1 − f))) ∇f. Note that f has no critical points, and so ∇K = 0 at (h, k) ∈ (0, 1)² if and only if f(h, k) = 1/2. Since K is non-negative, any w ∈ W_0 satisfies ∇K(w) = 0 and so

W_0 = [0, 1]² ∩ V(4f(1 − f) − 1) = [0, 1]² ∩ V(f − 1/2)

is semi-algebraic. Note that the curve f = 1/2 is regular while the curve 4f(1 − f) = 1 is singular, and it is the geometry of the singular curve that is related to the behaviour of K. This curve is shown in Figure 5. It is straightforward to check that the determinant of the Hessian of K is identically zero on W_0, so that every point on W_0 is a degenerate critical point of K." }, { "heading": "D GENERAL SOLUTION FOR DETERMINISTIC SYNTHESIS PROBLEMS", "text": "In this section we consider the case of a deterministic synthesis problem q(x, y) which is finitely supported in the sense that there exists a finite set X ⊆ Σ∗ such that q(x) = c for all x ∈ X and q(x) = 0 for all x ∉ X. We first need to discuss the coordinates on the parameter space W of (3). To specify a point on W is to specify, for each pair (σ, q) ∈ Σ × Q (that is, for each tuple on the description tape), a triple of probability distributions

Σ_{σ′∈Σ} x^{σ,q}_{σ′} · σ′ ∈ ∆Σ ,  Σ_{q′∈Q} y^{σ,q}_{q′} · q′ ∈ ∆Q ,  Σ_{d∈{L,S,R}} z^{σ,q}_{d} · d ∈ ∆{L, S, R} .

The space W of distributions is therefore contained in the affine space with coordinate ring

R_W = R[ {x^{σ,q}_{σ′}}_{σ,q,σ′}, {y^{σ,q}_{q′}}_{σ,q,q′}, {z^{σ,q}_{d}}_{σ,q,d} ] .

The function F^x = ∆step^t(x, −) : W −→ ∆Q is polynomial (Clift & Murfet, 2018, Proposition 4.2) and for s ∈ Q we denote by F^x_s ∈ R_W the polynomial computing the associated component of the function F^x. Let ∂W denote the boundary of the manifold with corners W, that is, the set of all points of W where at least one of the coordinate functions given above vanishes:

∂W = V( ∏_{σ,q} [ ∏_{σ′∈Σ} x^{σ,q}_{σ′} ∏_{q′∈Q} y^{σ,q}_{q′} ∏_{d∈{L,S,R}} z^{σ,q}_{d} ] )

where V(h) denotes the vanishing locus of h.

Lemma D.1. W_0 ≠ W.

Proof. Choose x ∈ X with q(x) > 0 and let y be such that q(y|x) = 1. Let w ∈ W^code be the code for the Turing machine which ignores the symbol under the head and the current state, transitions into some fixed state s ≠ y, and stays. Then w ∉ W_0.

Lemma D.2. The set W_0 is semi-algebraic and W_0 ⊆ ∂W.

Proof. Given x ∈ Σ∗ with q(x) > 0 we write y = y(x) for the unique state with q(x, y) ≠ 0. In this notation the Kullback-Leibler divergence is

K(w) = Σ_{x∈X} c D_KL(y ‖ F^x(w)) = −c Σ_{x∈X} log F^x_y(w) = −c log ∏_{x∈X} F^x_y(w) .

Hence

W_0 = W ∩ ⋂_{x∈X} V(1 − F^x_y(w))

is semi-algebraic.

Recall that the function ∆step^t is associated to an encoding of the UTM in linear logic by the Sweedler semantics (Clift & Murfet, 2018) and the particular polynomials involved have a form that is determined by the details of that encoding (Clift & Murfet, 2018, Proposition 4.3). From the design of our UTM we obtain positive integers l_σ, m_q, n_d for σ ∈ Σ, q ∈ Q, d ∈ {L, S, R} and a function π : Θ −→ Q where

Θ = ∏_{σ,q} Σ^{l_σ} × Q^{m_q} × {L, S, R}^{n_d} .

We represent elements of Θ by tuples (µ, ζ, ξ) ∈ Θ where µ(σ, q, i) ∈ Σ for σ ∈ Σ, q ∈ Q and 1 ≤ i ≤ l_σ, and similarly ζ(σ, q, j) ∈ Q and ξ(σ, q, k) ∈ {L, S, R}. The polynomial F^x_s is

F^x_s = Σ_{(µ,ζ,ξ)∈Θ} δ(s = π(µ, ζ, ξ)) ∏_{σ,q} [ ∏_{i=1}^{l_σ} x^{σ,q}_{µ(σ,q,i)} ∏_{j=1}^{m_q} y^{σ,q}_{ζ(σ,q,j)} ∏_{k=1}^{n_d} z^{σ,q}_{ξ(σ,q,k)} ]

where δ is a Kronecker delta. 
With this in hand we may compute

W_0 = W ∩ ⋂_{x∈X} V(1 − F^x_y(w)) = W ∩ ⋂_{x∈X} ⋂_{s≠y} V(F^x_s(w)) .

But F^x_s is a polynomial with non-negative integer coefficients, which takes values in [0, 1] for w ∈ W. Hence it vanishes on w if and only if, for each triple (µ, ζ, ξ) with s = π(µ, ζ, ξ), one or more of the coordinate functions x^{σ,q}_{µ(σ,q,i)}, y^{σ,q}_{ζ(σ,q,j)}, z^{σ,q}_{ξ(σ,q,k)} vanishes on w.

The desired conclusion follows unless for every x ∈ X and (µ, ζ, ξ) ∈ Θ we have π(µ, ζ, ξ) = y, so that F^x_s = 0 for all s ≠ y. But in this case W_0 = W, which contradicts Lemma D.1." }, { "heading": "E STAGED PSEUDO-UTM", "text": "Simulating a Turing machine M with tape alphabet Σ and set of states Q on a standard UTM requires the specification of an encoding of Σ and Q in the tape alphabet of the UTM. From the point of view of exploring the geometry of program synthesis, this additional complexity is uninteresting and so here we consider a staged pseudo-UTM whose alphabet is

Σ_UTM = Σ ∪ Q ∪ {L, R, S} ∪ {X, ␣}

where the union is disjoint and ␣ is the blank symbol (which is distinct from the blank symbol of M). Such a machine is capable of simulating any machine with tape alphabet Σ and set of states Q, but cannot simulate arbitrary machines and is not a UTM in the standard sense. The adjective staged refers to the design of the UTM, which we now explain. The set of states is

Q_UTM = { compSymbol, compState, copySymbol, copyState, copyDir, ¬compState, ¬copySymbol, ¬copyState, ¬copyDir, updateSymbol, updateState, updateDir, resetDescr }.

The UTM has four tapes numbered from 0 to 3, which we refer to as the description tape, the staging tape, the state tape and the working tape respectively. Initially the description tape contains a string of the form

X s_0 q_0 s′_0 q′_0 d_0 s_1 q_1 s′_1 q′_1 d_1 . . . s_N q_N s′_N q′_N d_N X,

corresponding to the tuples which define M, with the tape head initially on s_0. The staging tape is initially the string XXX with the tape head over the second X. The state tape has a single square containing some distribution in ∆Q, corresponding to the initial state of the simulated machine M, with the tape head over that square. Each square on the working tape is some distribution in ∆Σ, with only finitely many distributions different from ␣. The UTM is initialised in state compSymbol.

The operation of the UTM is outlined in Figure 6. It consists of two phases: the scan phase (middle and right path) and the update phase (left path). During the scan phase, the description tape is scanned from left to right, and the first two squares of each tuple are compared to the contents of the working tape and state tape respectively. If both agree, then the last three symbols of the tuple are written to the staging tape (middle path); otherwise the tuple is ignored (right path). Once the X at the end of the description tape is reached, the UTM begins the update phase, wherein the three symbols on the staging tape are used to print the new symbol on the working tape, to update the simulated state on the state tape, and to move the working tape head in the appropriate direction. The tape head on the description tape is then reset to the initial X.

Remark E.1. One could imagine a variant of the UTM which did not include a staging tape, instead performing the actions on the work and state tape directly upon reading the appropriate tuple on the description tape. 
However, this is problematic when the contents of the state or working tape are distributions, as the exact time-step of the simulated machine can become unsynchronised, increasing entropy. As a simple example, suppose that the contents of the state tape were $0.5q + 0.5p$, and the symbol under the working tape head was $s$. Upon encountering the tuple $s\,q\,s'\,q'\,R$, the machine would enter a superposition of states corresponding to the tape head having both moved right and not moved, complicating the future behaviour.

We define the period of the UTM to be the smallest nonzero time interval taken for the tape head on the description tape to return to the initial $X$, and the machine to reenter the state compSymbol. If the number of tuples on the description tape is $N$, then the period of the UTM is $T = 10N + 5$. Moreover, other than the working tape, the positions of the tape heads are $T$-periodic." }, { "heading": "F SMOOTH TURING MACHINES", "text": "Let $U$ be the staged pseudo-UTM of Appendix E. In defining the model $p(y|x, w)$ associated to a synthesis problem in Section 2 we use a smooth relaxation $\Delta\operatorname{step}^t$ of the step function of $U$. In this appendix we define the smooth relaxation of any Turing machine following Clift & Murfet (2018).

Let $M = (\Sigma, Q, \delta)$ be a Turing machine with a finite set of symbols $\Sigma$, a finite set of states $Q$ and transition function $\delta : \Sigma \times Q \to \Sigma \times Q \times \{-1, 0, 1\}$. We write $\delta_i = \operatorname{proj}_i \circ\, \delta$ for the $i$th component of $\delta$ for $i \in \{1, 2, 3\}$. For the blank symbol $\square \in \Sigma$, let

$$\Sigma^{\mathbb{Z},\square} = \{f : \mathbb{Z} \to \Sigma \mid f(i) = \square \text{ except for finitely many } i\}.$$

We can associate to $M$ a discrete dynamical system $\hat{M} = (\Sigma^{\mathbb{Z},\square} \times Q, \operatorname{step})$ where

$$\operatorname{step} : \Sigma^{\mathbb{Z},\square} \times Q \to \Sigma^{\mathbb{Z},\square} \times Q$$

is the step function defined by

$$\operatorname{step}(\sigma, q) = \Big(\alpha_{\delta_3(\sigma_0, q)}\big(\ldots, \sigma_{-2}, \sigma_{-1}, \delta_1(\sigma_0, q), \sigma_1, \sigma_2, \ldots\big),\; \delta_2(\sigma_0, q)\Big)$$

with shift map $\alpha_{\delta_3(\sigma_0,q)}(\sigma)_u = \sigma_{u + \delta_3(\sigma_0, q)}$.

Let $X$ be a finite set. The standard $X$-simplex is defined as

$$\Delta X = \Big\{\sum_{x \in X} \lambda_x x \in \mathbb{R}X \;\Big|\; \sum_x \lambda_x = 1 \text{ and } \lambda_x \ge 0 \text{ for all } x \in X\Big\} \qquad (8)$$

where $\mathbb{R}X$ is the free vector space on $X$. We often identify $X$ with the vertices of $\Delta X$ under the canonical inclusion $i : X \to \Delta X$ given by $i(x) = \sum_{x' \in X} \delta_{x = x'}\, x'$. For example $\{0, 1\} \subset \Delta(\{0, 1\}) \simeq [0, 1]$. A tape square is said to be at relative position $u \in \mathbb{Z}$ if it is labelled $u$ after enumerating all squares in increasing order from left to right such that the square currently under the head is assigned zero. Consider the following random variables at times $t \ge 0$:

• $Y_{u,t} \in \Sigma$: the content of the tape square at relative position $u$ at time $t$.
• $S_t \in Q$: the internal state at time $t$.
• $Wr_t \in \Sigma$: the symbol to be written, in the transition from time $t$ to $t + 1$.
• $Mv_t \in \{L, S, R\}$: the direction to move, in the transition from time $t$ to $t + 1$.

We call a pair $(A, \phi)$, consisting of a smooth manifold with corners $A$ together with a smooth transformation $\phi : A \to A$, a smooth dynamical system. Definition F.1. Let $M = (\Sigma, Q, \delta)$ be a Turing machine. 
The smooth relaxation of $M$ is the smooth dynamical system $\big((\Delta\Sigma)^{\mathbb{Z},\square} \times \Delta Q,\; \Delta\operatorname{step}\big)$ where

$$\Delta\operatorname{step} : (\Delta\Sigma)^{\mathbb{Z},\square} \times \Delta Q \to (\Delta\Sigma)^{\mathbb{Z},\square} \times \Delta Q$$

is a smooth transformation sending a state $\big(\{P(Y_{u,t})\}_{u \in \mathbb{Z}}, P(S_t)\big)$ to $\big(\{P(Y_{u,t+1})\}_{u \in \mathbb{Z}}, P(S_{t+1})\big)$ determined by the equations

• $P(Mv_t = d \mid C) = \sum_{\sigma,q} \delta_{\delta_3(\sigma,q) = d}\, P(Y_{0,t} = \sigma \mid C)\, P(S_t = q \mid C)$,
• $P(Wr_t = \sigma \mid C) = \sum_{\sigma',q} \delta_{\delta_1(\sigma',q) = \sigma}\, P(Y_{0,t} = \sigma' \mid C)\, P(S_t = q \mid C)$,
• $P(S_{t+1} = q \mid C) = \sum_{\sigma,q'} \delta_{\delta_2(\sigma,q') = q}\, P(Y_{0,t} = \sigma \mid C)\, P(S_t = q' \mid C)$,
• $P(Y_{u,t+1} = \sigma \mid C) = P(Mv_t = L \mid C)\big(\delta_{u \neq 1} P(Y_{u-1,t} = \sigma \mid C) + \delta_{u = 1} P(Wr_t = \sigma \mid C)\big)$
$\quad + P(Mv_t = S \mid C)\big(\delta_{u \neq 0} P(Y_{u,t} = \sigma \mid C) + \delta_{u = 0} P(Wr_t = \sigma \mid C)\big)$
$\quad + P(Mv_t = R \mid C)\big(\delta_{u \neq -1} P(Y_{u+1,t} = \sigma \mid C) + \delta_{u = -1} P(Wr_t = \sigma \mid C)\big)$,

where $C \in (\Delta\Sigma)^{\mathbb{Z},\square} \times \Delta Q$ is an initial state.

We will call the smooth relaxation of a Turing machine a smooth Turing machine. A smooth Turing machine encodes uncertainty in the initial configuration of a Turing machine together with an update rule for how to propagate this uncertainty over time. We interpret the smooth step function as updating the state of belief of a "naive" Bayesian observer. This nomenclature comes from the assumption of conditional independence between random variables in our probability functions. Remark F.2. Propagating uncertainty using standard probability leads to a smooth dynamical system which encodes the state evolution of an "ordinary" Bayesian observer of the Turing machine. This requires the calculation of various joint distributions, which makes such an extension computationally difficult to work with. Computation aside, the naive probabilistic extension is justified from the point of view of derivatives of algorithms according to the denotational semantics of differential linear logic. See Clift & Murfet (2018) for further details.

We call the smooth extension of a universal Turing machine a smooth universal Turing machine. Recall that the staged pseudo-UTM $U$ has four tapes: the description tape, the staging tape, the state tape and the working tape. The smooth relaxation of $U$ is a smooth dynamical system

$$\Delta\operatorname{step}_U : \big[(\Delta\Sigma_{UTM})^{\mathbb{Z},\square}\big]^4 \times \Delta Q_{UTM} \to \big[(\Delta\Sigma_{UTM})^{\mathbb{Z},\square}\big]^4 \times \Delta Q_{UTM}\,.$$

If we use the staged pseudo-UTM to simulate a Turing machine with tape alphabet $\Sigma \subseteq \Sigma_{UTM}$ and states $Q \subseteq \Sigma_{UTM}$, then with some determined initial state the function $\Delta\operatorname{step}$ restricts to

$$\Delta\operatorname{step}_U : (\Delta\Sigma)^{\mathbb{Z},\square} \times W \times \Delta Q \times X \longrightarrow (\Delta\Sigma)^{\mathbb{Z},\square} \times W \times \Delta Q \times X$$

where the first factor is the configuration of the work tape, $W$ is as in (3) and

$$X = \big[(\Delta\Sigma_{UTM})^{\mathbb{Z},\square}\big] \times \Delta Q_{UTM}$$

where the first factor is the configuration of the staging tape. Since $U$ is periodic of period $T = 10N + 5$ (Appendix E), the iterated function $(\Delta\operatorname{step}_U)^T$ takes an input with staging tape in its default state $XXX$ and UTM state compSymbol and returns a configuration with the same staging tape and state, but with the configuration of the work tape, description tape and state tape updated by one complete simulation step. That is,

$$(\Delta\operatorname{step}_U)^T(x, w, q, XXX, \text{compSymbol}) = \big(F(x, w, q), XXX, \text{compSymbol}\big)$$

for some smooth function

$$F : (\Delta\Sigma)^{\mathbb{Z},\square} \times W \times \Delta Q \longrightarrow (\Delta\Sigma)^{\mathbb{Z},\square} \times W \times \Delta Q\,. \qquad (9)$$

Finally we can define the function $\Delta\operatorname{step}^t$ of (4). We assume all Turing machines are initialised in some common state $\text{init} \in Q$. Definition F.3. Given $t \ge 0$ we define $\Delta\operatorname{step}^t : \Sigma^* \times W \longrightarrow \Delta Q$ by

$$\Delta\operatorname{step}^t(x, w) = \Pi_Q F^t(x, w, \text{init})$$

where $\Pi_Q$ is the projection onto $\Delta Q$.
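To make the update rule of Definition F.1 concrete, the following is a minimal numerical sketch of one application of $\Delta\operatorname{step}$ on a finite tape window. The two-symbol, two-state toy machine, all variable names, and the periodic boundary (in place of the infinite blank tape) are our own illustrative assumptions, not part of the construction above.

```python
import numpy as np

# Toy machine: symbols {0, 1}, states {0, 1}; delta maps (symbol, state)
# to (write symbol, new state, move) with move in {0: L, 1: S, 2: R}.
delta = {(0, 0): (1, 1, 2), (1, 0): (0, 0, 1),
         (0, 1): (0, 0, 0), (1, 1): (1, 1, 2)}

def smooth_step(tape, state):
    """One naive-Bayesian update: tape is (n, |Sigma|) with row u giving
    P(Y_u), head at index n // 2; state is (|Q|,) giving P(S)."""
    h = tape.shape[0] // 2
    joint = np.outer(tape[h], state)          # P(Y_0 = s, S = q), independence
    mv = np.zeros(3)
    wr = np.zeros_like(tape[h])
    st = np.zeros_like(state)
    for (s, q), (w, q2, d) in delta.items():
        mv[d] += joint[s, q]                  # P(Mv_t = d)
        wr[w] += joint[s, q]                  # P(Wr_t = sigma)
        st[q2] += joint[s, q]                 # P(S_{t+1} = q)
    written = tape.copy()
    written[h] = wr                           # square under the head rewritten
    # In relative coordinates, moving the head left shifts tape contents
    # to higher positions, and vice versa (wrap-around is a simplification).
    new_tape = (mv[0] * np.roll(written, 1, axis=0)     # Mv = L
                + mv[1] * written                       # Mv = S
                + mv[2] * np.roll(written, -1, axis=0)) # Mv = R
    return new_tape, st

tape = np.tile([0.9, 0.1], (7, 1)).astype(float)  # mostly symbol 0 everywhere
state = np.array([0.5, 0.5])                      # uncertain initial state
tape, state = smooth_step(tape, state)            # rows still sum to one
```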
" }, { "heading": "G DIRECT SIMULATION", "text": "For computational efficiency in our PyTorch implementation of the staged pseudo-UTM we implement $F$ of (9) rather than $\Delta\operatorname{step}_U$. We refer to this as direct simulation, since it means that we update in one step the state and working tape of the UTM for a full cycle, where a cycle consists of $T = 10N + 5$ steps of the UTM.

Let $S(t)$ and $Y_u(t)$ be random variables describing the contents of the state tape and working tape in relative positions $0, u$ respectively after $t \ge 0$ time steps of the UTM. We define $\tilde{S}(t) := S(4 + Tt)$ and $\tilde{Y}_u(t) := Y_u(4 + Tt)$ where $t \ge 0$ and $u \in \mathbb{Z}$. The task then is to define functions $f, g$ such that

$$\tilde{S}(t+1) = f(\tilde{S}(t))\,, \qquad \tilde{Y}_u(t+1) = g(\tilde{Y}_u(t))\,.$$

The functional relationship is given as follows: for $1 \le i \le N$ indexing tuples on the description tape, while processing that tuple, the UTM is in a state distribution $\lambda_i \cdot \bar{q} + (1 - \lambda_i) \cdot \neg\bar{q}$ where $\bar{q} \in \{\text{copySymbol, copyState, copyDir}\}$. Given the initial state of the description tape, we assume uncertainty about $s', q', d$ only. This determines a map

$$\theta : \{1, \ldots, N\} \to \Sigma \times Q$$

where the description tape at tuple number $i$ is given by $\theta(i)_1\, \theta(i)_2\, P(s'_i)\, P(q'_i)\, P(d_i)$. We define the conditionally independent joint distribution between $\{\tilde{Y}_{0,t-1}, \tilde{S}_{t-1}\}$ by

$$\lambda_i = \sum_{\sigma \in \Sigma} \delta_{\theta(i)_1 = \sigma}\, P(\tilde{Y}_{0,t-1} = \sigma) \cdot \sum_{q \in Q} \delta_{\theta(i)_2 = q}\, P(\tilde{S}_{t-1} = q) = P(\tilde{Y}_{0,t-1} = \theta(i)_1) \cdot P(\tilde{S}_{t-1} = \theta(i)_2)\,.$$

We then calculate a recursive set of equations for $0 \le j \le N$ describing distributions $P(\hat{s}_j)$, $P(\hat{q}_j)$ and $P(\hat{d}_j)$ on the staging tape after processing all tuples up to and including tuple $j$. These are given by $P(\hat{s}_0) = P(\hat{q}_0) = P(\hat{d}_0) = 1 \cdot X$ and

$$P(\hat{s}_i) = \sum_{\sigma \in \Sigma} \big\{\lambda_i \cdot P(s'_i = \sigma) + (1 - \lambda_i) \cdot P(\hat{s}_{i-1} = \sigma)\big\} \cdot \sigma + (1 - \lambda_i) \cdot P(\hat{s}_{i-1} = X) \cdot X$$
$$P(\hat{q}_i) = \sum_{q \in Q} \big\{\lambda_i \cdot P(q'_i = q) + (1 - \lambda_i) \cdot P(\hat{q}_{i-1} = q)\big\} \cdot q + (1 - \lambda_i) \cdot P(\hat{q}_{i-1} = X) \cdot X$$
$$P(\hat{d}_i) = \sum_{a \in \{L,R,S\}} \big\{\lambda_i \cdot P(d_i = a) + (1 - \lambda_i) \cdot P(\hat{d}_{i-1} = a)\big\} \cdot a + (1 - \lambda_i) \cdot P(\hat{d}_{i-1} = X) \cdot X\,.$$

Let $A_\sigma = P(\hat{s}_N = X) \cdot P(\tilde{Y}_{0,t-1} = \sigma) + P(\hat{s}_N = \sigma)$. In terms of the above distributions

$$P(\tilde{S}_t) = \sum_{q \in Q} \big(P(\hat{q}_N = X) \cdot P(\tilde{S}_{t-1} = q) + P(\hat{q}_N = q)\big) \cdot q$$

and

$$P(\tilde{Y}_{u,t} = \sigma) = P(\hat{d}_N = L)\big(\delta_{u \neq 1} P(\tilde{Y}_{u-1,t-1} = \sigma) + \delta_{u = 1} A_\sigma\big) + P(\hat{d}_N = R)\big(\delta_{u \neq -1} P(\tilde{Y}_{u+1,t-1} = \sigma) + \delta_{u = -1} A_\sigma\big) + P(\hat{d}_N = S)\big(\delta_{u \neq 0} P(\tilde{Y}_{u,t-1} = \sigma) + \delta_{u = 0} A_\sigma\big) + P(\hat{d}_N = X)\big(\delta_{u \neq 0} P(\tilde{Y}_{u,t-1} = \sigma) + \delta_{u = 0} A_\sigma\big)\,.$$

Using these equations, we can state efficient update rules for the staging tape. We have

$$P(\hat{s}_N = X) = \prod_{j=1}^N (1 - \lambda_j)\,, \qquad P(\hat{s}_N = \sigma) = \sum_{j=1}^N \lambda_j \cdot P(s'_j = \sigma) \prod_{l=j+1}^N (1 - \lambda_l)$$
$$P(\hat{q}_N = X) = \prod_{j=1}^N (1 - \lambda_j)\,, \qquad P(\hat{q}_N = q) = \sum_{j=1}^N \lambda_j \cdot P(q'_j = q) \prod_{l=j+1}^N (1 - \lambda_l)$$
$$P(\hat{d}_N = X) = \prod_{j=1}^N (1 - \lambda_j)\,, \qquad P(\hat{d}_N = a) = \sum_{j=1}^N \lambda_j \cdot P(d_j = a) \prod_{l=j+1}^N (1 - \lambda_l)\,.$$

To enable efficient computation, we can express these equations using tensor calculus. Let $\lambda = (\lambda_1, \ldots, \lambda_N) \in \mathbb{R}^N$. We view $\theta$ as a tensor $\theta = \sum_{i=1}^N i \otimes \theta(i)_1 \otimes \theta(i)_2 \in \mathbb{R}^N \otimes \mathbb{R}\Sigma \otimes \mathbb{R}Q$. Then

$$\theta \,\lrcorner\, \big(P(\tilde{Y}_{0,t-1}) \otimes P(\tilde{S}_{t-1})\big) = \sum_{i=1}^N i \cdot P(\tilde{Y}_{0,t-1} = \theta(i)_1) \cdot P(\tilde{S}_{t-1} = \theta(i)_2) = \lambda\,.$$

If we view $P(s'_* = \bullet) \in \mathbb{R}^N \otimes \mathbb{R}\Sigma$ as a tensor, then

$$P(\hat{s}_N) = \sum_{j=1}^N P(s'_j = \bullet) \cdot \lambda_j \prod_{l=j+1}^N (1 - \lambda_l) = \lambda \cdot \Big(\prod_{l=2}^N (1 - \lambda_l),\; \prod_{l=3}^N (1 - \lambda_l),\; \ldots,\; (1 - \lambda_N),\; 1\Big)$$

can be expressed in terms of the vector $\lambda$ only. Similarly, $P(q'_* = \bullet) \in \mathbb{R}^N \otimes \mathbb{R}Q$ with

$$P(\hat{q}_N) = \sum_{j=1}^N P(q'_j = \bullet) \cdot \lambda_j \prod_{l=j+1}^N (1 - \lambda_l) = \lambda \cdot \Big(\prod_{l=2}^N (1 - \lambda_l),\; \ldots,\; (1 - \lambda_N),\; 1\Big)$$

and $P(d_* = \bullet) \in \mathbb{R}^N \otimes \mathbb{R}^3$ with

$$P(\hat{d}_N) = \sum_{j=1}^N P(d_j = \bullet) \cdot \lambda_j \prod_{l=j+1}^N (1 - \lambda_l) = \lambda \cdot \Big(\prod_{l=2}^N (1 - \lambda_l),\; \ldots,\; (1 - \lambda_N),\; 1\Big)\,.$$
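The closed-form staging-tape rules above reduce to a cumulative product over $\lambda$. The following sketch (our own variable names, numpy in place of the paper's PyTorch implementation) computes $P(\hat{s}_N)$ this way; the same function serves for $P(\hat{q}_N)$ and $P(\hat{d}_N)$ by passing $P(q'_i)$ or $P(d_i)$ instead.

```python
import numpy as np

def staging_distribution(p_y0, p_s, theta1, theta2, p_prime):
    """P(s_hat_N) via the efficient staging-tape update rule.

    p_y0:    (|Sigma|,) distribution of the symbol under the work head
    p_s:     (|Q|,) distribution of the simulated state
    theta1:  (N,) ints, symbol component theta(i)_1 of each tuple
    theta2:  (N,) ints, state component theta(i)_2 of each tuple
    p_prime: (N, K) rows P(s'_i) (or P(q'_i), P(d_i)) on the description tape
    """
    lam = p_y0[theta1] * p_s[theta2]      # lambda_i = P(Y0 = th1) P(S = th2)
    one_minus = 1.0 - lam
    # suffix products w_j = prod_{l > j} (1 - lambda_l), with w_N = 1
    w = np.append(np.cumprod(one_minus[::-1])[::-1][1:], 1.0)
    p_X = one_minus.prod()                # P(s_hat_N = X): no tuple matched
    p_val = ((lam * w)[:, None] * p_prime).sum(axis=0)
    return p_X, p_val                     # p_X + p_val.sum() == 1

# e.g. N = 3 tuples over Sigma = Q = {0, 1}
p_X, p_val = staging_distribution(
    np.array([0.9, 0.1]), np.array([0.5, 0.5]),
    np.array([0, 1, 0]), np.array([0, 0, 1]),
    np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]))
```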
" } ]
2020
null
SP:8ddb96d9abf2c524bd664360a755cbe76703c109
[ "The paper proposes to use student-teacher training as a way of knowledge transfer between neural networks with different architectures without access to the source data. Instead the authors propose to use a separate dataset to transfer the knowledge of the teacher network and a potential different dataset for fine-tuning. The paper evaluates their method with various segmentation architectures by pretraining a DeepLab v3+ on an internal breast lesion dataset and testing transfer and fine-tuning using different medical datasets. The authors find that knowledge transfer performs similar to regular transfer learning in most combinations of datasets." ]
Conventional transfer learning leverages weights of pre-trained networks, but mandates similar neural architectures. Alternatively, knowledge distillation can transfer knowledge between heterogeneous networks but often requires access to the original training data or additional generative networks. Knowledge transfer between networks can be improved by being agnostic to the choice of network architecture and reducing the dependence on original training data. We propose a knowledge transfer approach from a teacher to a student network wherein we train the student on an independent transferal dataset, whose annotations are generated by the teacher. Experiments were conducted on five state-of-the-art networks for semantic segmentation and seven datasets across three imaging modalities. We studied knowledge transfer from a single teacher, combination of knowledge transfer and fine-tuning, and knowledge transfer from multiple teachers. The student model with a single teacher achieved performance similar to the teacher's, and the student model with multiple teachers achieved better performance than the teachers. The salient features of our algorithm include: 1) no need for original training data or generative networks, 2) knowledge transfer between different architectures, 3) ease of implementation for downstream tasks by using the downstream task dataset as the transferal dataset, 4) knowledge transfer of an ensemble of models, trained independently, into one student model. Extensive experiments demonstrate that the proposed algorithm is effective for knowledge transfer and easily tunable.
[]
[ { "authors": [ "Walid Al-Dhabyani", "Mohammed Gomaa", "Hussien Khaled", "Aly Fahmy" ], "title": "Dataset of breast ultrasound images", "venue": "Data in Brief,", "year": 2020 }, { "authors": [ "Liang-Chieh Chen", "Yukun Zhu", "George Papandreou", "Florian Schroff", "Hartwig Adam" ], "title": "Encoderdecoder with atrous separable convolution for semantic image segmentation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Vladimir Iglovikov", "Alexey Shvets" ], "title": "Ternausnet: U-Net with VGG11 encoder pre-trained on imagenet for image segmentation", "venue": "arXiv preprint arXiv:1801.05746,", "year": 2018 }, { "authors": [ "Ata Jodeiri", "Reza A Zoroofi", "Yuta Hiasa", "Masaki Takao", "Nobuhiko Sugano", "Yoshinobu Sato", "Yoshito Otake" ], "title": "Region-based convolution neural network approach for accurate segmentation of pelvic radiograph", "venue": "26th National and 4th International Iranian Conference on Biomedical Engineering (ICBME),", "year": 2019 }, { "authors": [ "Jaeyong Kang", "Jeonghwan Gwak" ], "title": "Ensemble of instance segmentation models for polyp segmentation in colonoscopy", "venue": "images. IEEE Access,", "year": 2019 }, { "authors": [ "Alexander Kirillov", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Panoptic feature pyramid networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Tianhong Li", "Jianguo Li", "Zhuang Liu", "Changshui Zhang" ], "title": "Knowledge distillation from few samples", "venue": null, "year": 2018 }, { "authors": [ "Geert Litjens", "Thijs Kooi", "Babak Ehteshami Bejnordi", "Arnaud Arindra Adiyoso Setio", "Francesco Ciompi", "Mohsen Ghafoorian", "Jeroen Awm Van Der Laak", "Bram Van Ginneken", "Clara I Sánchez" ], "title": "A survey on deep learning in medical image analysis", "venue": "Medical Image Analysis,", "year": 2017 }, { "authors": [ "Raphael Gontijo Lopes", "Stefano Fenu", "Thad Starner" ], "title": "Data-free knowledge distillation for deep neural networks", "venue": "arXiv preprint arXiv:1710.07535,", "year": 2017 }, { "authors": [ "Saman Motamed", "Isha Gujrathi", "Dominik Deniffel", "Anton Oentoro", "Masoom A Haider", "Farzad Khalvati" ], "title": "A transfer learning approach for automated segmentation of prostate whole gland and transition zone in diffusion weighted mri", "venue": "arXiv preprint arXiv:1909.09541,", "year": 2019 }, { "authors": [ "Ozan Oktay", "Jo Schlemper", "Loic Le Folgoc", "Matthew Lee", "Mattias Heinrich", "Kazunari Misawa", "Kensaku Mori", "Steven McDonagh", "Nils Y Hammerla", "Bernhard Kainz" ], "title": "Attention U-Net: Learning where to look for the pancreas", "venue": "arXiv preprint arXiv:1804.03999,", "year": 2018 }, { "authors": [ "Maithra Raghu", "Chiyuan Zhang", "Jon Kleinberg", "Samy Bengio" ], "title": "Transfusion: Understanding transfer learning for medical imaging", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-Net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical Image Computing and Computer-assisted Intervention,", "year": 2015 }, { "authors": [ "H Scudder" ], 
"title": "Probability of error of some adaptive pattern-recognition machines", "venue": "IEEE Transactions on Information Theory,", "year": 1965 }, { "authors": [ "Chuanqi Tan", "Fuchun Sun", "Tao Kong", "Wenchang Zhang", "Chao Yang", "Chunfang Liu" ], "title": "A survey on deep transfer learning", "venue": "In International conference on artificial neural networks,", "year": 2018 }, { "authors": [ "Du Tran", "Lubomir Bourdev", "Rob Fergus", "Lorenzo Torresani", "Manohar Paluri" ], "title": "Learning spatiotemporal features with 3D convolutional networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Qizhe Xie", "Minh-Thang Luong", "Eduard Hovy", "Quoc V Le" ], "title": "Self-training with noisy student", "venue": null, "year": 2020 }, { "authors": [ "Springer" ], "title": "A APPENDIX A.1 DATA PRE-PROCESSING AND AUGMENTATION In addition to pre-processing, we employed five methods for data augmentation. Pre-processing: All the images were resized to 384× 384 and the color images in Skin Lesion", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning often requires a sufficiently large training dataset, which is expensive to build and not easy to share between users. For example, a big challenge with semantic segmentation of medical images is the limited availability of annotated data (Litjens et al., 2017). Due to ethical concerns and confidentiality constraints, medical datasets are not often released with the trained networks. This highlights the need for knowledge transfer between neural networks, wherein the original training dataset does not need to be accessed. On the other hand, according to the black-box metaphor in deep learning based methods, transferring knowledge is difficult between heterogeneous neural networks. To address these limitations, algorithms were proposed to reuse or share the knowledge of neural networks, such as network weight transfer (Tan et al., 2018), knowledge distillation (Hinton et al., 2015), federated learning (Yang et al., 2019), and self-training (Xie et al., 2020b).\nSome conventional algorithms directly transfer the weights of standard large models that were trained on natural image datasets for different tasks (Kang & Gwak, 2019; Motamed et al., 2019; Jodeiri et al., 2019; Raghu et al., 2019). For example, Iglovikov & Shvets (2018) adopted VGG11 pre-trained on ImageNet as the encoder of U-Net for 2D image segmentation. Similarly, the convolutional 3D (Tran et al., 2015), pre-trained on natural video datasets, was used as the encoder of 3D U-Net for the 3D MR (Magnetic Resonance) medical image segmentation (Zeng et al., 2017). Transferring the network weights generally requires adjustments to be made to the architecture of the receiver model, this in turn, limits the flexibility of the receiver network.\nAnother technique that involves knowledge transfer is federated learning (Yang et al., 2019) ; it has received attention for its capability to train a large-scale model in a decentralized manner without requiring users’ data. In general, federated learning approaches adopt the central model to capture the\nshared knowledge of all users by aggregating their gradients. Due to the difficulties in transferring knowledge between heterogeneous networks, federated learning often requires all devices, including both the central servers and local users, to use the same neural architecture (Xie et al., 2020a). To our best knowledge, there has been no federated learning system that uses heterogeneous networks.\nKnowledge distillation is the process of transferring the knowledge of a large neural network or an ensemble of neural networks (teacher) to a smaller network (student) (Hinton et al., 2015). Given a set of trained teacher models, one feeds training data to them and uses their predictions instead of the true labels to train the student model. For effective transfer of knowledge, however, it is essential that a reasonable fraction of the training examples are observable by the student (Li et al., 2018) or the metadata at each layer is provided (Lopes et al., 2017). Yoo et al. (2019) used a generative network to extract the knowledge of a teacher network, which generated labeled artificial images to train another network. As can be seen, Yoo et al.’s method had to train an additional generative network for each teacher network.\nDifferent from knowledge distillation, self-training aims to transfer knowledge to a more capable model. 
The self-training framework (Scudder, 1965) has three main steps: train a teacher model on labeled images; use the teacher to generate pseudo labels on unlabeled images; and train a student model on the combination of labeled and pseudo-labeled images. Xie et al. (2020b) proposed self-training with a noisy student for classification, which iterates this process a few times by treating the student as a teacher to relabel the unlabeled data and training a new student. These studies required labeled images for training the student; moreover, they implicitly required that the pseudo-labeled images be similar in content to the original labeled images.

Inspired by self-training, we propose a network-agnostic knowledge transfer algorithm for medical image segmentation. This algorithm transfers the knowledge of a teacher model to a student model by training the student on a transferal dataset whose annotations are generated by the teacher. The algorithm has the following characteristics: the transferal dataset requires no manual annotation and is independent of the teacher-training dataset; the student does not need to inherit the weights of the teacher, and as such, knowledge transfer can be conducted between heterogeneous neural architectures; it is straightforward to implement the algorithm with fine-tuning to solve downstream tasks, especially by using the downstream task dataset as the transferal dataset; and the algorithm is able to transfer the knowledge of an ensemble of independently trained models into one model.

We conducted extensive experiments on semantic segmentation using five state-of-the-art neural networks and seven datasets. The neural networks include DeepLabv3+ (Chen et al., 2018), U-Net (Ronneberger et al., 2015), AttU-Net (Oktay et al., 2018), SDU-Net (Wang et al., 2020), and Panoptic-FPN (Kirillov et al., 2019). Out of the seven datasets, four public datasets involve breast lesion, nerve structure, skin lesion, and natural image objects, and three internal/in-house datasets involve breast lesion (a single dataset with two splits) and thyroid nodule. Experiments showed that the proposed algorithm performed well for knowledge transfer on semantic image segmentation." }, { "heading": "2 ALGORITHM", "text": "The main goal of the proposed algorithm is to transfer the knowledge of one or more neural networks to an independent one, without access to the original training datasets. This section presents the proposed knowledge transfer algorithm for semantic segmentation in Algorithm 1 and its application with fine-tuning for a downstream task in Algorithm 2 (A.4).

The knowledge transfer algorithm first employs one or more teacher models to generate pseudo masks for the transferal dataset and then trains the student model on the pseudo-annotated transferal dataset. For an image $x$ in the transferal dataset $D$, we get the pseudo mask as the weighted average of the teacher models' outputs: $y = \sum_{T_i \in T} w_i \cdot T_i(x)$, where $w_i$ is the weight for the model $T_i$. The output of a model $T_i(x)$ is either a soft mask (pixel value varies from 0 to 1) or a binary mask (pixel value is 0 or 1). Since the ensemble of teacher models is not our primary focus, we simply set the weights equally for all teacher models.

We adopt two constraints to exclude images without salient targets, as shown in Figure 1. The first constraint is on the target pixel number in the pseudo mask, where target pixel indicates a pixel with a value above 0.5. We exclude the image (of size 384 × 384 pixels) if the target pixel number is less than a threshold of 256. The second constraint is on the gray-level entropy of the pseudo mask. A high entropy value implies the target and the background have no obvious difference, and vice versa. We exclude images with a gray-level entropy higher than a threshold of 2.5. We employ the second constraint only in the case that the teacher model outputs a soft mask rather than a binary mask. The full procedure is summarized in Algorithm 1.

Algorithm 1 Network-Agnostic Knowledge Transfer for Semantic Segmentation
Require: Teacher set T with trained models, randomly initialized student model S
Require: Transferal dataset D
Require: Target pixel number threshold α, gray-level entropy threshold β
1: for each datapoint x in D do
2: Obtain pseudo mask y = Σ_{Ti∈T} wi · Ti(x)
3: N ← number of target pixels (value above 0.5) in y
4: E ← gray-level entropy of y
5: if N < α or E > β then
6: Exclude x from D
7: else
8: Update x as (x, y)
9: end if
10: end for
11: Train student model S on pseudo-annotated D
12: return Trained S
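A minimal sketch of the pseudo-mask generation and filtering loop in Algorithm 1 follows. The histogram-based entropy estimate, the base-2 logarithm, and all function names are our assumptions rather than the authors' implementation; the entropy constraint applies only when the teachers output soft masks.

```python
import numpy as np

def gray_level_entropy(mask, bins=256):
    """Shannon entropy of the gray-level histogram of a soft mask in [0, 1]."""
    hist, _ = np.histogram(mask, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pseudo_annotate(images, teachers, weights=None, alpha=256, beta=2.5):
    """Return (image, pseudo mask) pairs that pass both constraints.

    Each teacher is assumed to map an image to a numpy soft mask in [0, 1].
    """
    if weights is None:
        weights = [1.0 / len(teachers)] * len(teachers)
    kept = []
    for x in images:
        y = sum(w * t(x) for w, t in zip(weights, teachers))  # weighted soft mask
        if (y > 0.5).sum() < alpha:        # constraint 1: too few target pixels
            continue
        if gray_level_entropy(y) > beta:   # constraint 2: no salient target
            continue
        kept.append((x, y))
    return kept
```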
In this algorithm, the teacher models and the student model can be independent of each other and do not rely on each other during the training or testing phase. The transferal dataset is independent of the teacher-training datasets, which are not observable for the student model. The student model gains the knowledge of the teacher through the transferal dataset. As a result, the student model is trained to work well on datasets similar to the teacher-training dataset, while it is not aimed at predicting the ground truth of the transferal dataset.

In the case that a small dataset Dt with ground truth is available for a downstream segmentation task (target task), it is straightforward to further fine-tune the student model that has been trained on the pseudo-annotated transferal dataset, as Algorithm 2 shows. The transferal dataset D is independent of the target task, while it can also be a non-annotated dataset from the target task." }, { "heading": "3 RELATED NEURAL NETWORKS", "text": "We posit that if the teacher model is ineffective, a more capable student model can easily achieve similar performance; if the student model, however, is ineffective, the cause cannot be easily identified, and the reason may be attributed either to the knowledge transfer algorithm or to the inherent limitations of the student model. We experimented with five neural networks, all of which have different architectures and have been shown to provide the best outcome on various segmentation tasks.

DeepLabv3+: DeepLabv3+ (Chen et al., 2018) is one of Google's latest and best performing semantic segmentation models. It combines the advantages of spatial pyramid pooling with an encoder-decoder structure. Spatial pyramid pooling captures rich contextual information while the decoder path is able to gradually recover object boundaries.

U-Net: U-Net (Ronneberger et al., 2015) has been the most widely used network for medical image segmentation. U-Net consists of an encoder-decoder structure with the encoder path extracting rich semantic information, and the decoder path recovering resolution and enabling contextual information.

AttU-Net: AttU-Net (Oktay et al., 2018) is a modification of U-Net, where attention gates are used to control the skip connections from the encoder to the decoder. 
Attention gates make the model focus on target structures by highlighting important features and suppressing irrelevant features.

SDU-Net: SDU-Net (Wang et al., 2020) utilizes parallel atrous convolutions with different dilation rates in each layer, which is effective in capturing features in both small and large receptive fields. SDU-Net has demonstrated better performance using merely 40 percent of the parameters of U-Net.

Panoptic-FPN: Panoptic-FPN (Kirillov et al., 2019) merges semantic segmentation and object detection, such that each pixel is given a class as in semantic segmentation and each object is given a unique ID as in object detection. Panoptic-FPN provides a richer and more complete segmentation." }, { "heading": "4 DATASETS", "text": "Table 1 presents an overview of the seven image datasets that were used for the experiments. One example from each dataset is presented in Figure 2.

XXX Breast Lesion: Patients evaluated and referred for Ultrasound (US)-guided biopsy (BI-RADS 4 or 5 mass) at XXX Institute between September 2017 and October 2019 were recruited. Eligible subjects were adult patients with breast masses detected by conventional US imaging. A total of 1385 US images were adopted and manually annotated by two radiologists. These images were randomly assigned to XXX Breast Lesion-1 (n=1237) and XXX Breast Lesion-2 (n=148).

Baheya Breast Lesion: The Baheya Breast Lesion dataset was released by Al-Dhabyani et al. (2020) and contains 780 breast US images from women between 25 and 75 years old. Radiologists from Baheya Hospital, Egypt, manually annotated the lesion boundaries. The images are categorized into three classes: normal (n=133), benign (n=487), and malignant (n=210). In our experiment, we only used images with benign or malignant lesions, totaling 697 images.

Thyroid Nodule: A total of 3,830 US images of thyroid nodules were retrospectively collected from the XXX Thyroid Nodule Dataset. Data from patients (more than 18 years old) who underwent US examination before their thyroid biopsy between April 2010 and April 2012 were included. Manual delineation of thyroid nodules by two expert radiologists was the reference standard.

Nerve Structure: Hosted on Kaggle, the US Nerve segmentation dataset formed the basis of a competition on segmenting nerve structures in US images of the neck. The dataset includes potential noise, artifacts, and mistakes in the ground-truth labeling, as well as near-identical images due to the inherent image acquisition technique. In our experiment, we utilized all 11,143 original images.

Skin Lesion: The skin-lesion dataset was from a challenge hosted by the International Skin Imaging Collaboration (ISIC) in 2019, where the task was to perform multiclass classification to distinguish dermoscopic images among nine distinct diagnostic categories of skin cancers. We employed the original images of the training set, with 25,331 images across 8 classes.

PASCAL VOC2012: The PASCAL VOC2012 challenge dataset is popular for object detection and segmentation tasks. This dataset consists of natural images belonging to 20 different types of objects. We adopted a total of 17,125 original images for both object detection and segmentation." }, { "heading": "5 EXPERIMENT", "text": "Experimental evaluation was conducted in the space of semantic image segmentation. The experiments included: knowledge transfer with a single teacher, combination of knowledge transfer and fine-tuning, and knowledge transfer with multiple teachers. 
As mentioned in Algorithms 1 and 2, we excluded images without salient targets in their pseudo masks, so we also tested and compared the algorithm's performance with and without image exclusion.

In addition, we evaluated the knowledge transfer capability of each transferal dataset, which is defined as the ratio of the highest Dice score of all student models trained on the transferal dataset to the Dice score of the teacher model that generates pseudo masks for the transferal dataset. Obviously, the student and teacher Dice scores must be calculated on the same test dataset." }, { "heading": "5.1 KNOWLEDGE TRANSFER WITH A SINGLE TEACHER", "text": "We adopted DeepLabv3+ as the teacher model with XXX Breast Lesion-1 as the teacher-training dataset, and employed U-Net, AttU-Net, SDU-Net, and Panoptic-FPN as the student models. We used Baheya Breast Lesion, Thyroid Nodule, Nerve Structure, Skin Lesion, and PASCAL VOC2012 as transferal datasets. The trained teacher processed the transferal datasets to generate pseudo masks, which were used to train the students. We tested the teacher and students on XXX Breast Lesion-2 and Baheya Breast Lesion. The performance was evaluated by computing the Dice score between the model output and manual annotations, i.e., ground truth.

The teacher model had an average Dice score of 87.04% (±16.01%) on XXX Breast Lesion-2 and an average Dice score of 65.93% (±32.55%) on Baheya Breast Lesion. As shown in Tables 2 and 3, all student models obtained impressive knowledge. For XXX Breast Lesion-2, SDU-Net and Panoptic-FPN obtained Dice scores similar to and even better than the teacher in most of the cases, even though the transferal datasets have different kinds of objects and even different image modalities, such as PASCAL VOC2012. For Baheya Breast Lesion, all students trained on pseudo-annotated Baheya Breast Lesion resulted in higher Dice scores than the teacher model, even though the size of Baheya Breast Lesion is much smaller than the teacher-training dataset. The superior performance of the students could be partly due to domain adaptation, which is one of the advantages of our algorithm when the transferal dataset is from the target task.

The performance of student models was improved by excluding images that had few target pixels or high gray-level entropy from the transferal dataset, especially when the transferal dataset modality was different from that of the teacher-training dataset. This is because images of a different modality may result in more pseudo masks with few target pixels or high gray-level entropy, which can be seen from the entropy distribution in Figure 3 (A.2). However, image exclusion did not work that well on Baheya Breast Lesion. This is probably because Baheya Breast Lesion is similar to the teacher-training dataset and of a much smaller size; image exclusion could further harm its representation capability rather than improve its representation quality.

All the transferal datasets have strong knowledge transfer capabilities, close to and even above 100%, except Skin Lesion. Although Skin Lesion did not perform as well as the other transferal datasets, its knowledge transfer capability reached 81.16% and 87.30% in the two tests. In general, it is not a challenge to build a transferal dataset with a strong knowledge transfer capability. At the same time, it is necessary to point out that the four student networks have different learning abilities on the same transferal dataset. However, it is common for different networks to learn differently even on a dataset with manual annotation.
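For reference, the reported Dice scores and the knowledge transfer capability defined above can be computed as in the following sketch; the array shapes and the small smoothing constant are our assumptions.

```python
import numpy as np

def dice_score(pred, truth, eps=1e-6):
    """Dice score between two binary masks given as numpy 0/1 arrays."""
    inter = float((pred * truth).sum())
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def transfer_capability(student_dice_scores, teacher_dice_score):
    """Best student Dice over teacher Dice, both on the same test set."""
    return max(student_dice_scores) / teacher_dice_score

# e.g. students at 0.84, 0.86, 0.87 Dice against a 0.8704 teacher
print(transfer_capability([0.84, 0.86, 0.87], 0.8704))  # -> ~1.00
```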
" }, { "heading": "5.2 COMBINATION WITH FINE-TUNING", "text": "To evaluate the usefulness of knowledge transfer for downstream tasks, we conducted fine-tuning using 50 images with ground truth annotations (from Baheya Breast Lesion and Thyroid Nodule, respectively). Both the teacher model and the student models that were trained in the previous experiments were further fine-tuned. For comparison, we also conducted the experiment by training the student models from scratch on these images with ground truth. All the models were tested on the rest of the images of Baheya Breast Lesion and Thyroid Nodule, respectively.

The fine-tuned teacher model, DeepLabv3+, resulted in an average Dice score of 69.90% (±31.01%) on Baheya Breast Lesion and an average Dice score of 66.56% (±27.78%) on Thyroid Nodule. The results of the student models are presented in Tables 4 and 5. All fine-tuned student models outperformed the models trained from scratch by a large margin. With the same fine-tuning dataset, some of the student models, such as SDU-Net and Panoptic-FPN, performed similarly to or even better than the fine-tuned teacher model. This further verifies the effect of knowledge transfer between heterogeneous neural networks and demonstrates that the proposed algorithm can be well applied to downstream tasks." }, { "heading": "5.3 KNOWLEDGE TRANSFER WITH MULTIPLE TEACHERS", "text": "We also tested whether it was possible to transfer the knowledge of multiple weak independent teachers to build a stronger student. We adopted U-Net, AttU-Net, SDU-Net, and Panoptic-FPN as the teacher models and employed DeepLabv3+ as the student model. We trained the teacher models on XXX Breast Lesion-1 for a few epochs to ensure under-fitting, thereby obtaining weak teachers. Baheya Breast Lesion was used as the transferal dataset. The trained teachers processed each image in the transferal dataset, with the average soft mask as the pseudo mask. Then we trained the student model on the pseudo-annotated transferal dataset.

We compared the performance of the teacher models and the student model on XXX Breast Lesion-2 and Baheya Breast Lesion (Table 6). The student model outperformed all the teacher models on both test datasets. This is primarily due to the fact that the student model actually learned from the ensemble of the teacher models, rather than from any individual weak teacher model. Since the process of generating ensemble networks is not the primary focus of this study, we simply average their outputs. With an advanced ensemble, we may be able to train the student model better." }, { "heading": "6 DISCUSSION", "text": "The salient features of our algorithm include: no need for original training data or extra generative networks; facilitating knowledge transfer between networks with different architectures; ease of implementation with fine-tuning to solve downstream tasks, especially when using the downstream task dataset as the transferal dataset; and capability of knowledge transfer from multiple models, trained independently, into one student model. The algorithm can also perform domain adaptation without extra computation in the case that the downstream task dataset is used as the transferal dataset.

To the best of our knowledge, this is the first demonstration of image segmentation using pseudo annotations only. 
A transferal dataset with a large number of images of various contents has a higher chance of capturing rich targets in the pseudo masks, but it may also include many images without salient targets. Future work may include optimization of a single transferal dataset or a combination of multiple transferal datasets to build a better one. Different models may have different abilities to learn from the pseudo-annotated transferal dataset. Understanding the differences in student model learning on pseudo-annotated transferal datasets and manually annotated datasets can help generate confidence in the proposed algorithm. Further study of knowledge transfer between neural networks with different input data structures or different numbers of prediction classes is also warranted. Another possible direction includes extending the ensemble of teacher models to produce pseudo masks.

Medical imaging applications require interpretable algorithms which generate confidence in the clinical user. Knowledge transfer employs pseudo annotations for training; these pseudo annotations have no physical meaning. It is imperative to examine and quantify the interpretability of the student model before deploying models clinically." }, { "heading": "7 CONCLUSION", "text": "Knowledge transfer can be achieved between neural networks having different architectures and using an independent transferal dataset. Knowledge transfer from multiple networks improves the performance of the student network over the teacher networks." }, { "heading": "A APPENDIX", "text": "A.1 DATA PRE-PROCESSING AND AUGMENTATION

In addition to pre-processing, we employed five methods for data augmentation.

Pre-processing: All the images were resized to 384 × 384, and the color images in Skin Lesion and PASCAL VOC2012 were converted into gray-scale images.

Random Cropping: A percentage of consecutive rows and columns were cropped from the original image. The percentage was a random value in the range [70%, 100%] and [80%, 100%] for segmentation and classification, respectively.

Horizontal Flipping: The image was reversed horizontally, that is, from left to right.

Random Rotation: The image was rotated around its center. The rotation angle was a random value in the range [-45°, 45°] and [-30°, 30°] for segmentation and classification, respectively.

Gamma Adjustment: Gamma correction was conducted on an image by the gamma value, which was randomly sampled in the range [0.7, 1.3].

Contrast Adjustment: The image contrast of an image was adjusted by the contrast factor, which was a random value in the range [0.7, 1.3].

A.2 DATA DISTRIBUTION AND SELECTION

We used DeepLabv3+ trained on XXX Breast Lesion-1 to process the images in the five transferal datasets, including Baheya Breast Lesion, Thyroid Nodule, Nerve Structure, Skin Lesion, and PASCAL VOC2012, and obtained the pseudo masks. Figure 3 presents the gray-level entropy distribution of the pseudo masks for each transferal dataset.

By excluding images without obvious targets, where the target pixel number is less than 256 or the gray-level entropy is over 2.5, we could reduce the dataset to a smaller one. Table 7 presents the comparison of image numbers with and without image exclusion.

A.3 TRAINING DETAILS

Two common loss functions were adopted with the same weight for the segmentation tasks: the Dice loss and the binary cross entropy (BCE) loss. The Dice loss is defined as

$$L_{dice} = 1 - \frac{2\sum p_{h,w} \cdot \hat{p}_{h,w} + \epsilon}{\sum p_{h,w} + \sum \hat{p}_{h,w} + \epsilon} \qquad (1)$$

where $(h, w)$ represents the pixel coordinate, $p_{h,w} \in \{0, 1\}$ is the mask ground truth, $p_{h,w} = 1$ indicates the pixel belonging to the target, $0 \le \hat{p}_{h,w} \le 1$ is the prediction probability for the pixel belonging to the target, and $\epsilon$ is a small real number.

The binary cross entropy loss is defined as

$$L_{bce} = -\frac{1}{N} \sum \big[p_{h,w} \cdot \log(\hat{p}_{h,w}) + (1 - p_{h,w}) \cdot \log(1 - \hat{p}_{h,w})\big] \qquad (2)$$

where $(h, w)$ represents the pixel coordinate, $p_{h,w} \in \{0, 1\}$ is the mask ground truth, $0 \le \hat{p}_{h,w} \le 1$ is the prediction probability for the pixel belonging to the target, and $N$ is the number of pixels.
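A sketch of the equally weighted sum of the Dice loss (1) and BCE loss (2) in PyTorch, assuming the network outputs per-pixel probabilities; the function name and shapes are ours, not the authors' code.

```python
import torch
import torch.nn.functional as F

def dice_bce_loss(pred, target, eps=1e-6):
    """Equally weighted Dice (1) + BCE (2) loss.

    pred:   (B, H, W) tensor of probabilities p_hat in [0, 1]
    target: (B, H, W) float tensor of {0, 1} ground-truth masks
    """
    inter = (pred * target).sum(dim=(1, 2))
    dice = 1.0 - (2.0 * inter + eps) / (
        pred.sum(dim=(1, 2)) + target.sum(dim=(1, 2)) + eps)
    bce = F.binary_cross_entropy(pred, target)  # mean over all pixels
    return dice.mean() + bce
```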
Adam optimizer was used for training all the networks. The learning rate was set to 0.0001, and all the other parameters were set to the default values in PyTorch. The batch size was set to 4. The epoch number was set to 500 for the training on the teacher-training dataset. Inversely proportional to the image number, the epoch numbers were set to 500, 175, 60, and 30 for the training on pseudo-annotated Baheya Breast Lesion, Thyroid Nodule, Nerve Structure, and Skin Lesion, respectively. Moreover, the training epoch number for fine-tuning was set to 50.

A.4 ALGORITHM OF KNOWLEDGE TRANSFER COMBINED WITH FINE-TUNING

Algorithm 2 combines knowledge transfer and fine-tuning for a downstream task of semantic image segmentation. In the algorithm, the transferal dataset is independent of the downstream task, but if it is from the downstream task, the algorithm can benefit from domain adaptation without extra computation.

Algorithm 2 Knowledge Transfer Combined with Fine-tuning for Semantic Segmentation
Require: Teacher set T with trained models, randomly initialized student model S
Require: Transferal dataset D, target task dataset Dt with annotation
Require: Target pixel number threshold α, gray-level entropy threshold β
1: for each datapoint x in D do
2: Obtain pseudo mask y = Σ_{Ti∈T} wi · Ti(x)
3: N ← number of target pixels (value above 0.5) in y
4: E ← gray-level entropy of y
5: if N < α or E > β then
6: Exclude x from D
7: else
8: Update x as (x, y)
9: end if
10: end for
11: Train student model S on pseudo-annotated D
12: Fine-tune student model S on target task dataset Dt
13: return Trained student model S" } ]
2020
null
SP:8c8d93b1668b5497a4d2b318b6d709f200788262
[ "This paper proposes a defense against the adversarial attacks on explanation methods described in Slack et al. (2020). In particular, by using sampling methods that more closely resemble the original data distribution, the authors make it difficult for the Out-of-Distribution detector to successfully discriminate between instances used for predicting and instances used for explaining." ]
Machine learning models are used in many sensitive areas where, besides predictive accuracy, their comprehensibility is also important. Interpretability of prediction models is necessary to determine their biases and causes of errors and is a prerequisite for users' confidence. For complex state-of-the-art black-box models, post-hoc model-independent explanation techniques are an established solution. Popular and effective techniques, such as IME, LIME, and SHAP, use perturbation of instance features to explain individual predictions. Recently, Slack et al. (2020) put their robustness into question by showing that their outcomes can be manipulated due to the poor perturbation sampling they employ. This weakness would allow dieselgate-type cheating by owners of sensitive models, who could deceive inspection and hide potentially unethical or illegal biases existing in their predictive models. This could undermine public trust in machine learning models and give rise to legal restrictions on their use. We show that better sampling in these explanation methods prevents malicious manipulations. The proposed sampling uses data generators that learn the training set distribution and generate new perturbation instances much more similar to the training set. We show that the improved sampling increases the robustness of LIME and SHAP, while the previously untested method IME is already the most robust of all.
[]
[ { "authors": [ "David Alvarez-Melis", "Tommi S Jaakkola" ], "title": "On the robustness of interpretability methods", "venue": "arXiv preprint arXiv:1806.08049,", "year": 2018 }, { "authors": [ "Joymallya Chakraborty", "Kewen Peng", "Tim Menzies" ], "title": "Making fair ML software using trustworthy explanation", "venue": "arXiv preprint arXiv:2007.02893,", "year": 2020 }, { "authors": [ "Botty Dimanov", "Umang Bhatt", "Mateja Jamnik", "Adrian Weller" ], "title": "You shouldn’t trust me: Learning models which conceal unfairness from multiple explanation methods", "venue": "In SafeAI@ AAAI,", "year": 2020 }, { "authors": [ "Ann-Kathrin Dombrowski", "Maximillian Alber", "Christopher Anders", "Marcel Ackermann", "KlausRobert Müller", "Pan Kessel" ], "title": "Explanations can be manipulated and geometry is to blame", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository. http://archive.ics.uci", "venue": "edu/ml,", "year": 2019 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning", "venue": "In Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Amirata Ghorbani", "Abubakar Abid", "James Zou" ], "title": "Interpretation of neural networks is fragile", "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Juyeon Heo", "Sunghwan Joo", "Taesup Moon" ], "title": "Fooling neural network interpretations via adversarial model manipulation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Zachary Chase Lipton" ], "title": "The mythos of model interpretability", "venue": "CoRR, abs/1606.03490,", "year": 2016 }, { "authors": [ "Scott M Lundberg", "Su-In Lee" ], "title": "A unified approach to interpreting model predictions", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Kristian Miok", "Deng Nguyen-Doan", "Daniela Zaharie", "Marko Robnik-Šikonja" ], "title": "Generating data using Monte Carlo dropout", "venue": "IEEE 15th International Conference on Intelligent Computer Communication and Processing (ICCP),", "year": 2019 }, { "authors": [ "J. Moody", "C.J. Darken" ], "title": "Fast learning in networks of locally-tuned processing units", "venue": "Neural Computation,", "year": 1989 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. 
Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Michael Redmond", "Alok Baveja" ], "title": "A data-driven software tool for enabling cooperative information sharing among police departments", "venue": "European Journal of Operational Research,", "year": 2002 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "Why should I trust you?”", "venue": null, "year": 2021 }, { "authors": [ "Sean Saito", "Eugene Chua", "Nicholas Capel", "Rocco Hu" ], "title": "Improving lime robustness with smarter", "venue": "Journal of Computational and Applied Mathematics,", "year": 1987 }, { "authors": [ "2018. Lloyd S. Shapley" ], "title": "A value for n-person games", "venue": null, "year": 2018 }, { "authors": [ "F COMPARING" ], "title": "EXPLANATIONS OF ORIGINAL AND MODIFIED METHODS We check if improved data generators affect explanations in non-adversary environment. We split the dataset into training and evaluation set in the ratio 90% : 10%, and trained four classifiers from Python scikit-learn (Pedregosa et al. (2011)) library: Gaussian naive Bayes, linear SVC (SVM", "venue": null, "year": 2011 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning models are used in many areas where besides predictive performance, their comprehensibility is also important, e.g., in healthcare, legal domain, banking, insurance, consultancy, etc. Users in those areas often do not trust a machine learning model if they do not understand why it made a given decision. Some models, such as decision trees, linear regression, and naı̈ve Bayes, are intrinsically easier to understand due to the simple representation used. However, complex models, mostly used in practice due to better accuracy, are incomprehensible and behave like black boxes, e.g., neural networks, support vector machines, random forests, and boosting. For these models, the area of explainable artificial intelligence (XAI) has developed post-hoc explanation methods that are model-independent and determine the importance of each feature for the predicted outcome. Frequently used methods of this type are IME (Štrumbelj & Kononenko, 2013), LIME (Ribeiro et al., 2016), and SHAP (Lundberg & Lee, 2017).\nTo determine the features’ importance, these methods use perturbation sampling. Slack et al. (2020) recently noticed that the data distribution obtained in this way is significantly different from the original distribution of the training data as we illustrate in Figure 1a. They showed that this can be a serious weakness of these methods. The possibility to manipulate the post-hoc explanation methods is a critical problem for the ML community, as the reliability and robustness of explanation methods are essential for their use and public acceptance. These methods are used to interpret otherwise black-box models, help in debugging models, and reveal models’ biases, thereby establishing trust in their behavior. Non-robust explanation methods that can be manipulated can lead to catastrophic consequences, as explanations do not detect racist, sexist, or otherwise biased models if the model owner wants to hide these biases. This would enable dieselgate-like cheating where owners of sensitive prediction models could hide the socially, morally, or legally unacceptable biases present in their models. As the schema of the attack on explanation methods on Figure 1b shows,\nowners of prediction models could detect when their models are examined and return unbiased predictions in this case and biased predictions in normal use. This could have serious consequences in areas where predictive models’ reliability and fairness are essential, e.g., in healthcare or banking. Such weaknesses can undermine users’ trust in machine learning models in general and slow down technological progress.\nIn this work, we propose to change the main perturbation-based explanation methods and make them more resistant to manipulation attempts. In our solution, the problematic perturbation-based sampling is replaced with more advanced sampling, which uses modern data generators that better capture the distribution of the training dataset. We test three generators, the RBF network based generator (Robnik-Šikonja, 2016), random forest-based generator, available in R library semiArtificial (Robnik-Šikonja, 2019), as well as the generator using variational autoencoders (Miok et al., 2019). We show that the modified gLIME and gSHAP methods are much more robust than their original versions. For the IME method, which previously was not analyzed, we show that it is already quite robust. 
We release the modified explanation methods under the open-source license (https://anonymous.4open.science/r/5d550c62-5c5c-4ee3-81ef-ab96fe0838ca/).

In this work, we use the term robustness of the explanation method as a notion of resilience against adversarial attacks, i.e., as the ability of an explanation method to recognize the biased classifier in an adversarial environment. This type of robustness could be more formally defined as the number of instances where the adversarial model's bias was correctly recognized. We focus on the robustness concerning the attacks described in Slack et al. (2020). There are other notions of robustness in explanation methods; e.g., Alvarez-Melis & Jaakkola (2018) define the robustness of explanations in the sense that similar inputs should give rise to similar explanations.

The remainder of the paper is organized as follows. In Section 2, we present the necessary background and related work on explanation methods, attacks on them, and data generators. In Section 3, we propose a defense against the described weaknesses of explanation methods, and in Section 4, we empirically evaluate the proposed solution. In Section 5, we draw conclusions and present ideas for further work." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "In this section, we first briefly describe the background on post-hoc explanation methods and attacks on them, followed by data generators and related works on the robustness of explanation methods." }, { "heading": "2.1 POST-HOC EXPLANATION METHODS", "text": "The current state-of-the-art perturbation-based explanation methods, IME, LIME, and SHAP, explain predictions for individual instances. To form an explanation of a given instance, they measure the difference in prediction between the original instance and its neighboring instances, obtained with perturbation sampling. Using the generated instances, the LIME method builds a local interpretable model, e.g., a linear model. The SHAP and IME methods determine the impact of the features as Shapley values from coalitional game theory (Shapley, 1988). In this way, they assure that the produced explanations obey the four Shapley fairness axioms (Štrumbelj & Kononenko, 2013). Due to the exponential time complexity of Shapley value calculation, both methods try to approximate them. The three methods are explained in detail in the above references, and a formal overview is presented in Appendix A, while below we present a brief description. In our exposition of the explanation methods, we denote with f the predictive model and with x the instance we are explaining.

Explanations of instances with the LIME method are obtained with an interpretable model g. The model g has to be both locally accurate (so that it can obtain correct feature contributions) and simple (so that it is interpretable). Therefore, in LIME, g is a linear regression model trained on the instances sampled in the neighborhood of the explained instance x, which is first normalized. The perturbed explanation instances are generated by adding Gaussian noise to each feature of x independently. Using linear models to determine the contributions of features, LIME is fast but may produce poor explanations for instances close to complex decision boundaries.
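The original LIME-style neighborhood described above can be sketched as follows; the kernel width, the ridge regularization, and all names are our assumptions, not the reference implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(f, x, sigma=1.0, n_samples=5000, kernel_width=0.75):
    """Local feature contributions of model f at a standardized instance x.

    f maps an (n, d) array of instances to an (n,) array of predictions.
    """
    d = x.shape[0]
    Z = x + sigma * np.random.randn(n_samples, d)     # Gaussian perturbations
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-dist ** 2 / kernel_width ** 2)  # proximity kernel
    g = Ridge(alpha=1.0).fit(Z, f(Z), sample_weight=weights)
    return g.coef_  # contributions of each feature in the local linear model
```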
SHAP estimates Shapley values using the approach of LIME (i.e., local linear models computed with the weighted least squares algorithm) but using a different neighborhood and loss function for local accuracy, which ensures that Shapley values are obtained. The sampling of explanation instances in SHAP relies on the distribution set D, obtained from the training set by computing the centroids of k-means clusters.

IME computes Shapley values using a pure sampling approach (no proxy with an interpretable model). Its sampling replaces part of the explained instance with existing values of selected features and then averages differences in the prediction of the explained instance and the samples. This makes the method slower but also potentially more robust, as the method does not assume the shape of the dependency." }, { "heading": "2.2 ATTACKS ON EXPLANATIONS", "text": "The idea of the attack on explanations presented in Slack et al. (2020) is that the attacker creates an adversarial (biased) model whose behavior it wants to hide from explanation methods, e.g., a racist model that does not grant credits to blacks. As Figure 1b illustrates, a part of the attacker's adversarial model is an unbiased model, which, e.g., does not take disputed attributes such as race into account. The adversarial model tries to manipulate the explanation method by behaving like an unbiased model on perturbed instances used for the explanation – in this case, it uses the unbiased model's output. On instances from the original distribution that are a part of normal model use, the adversarial model uses the biased model's output. To distinguish between the original and perturbation-sampled data, the adversarial model contains a decision model that selects the right type of model (biased or unbiased) based on the predicted type of instance.

The reason that the decision function can distinguish between the original data and sampled instances is the shortcomings of explanation methods that use perturbation sampling (see Figure 1a). For that graph, we created new instances based on the COMPAS dataset (see Section 4) with Gaussian noise, as in the LIME explanation method. We mapped the original and sampled instances to 2D space using the PCA transformation. We can see that the sampled data is distributed differently than the original data. As the difference is revealed with PCA, other machine learning algorithms can also effectively distinguish between sampled and original data.

The attack can be formally defined in the following way. The attacker has a biased model b and a sample S from the distribution Xdist. The attacker has no additional information about the distribution Xdist. The attacker's goal is to produce an adversarial model e that behaves like b on the data from Xdist but does not reveal b's bias to the explanation methods. We assume that the attacker also has an unbiased model ψ that hides the b's bias. The attacker creates a decision model d, which should output 1 if the input instance x is from Xdist and 0 otherwise. The model d is trained on S and generated perturbation samples. The attacker creates the adversarial model e with its output defined by the following equation:

$$e(x) = \begin{cases} b(x), & d(x) = 1 \\ \psi(x), & d(x) = 0 \end{cases} \qquad (1)$$
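Equation (1) in code form — a minimal sketch of the adversarial wrapper, where d would be a classifier trained on S versus perturbation samples; all names are ours.

```python
def make_adversarial_model(b, psi, d):
    """e(x) of Eq. (1): biased b on in-distribution data, innocuous psi otherwise."""
    def e(x):
        # d(x) == 1 when x looks drawn from X_dist (normal use),
        # 0 when x looks like a perturbation sample used for explanation
        return b(x) if d(x) == 1 else psi(x)
    return e

# e.g. e = make_adversarial_model(biased_model, unbiased_model, ood_detector)
```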
It aims to compress the input instances by passing them through the encoder and then reconstructing them to the original values with the decoder. Once the AE is trained, it can be used to generate new instances. The variational autoencoder (Doersch, 2016) is a special type of autoencoder where the vectors z in the latent space (the output of the encoder and the input of the decoder) are normally distributed. The encoder therefore approximates the posterior distribution p(z|x), where we assume p(z|x) ∼ N(µx, Σx). The generator proposed by Miok et al. (2019) uses Monte Carlo dropout (Gal & Ghahramani, 2016) on the trained decoder. The idea of this generator is to propagate the instance x through the encoder to obtain its latent encoding z. This encoding is then propagated many times through the decoder, each time obtaining a different result due to the Monte Carlo dropout, while preserving similarity to the original instance x.\nThe RBF network (Moody & Darken, 1989) uses Gaussian kernels as hidden layer units in a neural network. Once the network’s parameters are learned, the rbfDataGen generator (Robnik-Šikonja, 2016) can sample from the normal distributions defined by the learned Gaussian kernels to generate new instances.\nThe TreeEnsemble generator (Robnik-Šikonja, 2019) builds a set of random trees (a forest) that describe the data. When generating new instances, the generator traverses from the root to the leaves of a randomly chosen tree, setting the values of features in the decision nodes on the way. When reaching a leaf, it assumes that it has captured the dependencies between features. Therefore, the remaining features can be generated independently according to the observed empirical distribution in this leaf. For each generated instance, all attributes can be generated in one leaf, or another tree can be randomly selected where unassigned feature values are filled in. By selecting different trees, different features are set in the interior nodes and leaves." }, { "heading": "2.4 RELATED WORK ON ROBUSTNESS OF EXPLANATIONS", "text": "The adversarial attacks on perturbation-based explanation methods were proposed by Slack et al. (2020), who show that LIME and SHAP are vulnerable due to the perturbation-based sampling they use. We propose a solution to the exposed weaknesses of these methods, based on better sampling using data generators adapted to the training set distribution.\nIn general, the robustness of explanation methods has so far been poorly researched. There are claims that post-hoc explanation methods shall not be blindly trusted, as they can mislead users (deliberately or not) and disguise gender and racial discrimination (Lipton, 2016). Selbst & Barocas (2018) and Kroll et al. (2017) showed that even if a model is completely transparent, it is hard to detect and prevent bias due to the existence of correlated variables.\nSpecifically, for deep neural networks and images, there exist adversarial attacks on saliency-map-based interpretations of predictions, which can hide the model’s bias (Dombrowski et al., 2019; Heo et al., 2019; Ghorbani et al., 2019). Dimanov et al. (2020) showed that a bias of a neural network can be hidden from post-hoc explanation methods by training a modified classifier that has similar performance to the original one but significantly lower importance of the chosen feature.\nThe kNN-based explanation method, proposed by Chakraborty et al.
(2020), tries to overcome the inadequate perturbation-based sampling used in explanation methods by finding instances similar to the explained one in the training set instead of generating new samples. This solution is inadequate for realistic problems, as the problem space is not dense enough to get reliable explanations. Our defense of the current post-hoc methods is based on superior sampling, which has not been tried before. Saito et al. (2020) use the neural CT-GAN model to generate more realistic samples for LIME and prevent the attacks described in Slack et al. (2020). We are not aware of any other defenses against the adversarial attacks on post-hoc explanations." }, { "heading": "3 ROBUSTNESS THROUGH BETTER SAMPLING", "text": "We propose a defense against the adversarial attacks on explanation methods that replaces the problematic perturbation sampling with a better one, thereby making the explanation methods more robust. We want to generate the explanation data in such a way that the attacker cannot determine whether an instance is sampled or obtained from the original data. With improved sampling, the adversarial model shown in Figure 1b should be unable to determine whether the instance x was generated by the explanation method or is an original instance the model has to label. With a better data generator, the adversarial model cannot adjust its output properly, and the deception described in Section 2.2 becomes ineffective.\nThe reason for the described weakness of LIME and SHAP is the inadequate sampling used in these methods. Recall that LIME samples new instances by adding Gaussian noise to the normalized feature values. SHAP samples new instances from clusters obtained in advance with the k-means algorithm from the training data.\nInstead of using Gaussian noise with LIME, we generate explanation samples for each instance with one of the three better data generators, MCD-VAE, rbfDataGen, or TreeEnsemble (see Section 2.3). We call the improved explanation methods gLIME and gSHAP (g stands for generator-based). Using better generators in the explanation methods, the decision function in the adversarial model will be less likely to determine which predicted instances are original and which are generated for explanation purposes.\nConcerning gLIME, we generate data in the vicinity of the given instance using MCD-VAE, as the LIME method builds a local model. Using the TreeEnsemble and rbfDataGen generators, we do not generate data in the neighborhood of the given instance but leave it to the proximity measure of the LIME method to give higher weights to instances closer to the explained one.\nIn SHAP, the perturbation sampling replaces the values of hidden features in the explained instance with the values from the distribution set D. The problem with this approach is that it ignores the dependencies between features. For example, in a simple dataset with two features, house size and house price, let us assume that we hide the house price but not the house size. These two features are not independent because the price of a house increases with its size. Suppose we are explaining an instance that represents a large house. Disregarding the dependency, using the sampled set D, SHAP creates several instances with a low price, as such instances appeared in the training set.
In this way, the sampled set contains many instances with a low price assigned to a large house, from which the attacker can determine that these instances were created by perturbation sampling and serve only for the explanation.\nIn the proposed gSHAP, using the MCD-VAE and TreeEnsemble generators, the distribution set D is generated in the vicinity of the explained instance. In the sampled instances, some feature values of the explained instance are replaced with the generated values, but the well-informed generators consider the dependencies between features detected in the original distribution. This makes the distribution of the generated instances very similar to the original distribution. In our example, the proposed approach generates new instances around the instance representing a large house, and most of these houses will be large. As the trained generators capture the original dataset’s dependencies, these instances will also have higher prices. This makes it difficult for the attacker to recognize the generated instances used in explanations. The advantage of generating the distribution set close to the observed instance is demonstrated in Appendix B.\nThe rbfDataGen generator does not support the generation of instances around a given instance. Therefore, we generate the sampled set based on the whole training set and not for each instance separately (we have this option also for TreeEnsemble). This is worse than generating the distribution set around the explained instance but still better than generating it using the k-means sampling in SHAP. There are at least three advantages. First, the generated distribution set D can be larger. The size of the k-means distribution set cannot be arbitrarily large because it is limited by the number of clusters in the training set. Second, the centroids of the clusters obtained by the k-means algorithm do not necessarily provide a good summary of the training set and may not be valid instances from the training set distribution. They also do not capture the density of instances well (e.g., most of the data from the training set could be grouped in one cluster). Third, using the proposed generators, SHAP becomes easier to use compared to the k-means generator, where users have to determine the number of clusters, while the data generators we tested can be used in the default mode without parameters." }, { "heading": "4 EVALUATION", "text": "To evaluate the proposed improvements in the explanation methods, we first present the used datasets in Section 4.1, followed by the experiments. In Section 4.2, we test the robustness of gLIME, gSHAP, and gIME against the adversarial models. To be more realistic, we equipped the adversarial models with the same improved data generators we used in the explanation methods. It is reasonable to assume that attackers could also use better data generators when preparing their decision function, making their attacks much stronger. As the evaluation metric for the success of deception, we use the proportion of instances where the adversarial model deceived the explanation methods so that they did not detect sensitive attributes as being important in the prediction models. In Section 4.3, we test if the enhanced generators produce explanations different from the original ones.
As the attacker might decide to employ deception only when it is really certain that the predicted instance is used inside the explanation method, we test different thresholds of the decision function d from Equation (1) (currently set to 0.5). We report on this analysis in Section 4.4." }, { "heading": "4.1 SENSITIVE DATASETS PRONE TO DECEPTION", "text": "Following Slack et al. (2020), we conducted our experiments on three datasets from domains where a biased classifier could pose a critical problem, such as granting credit, predicting crime recidivism, and predicting the violent crime rate. The basic information on the datasets is presented in Table 1. The statistics were collected after removing the missing values from the datasets and before we encoded categorical features as one-hot-encoded vectors.\nCOMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a risk assessment used by the courts in some US states to determine the crime recurrence risk of a defendant. The dataset (Angwin et al., 2016) includes criminal history, time in prison, demographic data (age, gender, race), and the COMPAS risk assessment of the individual. The dataset contains data on 6,172 defendants from Broward County, Florida. The sensitive attribute in this dataset (the one on which the adversarial model will be biased) is race. African Americans, whom the biased model associates with a high risk of recidivism, represent 51.4% of instances in the dataset. This set’s target variable is the COMPAS score, which is divided into two classes: high and low risk. The majority class is high risk, which represents 81.4% of the instances.\nThe German Credit dataset (German for the rest of the paper) from the UCI repository (Dua & Graff, 2019) includes financial (bank account information, loan history, loan application information, etc.) and demographic data (gender, age, marital status, etc.) for 1,000 loan applicants. The sensitive attribute in this dataset is gender. Men, whom the biased model associates with low risk, represent 69% of instances. The target variable is the loan risk assessment, divided into two classes: a good and a bad customer. The majority class is a good customer, which represents 70% of instances.\nThe Communities and Crime (CC) dataset (Redmond & Baveja, 2002) contains data about the rate of violent crime in US communities in 1994. Each instance represents one community. The features are numerical and represent the percentage of the community’s population with a certain property or an average over the community’s population. The features include socio-economic (e.g., education, house size, etc.) and demographic (race, age) data. The sensitive attribute is the percentage of the white population. The biased model links instances where the percentage of whites is above average to a low rate of violent crime. The target variable is the rate of violent crime, divided into two classes: high and low. Both classes are equally represented in the dataset." }, { "heading": "4.2 ROBUSTNESS OF EXPLANATION METHODS", "text": "To evaluate the robustness of the explanation methods with added generators (i.e. gLIME, gSHAP, and gIME), we split the data into training and evaluation sets in the ratio 90% : 10%. We used the same training set for the training of adversarial models and explanation methods.
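A minimal sketch of this split, with illustrative variable names and toy data standing in for the three datasets (not the authors' actual preprocessing code):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# toy stand-in for one of the tabular datasets
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = rng.integers(0, 2, size=1000)

# 90% : 10% split; the 90% part is shared by the explanation methods'
# data generators and by the attacker's adversarial model
X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.10, random_state=0)
print(X_train.shape, X_eval.shape)  # (900, 8) (100, 8)
```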
We encoded categorical features as one-hot-encoded vectors.\nWe simulated the adversarial attack for every combination of generators used in the explanation methods and the adversarial models (except for the IME method, where we did not use rbfDataGen, which cannot generate instances in the neighborhood of a given instance). When testing SHAP, we used two variants of the TreeEnsemble generator for the explanation: generating new instances around the explained instance and generating new instances according to the whole training set distribution. In the LIME testing, we used only the whole-training-set variant of TreeEnsemble inside the explanation. In the IME testing, we used only the variant of TreeEnsemble that fills in the hidden values (called the TEnsFillIn variant). For training the adversarial models, we used the whole-training-set-based variant of TreeEnsemble, except for IME, where we used the TEnsFillIn variant. These choices reflect the capabilities of the different explanation methods and attempt to make both the defense and the attack realistic (as strong as possible). More details on training the decision model d inside the adversarial model can be found in Appendix D.\nIn all cases, the biased model b (see Section 2.2) was a simple function depending only on the value of the sensitive feature. The unbiased model ψ depended only on the values of unrelated features. The sensitive and unrelated features are shown on the right-hand side of Table 1. The features random1 and random2 were generated uniformly at random from the set {0, 1}. On COMPAS and CC, we simulated two attacks, one with a single unrelated feature (the result of ψ depends only on the value of unrelated feature 1), and another one with two unrelated features.\nFor every instance in the evaluation set, we recorded the most important feature according to the used explanation method (i.e. the feature with the largest absolute value of the contribution as determined by the explanation method). The results are shown as a heatmap in Figure 2. The green color means that the explanation method was deceived in less than 30% of cases (i.e. the sensitive feature was recognized as the most important one in more than 70% of the cases, as the scale on the heatmap suggests), and the red color means that it was deceived in more than 70% of cases (the sensitive feature was recognized as the most important one in less than 30% of the cases, as the scale on the heatmap suggests). We consider a deception successful if the sensitive feature was not recognized as the most important one by the explanation method (the sensitive features are the only relevant features in the biased models b).\nThe gLIME method is more robust with the addition of rbfDataGen and TreeEnsemble than LIME and less robust with the addition of MCD-VAE. This suggests that the parameters for MCD-VAE were not well chosen; this pattern repeats for the other explanation methods. Both TreeEnsemble and rbfDataGen make gLIME considerably more robust on the COMPAS and German datasets, but not on the CC dataset. We believe the reason is that the features in CC are strongly interdependent, and many represent the same attribute as fractions of a whole, e.g., we have the share of the white population, the share of the Asian population, and so on. This interdependence dictates that all the fractions have to add up to 1.
The data generators are unlikely to capture such strong conditional dependencies, but the adversarial model’s decision function is likely to approximate them.\nThe gSHAP method is most robust when using the TreeEnsemble generator, but it shows less robust behavior than the gLIME method. The reason could be the feature value replacement strategy used by the SHAP and gSHAP methods, which changes only some of the feature values with the help of the distribution set. This can lead to out-of-distribution instances if the instances in the distribution set are not close enough to the explained instance. With gLIME, we generate complete instances that are more likely to be in the distribution of the training set.\nIME is quite robust even with the perturbation sampling, as we expected. This suggests that IME is the most robust of all three tested explanation methods and should be the method of choice in sensitive situations. The gIME results when using the TreeEnsemble generator (the TEnsFillIn variant) are comparable to the results of the original IME variant. This suggests that sampling from a smaller dataset, which represents the neighborhood of the explained instance, does not decrease the method’s robustness." }, { "heading": "4.3 COMPARING EXPLANATIONS OF ORIGINAL AND MODIFIED METHODS", "text": "We checked if the improved data generators affect explanations in a non-adversarial environment. We compared the explanations of the original and modified methods for four different classification models on three different datasets. As a measure of the differences, we use the mean absolute difference between the original and enhanced explanations over the whole evaluation set. Details of these experiments are reported in Appendix F; below, we briefly summarize the results, contained in Table 4.\nThe differences between the original LIME and gLIME explanations are considerable (see the top part of Table 4). This is not surprising since LIME fits local linear models in its local explanations, which can strongly differ even for small perturbations of the model’s input. The differences in SHAP and IME explanations are small (the average MAD is almost negligible). We can conclude that the explanations of gSHAP and gIME are not significantly different from those of SHAP and IME in a non-adversarial environment." }, { "heading": "4.4 BEHAVIOR OF ADVERSARIAL CLASSIFIER WITH DIFFERENT PREDICTION THRESHOLDS", "text": "The attacker might be wary of being discovered by the regulator and decide to employ deception only when it is really certain that the predicted instance is used inside the explanation method. We express different levels of the attacker’s conservatism by changing the threshold of the decision function d from Equation (1), where it is currently set to 0.5. We tested the robustness of the modified explanation methods on the COMPAS dataset with adversarial models that use different certainty thresholds for deploying the biased classifier (simulating different levels of aggressiveness). For the results and a more detailed description of the experiment, see Appendix G. Below, we briefly summarize the results.\nEven with different thresholds, gIME is still the most robust of the three explanation methods, and TreeEnsemble still gives the best results as the data generator. The sensitive feature is recognized as the most important one more often when the threshold is lower, which is expected, as in this case the adversarial model behaves more aggressively.
While the percentage of the instances on which the racist behavior of the adversarial model is recognized drops with higher values of the thresholds, it remains high enough to warn the regulator about the problematic behavior of the prediction model (especially in the case of gSHAP and gIME using TreeEnsemble as the data generator). We can conclude that the enhanced explanation methods remain robust enough, even with more conservative adversarial models." }, { "heading": "5 CONCLUSIONS", "text": "We presented a defense against adversarial attacks on explanation methods. The attacks exploit the shortcomings of perturbation sampling in post-hoc explanation methods. The sampling used in these methods produces instances that are too different from the original distribution of the training set. This allows unethical owners of biased prediction models to detect which instances are meant for explanation and label them in an unbiased way. We replaced the perturbation sampling with data generators that better capture the distribution of a given dataset. This prevents the detection of instances used in explanations and disarms attackers. We have shown that the modified gLIME and gSHAP explanation methods, which use better data generators, are more robust than the original variants, while IME is already quite robust. The difference in explanation values between the original and enhanced gSHAP and gIME is negligible, while for gLIME it is considerable. Our preliminary results in Appendix C show that, using the TreeEnsemble generator, the gIME method converges faster and requires 30-50% fewer samples.\nThe effectiveness of the proposed defense depends on the choice of the data generator and its parameters. While the TreeEnsemble generator turned out to be the most effective in our evaluation, in practice several variants might need to be tested to get a robust explanation method. Inspecting authorities shall be aware of the need for good data generators and make access to the training data of sensitive prediction models a legal requirement. Luckily, even a few non-deceived instances would be enough to raise the alarm about unethical models.\nThis work opens a range of possibilities for further research. The proposed defense and attacks shall be tested on other datasets with different numbers and types of features, with missing data, and on different types of problems such as text, images, and graphs. The work on useful generators shall be extended to find time-efficient generators with easy-to-set parameters and the ability to generate new instances in the vicinity of a given one. Generative adversarial networks (Goodfellow et al., 2014) may be a promising line of research. The TreeEnsemble generator, currently written in pure R, shall be rewritten in a more efficient programming language. The MCD-VAE generator shall be studied to allow automatic selection of reasonable parameters for a given dataset. In SHAP sampling, we could first fix the values of the features we want to keep, and the values of the others would be generated using the TreeEnsemble generator." }, { "heading": "A DETAILS ON POST-HOC EXPLANATION METHODS", "text": "For the sake of completeness, we present further details on the explanation methods LIME (Ribeiro et al., 2016), SHAP (Lundberg & Lee, 2017), and IME (Štrumbelj & Kononenko, 2013). Their complete descriptions can be found in the above-stated references.
In our exposition of the explanation methods, we denote with f the predictive model, with x the instance we are explaining, and with n the number of features describing x.\nA.1 LIME\nExplanations of instances with the LIME method are obtained with an interpretable model g. The model g has to be both locally accurate (so that it can obtain correct feature contributions) and simple (so that it is interpretable).\nThe explanation of the instance x for the predictive model f, obtained with the LIME method, is defined with the following equation:

$\xi(x) = \arg\min_{g \in G} \left( L(f, g, \pi_x) + \Omega(g) \right),$ (2)

where $\Omega(g)$ denotes the measure of complexity of the interpretable model, $\pi_x(z)$ denotes the proximity measure between x and a generated instance z, $L(f, g, \pi_x)$ denotes the measure of local fidelity of the interpretable model g to the prediction model f, and G denotes the set of interpretable models. We use the linear version of the LIME method, where G represents the set of linear models. With x′ we denote the normalized representation of the instance x, i.e. the numerical attributes have their mean set to 0 and variance to 1, and the categorical attributes contain the value 1 if they have the same value as in the explained instance and 0 otherwise. The proximity function $\pi_x$ is defined on the normalized instances (hence we use the notation $\pi_{x'}$) and uses the exponential kernel $\pi_{x'}(z') = e^{-d(x', z')/\sigma^2}$, where $d(x', z')$ denotes a distance measure between x′ and z′. The local fidelity measure L from Equation (2) is defined as:

$L(f, g, \pi_{x'}) = \sum_{z' \in Z} \pi_{x'}(z') \left( f(z) - g(z') \right)^2,$ (3)

where Z denotes the set of samples. In LIME, each generated sample is obtained by adding Gaussian noise to each feature of x′ independently.\nUsing linear models as the set of interpretable models, LIME is relatively fast but may produce poor explanations for instances close to complex decision boundaries.\nA.2 SHAP\nWe refer to SHAP as the method called Kernel SHAP by Lundberg & Lee (2017). SHAP essentially estimates Shapley values using LIME’s approach, which means that the calculation of explanations is fast due to the use of local linear models computed with the weighted least squares algorithm. The explanations of the instance x with the SHAP method are the feature contributions $\varphi_i$, $i = 1, 2, \ldots, n$, which are the coefficients of the linear model $g(x') = \varphi_0 + \sum_{i=1}^{n} \varphi_i x'_i$ obtained with LIME, where $x'_i \in \{0, 1\}$ for $i \in \{1, 2, \ldots, n\}$. As LIME does not compute Shapley values, this property is assured with the proper selection of the functions $\Omega(g)$, $\pi_{x'}(z')$, and $L(f, g, \pi_{x'})$. Let us first define the function $h_x(z')$ that maps from $\{0, 1\}^n$ to the original feature space. The function $h_x$ is defined implicitly with the equation $f(h_x(z')) = E[f(z) \mid z_i = x_i \ \forall i \in \{j;\, z'_j = 1\}]$. This is the expected value of the model f when we do not know the values of the features at indices where z′ equals 0 (these values are hidden). The functions $\Omega(g)$, $\pi_{x'}(z')$, and $L(f, g, \pi_{x'})$ that enforce the computation of Shapley values are:

$\Omega(g) = 0,$

$\pi_{x'}(z') = \frac{n}{\binom{n}{|z'|} \cdot |z'| \cdot (n - |z'|)},$

$L(f, g, \pi_{x'}) = \sum_{z' \in Z} \left( f(h_x(z')) - g(z') \right)^2 \cdot \pi_{x'}(z'),$

where $|z'|$ denotes the number of nonzero features of z′ and $Z \subseteq \{0, 1\}^n$. The main purpose of the sampling in this method is to determine the value $f(h_x(z'))$, because in general predictive models cannot work with hidden values. To determine $f(h_x(z'))$, SHAP uses the distribution set D, which we obtain from the training set. For D, SHAP takes the centroids of the clusters obtained by the k-means clustering algorithm on the training set.
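The following minimal Python sketch illustrates how such a distribution set D can be built with scikit-learn's k-means and then used for the averaging in Eq. (4) below; the function names, the toy model, and the cluster count are our own illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def shap_background_set(X_train, k=14, seed=0):
    """Distribution set D: the k-means centroids of the training set."""
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(X_train)
    return km.cluster_centers_

def expected_prediction(f, x, z_mask, D):
    """Estimate f(h_x(z')) as in Eq. (4): features with z'_i = 0 are
    hidden and filled in from each background instance d in D, and the
    model predictions are averaged."""
    X_filled = np.tile(x, (len(D), 1))
    hidden = ~z_mask.astype(bool)          # indices where z' equals 0
    X_filled[:, hidden] = D[:, hidden]     # replace hidden features with values from D
    return f(X_filled).mean()

# toy usage
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))
D = shap_background_set(X_train, k=5)
f = lambda X: X.sum(axis=1)                # toy "model"
z = np.array([1, 1, 0, 0, 1, 1])           # features 2 and 3 are hidden
print(expected_prediction(f, X_train[0], z, D))
```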
The number of clusters is set by the user. The value of $f(h_x(z'))$ is determined by the following sampling:

$f(h_x(z')) = E[f(z) \mid z_i = x_i \ \forall i \in \{j;\, z'_j = 1\}] = \frac{1}{|D|} \sum_{d \in D} f(x_{[x_i = d_i,\, z'_i = 0]}),$ (4)

where $x_{[x_i = d_i, z'_i = 0]}$ denotes the instance x with the features that are 0 in z′ set to the feature values from d.\nA.3 IME\nThe explanations of x with the IME method are the feature contributions $\varphi_i$, $i = 1, 2, \ldots, n$. Štrumbelj & Kononenko (2013) showed that Shapley values are the only solution that takes into account the contributions of interactions for every subset of features in a fair way. The i-th feature contribution can be calculated with the following expression:

$\varphi_i(x) = \frac{1}{n!} \sum_{\pi \in S_n} \sum_{w \in \mathcal{X}} p(w) \cdot \left( f(w_{[w_j = x_j,\, j \in Pre_i(\pi) \cup \{i\}]}) - f(w_{[w_j = x_j,\, j \in Pre_i(\pi)]}) \right),$ (5)

where $S_n$ denotes the group of permutations of n elements, $\mathcal{X}$ denotes the set of training instances, and $Pre_i(\pi)$ represents the set of indices that precede i in the permutation π, i.e. $Pre_i(\pi) = \{j;\, \pi(j) < \pi(i)\}$. Let p(w) denote the probability of the instance w in $\mathcal{X}$ and let $w_{[\text{formula}]}$ denote the instance w with some of its feature values changed according to the formula. To calculate $\varphi_i(x)$ exactly, we have to go through $|\mathcal{X}| \cdot n!$ iterations, which can be slow. Therefore, the IME method uses the following sampling. The sampling population for the i-th feature consists of $V_{\pi, w} = f(w_{[w_j = x_j,\, j \in Pre_i(\pi) \cup \{i\}]}) - f(w_{[w_j = x_j,\, j \in Pre_i(\pi)]})$ for every combination of permutation π and instance w. IME draws $m_i$ samples $V_1, \ldots, V_{m_i}$ at random with replacement. The estimate of $\varphi_i(x)$ is defined with the equation:

$\hat{\varphi}_i = \frac{1}{m_i} \sum_{j=1}^{m_i} V_j.$ (6)

Contrary to SHAP, IME does not use an approximation with linear models, which computes all feature contributions at once, but has to compute the Shapley values by averaging over a large enough sample for each feature separately. This makes the method slower but also potentially more robust, as the method does not assume the shape of the dependency in the space of normalized features." }, { "heading": "B DEMONSTRATION OF BETTER SAMPLING IN SHAP", "text": "The graphs in Figure 3 show the PCA-based 2D space of the evaluation part of the COMPAS dataset (see Section 4 for the dataset description). The left-hand side shows the sampled instances generated by SHAP using the k-means algorithm (14 clusters, determined by the silhouette score (Rousseeuw, 1987)). The sample produced with the MCD-VAE generator in gSHAP is shown on the right-hand side. This sample is much more similar to the original distribution compared to the SHAP sampling.\nC IMPROVED IME CONVERGENCE RATE WITH THE TREEENSEMBLE GENERATOR\nAs a preliminary analysis, we tested how the better generators affect the convergence rate of the IME explanation method. The preliminary results in Table 2 show a significant reduction in the number of needed samples and a slight increase in the error compared to the original perturbation sampling. Note that the error measure deployed is biased in favor of the perturbation sampling, which was used to determine the gold standard. The number of needed samples was determined with the sampling population’s variance, as described in Štrumbelj & Kononenko (2010)." }, { "heading": "D TRAINING DISCRIMINATOR FUNCTION OF THE ATTACKER", "text": "Details of training the attacker’s decision models d (see Figure 1b) are described in Algorithms 1, 2, and 3. We used a slightly different algorithm for each of the three explanation methods, LIME, SHAP, and IME, as each method uses a different sampling. The algorithms first create out-of-distribution instances by method-specific sampling.
The training sets for the decision models are created by labeling the created instances with 0; the instances from the sample S (to which the attacker has access) from the distribution Xdist are labeled with 1. Finally, the machine learning model dModel is trained on this training set and returned as d. In our experiments, we used a random forest classifier as dModel and the training part of each evaluation dataset as S." }, { "heading": "E HEATMAPS AS TABLES", "text": "We present the information contained in Figure 2 in a more detailed tabular form in Table 3." }, { "heading": "F COMPARING EXPLANATIONS OF ORIGINAL AND MODIFIED METHODS", "text": "We check if the improved data generators affect explanations in a non-adversarial environment. We split the dataset into training and evaluation sets in the ratio 90% : 10% and trained four classifiers from the Python scikit-learn library (Pedregosa et al., 2011): Gaussian naive Bayes, linear SVC (SVM for classification), random forest, and neural network. We explained the predictions of each classifier on the evaluation set with every combination of explanation methods and generators used in the adversarial attack experiments. For instances in the evaluation set, we measured the mean absolute difference (MAD) of the modified explanation methods, defined with the following equation:

$MAD_{gen}(x) = \frac{1}{n} \sum_{i=1}^{n} |\varphi_i^{gen}(x) - \varphi_i(x)|,$ (7)

where $\varphi_i^{gen}(x)$ and $\varphi_i(x)$ represent the explanations of the i-th feature returned by the modified and the original explanation method, respectively (recall that n denotes the number of features in the dataset).

Algorithm 1: Training of the decision model d, used by the attacker to distinguish between instances from the distribution Xdist and samples produced by the explanation methods LIME or gLIME.
Input: S = {x1, . . . , xm}: training set, nSamples: number of generated instances for each instance xi ∈ S, gen: data generator, dModel: machine learning algorithm
Output: Classifier d that outputs 1 if its input x is from Xdist and 0 otherwise
X ← ∅ // Training set for dModel
gen.fit(S) // Train the data generator on S
for i = 1 to m do
  X ← X ∪ (xi, 1) // Add an instance from the distribution
  G ← gen.newdata(nSamples, xi) // Generate nSamples new samples around xi
  for j = 1 to nSamples do // Add nSamples out-of-distribution instances
    X ← X ∪ (G[j], 0) // Add the j-th instance from the set G to X
  end
end
d ← dModel.fit(X) // Fit the model dModel to the set X and save it in d
return d

Algorithm 2: Training of the decision model d, used by the attacker to distinguish between instances from the distribution Xdist and samples produced by the explanation methods SHAP or gSHAP.
Input: S = {x1, . . . , xm}: training set, nSamples: number of generated out-of-distribution instances, k: size of the generated distribution set, gen: data generator, dModel: machine learning algorithm
Output: Classifier d that outputs 1 if its input x is from Xdist and 0 otherwise
X ← ∅ // Training set for dModel
gen.fit(S) // Train the data generator on S
if gen == KMeans or gen == rbfDataGen or gen == treeEnsemble then
  D ← gen.newdata(k) // Generate the distribution set with KMeans, rbfDataGen, or treeEnsemble
end
for i = 1 to nSamples do // Add nSamples out-of-distribution instances
  x ← random instance from S
  if gen == MCD-VAE or gen == treeEnsembleFill then
    w ← gen.newdata(1, x) // Generate an instance w in the vicinity of x
  end
  else // KMeans, treeEnsemble, or rbfDataGen
    w ← take a random instance from D
  end
  M ← choose a random subset of {1, 2, ..., len(x)} // Choose random features
  x[M] ← w[M] // Replace the values of the chosen features in x with the values from w, as in the SHAP method
  X ← X ∪ (x, 0) // Add an out-of-distribution instance
end
for i = 1 to m do // Add instances from the distribution
  X ← X ∪ (xi, 1)
end
d ← dModel.fit(X) // Fit the model dModel to the set X and save it in d
return d

We experimented on three datasets. The COMPAS dataset is described in Section 4.1. In addition to that, we used the synthetic dataset condInd from Robnik-Šikonja & Kononenko (2008) and the Ionosphere dataset from the UCI repository (Dua & Graff, 2019). Both datasets represent a binary classification problem. Apart from the target variable, condInd consists of 8 binary features, while Ionosphere consists of 34 numerical attributes. The condInd dataset contains 2000 instances, and Ionosphere contains 351 instances.

Algorithm 3: Training of the decision model d, used by the attacker to distinguish between instances from the distribution Xdist and samples produced by the explanation methods IME or gIME.
Input: S = {x1, . . . , xm}: training set, nSamples: number of generated instances for each instance xi ∈ S, gen: data generator, dModel: machine learning algorithm
Output: Classifier d that outputs 1 if its input x is from Xdist and 0 otherwise
X ← ∅ // Training set for dModel
gen.fit(S) // Train the data generator on the set S
for i = 1 to m do
  X ← X ∪ (xi, 1) // Add an instance from the distribution
  for j = 1 to nSamples do // Add nSamples out-of-distribution instances
    w ← gen.newdata(1, xi) // Generate an instance w in the vicinity of xi
    b1 ← w // First out-of-distribution instance
    b2 ← w // Second out-of-distribution instance
    π ← choose a random permutation from S_len(xi) // i.e. a random permutation of xi’s features
    idx ← choose a random number from {1, 2, ..., len(xi)}
    M1 ← {k ∈ {1, 2, ..., len(xi)}; π(k) < π(idx)} // Features that precede idx in the permutation π
    M2 ← M1 ∪ {idx}
    b1[M1] ← xi[M1] // Vector b1 as in the IME method (Štrumbelj & Kononenko, 2013)
    b2[M2] ← xi[M2] // Vector b2 as in the IME method (Štrumbelj & Kononenko, 2013)
    X ← X ∪ {(b1, 0), (b2, 0)} // Add the out-of-distribution instances
  end
end
d ← dModel.fit(X) // Fit the model dModel to the set X and save it in d
return d

The results are shown in Table 4. The differences between the original LIME and gLIME explanations are considerable (see the top part of Table 4).
This is not surprising since LIME fits local linear models in its local explanations, which can strongly differ even for small perturbations of the model’s input. The SHAP and IME explanations are very similar (the average MAD is almost negligible). We can conclude that the explanations of gSHAP and gIME are not significantly different from those of SHAP and IME in a non-adversarial environment." }, { "heading": "G ADVERSARIAL CLASSIFIERS WITH DIFFERENT PREDICTION THRESHOLDS", "text": "In a realistic scenario, the attacker might be wary of being discovered by the regulator and decide to employ deception only when it is really certain that the predicted instance is used inside the explanation method. In these circumstances, the number of times the sensitive variable is used inside the model will also depend on how aggressively the biased classifier is used. This reflection leads to another metric for the defense’s success, which measures the proportion of instances on which the sensitive variable is identified while it is actually being used, i.e. at different levels of conservatism.\nWe express different levels of the attacker’s conservatism by changing the threshold of the decision function d. The output of the adversarial model $e_t$, which uses the threshold t, is now defined with the equation:

$e_t(x) = \begin{cases} b(x), & d_{prob}(x) \geq t \\ \psi(x), & d_{prob}(x) < t, \end{cases}$ (8)

where $d_{prob}(x)$ denotes the probability that x is from the distribution Xdist, as returned by the decision model d. The threshold t therefore represents the required certainty of the attacker that a given instance is from Xdist before it deploys the biased classifier. Larger values of t will result in the biased classifier being deployed fewer times, but with greater certainty that it has to be used. If we use t = 0.5 in Equation (8), we get the adversarial model that behaves identically to the one described with Equation (1) in Section 2.2.\nIn this experiment, we used only the COMPAS dataset, which we split into training and evaluation sets in the ratio 90% : 10%. We performed the same experiment as in Section 4.2, but with different thresholds for the adversarial models. We used five different thresholds: 0.3, 0.4, 0.5, 0.6, and 0.7.\nWe first checked how different thresholds affect the deployment of the biased model. For each combination of the generator used in the adversarial model’s training and the certainty threshold used by the adversarial model, we measured the proportion of instances from the evaluation set where the biased classifier was deployed. The results are shown in Table 5. As expected, the percentage of instances where the biased classifier is deployed decreases with increasing t. The biased classifier is deployed less frequently when TreeEnsemble and rbfDataGen are used in its training, which is also expected, as these two generators give the best results inside the explanation methods. This means that for these generators, it is more difficult for an attacker to distinguish between the actual and the generated instances. The biased classifier is deployed most often in the case of the LIME adversarial model and least often in the SHAP adversarial model.\nNext, we tested the proportion of instances where the sensitive feature (in the case of the COMPAS dataset, this is race) was recognized as the most important feature by the explanation method on two different subsets of instances: the instances from the evaluation set on which the biased classifier was deployed and the whole evaluation set. We measured these numbers for all combinations of thresholds and explanation methods.
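For concreteness, here is a minimal sketch of the thresholded adversarial model from Equation (8); all component models are toy stand-ins with hypothetical names, not the classifiers used in the experiments.

```python
import numpy as np

def adversarial_model(x, d_prob, biased, unbiased, t=0.5):
    """Thresholded adversarial model e_t(x) from Eq. (8): deploy the
    biased classifier only when the decision model is at least
    t-certain that x comes from the data distribution X_dist."""
    return biased(x) if d_prob(x) >= t else unbiased(x)

# toy usage
d_prob = lambda x: 0.8                # decision model's estimate of P(x from X_dist)
biased = lambda x: int(x[0] > 0)      # depends only on the sensitive feature
unbiased = lambda x: int(x[-1] > 0)   # depends only on the unrelated feature
x = np.array([1.0, -0.3, 0.2, 1.0])
print(adversarial_model(x, d_prob, biased, unbiased, t=0.7))  # biased branch is used
```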
The results are shown in Tables 6, 7, and 8 for gLIME, gSHAP, and gIME, respectively. We can observe the same pattern as in Section 4.2: gIME is still the most robust, and the TreeEnsemble generator still gives the best results. The sensitive attribute is recognized as the most important feature more often when the decision threshold values are lower, which is expected, as in these cases the adversarial model behaves more aggressively. While the percentage of the instances on which the racist behavior of the adversarial model is recognized drops with higher thresholds, it remains high enough to warn the regulator about the problematic behavior of the prediction model (especially in the case of gSHAP and gIME using TreeEnsemble as the data generator). From that, we can conclude that the modified explanation methods remain robust enough, even with more conservative adversaries." } ]
2020
null
SP:2d40321225b606569305bd303ebad2e1711fd07b
[ "The paper theoretically studies the beneficial effect of mixup on robustness and generalization of machine models. The mixup loss is rewritten to be the sum of the original empirical loss and a regularization term (plus a high order term). For robustness, the regularization term is proven to be upper bound of first and second order terms of the adversarial loss's Taylor expansion. Hence, the mixup loss can upper bound the approximate adversarial loss. For generalization, the regularization term is used to control the hypothesis to have small Rademacher complexity. The paper is clearly written and well organized." ]
Mixup is a popular data augmentation technique based on taking convex combinations of pairs of examples and their labels. This simple technique has been shown to substantially improve both the robustness and the generalization of the trained model. However, it is not well understood why such improvement occurs. In this paper, we provide theoretical analysis to demonstrate how using Mixup in training helps model robustness and generalization. For robustness, we show that minimizing the Mixup loss corresponds to approximately minimizing an upper bound of the adversarial loss. This explains why models obtained by Mixup training exhibit robustness to several kinds of adversarial attacks, such as the Fast Gradient Sign Method (FGSM). For generalization, we prove that Mixup augmentation corresponds to a specific type of data-adaptive regularization which reduces overfitting. Our analysis provides new insights and a framework to understand Mixup.
[ { "affiliations": [], "name": "Linjun Zhang" }, { "affiliations": [], "name": "Zhun Deng" }, { "affiliations": [], "name": "Kenji Kawaguchi" }, { "affiliations": [], "name": "James Zou" } ]
[ { "authors": [ "Eric Arazo", "Diego Ortego", "Paul Albert", "Noel E O’Connor", "Kevin McGuinness" ], "title": "Unsupervised label noise modeling and loss correction", "venue": null, "year": 1904 }, { "authors": [ "Raman Arora", "Peter Bartlett", "Poorya Mianjy", "Nathan Srebro" ], "title": "Dropout: Explicit forms and capacity control", "venue": "arXiv preprint arXiv:2003.03397,", "year": 2020 }, { "authors": [ "Sanjeev Arora", "R Ge", "B Neyshabur", "Y Zhang" ], "title": "Stronger generalization bounds for deep nets via a compression approach", "venue": "In 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Peter L Bartlett", "Shahar Mendelson" ], "title": "Rademacher and gaussian complexities: Risk bounds and structural results", "venue": "Journal of Machine Learning Research,", "year": 2002 }, { "authors": [ "Peter L Bartlett", "Olivier Bousquet", "Shahar Mendelson" ], "title": "Localized rademacher complexities", "venue": "In International Conference on Computational Learning Theory,", "year": 2002 }, { "authors": [ "Peter L Bartlett", "Dylan J Foster", "Matus J Telgarsky" ], "title": "Spectrally-normalized margin bounds for neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Christopher Beckham", "Sina Honari", "Vikas Verma", "Alex M Lamb", "Farnoosh Ghadiri", "R Devon Hjelm", "Yoshua Bengio", "Chris Pal" ], "title": "On adversarial mixup resynthesis", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin A Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Olivier Bousquet", "André Elisseeff" ], "title": "Stability and generalization", "venue": "Journal of machine learning research,", "year": 2002 }, { "authors": [ "Lars Buitinck", "Gilles Louppe", "Mathieu Blondel", "Fabian Pedregosa", "Andreas Mueller", "Olivier Grisel", "Vlad Niculae", "Peter Prettenhofer", "Alexandre Gramfort", "Jaques Grobler", "Robert Layton", "Jake VanderPlas", "Arnaud Joly", "Brian Holt", "Gaël Varoquaux" ], "title": "API design for machine learning software: experiences from the scikit-learn project", "venue": "In ECML PKDD Workshop: Languages for Data Mining and Machine Learning,", "year": 2013 }, { "authors": [ "Luigi Carratino", "Moustapha Cissé", "Rodolphe Jenatton", "Jean-Philippe Vert" ], "title": "On mixup regularization", "venue": "arXiv preprint arXiv:2006.06049,", "year": 2020 }, { "authors": [ "Tarin Clanuwat", "Mikel Bober-Irizar", "Asanobu Kitamoto", "Alex Lamb", "Kazuaki Yamamoto", "David Ha" ], "title": "Deep learning for classical japanese literature", "venue": "In NeurIPS Creativity Workshop", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Mojtaba Faramarzi", "Mohammad Amini", "Akilesh Badrinaaraayanan", "Vikas Verma", "Sarath Chandar" ], "title": "Patchup: A regularization technique for convolutional neural networks", "venue": "arXiv preprint arXiv:2006.07794,", "year": 2020 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Hongyu Guo", "Yongyi Mao", "Richong Zhang" ], "title": "Mixup as locally linear out-of-manifold regularization", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Moritz Hardt", "Ben Recht", "Yoram Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "David P Helmbold", "Philip M Long" ], "title": "On the inductive bias of dropout", "venue": "The Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "Kenji Kawaguchi", "Leslie Pack Kaelbling", "Yoshua Bengio" ], "title": "Generalization in deep learning", "venue": "arXiv preprint arXiv:1710.05468,", "year": 2017 }, { "authors": [ "Jang-Hyun Kim", "Wonho Choo", "Hyun Oh Song" ], "title": "Puzzle mix: Exploiting saliency and local statistics for optimal mixup", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Alex Lamb", "Vikas Verma", "Juho Kannala", "Yoshua Bengio" ], "title": "Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy", "venue": "In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security,", "year": 2019 }, { "authors": [ "Poorya Mianjy", "Raman Arora", "Rene Vidal" ], "title": "On the implicit bias of dropout", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "NIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Behnam Neyshabur", "Zhiyuan Li" ], "title": "Towards understanding the role of over-parametrization in generalization of neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Nathan Srebro" ], "title": "Norm-based capacity control in neural networks", "venue": "In Conference on Learning Theory, pp", "year": 2015 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James 
Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Shizhao Sun", "Wei Chen", "Liwei Wang", "Xiaoguang Liu", "Tie-Yan Liu" ], "title": "On the depth of deep neural networks: A theoretical view", "venue": "arXiv preprint arXiv:1506.05232,", "year": 2015 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Sunil Thulasidasan", "Gopinath Chennupati", "Jeff A Bilmes", "Tanmoy Bhattacharya", "Sarah Michalak" ], "title": "On mixup training: Improved calibration and predictive uncertainty for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "arXiv preprint arXiv:1805.12152,", "year": 2018 }, { "authors": [ "V Vapnik" ], "title": "Estimation of dependences based on empirical data nauka", "venue": null, "year": 1979 }, { "authors": [ "Vladimir Vapnik" ], "title": "The nature of statistical learning theory", "venue": "Springer science & business media,", "year": 2013 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Christopher Beckham", "Amir Najafi", "Ioannis Mitliagkas", "David LopezPaz", "Yoshua Bengio" ], "title": "Manifold mixup: Better representations by interpolating hidden states", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Juho Kannala", "Yoshua Bengio", "David Lopez-Paz" ], "title": "Interpolation consistency training for semi-supervised learning", "venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Stefan Wager", "Sida Wang", "Percy S Liang" ], "title": "Dropout training as adaptive regularization", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Sida Wang", "Christopher Manning" ], "title": "Fast dropout training", "venue": "In international conference on machine learning,", "year": 2013 }, { "authors": [ "Colin Wei", "Sham Kakade", "Tengyu Ma" ], "title": "The implicit and explicit regularization effects of dropout", "venue": "arXiv preprint arXiv:2002.12915,", "year": 2020 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Huan Xu", "Shie Mannor" ], "title": "Robustness and generalization", "venue": "Machine learning,", "year": 2012 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", 
"Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "arXiv preprint arXiv:1611.03530,", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Hinton" ], "title": "Fashion-MNIST (Xiao et al., 2017), and Kuzushiji-MNIST (Clanuwat et al., 2019). For each dataset, we consider two cases: with and without standard additional data augmentation for each dataset. We used the standard pre-activation ResNet with 18 layers (He et al., 2016b). Stochastic gradient descent (SGD) was used to train the models with mini-batch size = 64, the momentum coefficient = 0.9, and the learning rate = 0.1", "venue": null, "year": 2009 } ]
[ { "heading": "1 INTRODUCTION", "text": "Mixup was introduced by Zhang et al. (2018) as a data augmentation technique. It has been empirically shown to substantially improve test performance and robustness to adversarial noise of state-of-the-art neural network architectures (Zhang et al., 2018; Lamb et al., 2019; Thulasidasan et al., 2019; Zhang et al., 2018; Arazo et al., 2019). Despite the impressive empirical performance, it is still not fully understood why Mixup leads to such improvement across the different aspects mentioned above. We first provide more background about robustness and generalization properties of deep networks and Mixup. Then we give an overview of our main contributions.\nAdversarial robustness. Although neural networks have achieved remarkable success in many areas such as natural language processing (Devlin et al., 2018) and image recognition (He et al., 2016a), it has been observed that neural networks are very sensitive to adversarial examples — prediction can be easily flipped by human imperceptible perturbations (Goodfellow et al., 2014; Szegedy et al., 2013). Specifically, in Goodfellow et al. (2014), the authors use fast gradient sign method (FGSM) to generate adversarial examples, which makes an image of panda to be classified as gibbon with high confidence. Although various defense mechanisms have been proposed against adversarial attacks, those mechanisms typically sacrifice test accuracy in turn for robustness (Tsipras et al., 2018) and many of them require a significant amount of additional computation time. In contrast, Mixup training tends to improve test accuracy and at the same time also exhibits a certain degree of resistance to adversarial examples, such as those generated by FGSM (Lamb et al., 2019). Moreover, the corresponding training time is relatively modest. As an illustration, we compare the robust test\n∗Equal contribution.\naccuracy between a model trained with Mixup and a model trained with standard empirical risk minimization (ERM) under adversarial attacks generated by FGSM (Fig. 1a). The model trained with Mixup loss has much better robust accuracy. Robustness of Mixup under other attacks have also been empirically studied in Lamb et al. (2019).\nGeneralization. Generalization theory has been a central focus of learning theory (Vapnik, 1979; 2013; Bartlett et al., 2002; Bartlett & Mendelson, 2002; Bousquet & Elisseeff, 2002; Xu & Mannor, 2012), but it still remains a mystery for many modern deep learning algorithms (Zhang et al., 2016; Kawaguchi et al., 2017). For Mixup, from Fig. (1b), we observe that Mixup training results in better test performance than the standard empirical risk minimization. That is mainly due to its good generalization property since the training errors are small for both Mixup training and empirical risk minimization (experiments with training error results are included in the appendix). While there have been many enlightening studies trying to establish generalization theory for modern machine learning algorithms (Sun et al., 2015; Neyshabur et al., 2015; Hardt et al., 2016; Bartlett et al., 2017; Kawaguchi et al., 2017; Arora et al., 2018; Neyshabur & Li, 2019), few existing studies have illustrated the generalization behavior of Mixup training in theory.\nOur contributions. In this paper, we theoretically investigate how Mixup improves both adversarial robustness and generalization. We begin by relating the loss function induced by Mixup to the standard loss with additional adaptive regularization terms. 
Based on the derived regularization terms, we show that Mixup training minimizes an upper bound on the adversarial loss, which leads to robustness against single-step adversarial attacks. For generalization, we show how the regularization terms can reduce over-fitting and lead to better generalization behaviors than those of standard training. Our analyses provide insights and a framework to understand the impact of Mixup.
Outline of the paper. Section 2 introduces the notations and problem setup. In Section 3, we present our main theoretical results, including the regularization effect of Mixup and the subsequent analysis showing that such regularization improves adversarial robustness and generalization. Section 4 concludes with a discussion of future work. Proofs are deferred to the Appendix." }, { "heading": "1.1 RELATED WORK", "text": "Since its advent, Mixup training (Zhang et al., 2018) has been shown to substantially improve generalization and single-step adversarial robustness across a wide range of tasks, in both supervised (Lamb et al., 2019; Verma et al., 2019a; Guo et al., 2019) and semi-supervised settings (Berthelot et al., 2019; Verma et al., 2019b). This has motivated a recent line of work developing a number of variants of Mixup, including Manifold Mixup (Verma et al., 2019a), Puzzle Mix (Kim et al., 2020), CutMix (Yun et al., 2019), Adversarial Mixup Resynthesis (Beckham et al., 2019), and PatchUp (Faramarzi et al., 2020). However, theoretical understanding of the underlying mechanism of why Mixup and its variants perform well on generalization and adversarial robustness is still limited.
Some of the theoretical tools we use in this paper are related to Wang & Manning (2013) and Wager et al. (2013), where the authors use a second-order Taylor approximation to derive a regularized loss function for Dropout training. This technique was then extended to derive further properties of Dropout, including the inductive bias of Dropout (Helmbold & Long, 2015), the regularization effect in matrix factorization (Mianjy et al., 2018), and the implicit regularization in neural networks (Wei et al., 2020). The technique has recently been applied to Mixup in a parallel and independent work (Carratino et al., 2020) to derive regularization terms. Compared with the results in Carratino et al. (2020), our derived regularization enjoys a simpler form and therefore enables the subsequent analysis of adversarial robustness and generalization. We clarify the detailed differences in Section 3.
To the best of our knowledge, our paper is the first to provide a theoretical treatment connecting regularization, adversarial robustness, and generalization for Mixup training." }, { "heading": "2 PRELIMINARIES", "text": "In this section, we state our notations and briefly recap the definition of Mixup.
Notations. We denote the general parameterized loss as l(θ, z), where θ ∈ Θ ⊆ Rd and z = (x, y) is the input and output pair. We consider a training dataset S = {(x1, y1), · · · , (xn, yn)}, where xi ∈ X ⊆ Rp and yi ∈ Y ⊆ Rm are i.i.d. drawn from a joint distribution Px,y. We further denote x̃i,j(λ) = λxi + (1 − λ)xj, ỹi,j(λ) = λyi + (1 − λ)yj for λ ∈ [0, 1], and let z̃i,j(λ) = (x̃i,j(λ), ỹi,j(λ)). Let L(θ) = Eλ∼Px,y... rather, L(θ) = Ez∼Px,y l(θ, z) denote the standard population loss and Lstdn(θ, S) = (1/n) Σ_{i=1}^{n} l(θ, zi) denote the standard empirical loss.
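As a minimal sketch of the mixing operation just defined (our own illustration, not taken from the paper: the toy data, the default α = β = 1, and the use of a random partner permutation in place of the full double sum over (i, j) are all assumptions), one can estimate the Mixup loss of Eq. (1) below for the logistic loss studied in Section 3.2 as follows:
```python
import numpy as np

def mixup_pairs(X, Y, alpha=1.0, beta=1.0, rng=None):
    # One Monte-Carlo draw of the mixed pairs:
    # x_tilde = lam * x_i + (1 - lam) * x_j, y_tilde likewise,
    # with lam ~ Beta(alpha, beta) and partner j given by a permutation.
    rng = rng if rng is not None else np.random.default_rng(0)
    j = rng.permutation(X.shape[0])
    lam = rng.beta(alpha, beta, size=(X.shape[0], 1))
    return lam * X + (1 - lam) * X[j], lam * Y + (1 - lam) * Y[j]

def logistic_loss(theta, X, Y):
    # l(theta, (x, y)) = log(1 + exp(theta^T x)) - y * theta^T x,
    # averaged over the sample (the standard empirical loss).
    z = X @ theta
    return np.mean(np.logaddexp(0.0, z) - Y.ravel() * z)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                       # toy inputs (assumption)
Y = (X @ rng.normal(size=10) > 0).astype(float)[:, None]
theta = rng.normal(size=10)
std = logistic_loss(theta, X, Y)
mix = np.mean([logistic_loss(theta, *mixup_pairs(X, Y, rng=rng))
               for _ in range(50)])                  # Monte-Carlo over (lam, j)
print(f"standard loss: {std:.3f}, estimated Mixup loss: {mix:.3f}")
```
Practical Mixup implementations typically use such a per-minibatch permutation estimator rather than evaluating the full double sum.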
For the two distributions D1 and D2, we use pD1 + (1 − p)D2 for p ∈ (0, 1) to denote the mixture distribution such that a sample is drawn with probabilities p and (1 − p) from D1 and D2, respectively. For a parameterized function fθ(x), we use ∇fθ(x) and ∇θfθ(x) to respectively denote the gradient with respect to x and θ. For two vectors a and b, we use cos(a, b) to denote 〈a, b〉/(‖a‖ · ‖b‖).
Mixup. Generally, for classification cases, the output yi is the embedding of the class of xi, i.e., the one-hot encoding obtained by taking m as the total number of classes and letting yi ∈ {0, 1}m be the binary vector with all entries equal to zero except for the one corresponding to the class of xi. In particular, if we take m = 1, this degenerates to binary classification. For regression cases, yi can be any real number/vector. The Mixup loss is defined in the following form:
Lmixn(θ, S) = (1/n²) Σ_{i,j=1}^{n} Eλ∼Dλ l(θ, z̃i,j(λ)), (1)
where Dλ is a distribution supported on [0, 1]. Throughout the paper, we consider the most commonly used Dλ, the Beta distribution Beta(α, β) for α, β > 0." }, { "heading": "3 MAIN RESULTS", "text": "In this section, we first introduce a lemma that characterizes the regularization effect of Mixup. Based on this lemma, we then derive our main theoretical results on adversarial robustness and generalization error bounds in Sections 3.2 and 3.3, respectively." }, { "heading": "3.1 THE REGULARIZATION EFFECT OF MIXUP", "text": "As a starting point, we demonstrate how Mixup training is approximately equivalent to optimizing a regularized version of the standard empirical loss Lstdn(θ, S). Throughout the paper, we consider the following class of loss functions for the prediction function fθ(x) and target y:
L = {l(θ, (x, y)) | l(θ, (x, y)) = h(fθ(x)) − yfθ(x) for some function h}. (2)
This function class L includes many commonly used losses, including the loss functions induced by Generalized Linear Models (GLMs), such as linear regression and logistic regression, and also the cross-entropy loss for neural networks. In the following, we introduce a lemma stating that Mixup training with λ ∼ Dλ = Beta(α, β) induces a regularized loss function with the weight of each regularization term specified by a mixture of Beta distributions, D̃λ = (α/(α+β)) Beta(α+1, β) + (β/(α+β)) Beta(β+1, α).
Lemma 3.1. Consider the loss function l(θ, (x, y)) = h(fθ(x)) − yfθ(x), where h(·) and fθ(·) for all θ ∈ Θ are twice differentiable. We further denote D̃λ as a mixture of two Beta distributions, i.e., (α/(α+β)) Beta(α+1, β) + (β/(α+β)) Beta(β+1, α), and DX as the empirical distribution of the training dataset S = (x1, · · · , xn). Then the corresponding Mixup loss Lmixn(θ, S), as defined in Eq. (1) with λ ∼ Dλ = Beta(α, β), can be rewritten as
Lmixn(θ, S) = Lstdn(θ, S) + Σ_{i=1}^{3} Ri(θ, S) + Eλ∼D̃λ[(1 − λ)² ϕ(1 − λ)],
where lim_{a→0} ϕ(a) = 0 and
R1(θ, S) = (Eλ∼D̃λ[1 − λ]/n) Σ_{i=1}^{n} (h′(fθ(xi)) − yi) ∇fθ(xi)⊤ Erx∼DX[rx − xi],
R2(θ, S) = (Eλ∼D̃λ[(1 − λ)²]/(2n)) Σ_{i=1}^{n} h′′(fθ(xi)) ∇fθ(xi)⊤ Erx∼DX[(rx − xi)(rx − xi)⊤] ∇fθ(xi),
R3(θ, S) = (Eλ∼D̃λ[(1 − λ)²]/(2n)) Σ_{i=1}^{n} (h′(fθ(xi)) − yi) Erx∼DX[(rx − xi)⊤ ∇²fθ(xi) (rx − xi)].
By putting the higher-order terms of the approximation in ϕ(·), this result shows that Mixup is related to regularizing ∇fθ(xi) and ∇²fθ(xi), the first and second directional derivatives with respect to xi. Throughout the paper, our theory is mainly built upon analysis of the quadratic approximation of Lmixn(θ, S), which we further denote as
L̃mixn(θ, S) := Lstdn(θ, S) + Σ_{i=1}^{3} Ri(θ, S).
(3)
Comparison with related work. The result in Lemma 3.1 relies on the second-order Taylor expansion of the loss function in Eq. (1). Similar approximations have been proposed before to study the regularization effect of Dropout training; see Wang & Manning (2013); Wager et al. (2013); Mianjy et al. (2018); Wei et al. (2020). Recently, Carratino et al. (2020) independently used a similar approximation to study the regularization effect of Mixup. However, the regularization terms derived in Carratino et al. (2020) are much more complicated than those in Lemma 3.1. For example, in GLMs, our technique yields the regularization term shown in Lemma 3.3, which is much simpler than the terms in Corollaries 2 and 3 of Carratino et al. (2020). One technical step we use here to simplify the regularization expression is to identify Mixup with an input perturbation; see the proof in the Appendix for details. This simpler expression enables us to study the robustness and generalization of Mixup in the subsequent sections.
Validity of the approximation. In the following, we present numerical experiments to support the approximation in Eq. (3). Following the setup of the numerical validations in Wager et al. (2013); Carratino et al. (2020), we experimentally show that the quadratic approximation is generally very accurate. Specifically, we train a Logistic Regression model (as one example of a GLM, which we study later) and a two-layer neural network with ReLU activations. We use the two-moons dataset (Buitinck et al., 2013). Fig. 2 shows the training and test losses for two models trained with different loss functions: the original Mixup loss and the approximate Mixup loss. Both models had the same random initialization scheme. Throughout training, we compute the test and training loss of each model using its own loss function. The empirical results show that the approximated Mixup loss stays quite close to the original Mixup loss." }, { "heading": "3.2 MIXUP AND ADVERSARIAL ROBUSTNESS", "text": "Having introduced L̃mixn(θ, S) in Eq. (3), we are now ready to state our main theoretical results. In this subsection, we illustrate how Mixup helps adversarial robustness. We prove that minimizing L̃mixn(θ, S) is equivalent to minimizing an upper bound of the second-order Taylor expansion of an adversarial loss.
Throughout this subsection, we study the logistic loss function
l(θ, z) = log(1 + exp(fθ(x))) − yfθ(x),
where y ∈ Y = {0, 1}. In addition, let g be the logistic function such that g(s) = e^s/(1 + e^s) and consider the case where θ is in the data-dependent space Θ, defined as
Θ = {θ ∈ Rd : yifθ(xi) + (yi − 1)fθ(xi) ≥ 0 for all i = 1, . . . , n}.
Notice that Θ contains the set of all θ with zero training error:
Θ ⊇ {θ ∈ Rd : the label prediction ŷi = 1{fθ(xi) ≥ 0} is equal to yi for all i = 1, . . . , n}. (4)
In many practical cases, the training error (0-1 loss) becomes zero in finite time although the training loss does not. Equation (4) shows that the condition θ ∈ Θ is satisfied in finite time in such practical cases with zero training error.
Logistic regression. As a starting point, we study logistic regression with fθ(x) = θ⊤x, in which case the number of parameters coincides with the data dimension, i.e., p = d. For a given ε > 0, we consider the adversarial loss with ℓ2-attack of size ε√d, that is, Ladvn(θ, S) = (1/n) Σ_{i=1}^{n} max_{‖δi‖2≤ε√d} l(θ, (xi + δi, yi)). We first present the following second-order Taylor approximation of Ladvn(θ, S).
Lemma 3.2.
The second-order Taylor approximation of Ladvn(θ, S) is (1/n) Σ_{i=1}^{n} l̃adv(ε√d, (xi, yi)), where for any η > 0, x ∈ Rp and y ∈ {0, 1},
l̃adv(η, (x, y)) = l(θ, (x, y)) + η |g(x⊤θ) − y| · ‖θ‖2 + (η²/2) · g(x⊤θ)(1 − g(x⊤θ)) · ‖θ‖2². (5)
By comparing l̃adv(η, (x, y)) and L̃mixn(θ, S) applied to logistic regression, we prove the following.
Theorem 3.1. Suppose that fθ(x) = x⊤θ and there exists a constant cx > 0 such that ‖xi‖2 ≥ cx√d for all i ∈ {1, . . . , n}. Then, for any θ ∈ Θ, we have
L̃mixn(θ, S) ≥ (1/n) Σ_{i=1}^{n} l̃adv(εi√d, (xi, yi)) ≥ (1/n) Σ_{i=1}^{n} l̃adv(εmix√d, (xi, yi)),
where εi = Ri cx Eλ∼D̃λ[1 − λ] with Ri = |cos(θ, xi)|, and εmix = R · cx Eλ∼D̃λ[1 − λ] with R = min_{i∈{1,...,n}} |cos(θ, xi)|.
Theorem 3.1 suggests that L̃mixn(θ, S) is an upper bound of the second-order Taylor expansion of the adversarial loss with ℓ2-attack of size εmix√d. Note that εmix depends on θ; one can think of the final radius as the one attained at the minimizer of L̃mixn(θ, S). Therefore, minimizing the Mixup loss would result in a small adversarial loss. Our analysis suggests that Mixup by itself can improve robustness against small attacks, which tend to be single-step attacks (Lamb et al., 2019). An interesting direction for future work is to explore whether combining Mixup with adversarial training is able to provide robustness against larger and more sophisticated attacks such as iterative projected gradient descent and other multiple-step attacks.
Remark 3.1. Note that Theorem 3.1 also implies adversarial robustness against ℓ∞ attacks of size ε since, for any attack δ, ‖δ‖∞ ≤ ε implies ‖δ‖2 ≤ ε√d, and therefore max_{‖δ‖∞≤ε} l(θ, (x + δ, y)) ≤ max_{‖δ‖2≤ε√d} l(θ, (x + δ, y)).
In the following, we provide more discussion about the range of R = min_{i∈{1,...,n}} |cos(θ, xi)|. We first show that under additional regularity conditions, we can obtain a high-probability lower bound that does not depend on the sample size. We then numerically demonstrate, at the end of this subsection, that R tends to increase during training for both linear models and neural networks.
A constant lower bound for logistic regression. Now, we show how to obtain a constant lower bound by adding some additional conditions.
Assumption 3.1. Let us denote Θ̂n ⊆ Θ as the set of minimizers of L̃mixn(θ, S). We assume there exists a set Θ∗ 1, such that for all n ≥ N, where N is a positive integer, Θ̂n ⊆ Θ∗ with probability at least 1 − δn, where δn → 0 as n → ∞. Moreover, there exists a τ ∈ (0, 1) such that
pτ = P({x ∈ X : |cos(x, θ)| ≥ τ for all θ ∈ Θ∗}) ∈ (0, 1].
Such a condition generally holds for regular optimization problems, where the minimizers are not located too dispersedly in the sense of solid angle (instead of Euclidean distance). More specifically, if we normalize all the minimizers' ℓ2 norms to 1, this assumption requires that the set of minimizers should not be located all over the sphere. In addition, Assumption 3.1 only requires the probability pτ and the threshold τ to be non-zero. In particular, if the distribution of x has positive mass in all solid angles, then when the set of minimizers is discrete, this assumption holds. For more complicated cases in which the set of minimizers consists of sub-manifolds, as long as there exists a solid angle in X that is disjoint from the set of minimizers, the assumption still holds.
Theorem 3.2. Under Assumption 3.1, for fθ(x) = x⊤θ, suppose there exist constants bx, cx > 0 such that cx√d ≤ ‖xi‖2 ≤ bx√d for all i ∈ {1, . . . , n}.
Then, with probability at least 1 − δn − 2 exp(−n pτ²/2), there exist constants κ > 0 and κ2 > κ1 > 0 such that for any θ ∈ Θ̂n, we have
L̃mixn(θ, S) ≥ (1/n) Σ_{i=1}^{n} l̃adv(ε̃mix√d, (xi, yi)),
where ε̃mix = R̃ cx Eλ∼D̃λ[1 − λ] and R̃ = min{ pτκ1/(2κ2 − pτ(κ2 − κ1)), √(4κpτ/(2 − pτ + 4κpτ)) } · τ.
1 Under some well-separation and smoothness conditions, we would expect all elements of Θ̂n to fall into a neighborhood Nn of the minimizers of ES L̃mixn(θ, S), and Nn will shrink as n increases, i.e., Nn+1 ⊂ Nn. One can think of Θ∗ as a set containing all Nn for n ≥ N.
Neural networks with ReLU / Max-pooling. The results in the above subsection can be extended to the case of neural networks with ReLU activation functions and max-pooling. Specifically, we consider the logistic loss, l(θ, z) = log(1 + exp(fθ(x))) − yfθ(x) with y ∈ {0, 1}, where fθ(x) represents a fully connected neural network with ReLU activation functions or max-pooling:
fθ(x) = β⊤σ(WN−1 · · · W2 σ(W1 x)).
Here, σ represents the nonlinearity via ReLU and max-pooling, each Wi is a matrix, and β is a column vector; i.e., θ consists of {Wi}_{i=1}^{N−1} and β. With the nonlinearity σ for ReLU and max-pooling, the function fθ satisfies fθ(x) = ∇fθ(x)⊤x and ∇²fθ(x) = 0 almost everywhere, where the gradient is taken with respect to the input x. Under such conditions, similar to Lemma 3.2, the adversarial loss function (1/n) Σ_{i=1}^{n} max_{‖δi‖2≤ε√d} l(θ, (xi + δi, yi)) can be written as
Lstdn(θ, S) + εmix√d · ((1/n) Σ_{i=1}^{n} |g(fθ(xi)) − yi| ‖∇fθ(xi)‖2) + (ε²mix d/2) · ((1/n) Σ_{i=1}^{n} |h′′(fθ(xi))| ‖∇fθ(xi)‖2²). (6)
With a slight abuse of notation, we also denote
l̃adv(δ, (x, y)) = l(θ, (x, y)) + δ |g(fθ(x)) − y| ‖∇fθ(x)‖2 + (δ²d/2) |h′′(fθ(x))| ‖∇fθ(x)‖2².
The following theorem suggests that minimizing the Mixup loss for neural nets also leads to a small adversarial loss.
Theorem 3.3. Assume that fθ(xi) = ∇fθ(xi)⊤xi and ∇²fθ(xi) = 0 (which are satisfied by the ReLU and max-pooling activation functions), and that there exists a constant cx > 0 such that ‖xi‖2 ≥ cx√d for all i ∈ {1, . . . , n}. Then, for any θ ∈ Θ, we have
L̃mixn(θ, S) ≥ (1/n) Σ_{i=1}^{n} l̃adv(εi√d, (xi, yi)) ≥ (1/n) Σ_{i=1}^{n} l̃adv(εmix√d, (xi, yi)),
where εi = Ri cx Eλ∼D̃λ[1 − λ], εmix = R · cx Eλ∼D̃λ[1 − λ], Ri = |cos(∇fθ(xi), xi)|, and R = min_{i∈{1,...,n}} |cos(∇fθ(xi), xi)|.
A similar constant lower bound can be derived in the setting of neural networks. Due to limited space, please see the detailed discussion in the appendix.
On the value of R = min_i Ri via experiments. For both linear models and neural networks, after the training accuracy reaches 100%, the logistic loss is further minimized when |fθ(xi)| increases. Since |fθ(xi)| = |∇fθ(xi)⊤xi| = ‖∇fθ(xi)‖2 ‖xi‖2 Ri, this suggests that Ri and R tend to increase after the training accuracy reaches 100% (e.g., ∇fθ(xi) = θ in the case of linear models). We confirm this phenomenon in Fig. 3. In the figure, R is initially small but tends to increase after the training accuracy reaches 100%, as expected. For example, for the ANN, the value of R was initially 2.27×10−5 but increased to 6.11×10−2 after training. Fig. 3 (c) and (d) also show that Ri for each i-th data point tends to increase during training and that the values of Ri for many points are much larger than the pessimistic lower bound R: e.g., whereas R = 6.11 × 10−2, we have Ri > 0.8 for several data points in Fig. 3 (d). For this experiment, we generated 100 data points as xi ∼ N(0, I) and yi = 1{xi⊤θ∗ > 0}, where xi ∈ R10 and θ∗ ∼ N(0, I).
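As a sketch of this data-generation setup and of the quantity Ri = |cos(θ, xi)| for the linear model (our illustrative code, not the authors' implementation; the random seed is an assumption):
```python
import numpy as np

rng = np.random.default_rng(0)                 # seed is an illustrative assumption
d = 10
X = rng.normal(size=(100, d))                  # x_i ~ N(0, I), x_i in R^10
theta_star = rng.normal(size=d)                # theta* ~ N(0, I)
y = (X @ theta_star > 0).astype(float)         # y_i = 1{x_i^T theta* > 0}

def cosine_scores(theta, X):
    # R_i = |cos(theta, x_i)| for a linear model f_theta(x) = theta^T x,
    # whose input gradient is grad_x f_theta(x) = theta.
    return np.abs(X @ theta) / (np.linalg.norm(theta) * np.linalg.norm(X, axis=1))

R_i = cosine_scores(theta_star, X)             # theta* as a stand-in parameter
print(f"R = min_i R_i = {R_i.min():.4f}")
```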
We used SGD to train linear models and ANNs with ReLU activations and 50 neurons per each of two hidden layers. We set the learning rate to be 0.1 and the momentum coefficient to be 0.9. We turned off weight decay so that R is not maximized as a result of bounding ‖∇fθ(xi)‖, which is a trivial case from the discussion above." }, { "heading": "3.3 MIXUP AND GENERALIZATION", "text": "In this section, we show that the data-dependent regularization induced by Mixup directly controls the Rademacher complexity of the underlying function classes, and therefore yields concrete generalization error bounds. We study two models – the Generalized Linear Model (GLM) and two-layer ReLU nets with squared loss.\nGeneralized linear model. A Generalized Linear Model is a flexible generalization of ordinary linear regression, where the corresponding loss takes the following form:\nl(θ, (x, y)) = A(θ>x)− yθ>x,\nwhere A(·) is the log-partition function, x ∈ Rp and y ∈ R. For instance, if we take A(θ>x) = log(1 + eθ\n>x) and y ∈ {0, 1}, then the model corresponds to the logistic regression. In this paragraph, we consider the case where Θ, X and Y are all bounded. By further taking advantage of the property of shift and scaling invariance of GLM, we can further simplify the regularization terms in Lemma 3.1 and obtain the following results.\nLemma 3.3. Consider the centralized dataset S, that is, 1/n ∑n i=1 xi = 0. and denote Σ̂X = 1 nxix > i . For a GLM, if A(·) is twice differentiable, then the regularization term obtained by the second-order approximation of L̃mixn (θ, S) is given by\n1\n2n [ n∑ i=1 A′′(θ>xi)] · Eλ∼D̃λ [ (1− λ)2 λ2 ]θ>Σ̂Xθ, (7)\nwhere D̃λ = αα+βBeta(α+ 1, β) + α α+βBeta(β + 1, α).\nGiven the above regularization term, we are ready to investigate the corresponding generalization gap. Following similar approaches in Arora et al. (2020), we shed light upon the generalization problem by investigating the following function class that is closely related to the dual problem of Eq. (7): Wγ := {x→ θ>x, such that θ satisfying ExA′′(θ>x) · θ>ΣXθ 6 γ}, where α > 0 and ΣX = E[xix>i ]. Further, we assume that the distribution of x is ρ-retentive for some ρ ∈ (0, 1/2], that is, if for any non-zero vector v ∈ Rd, [ Ex[A′′(x>v)]\n]2 ≥ ρ · min{1,Ex(v>x)2}. Such an assumption has been similarly assumed in Arora et al. (2020) and is satisfied by general GLMs when θ has bounded `2 norm. We then have the following theorem. Theorem 3.4. Assume that the distribution of xi is ρ-retentive, and let ΣX = E[xx>]. Then the empirical Rademacher complexity ofWγ satisfies\nRad(Wγ , S) ≤ max{( γ\nρ )1/4, (\nγ ρ )1/2} ·\n√ rank(ΣX)\nn .\nThe above bound on Rademacher complexity directly implies the following generalization gap of Mixup training. Corollary 3.1. Suppose A(·) is LA-Lipchitz continuous, X , Y and Θ are all bounded, then there exists constantsL,B > 0, such that for all θ satisfying ExA′′(θ>x)·θ>ΣXθ 6 γ (the regularization induced by Mixup), we have\nL(θ) 6 Lstdn (θ, S) + 2L · LA ·\n( max{(γ\nρ )1/4, (\nγ ρ )1/2} ·\n√ rank(ΣX)\nn\n) +B √ log(1/δ)\n2n ,\nwith probability at least 1− δ. Remark 3.2. This result shows that the Mixup training would adapt to the intrinsic dimension of x and therefore has a smaller generalization error. Specifically, if we consider the general ridge penalty and consider the function class Wridgeγ := {x → θ>x, ‖θ‖2 6 γ}, then the similar technique would yield a Rademacher complexity bound Rad(Wγ , S) ≤ max{(γ/ρ)1/4, (γ/ρ)1/2} ·√ p/n, where p is the dimension of x. 
This bound is much larger than the result in Theorem 3.4 when the intrinsic dimension rank(ΣX) is small.\nNon-linear cases. The above results on GLM can be extended to the non-linear neural network case with Manifold Mixup (Verma et al., 2019a). In this section, we consider the two-layer ReLU neural networks with the squared loss L(θ, S) = 1n ∑n i=1(yi − fθ(xi))2, where y ∈ R and fθ(x) is a two-layer ReLU neural network, with the form of\nfθ(x) = θ > 1 σ ( Wx ) + θ0.\nwhere W ∈ Rp×d, θ1 ∈ Rd, and θ0 denotes the bias term. Here, θ consists of W , θ0 and θ1. If we perform Mixup on the second layer (i.e., mix neurons on the hidden layer as proposed by Verma et al. (2019a)), we then have the following result on the induced regularization.\nLemma 3.4. Denote Σ̂σX as the sample covariance matrix of {σ(Wxi)}ni=1, then the regularization term obtained by the second-order approximation of L̃mixn (θ, S) is given by\nEλ∼D̃λ [ (1− λ)2 λ2 ]θ>1 Σ̂ σ Xθ1, (8)\nwhere D̃λ ∼ αα+βBeta(α+ 1, β) + β α+βBeta(β + 1, α).\nTo show the generalization property of this regularizer, similar to the last section, we consider the following distribution-dependent class of functions indexed by θ:\nWNNγ := {x→ fθ(x), such that θ satisfying θ>1 ΣσXθ1 6 γ},\nwhere ΣσX = E[Σ̂σX ] and α > 0. We then have the following result. Theorem 3.5. Let µσ = E[σ(Wx)] and denote the generalized inverse of ΣσX by Σ σ† X . Suppose X , Y and Θ are all bounded, then there exists constants L,B > 0, such that for all fθ inWNNγ (the regularization induced by Manifold Mixup), we have, with probability at least 1− δ,\nL(θ) 6 Lstdn (θ, S) + 4L ·\n√ γ · (rank(ΣσX) + ‖Σ σ†/2 X µσ‖2)\nn +B\n√ log(1/δ)\n2n ." }, { "heading": "4 CONCLUSION AND FUTURE WORK", "text": "Mixup is a data augmentation technique that generates new samples by linear interpolation of multiple samples and their labels. The Mixup training method has been empirically shown to have better generalization and robustness against attacks with adversarial examples than the traditional training method, but there is a lack of rigorous theoretical understanding. In this paper, we prove that the Mixup training is approximately a regularized loss minimization. The derived regularization terms are then used to demonstrate why Mixup has improved generalization and robustness against onestep adversarial examples. One interesting future direction is to extend our analysis to other Mixup variants, for example, Puzzle Mix (Kim et al., 2020) and Adversarial Mixup Resynthesis (Beckham et al., 2019), and investigate if the generalization performance and adversarial robustness can be further improved by these newly developed Mixup methods." }, { "heading": "ACKNOWLEDGMENTS", "text": "The research of Linjun Zhang is supported by NSF DMS-2015378. The research of James Zou is supported by NSF CCF 1763191, NSF CAREER 1942926 and grants from the Silicon Valley Foundation and the Chan-Zuckerberg Initiative. The research of Kenji Kawaguchi is partially supported by the Center of Mathematical Sciences and Applications at Harvard University. This work is also in part supported by NSF award 1763665." }, { "heading": "Appendix", "text": "In this appendix, we provide proofs of the main theorems and the corresponding technical lemmas. Additional discussion on the range of R in the case of neural nets, and some further numerical experiments are also provided." 
}, { "heading": "A TECHNIQUE PROOFS", "text": "" }, { "heading": "A.1 PROOF OF LEMMA 3.1", "text": "Consider the following problem with loss function lx,y(θ) := l(θ, (x, y)) = h(fθ(x))−yfθ(x), that is\nLstdn (θ, S) = 1\nn n∑ i=1 [h(fθ(xi))− yifθ(xi)].\nThe corresponding Mixup version, as defined in Eq.(1), is\nLmixn (θ, S) = 1\nn2 Eλ∼Beta(α,β) n∑ i,j=1 [h(fθ(x̃i,j(λ)))− (λyi + (1− λ)yj)fθ(x̃i,j(λ))],\nwhere x̃i,j(λ) = λxi + (1− λ)xj . Further transformation leads to\nLmixn (θ, S) = 1\nn2 Eλ∼Beta(α,β) n∑ i,j=1 { λh(fθ(x̃i,j(λ))))− λyifθ(x̃i,j(λ))\n+ (1− λ)h(fθ(x̃i,j(λ)))− (1− λ)yjfθ(x̃i,j(λ)) }\n= 1\nn2 Eλ∼Beta(α,β)EB∼Bern(λ) n∑ i,j=1 { B[h(fθ(x̃i,j(λ)))− yifθ(x̃i,j(λ))]\n+ (1−B)[h(fθ(x̃i,j(λ)))− yjfθ(x̃i,j(λ))] }\nNote that λ ∼ Beta(α, β), B|λ ∼ Bern(λ), by conjugacy, we can exchange them in order and have\nB ∼ Bern( α α+ β ), λ | B ∼ Beta(α+B, β + 1−B).\nAs a result,\nLmixn (θ, S) = 1\nn2 n∑ i,j=1 { α α+ β Eλ∼Beta(α+1,β)[h(fθ(x̃i,j(λ)))− yifθ(x̃i,j(λ))]\n+ β\nα+ β Eλ∼Beta(α,β+1)[h(fθ(x̃i,j(λ)))− yjfθ(x̃i,j(λ))]\n} .\nUsing the fact 1−Beta(α, β+1) andBeta(β+1, α) are of the same distribution and x̃ij(1−λ) = x̃ji(λ), we have ∑\ni,j\nEλ∼Beta(α,β+1)[h(fθ(x̃i,j(λ)))− yjfθ(x̃i,j(λ))]\n= ∑ i,j Eλ∼Beta(β+1,α)[h(fθ(x̃i,j(λ)))− yifθ(x̃i,j(λ))].\nThus, let D̃λ = αα+βBeta(α+ 1, β) + β α+βBeta(β + 1, α)\nLmixn (θ, S) = 1\nn n∑ i=1 Eλ∼D̃λErx∼Dxh(f(θ, λxi + (1− λ)rx)))− yif(θ, λxi + (1− λ)rx)\n= 1\nn n∑ i=1 Eλ∼D̃λErx∼Dx lx̌i,yi(θ) (9)\nwhere Dx is the empirical distribution induced by training samples, and x̌i = λxi + (1− λ)rx. In the following, denote Š = {(x̌i, yi)}ni=1, and let us analyze Lstdn (θ, Š) = 1n ∑n i=1 lx̌i,yi(θ), and compare it with Lstdn (θ, S). Let α = 1− λ and ψi(α) = lx̌i,yi(θ). Then, using the definition of the twice-differentiability of function ψi,\nlx̌i,yi(θ) = ψi(α) = ψi(0) + ψ ′ i(0)α+\n1 2 ψ′′i (0)α 2 + α2ϕi(α), (10)\nwhere limz→0 ϕi(z) = 0. By linearity and chain rule,\nψ′i(α) = h ′(fθ(x̌i))\n∂fθ(x̌i)\n∂x̌i\n∂x̌i ∂α − yi ∂fθ(x̌i)\n∂x̌i\n∂x̌i ∂α\n= h′(fθ(x̌i)) ∂fθ(x̌i)\n∂x̌i (rx − xi)− yi\n∂fθ(x̌i)\n∂x̌i (rx − xi)\nwhere we used ∂x̌i∂α = (rx − xi). Since\n∂\n∂α\n∂fθ(x̌i)\n∂x̌i (rx−xi) =\n∂\n∂α (rx−xi)>[\n∂fθ(x̌i)\n∂x̌i ]> = (rx−xi)>∇2fθ(x̌i) ∂x̌i ∂α = (rx−xi)>∇2fθ(x̌i)(rx−xi),\nwe have\nψ′′i (α) =h ′(fθ(x̌i))(rx − xi)>∇2fθ(x̌i)(rx − xi)\n+ h′′(fθ(x̌i))[ ∂fθ(x̌i)\n∂x̌i (rx − xi)]2 − yi(rx − xi)>∇2fθ(x̌i)(rx − xi).\nThus,\nψ′i(0) = h ′(fθ(xi))∇fθ(xi)>(rx−xi)−yi∇fθ(xi)>(rx−xi) = (h′(fθ(xi))−yi)∇fθ(xi)>(rx−xi)\nψ′′i (0) =h ′(fθ(xi))(rx − xi)>∇2fθ(xi)(rx − xi) + h′′(fθ(xi))[∇fθ(xi)>(rx − xi)]2\n− yi(rx − xi)>∇2fθ(xi)(rx − xi). =h′′(fθ(xi))∇fθ(xi)>(rx − xi)(rx − xi)>∇fθ(xi) + (h′(fθ(xi))− yi)(rx − xi)>∇2fθ(xi)(rx − xi)\nBy substituting these into equation 10 with ϕ(α) = 1n ∑n i=1 ϕi(α), we obtain the desired statement." }, { "heading": "A.2 PROOFS RELATED TO ADVERSARIAL ROBUSTNESS", "text": "" }, { "heading": "A.2.1 PROOF OF LEMMA 3.2", "text": "Recall that Ladvn (θ, S) = 1 n ∑n i=1 max‖δi‖2≤ε √ d l(θ, (xi + δi, yi)) and g(u) = 1/(1 + e\n−u). Then the second-order Taylor expansion of l(θ, (x+ δ, t)) is given by\nl(θ, (x+ δ, y)) = l(θ, (x, y)) + (g(θ>x)− y) · δ>θ + 1 2 g(x>θ)(1− g(x>θ)) · (δ>θ)2.\nConsequently, for any given η > 0,\nmax ‖δ‖2≤η l(θ, (x+ δ, y)) = max ‖δ‖2≤η l(θ, (x, y)) + (g(θ>x)− y) · δ>θ + 1 2 g(x>θ)(1− g(x>θ)) · (δ>θ)2\n=l(θ, (x, y)) + η|g(x>θ)− y| · ‖θ‖2 + η2\n2 (g(x>θ)(1− g(x>θ))) · ‖θ‖2,\nwhere the maximum is taken when δ = sgn(g(x>θ)− y) · θ‖θ‖ · η." }, { "heading": "A.2.2 PROOF OF THEOREM 3.1", "text": "Since fθ(x) = x>θ, we have ∇fθ(xi) = θ and ∇2fθ(xi) = 0. 
Since h(z) = log(1 + ez), we have h′(z) = e z\n1+ez = g(z) ≥ 0 and h ′′(z) = e\nz\n(1+ez)2 = g(z)(1 − g(z)) ≥ 0. By substituting these into the equation of Lemma 3.1 with Erx [rx] = 0,\nL̃mixn (θ, S) = L̃ mix n (θ, S) +R1(θ, S) +R2(θ, S), (11)\nwhere\nR1(θ, S) = Eλ[(1− λ)]\nn\nn∑ i=1 (yi − g(x>i θ))θ>xi\nR2(θ, S) = Eλ[(1− λ)2]\n2n\nn∑ i=1 |g(x>i θ)(1− g(x>i θ))|θ>Erx [(rx − xi)(rx − xi)>]θ\n≥ Eλ[(1− λ)] 2\n2n\nn∑ i=1 |g(x>i θ)(1− g(x>i θ))|θ>Erx [(rx − xi)(rx − xi)>]θ\nwhere we used E[z2] = E[z]2 + Var(z) ≥ E[z]2 and θ>Erx [(rx − xi)(rx − xi)>]θ ≥ 0. Since Erx [(rx − xi)(rx − xi)>] = Erx [rxr>x − rxx>i − xir>x + xix>i ] = Erx [rxr>x ] + xix>i where Erx [rxr>x ] is positive semidefinite,\nR2(θ, S) ≥ Eλ[(1− λ)]2\n2n\nn∑ i=1 |g(x>i θ)(1− g(x>i θ))|θ>(Erx [rxr>x ] + xix>i )θ.\n≥ Eλ[(1− λ)] 2\n2n\nn∑ i=1 |g(x>i θ)(1− g(x>i θ))|(θ>xi)2\n= Eλ[(1− λ)]2\n2n\nn∑ i=1 |g(x>i θ)(1− g(x>i θ))|‖θ‖22‖xi‖22(cos(θ, xi))2\n≥ R 2c2xEλ[(1− λ)]2d\n2n\nn∑ i=1 |g(x>i θ)(1− g(x>i θ))|‖θ‖22\nNow we bound E = Eλ[(1−λ)]n ∑n i=1(yi − g(x>i θ))(θ>xi) by using θ ∈ Θ. Since θ ∈ Θ, we have yifθ(xi) + (yi − 1)fθ(xi) ≥ 0, which implies that (θ>xi) ≥ 0 if yi = 1 and (θ>xi) ≤ 0 if yi = 0. Thus, if yi = 1, (yi − g(x>i θ))(θ>xi) = (1− g(x>i θ))(θ>xi) ≥ 0, since (θ>xi) ≥ 0 and (1− g(x>i θ)) ≥ 0 due to g(x>i θ) ∈ (0, 1). If yi = 0,\n(yi − g(x>i θ))(θ>xi) = −g(x>i θ)(θ>xi) ≥ 0,\nsince (θ>xi) ≤ 0 and −g(x>i θ) < 0. Therefore, for all i = 1, . . . , n,\n(yi − g(x>i θ))(θ>xi) ≥ 0,\nwhich implies that, since Eλ[(1− λ)] ≥ 0,\nR1(θ, S) = Eλ[(1− λ)]\nn\nn∑ i=1 |yi − g(x>i θ)||θ>xi|\n= Eλ[(1− λ)]\nn\nn∑ i=1 |g(x>i θ)− yi|‖θ‖2‖xi‖2| cos(θ, xi)|\n≥ RcxEλ[(1− λ)] √ d\nn\nn∑ i=1 |g(x>i θ)− yi|‖θ‖2\nBy substituting these lower bounds ofR1(θ, S) andR2(θ, S) into equation 11, we obtain the desired statement." }, { "heading": "A.2.3 PROOF OF THEOREM 3.2", "text": "Recall we assume that θ̂n will fall into the set Θ∗ with probability at least 1 − δn, and δn → 0 as n→∞. In addition, define the set\nXΘ∗(τ) = {x ∈ X : | cos(x, θ)| > τ for all θ ∈ Θ∗},\nthere is τ ∈ (0, 1) such that XΘ∗(τ) 6= ∅, and\npτ := P(x ∈ XΘ∗(τ)) ∈ (0, 1).\nLet us first study\n1\nn n∑ i=1 |g(x>i θ)(1− g(x>i θ))|(cos(θ, xi))2\nSince we assume Θ∗ is bounded and cx √ d 6 ‖xi‖2 6 bx √ d for all i, there exists κ > 0, such that |g(x>i θ)(1− g(x>i θ))| > κ. If we denote p̂ = {number of x′is such that xi ∈ XΘ∗(τ)}/n. Then, it is easy to see\n1 n ∑ xi∈X cΘ∗ (τ)\n|g(x>i θ)(1− g(x>i θ))| 1 n ∑ xi∈XΘ∗ (τ) |g(x > i θ)(1− g(x>i θ))| 6 (1− p̂)/4 p̂κ\nFor η2 satisfying\nη2(1 + (1− p̂)/4\np̂κ ) 6 τ2\nwe have\n1\nn n∑ i=1 |g(x>i θ)(1− g(x>i θ))|(cos(θ, xi))2 > 1 n ∑ xi∈XΘ∗ (τ) |g(x>i θ)(1− g(x>i θ))|τ2\n> 1\nn ∑ xi∈XΘ∗ (τ) |g(x>i θ)(1− g(x>i θ))|η2 + 1 n ∑ xi∈X cΘ∗ (τ) |g(x>i θ)(1− g(x>i θ))|η2.\nLastly by Hoeffding’s inequality, if we take ε = pτ/2\n(1 + (1− p̂)/4\np̂κ ) 6 (1 + (1− pτ/2)/4 (pτ/2)κ )\nwith probability at least 1− 2 exp(−2nε2)\nη 6 τ\n√ 4κpτ\n2− pτ + 4κpτ .\nSimilarly, if we study n∑ i=1 |g(x>i θ)− yi|| cos(θ, x)|\nBy boundedness of θ, x and y ∈ {0, 1}, we know there are constants κ1, κ2 > 0, such that\nκ1 6 |g(fθ(xi))− yi| 6 κ2\nSimilarly, we know\nη 6 pτκ1\n2κ2 − pτ (κ2 − κ1) τ.\nCombined together, we can obtain the result:\nη 6 min{ pτκ1 2κ2 − pτ (κ2 − κ1) ,\n√ 4κpτ\n2− pτ + 4κpτ }τ" }, { "heading": "A.2.4 PROOF OF THEOREM 3.3", "text": "From the assumption, we have fθ(xi) = ∇fθ(xi)>xi and∇2fθ(xi) = 0. Since h(z) = log(1+ez), we have h′(z) = e z\n1+ez = g(z) ≥ 0 and h ′′(z) = e\nz\n(1+ez)2 = g(z)(1 − g(z)) ≥ 0. 
By substituting these into the equation of Lemma 3.1 with Erx [rx] = 0,\nL̃mixn (θ, S) = L̃ mix n (θ, S) +R1(θ, S) +R2(θ, S), (12)\nwhere\nR1(θ, S) = Eλ[(1− λ)]\nn\nn∑ i=1 (yi − g(fθ(xi)))fθ(xi)\nR2(θ, S) = Eλ[(1− λ)2]\n2n\nn∑ i=1 |g(fθ(xi))(1− g(fθ(xi)))|∇fθ(xi)>Erx [(rx − xi)(rx − xi)>]∇fθ(xi)\n≥ Eλ[(1− λ)] 2\n2n\nn∑ i=1 |g(fθ(xi))(1− g(fθ(xi)))|∇fθ(xi)>Erx [(rx − xi)(rx − xi)>]∇fθ(xi)\nwhere we used E[z2] = E[z]2 +Var(z) ≥ E[z]2 and∇fθ(xi)>Erx [(rx−xi)(rx−xi)>]∇fθ(xi) ≥ 0. Since Erx [(rx − xi)(rx − xi)>] = Erx [rxr>x − rxx>i − xir>x + xix>i ] = Erx [rxr>x ] + xix>i where Erx [rxr>x ] is positive semidefinite,\nR2(θ, S) ≥ Eλ[(1− λ)]2\n2n\nn∑ i=1 |g(fθ(xi))(1− g(fθ(xi)))|∇fθ(xi)>(Erx [rxr>x ] + xix>i )∇fθ(xi).\n≥ Eλ[(1− λ)] 2\n2n\nn∑ i=1 |g(fθ(xi))(1− g(fθ(xi)))|(∇fθ(xi)>xi)2\n= Eλ[(1− λ)]2\n2n\nn∑ i=1 |g(fθ(xi))(1− g(fθ(xi)))|‖∇fθ(xi)‖22‖xi‖22(cos(∇fθ(xi), xi))2\n≥ R 2c2xEλ[(1− λ)]2d\n2n\nn∑ i=1 |g(fθ(xi))(1− g(fθ(xi)))|‖∇fθ(xi)‖22\nNow we bound E = Eλ[(1−λ)]n ∑n i=1(yi − g(fθ(xi)))fθ(xi) by using θ ∈ Θ. Since θ ∈ Θ, we have yifθ(xi) + (yi − 1)fθ(xi) ≥ 0, which implies that fθ(xi) ≥ 0 if yi = 1 and fθ(xi) ≤ 0 if yi = 0. Thus, if yi = 1,\n(yi − g(fθ(xi)))(fθ(xi)) = (1− g(fθ(xi)))(fθ(xi)) ≥ 0,\nsince (fθ(xi)) ≥ 0 and (1− g(fθ(xi))) ≥ 0 due to g(fθ(xi)) ∈ (0, 1). If yi = 0,\n(yi − g(fθ(xi)))(fθ(xi)) = −g(fθ(xi))(fθ(xi)) ≥ 0,\nsince (fθ(xi)) ≤ 0 and −g(fθ(xi)) < 0. Therefore, for all i = 1, . . . , n,\n(yi − g(fθ(xi)))(fθ(xi)) ≥ 0,\nwhich implies that, since Eλ[(1− λ)] ≥ 0,\nR1(θ, S) = Eλ[(1− λ)]\nn\nn∑ i=1 |yi − g(fθ(xi))||fθ(xi)|\n= Eλ[(1− λ)]\nn\nn∑ i=1 |g(fθ(xi))− yi|‖∇fθ(xi)‖2‖xi‖2| cos(∇fθ(xi), xi)|\n≥ RcxEλ[(1− λ)] √ d\nn\nn∑ i=1 |g(fθ(xi))− yi|‖∇fθ(xi)‖2\nBy substituting these lower bounds of E and F into equation 12, we obtain the desired statement." }, { "heading": "A.3 PROOFS RELATED TO GENERALIZATION", "text": "" }, { "heading": "A.3.1 PROOF OF LEMMA 3.3 AND LEMMA 3.4", "text": "We first prove Lemma 3.3. The proof of Lemma 3.4 is similar.\nBy Eq. (9), we have Lmixn (θ, S) = L std n (θ, Š), where Š = {(x̌i, yi)}ni=1 with x̌i = λxi + (1− λ)rx and λ ∼ D̃λ = αα+βBeta(α + 1, β) + β\nα+βBeta(β + 1, α). Since for Generalized Linear Model (GLM), the prediction is invariant to the scaling of the training data, so it suffices to consider S̃ = {(x̃i, yi)}ni=1 with x̃i = 1λ̄ (λxi + (1− λ)rx).\nIn the following, we analyze Lstdn (θ, S̃). For GLM the loss function is\nLstdn (θ, S̃) = 1\nn n∑ i=1 lx̃i,yi(θ) = 1 n n∑ i=1 −(yix̃>i θ −A(x̃>i θ)),\nwhere A(·) is the log-partition function in GLMs. Denote the randomness (of λ and rx) by ξ, then the second order Taylor expansion yields\nEξ[A(x̃>i θ)−A(x>i θ)] 2nd−order approx. = Eξ[A′(x>i θ)(x̃i − xi)>θ +A′′(x>i θ)V ar(x̃>i θ)]\nNotice Eξ[x̃i − xi] = 0 and V arξ(x̃i) = 1n ∑n i=1 xix > i = Σ̂X , then we have the RHS of the last equation equal to\nA′′(x>i θ)( E(1− λ)2\nλ̄2 )θ>Σ̂Xθ.\nAs a result, the second-order Taylor approximation of the Mixup loss Lstdn (θ, S̃) is\nn∑ i=1 −(yix>i θ −A(x>i θ)) + 1 2n [ n∑ i=1 A′′(x>i θ)]E( (1− λ)2 λ2 )θ>Σ̂Xθ\n=Lstdn (θ, S) + 1\n2n [ n∑ i=1 A′′(x>i θ)]E( (1− λ)2 λ2 )θ>Σ̂Xθ.\nThis completes the proof of Lemma 3.3. For Lemma 3.4, since the Mixup is performed on the final layer of the neural nets, the setting is the same as the least square with covariates σ(w>j x). Moreover, since we include both the linear coefficients vector θ1 and bias term θ0, the prediction is invariant to the shifting and scaling of σ(w>j x). 
Therefore, we can consider training θ1 and θ0 on the covariates {(σ(Wxi) − σ̄W ) + 1−λλ (σ(Wrx) − σ̄W )} n i=1, where σ̄W = 1 n ∑n i=1 σ(Wxi). Moreover, since we consider the least square loss, which is a special case of GLM loss with A(u) = 12u 2, we have A′′ = 1. Plugging these quantities into Lemma 3.3, we get the desired result of Lemma 3.4." }, { "heading": "A.3.2 PROOF OF THEOREM 3.4 AND COROLLARY 3.1", "text": "By definition, given n i.i.d. Rademacher rv. ξ1, ..., ξn, the empirical Rademacher complexity is\nRad(Wγ , S) = Eξ sup a(θ)·θ>ΣXθ≤γ\n1\nn n∑ i=1 ξiθ >xi\nLet x̃i = Σ †/2 X xi, a(θ) = Ex[A′′(x>θ)] and v = Σ 1/2 X θ, then ρ-retentiveness condition implies a(θ)2 ≥ ρ ·min{1,Ex(θ>x)2} ≥ ρ ·min{1, θ>ΣXθ} and therefore a(θ) · θ>ΣXθ ≤ γ implies that ‖v‖2 = θ>ΣXθ ≤ max{(γρ ) 1/2, γρ}.\nAs a result,\nRad(Wγ , S) =Eξ sup a(θ)·θ>ΣXθ≤γ\n1\nn n∑ i=1 ξiθ >xi\n=Eξ sup a(θ)·θ>ΣXθ≤γ\n1\nn n∑ i=1 ξiv >x̃i\n≤Eξ sup ‖v‖2≤( γρ )1/2∨ γ ρ\n1\nn n∑ i=1 ξiv >x̃i\n≤ 1 n · (γ ρ )1/4 ∨ (γ ρ )1/2 · Eξ‖ n∑ i=1 ξix̃i ‖\n≤ 1 n · (γ ρ )1/4 ∨ (γ ρ )1/2 · √√√√Eξ‖ n∑ i=1 ξix̃i ‖2\n≤ 1 n · (γ ρ )1/4 ∨ (γ ρ )1/2 · √√√√ n∑ i=1 x̃>i x̃i .\nConsequently,\nRad(Wγ , S) = ES [Rad(Wγ , S)] ≤ 1 n · (γ ρ )1/4 ∨ (γ ρ )1/2 · √√√√ n∑ i=1 Exi [x̃>i x̃i]\n≤ 1√ n · (γ ρ )1/4 ∨ (γ ρ )1/2 · rank(ΣX).\nBased on this bound on Rademacher complexity, Corollary 3.1 can be proved by directly applying the following theorem.\nLemma A.1 (Result from Bartlett & Mendelson (2002)). For any B-uniformly bounded and LLipchitz function ζ, for all φ ∈ Φ, with probability at least 1− δ,\nEζ(φ(xi)) ≤ 1\nn n∑ i=1 ζ(φ(xi)) + 2LRad(Φ, S) +B\n√ log(1/δ)\n2n ." }, { "heading": "A.3.3 PROOF OF THEOREM 3.5", "text": "To prove Theorem 3.5, by Lemma A.1, it suffices to show the following bound on Rademacher complexity.\nTheorem A.1. The empirical Rademacher complexity ofWNNγ satisfies\nRad(WNNγ , S) ≤ 2\n√ γ · (rank(ΣσX) + ‖Σ σ†/2 X µσ‖2)\nn .\nBy definition, given n i.i.d. Rademacher rv. ξ1, ..., ξn, the empirical Rademacher complexity is\nRad(Wγ , S) =Eξ sup Wγ\n1\nn n∑ i=1 ξiθ > 1 σ(Wxi).\nLet θ̃1 = Σ σ1/2 X θ1 and µσ = E[σ(Wx)], then\nRS(WNNγ ) =Eξ sup WNNγ\n1\nn n∑ i=1 ξiθ̃ > 1 Σ σ†/2 X (σ(Wxi)− µσ) + Eξ sup WNNγ 1 n n∑ i=1 ξiθ̃ > 1 Σ σ†/2 X µσ\n≤‖θ̃1‖2 · ‖Eξ[ 1\nn n∑ i=1 ξiΣ σ†/2 X σ(Wxi)]‖+ ‖θ̃1‖ · 1√ n ‖Σσ†/2X µσ‖\n≤2\n√ γ · (rank(ΣσX) + ‖Σ σ†/2 X µσ‖2)\nn ,\nwhere the last inequality is obtained by using the same technique as in the proof of Lemma 3.4.\nCombining all the pieces, we get Rad(Wγ , S) ≤ √ γ · rank(ΣσX)\nn ." }, { "heading": "B DISCUSSION OF R IN THE NEURAL NETWORK CASE", "text": "(B.1). On the value of R = miniRi via experiments for neural networks. After training accuracy reaches 100%, the loss is further minimized when ‖fθ(xi)‖2 increases. Since\n‖fθ(xi)‖2 = ‖∇fθ(xi)>xi‖2 = ‖∇fθ(xi)‖2‖xi‖2Ri,\nthis suggests that Ri and R tend to increase after training accuracy reaches 100%. We confirm this phenomenon in Figure 3. In the figure,R is initially small but tend to increase after training accuracy reaches 100%, as expected. For example, for ANN, the values of R were initially 2.27 × 10−5 but increased to 6.11 × 10−2 after training. Figure 3 (c) and (d) also show that Ri for each i-th data point tends to increase during training and that the values of Ri for many points are much larger than the pessimistic lower bound R: e.g., whereas R = 6.11× 10−2, we have Ri > 0.8 for several data points in Figure 3 (d). For this experiment, we generated 100 data points as xi ∼ N (0, I) and yi = 1{x>i θ∗ > 0} where xi ∈ R10 and θ∗ ∼ N (0, I). 
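For completeness, a minimal PyTorch sketch (ours, not the authors' code) of how Ri = |cos(∇fθ(xi), xi)| can be computed with automatic differentiation for such an ANN; the architecture follows the description in the text, while the seed and the untrained weights are illustrative assumptions:
```python
import torch
import torch.nn as nn

torch.manual_seed(0)                       # illustrative seed
net = nn.Sequential(                       # two hidden layers with 50 ReLU units
    nn.Linear(10, 50), nn.ReLU(),
    nn.Linear(50, 50), nn.ReLU(),
    nn.Linear(50, 1),
)

def R_scores(net, X):
    # R_i = |cos(grad_x f(x_i), x_i)|; summing the outputs makes each row of
    # the input gradient equal to the per-sample gradient grad_x f(x_i).
    X = X.clone().requires_grad_(True)
    (grad,) = torch.autograd.grad(net(X).sum(), X)
    cos = (grad * X).sum(dim=1) / (grad.norm(dim=1) * X.norm(dim=1))
    return cos.abs()

X = torch.randn(100, 10)
print(f"R = {R_scores(net, X).min().item():.2e}")  # small at initialization
```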
We used SGD to train linear models and ANNs with ReLU activations and 50 neurons per each of two hidden layers. We set the learning rate to be 0.1 and the momentum coefficient to be 0.9. We turned off weight decay so that R is not maximized as a result of bounding ‖∇fθ(xi)‖, which is a trivial case from the above discussion. (B.2). A constant lower bound for neural networks. Similarly, we can obtain a constant lower bound by adding some additional conditions.\nAssumption B.1. Let us denote Θ̂n ⊆ Θ as the set of minimizers of L̃mixn (θ, S). We assume there exists a set Θ∗, such that for all n ≥ N , where N is a positive integer, Θ̂n ⊆ Θ∗ with probability at least 1− δn and δn → 0 as n→ 0. Moreover, there exists τ, τ ′ ∈ (0, 1) such that\nXΘ∗(τ, τ ′) = {x ∈ X : | cos(x,∇fθ(x))| > τ, ‖∇fθ(x)‖ > τ ′, for all θ ∈ Θ∗},\nhas probability pτ,τ ′ ∈ (0, 1). Theorem B.1. Define\nFΘ := {fθ|fθ(xi) = ∇fθ(xi)>xi,∇2fθ(xi) = 0 almost everywhere, θ ∈ Θ}.\nUnder Assumption B.1, for any fθ(x) ∈ FΘ, if there exists constants bx, cx > 0 such that cx √ d ≤\n‖xi‖2 ≤ bx √ d for all i ∈ {1, . . . , n}. Then, with probability at least 1 − δn − 2 exp(−np2τ,τ ′/2), there exist constants κ > 0, κ2 > κ1 > 0, if we further have θ ∈ Θ̂n, then\nL̃mixn (θ, S) ≥ 1\nn n∑ i=1 l̃adv(ε̃mix √ d, (xi, yi))\nwhere ε̃mix = R̃cxEλ∼D̃λ [1−λ] and R̃ = min{ √\npτ,τ′κτ ′2\n(2−pτ,τ′ )/4τ ′′2+pτ,τ′κτ ′2 ,\npτ,τ′κ1τ ′\npτ,τ′κ1τ ′+(2−pτ,τ′ )κ2τ ′′\n}τ." }, { "heading": "B.1 PROOF OF THEOREM B.1", "text": "Notice if we assume for\nXΘ∗(τ, τ ′) = {x ∈ X : | cos(x,∇fθ(x))| > τ, ‖∇fθ(x)‖ > τ ′, for all θ ∈ Θ∗},\nthere is τ, τ ′ ∈ (0, 1) such that XΘ∗(τ, τ ′) 6= ∅, and\npτ,τ ′ := P(x ∈ XΘ∗(τ, τ ′)) ∈ (0, 1).\nLet us first study 1\nn n∑ i=1 |g(fθ(xi))− yi|‖∇fθ(xi)‖2| cos(∇fθ(xi), xi)|\nBy boundedness of θ, x and y ∈ {0, 1}, we know there is κ1, κ2 > 0, such that\nκ1 6 |g(fθ(xi))− yi| 6 κ2\nIf we denote p̂ = {number of x′is such that xi ∈ XΘ∗(τ, τ ′)}/n. Then, it is easy to see\n1 n ∑ xi∈X cΘ∗ (τ,τ ′)\n||g(fθ(xi))− yi|‖∇fθ(xi)‖2 1 n ∑ xi∈XΘ∗ (τ,τ ′) |g(fθ(xi))− yi|‖∇fθ(xi)‖2 6 (1− p̂)κ2τ ′′ p̂κ1τ ′\nFor η2 satisfying\nη(1 + (1− p̂)κ2τ ′′\np̂κ1τ ′ ) 6 τ\nwe have\n1\nn n∑ i=1 |g(fθ(xi))− yi|‖∇fθ(xi)‖2 cos(∇fθ(xi), xi)| > 1 n n∑ i=1 |g(fθ(xi))− yi|‖∇fθ(xi)‖2η\nBesides, if we consider n∑ i=1 |g(fθ(xi))(1− g(fθ(xi)))|‖∇fθ(xi)‖22(cos(∇fθ(xi), xi))2\nThus, we have\nη2(1 + (1− p̂)/4τ ′′2\np̂κτ ′2 ) 6 τ2\nWith probability at least 1− 2 exp(−2nε2), for ε = pτ,τ ′/2, we have\nη 6 min{\n√ pτ,τ ′κτ ′2\n(2− pτ,τ ′)/4τ ′′2 + pτ,τ ′κτ ′2 ,\npτ,τ ′κ1τ ′\npτ,τ ′κ1τ ′ + (2− pτ,τ ′)κ2τ ′′ }τ\nB.2 PROOFS OF THE CLAIM fθ(x) = ∇fθ(x)>x AND ∇2fθ(x) = 0 FOR NN WITH RELU/MAX-POOLING\nConsider the neural networks with ReLU and max-pooling:\nfθ(x) = W [L]σ[L−1] ( z[L−1] ) , z[l](x, θ) = W [l]σ(l−1) ( z[l−1](x, θ) ) , l = 1, 2, . . . , L− 1,\nwith σ(0) ( z[0](x, θ) ) = x, where σ represents nonlinear function due to ReLU and/or max-pooling, and W [l] ∈ RNl×Nl−1 is a matrix of weight parameters connecting the (l − 1)-th layer to the lth layer. For the nonlinear function σ due to ReLU and/or max-pooling, we can define σ̇[l](x, θ)\nsuch that σ̇[l](x, θ) is a diagonal matrix with each element being 0 or 1, and σ[l] ( z[l](x, θ) ) =\nσ̇[l](x, θ)z[l](x, θ). Using this, we can rewrite the model as:\nfθ(x) = W [L]σ̇[L−1](x, θ)W [L−1]σ̇[L−2](x, θ) · · ·W [2]σ̇[1](x, θ)W [1]x.\nSince ∂σ̇ [l](x,θ) ∂x = 0 almost everywhere for all l, which will cancel all derivatives except for\nd dxW [1]x, we then have that\n∂fθ(x)\n∂x = W [L]σ̇[L−1](x, θ)W [L−1]σ̇[L−2](x, θ) · · ·W [2]σ̇[1](x, θ)W [1]. 
(13)\nTherefore,\n∂fθ(x)\n∂x x = W [L]σ̇[L−1](x, θ)W [L−1]σ̇[L−2](x, θ) · · ·W [2]σ̇[1](x, θ)W [1]x = fθ(x).\nThis proves that fθ(x) = ∇fθ(x)>x for deep neural networks with ReLU/Max-pooling. Moreover, from equation 13, we have that\n∇2fθ(x) = ∇x(W [L]σ̇[L−1](x, θ)W [L−1]σ̇[L−2](x, θ) · · ·W [2]σ̇[1](x, θ)W [1]) = 0,\nsince ∂σ̇ [l](x,θ) ∂x = 0 almost everywhere for all l. This proves that ∇\n2fθ(x) = 0 for deep neural networks with ReLU/Max-pooling." }, { "heading": "C MORE ABOUT EXPERIMENTS", "text": "" }, { "heading": "C.1 ADVERSARIAL ATTACK AND MIXUP", "text": "We demonstrate the comparison between Mixup and standard training against adversarial attacks created by FGSM. We train two WideResNet-16-8 (Zagoruyko & Komodakis, 2016) architectures on the Street View House Numbers SVHN (Netzer et al., 2011)) dataset; one model with regular empirical risk minimization and the other one with Mixup loss (α = 5, β = 0.5). We create FGSM adversarial attacks (Goodfellow et al., 2014) for 1000 randomly selected test images. Fig. (1a) describes the results for the two models. It can be observed that the model trained with Mixup loss has better robustness.\nC.2 VALIDITY OF THE APPROXIMATION OF ADVERSARIAL LOSS\nIn this subsection, we present numerical experiments to support the approximation in Eq. (5) and (6). Under the same setup of our numerical experiments of Figure 2, we experimentally show that the quadratic approximation of the adversarial loss is valid. Specifically, we train a Logistic Regression model (as one example of a GLM model, which we study later) and a two layer neural network with ReLU activations. We use the two-moons dataset (Buitinck et al., 2013). Fig. 4, and compare the approximated adversarial loss and the original one along the iterations of computing the original adversarial loss against `2 attacks. The attack size is chosen such that √ d = 0.5, and both models had the same random initialization scheme. This experiment shows that using second order Taylor expansion yields a good approximation of the original adversarial loss." }, { "heading": "C.3 GENERALIZATION AND MIXUP", "text": "" } ]
2021
HOW DOES MIXUP HELP WITH ROBUSTNESS AND GENERALIZATION?
SP:f0fdbe6d66e21168cf3653190ea1f751acf8f2bb
[ "In this paper, the authors proposes a deep clustering model to enable the clustering and representation learning to favor each other via preserving the geometric structure of data. The proposed DCRL framework integrates an isometric loss for local intra-manifold structure and a ranking loss for global inter-manifold structure. The authors evaluate the proposed framework on five datasets and the experimental results show that the proposed framework brings certain improvements over the baseline approaches. " ]
In this paper, we propose a novel framework for Deep Clustering and multi-manifold Representation Learning (DCRL) that preserves the geometric structure of data. In the proposed DCRL framework, manifold clustering is done in the latent space guided by a clustering loss. To overcome the problem that clustering-oriented losses may deteriorate the geometric structure of embeddings in the latent space, an isometric loss is proposed for preserving intra-manifold structure locally and a ranking loss for inter-manifold structure globally. Experimental results on various datasets show that the DCRL framework leads to performances comparable to current state-of-the-art deep clustering algorithms, yet exhibits superior performance for manifold representation. Our results also demonstrate the importance and effectiveness of the proposed losses in preserving geometric structure in terms of visualization and performance metrics. The code is provided in the Supplementary Material.
[]
[ { "authors": [ "Christopher M Bishop" ], "title": "Pattern recognition and machine learning", "venue": "springer,", "year": 2006 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Xifeng Guo", "Long Gao", "Xinwang Liu", "Jianping Yin" ], "title": "Improved deep embedded clustering with local structure preservation", "venue": "In IJCAI,", "year": 2017 }, { "authors": [ "Xifeng Guo", "Xinwang Liu", "En Zhu", "Xinzhong Zhu", "Miaomiao Li", "Xin Xu", "Jianping Yin" ], "title": "Adaptive self-paced deep clustering with data augmentation", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2019 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Xu Ji", "João F Henriques", "Andrea Vedaldi" ], "title": "Invariant information clustering for unsupervised image classification and segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "David D Lewis", "Yiming Yang", "Tony G Rose", "Fan Li" ], "title": "Rcv1: A new benchmark collection for text categorization research", "venue": "Journal of machine learning research,", "year": 2004 }, { "authors": [ "Stan Z Li", "Zelin Zhang", "Lirong Wu" ], "title": "Markov-lipschitz deep learning", "venue": "arXiv preprint arXiv:2006.08256,", "year": 2020 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "J MacQueen" ], "title": "Some methods for classification and analysis of multivariate observations", "venue": "In Proc. 5th Berkeley Symposium on Math., Stat., and Prob,", "year": 1965 }, { "authors": [ "Ryan McConville", "Raul Santos-Rodriguez", "Robert J Piechocki", "Ian Craddock" ], "title": "N2d:(not too) deep clustering via clustering the local manifold of an autoencoded embedding", "venue": null, "year": 1908 }, { "authors": [ "Leland McInnes", "John Healy", "James Melville" ], "title": "Umap: Uniform manifold approximation and projection for dimension reduction", "venue": "arXiv preprint arXiv:1802.03426,", "year": 2018 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. 
Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Sam T Roweis", "Lawrence K Saul" ], "title": "Nonlinear dimensionality reduction by locally linear embedding", "venue": null, "year": 2000 }, { "authors": [ "Uri Shaham", "Kelly Stanton", "Henry Li", "Boaz Nadler", "Ronen Basri", "Yuval Kluger" ], "title": "Spectralnet: Spectral clustering using deep neural networks", "venue": "arXiv preprint arXiv:1801.01587,", "year": 2018 }, { "authors": [ "Jianbo Shi", "Jitendra Malik" ], "title": "Normalized cuts and image segmentation", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 2000 }, { "authors": [ "Joshua B Tenenbaum", "Vin De Silva", "John C Langford" ], "title": "A global geometric framework for nonlinear dimensionality reduction", "venue": null, "year": 2000 }, { "authors": [ "Fei Tian", "Bin Gao", "Qing Cui", "Enhong Chen", "Tie-Yan Liu" ], "title": "Learning deep representations for graph clustering", "venue": "In Aaai,", "year": 2014 }, { "authors": [ "Wouter Van Gansbeke", "Simon Vandenhende", "Stamatios Georgoulis", "Marc Proesmans", "Luc Van Gool" ], "title": "Scan: Learning to classify images without labels", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Isabelle Lajoie", "Yoshua Bengio", "Pierre-Antoine Manzagol", "Léon Bottou" ], "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "venue": "Journal of machine learning research,", "year": 2010 }, { "authors": [ "Svante Wold", "Kim Esbensen", "Paul Geladi" ], "title": "Principal component analysis", "venue": "Chemometrics and intelligent laboratory systems,", "year": 1987 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Junyuan Xie", "Ross Girshick", "Ali Farhadi" ], "title": "Unsupervised deep embedding for clustering analysis", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Wei Xu", "Xin Liu", "Yihong Gong" ], "title": "Document clustering based on non-negative matrix factorization", "venue": "In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval,", "year": 2003 }, { "authors": [ "Bo Yang", "Ming Xiang", "Yupei Zhang" ], "title": "Multi-manifold discriminant isomap for visualization and classification", "venue": "Pattern Recognition,", "year": 2016 }, { "authors": [ "Jianwei Yang", "Devi Parikh", "Dhruv Batra" ], "title": "Joint unsupervised learning of deep representations and image clusters", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Xu Yang", "Cheng Deng", "Feng Zheng", "Junchi Yan", "Wei Liu" ], "title": "Deep spectral clustering using dual autoencoder network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Xiaohang Zhan", "Jiahao Xie", "Ziwei Liu", "Yew-Soon Ong", "Chen Change Loy" ], "title": "Online deep clustering for unsupervised representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", 
"year": 2020 }, { "authors": [ "Yan Zhang", "Zhao Zhang", "Jie Qin", "Li Zhang", "Bing Li", "Fanzhang Li" ], "title": "Semi-supervised local multimanifold isomap by linear embedding for feature extraction", "venue": "Pattern Recognition,", "year": 2018 } ]
[ { "heading": null, "text": "In this paper, we propose a novel framework for Deep Clustering and multimanifold Representation Learning (DCRL) that preserves the geometric structure of data. In the proposed DCRL framework, manifold clustering is done in the latent space guided by a clustering loss. To overcome the problem that clusteringoriented losses may deteriorate the geometric structure of embeddings in the latent space, an isometric loss is proposed for preserving intra-manifold structure locally and a ranking loss for inter-manifold structure globally. Experimental results on various datasets show that the DCRL framework leads to performances comparable to current state-of-the-art deep clustering algorithms, yet exhibits superior performance for manifold representation. Our results also demonstrate the importance and effectiveness of the proposed losses in preserving geometric structure in terms of visualization and performance metrics. The code is provided in the Supplementary Material." }, { "heading": "1 INTRODUCTION", "text": "Clustering, a fundamental tool for data analysis and visualization, has been an essential research topic in data science and machine learning. Conventional clustering algorithms such as K-Means (MacQueen, 1965), Gaussian Mixture Models (GMM) (Bishop, 2006), and spectral clustering (Shi & Malik, 2000) perform clustering based on distance or similarity. However, handcrafted distance or similarity measures are rarely reliable for large-scale high-dimensional data, making it increasingly challenging to achieve effective clustering. An intuitive solution is to transform the data from the high-dimensional input space to the low-dimensional latent space and then to cluster the data in the latent space. This can be achieved by applying dimensionality reduction techniques such as PCA (Wold et al., 1987), t-SNE (Maaten & Hinton, 2008), and UMAP (McInnes et al., 2018). However, since these methods are not specifically designed for clustering tasks, some of their properties may be contrary to our expectations, e.g., two data points from different manifolds that are close in the input space will be closer in the latent space derived by UMAP. Therefore, the first question here is how to learn the manifold representation that favors clustering?\nThe two main points for the multi-manifold representation learning are Point (1) preserving the local geometric structure within each manifold and Point (2) ensuring the discriminability between different manifolds. Most previous work seems to start with the assumption that the label of each data point is known, and then design the algorithm in a supervised manner, which greatly simplifies the problem of multi-manifold learning. However, it is challenging to decouple complex crossover relations and ensure discriminability between different manifolds, especially in unsupervised settings. One natural strategy is to achieve Point (2) through performing clustering in the input space to get pseudo-labels and then performing representation learning for each manifold. However, clustering is in fact contradictory to Point (1) (which will be analyzed in detail in Sec. 3.3), making it important to alleviate this contradiction so that clustering helps both point (1) and point (2). 
Thus, the second question here is how to cluster data that favors learning manifold representation?\nTo answer these two questions, some pioneering work has proposed to integrate deep clustering and representation learning into a unified framework by defining a clustering-oriented loss. Though promising performance has been demonstrated on various datasets, we observe that a vital factor has been ignored by these work that the defined clustering-oriented loss may deteriorate the geometric\nstructure of the latent space 1, which in turn hurts the performance of visualization, clustering generalization, and manifold representation. In this paper, we propose to jointly perform deep clustering and multi-manifold representation learning with geometric structure preservation. Inspired by Xie et al. (2016), the clustering centers are defined as a set of learnable parameters, and we use a clustering loss to simultaneously guide the separation of data points from different manifolds and the learning of the clustering centers. To prevent clustering loss from deteriorating the latent space, an isometric loss and a ranking loss are proposed to preserve the intra-manifold structure locally and inter-manifold structure globally. Finally, we achieve the following three goals related to clustering, geometric structure, and manifold representation: (1) Clustering helps to ensure inter-manifold discriminability; (2) Local structure preservation can be achieved with the presence of clustering; (3) Geometric structure preservation helps clustering.\nThe contributions of this work are summarized as below:\n• Proposing to integrate deep clustering and multi-manifold representation learning into a unified framework with local and global structure preservation.\n• Unlike conventional multi-manifold learning algorithms that deal with all point pair relationships between different manifolds simultaneously, we set the clustering centers as a set of learnable parameters and achieve global structure preservation in a faster, more efficient, and easier to optimize manner by applying ranking loss to the clustering centers.\n• Analyzing the contradiction between two optimization goals of clustering and local structure preservation and proposing an elegant training strategy to alleviate it.\n• The proposed DCRL algorithm outperforms competing algorithms in terms of clustering effect, generalizability to out-of-sample, and performance in manifold representation." }, { "heading": "2 RELATED WORK", "text": "Clustering analysis. As a fundamental tool in machine learning, it has been widely applied in various domains. One branch of classical clustering is K-Means (MacQueen, 1965) and Gaussian Mixture Models (GMM) (Bishop, 2006), which are fast, easy to understand, and can be applied to a large number of problems. However, limited by Euclidean measure, their performance on high-dimensional data is often unsatisfactory. Spectral clustering and its variants (such as SCNcut (Bishop, 2006)) extend clustering to high-dimensional data by allowing more flexible distance measures. However, limited by the computational efficiency of the full Laplace matrix, spectral clustering is challenging to extend to large-scale datasets.\nDeep clustering. The success of deep learning has contributed to the growth of deep clustering. One branch of deep clustering performs clustering after learning a representation through existing unsupervised techniques. For example, Tian et al. 
(2014) use an autoencoder to learn low-dimensional features and then run K-Means to get clustering results (AE+K-Means). Considering the geometric structure of the data, N2D (McConville et al., 2019) applies UMAP to find the best clusterable manifold of the obtained embedding, and then runs K-Means to discover higher-quality clusters. The other category of algorithms tries to optimize clustering and representation learning jointly. The closest work to us is Deep Embedding Clustering (DEC) (Xie et al., 2016), which learns a mapping from the input space to a low-dimensional latent space by iteratively optimizing a clustering-oriented objective. As a modified version of DEC, IDEC (Guo et al., 2017) claims to preserve the local structure of the data; in reality, however, their contribution is nothing more than adding a reconstruction loss. JULE (Yang et al., 2016b) unifies unsupervised representation learning with clustering based on a CNN architecture to improve clustering accuracy, and can be considered a neural extension of hierarchical clustering. DSC devises a dual autoencoder to embed data into the latent space, and then deep spectral clustering (Shaham et al., 2018) is applied to obtain label assignments (Yang et al., 2019). ASPC-DA (Guo et al., 2019) combines data augmentation with self-paced learning to encourage the learned features to be cluster-oriented. While both fields sometimes evaluate performance in terms of accuracy, we would like to highlight that deep clustering and visual self-supervised learning (SSL) are two different research fields. SSL typically uses a more powerful CNN architecture (applicable only to image data) and sophisticated techniques such as contrastive learning (He et al., 2020), data augmentation (Chen et al., 2020), and clustering (Zhan et al., 2020; Ji et al., 2019; Van Gansbeke et al., 2020) for better performance on large-scale datasets such as ImageNet. Deep clustering, however, uses a general MLP architecture (applicable to both image and vector data), so it is difficult to scale directly to large datasets without considering those sophisticated techniques.
¹This claim was first made by IDEC (Guo et al., 2017), but they did not provide experiments to support it. In this paper, however, we show that the geometry of the latent space is indeed disrupted, by visualization of learned embeddings (Fig. 4), visualization of the clustering process (Fig. A3), and statistical analysis (Fig. A5).
Manifold Representation Learning. Isomap, as a representative algorithm of single-manifold learning, aims to capture global nonlinear features and seeks an optimal subspace that best preserves the geodesic distance between data points (Tenenbaum et al., 2000). In contrast, some algorithms, such as Locally Linear Embedding (LLE) (Roweis & Saul, 2000), are more concerned with the preservation of local neighborhood information. Combining DNNs with manifold learning, the recently proposed Markov-Lipschitz Deep Learning (MLDL) algorithm achieves the preservation of local and global geometries by imposing Locally Isometric Smoothness (LIS) prior constraints (Li et al., 2020). Furthermore, multi-manifold learning has been proposed to obtain the intrinsic properties of different manifolds. Yang et al. (2016a) proposed a supervised discriminant Isomap where data points are partitioned into different manifolds according to label information. Similarly, Zhang et al.
(2018) proposed a semi-supervised learning framework that applies labeled and unlabeled training samples to perform the joint learning of local neighborhood-preserving features. In most previous work on multi-manifold learning, the problem is considered from the perspective that the labels are known or partially known, which significantly simplifies the problem. However, it is challenging to decouple multiple overlapping manifolds in unsupervised settings, and that is what this paper aims to explore." }, { "heading": "3 PROPOSED METHOD", "text": "Consider a dataset X with N samples, where each sample x_i ∈ R^d is drawn from one of C different manifolds {M_c}_{c=1}^C. Assume that each category in the dataset lies on a compact low-dimensional manifold, and that the number of manifolds C is prior knowledge. Define two nonlinear mappings z_i = f(x_i; θ_f) and y_i = g(z_i; θ_g), where z_i ∈ R^m is the embedding of x_i in the latent space and y_i is the reconstruction of x_i. The j-th cluster center is denoted µ_j ∈ R^m, where {µ_j}_{j=1}^C is defined as a set of learnable parameters. We aim to find optimal parameters θ_f and µ such that the embeddings {z_i}_{i=1}^N achieve clustering with local and global structure preservation. To this end, a denoising autoencoder (Vincent et al., 2010), shown in Fig 1, is first pre-trained in an unsupervised manner to learn an initial latent space. The denoising autoencoder optimizes the self-reconstruction loss L_AE = MSE(x̂, y), where x̂ is a copy of x with Gaussian noise added, that is, x̂ = x + N(0, σ²). Then the autoencoder is fine-tuned by optimizing the clustering-oriented loss L_cluster(z, µ) and the structure-oriented losses L_rank(x, µ), L_LIS(x, z), and L_align(z, µ). Since clustering should be performed on features of clean data, instead of the noised data x̂ used for the denoising autoencoder, the clean data x is used for fine-tuning." }, { "heading": "3.1 CLUSTERING-ORIENTED LOSS", "text": "First, the cluster centers {µ_j}_{j=1}^C in the latent space Z are initialized (the initialization method will be introduced in Sec 4.1). Then the similarity between an embedded point z_i and the cluster centers {µ_j}_{j=1}^C is measured by Student's t-distribution:

q_ij = (1 + ‖z_i − µ_j‖²)⁻¹ / ∑_{j′} (1 + ‖z_i − µ_{j′}‖²)⁻¹    (1)

The auxiliary target distribution, designed to help manipulate the latent space, is defined as:

p_ij = (q_ij² / f_j) / ∑_{j′} (q_{ij′}² / f_{j′}),  where f_j = ∑_i q_ij    (2)

where f_j is the normalized cluster frequency, used to balance the size of different clusters. The encoder is then optimized with the following objective:

L_cluster = KL(P‖Q) = ∑_i ∑_j p_ij · log(p_ij / q_ij)    (3)

The gradient of L_cluster with respect to each learnable cluster center µ_j can be computed as:

∂L_cluster/∂µ_j = −∑_i (1 + ‖z_i − µ_j‖²)⁻¹ · (p_ij − q_ij) · (z_i − µ_j)    (4)

L_cluster facilitates the aggregation of data points within the same manifold, while data points from different manifolds are kept away from each other. However, we find that the clustering-oriented loss may deteriorate the geometric structure of the latent space, which hurts the clustering accuracy and leads to meaningless representations. To prevent this deterioration, we introduce the isometry loss L_LIS and the ranking loss L_rank to preserve the local and global structure, respectively." }, { "heading": "3.2 STRUCTURE-ORIENTED LOSS", "text": "Intra-manifold Isometry Loss.
The intra-manifold local structure is preserved by optimizing the following objective:

L_LIS = ∑_{i=1}^N ∑_{j∈N_i^Z} |d_X(x_i, x_j) − d_Z(z_i, z_j)| · π(l(x_i) = l(x_j))    (5)

where N_i^Z represents the neighborhood of data point z_i in the feature space Z, with kNN applied to determine the neighborhood. π(·) ∈ {0, 1} is an indicator function, and l(x_i) is a manifold determination function that returns the manifold s_i on which sample x_i is located, that is, s_i = l(x_i) = argmax_j p_ij. We can then derive the C manifolds {M_c}_{c=1}^C as M_c = {x_i ; s_i = c, i = 1, 2, ..., N}. In a nutshell, the loss L_LIS constrains the isometry within each manifold.
Inter-manifold Ranking Loss. The inter-manifold global structure is preserved by optimizing the following objective:

L_rank = ∑_{i=1}^C ∑_{j=1}^C |d_Z(µ_i, µ_j) − κ · d_X(v_i^X, v_j^X)|    (6)

where {v_j^X}_{j=1}^C are defined as the centers of the different manifolds in the original input space X, with v_j^X = (1/|M_j|) ∑_{i∈M_j} x_i (j = 1, 2, ..., C). The parameter κ determines the extent to which different manifolds move away from each other: the larger κ is, the further apart the different manifolds are pushed. The derivation of the gradient of L_rank with respect to each learnable cluster center µ_j is given in Appendix A.1. Note that L_rank is optimized in an iterative manner, rather than by initializing {µ_j}_{j=1}^C once and then separating different clusters based only on the initialization results. Additionally, contrary to our approach, conventional methods for inter-manifold separation typically impose push-away constraints on all data points from different manifolds (Zhang et al., 2018; Yang et al., 2016a), defined as:

L_sep = −∑_{i=1}^N ∑_{j=1}^N d_Z(z_i, z_j) · π(l(x_i) ≠ l(x_j))    (7)

The main differences between L_rank and L_sep are as follows. (1) L_sep imposes constraints on the embedded points {z_i}_{i=1}^N, which in turn indirectly affects the network parameters θ_f. In contrast, L_rank imposes rank-preservation constraints directly on the learnable parameters {µ_j}_{j=1}^C, in the form of a regularization term, to control the separation of the clustering centers. (2) L_rank is easier to optimize, faster to process, and more accurate. L_sep is imposed on all data points from different manifolds, which involves N×N point-to-point relationships. This means that each point may be subject to the push-away force from other manifolds while, at the same time, having to satisfy the isometry constraint with its neighboring points. Under these two constraints, optimization is difficult, and it is easy to fall into a local optimum and output inaccurate results. In contrast, L_rank is imposed directly on the clustering centers, involving only C×C cluster-to-cluster relationships, which avoids the above problem and makes optimization easier. (3) The parameter κ introduced in L_rank allows us to control the extent of separation between manifolds for specific downstream tasks.
Alignment Loss. Note that the global ranking loss L_rank is imposed directly on the learnable parameters {µ_j}_{j=1}^C, so optimizing L_rank only updates {µ_j}_{j=1}^C rather than the encoder parameters θ_f. However, the optimization of {µ_j}_{j=1}^C not only relies on L_rank, but is also constrained by L_cluster, which ensures that data points remain roughly distributed around the cluster centers and do not deviate significantly from them during the optimization process.
The alignment loss L_align, as an auxiliary term, aims to help align the learnable cluster centers {µ_j}_{j=1}^C with the real cluster centers {v_j^Z}_{j=1}^C and to make this binding stronger:

L_align = ∑_{j=1}^C ‖µ_j − v_j^Z‖    (8)

where {v_j^Z}_{j=1}^C are defined as v_j^Z = (1/|M_j|) ∑_{i∈M_j} z_i (j = 1, 2, ..., C). The derivation of the gradient of L_align with respect to each learnable cluster center µ_j is given in Appendix A.1." }, { "heading": "3.3 TRAINING STRATEGY", "text": "" }, { "heading": "3.3.1 CONTRADICTION", "text": "The contradiction between clustering and local structure preservation is analyzed from a force-analysis perspective. As shown in Fig 2, we assume that there exists a data point (red point) with its three nearest neighbors (blue points) around a cluster center (gray point). When clustering and local structure preservation are optimized simultaneously, it is very easy to fall into a local optimum, where the data point is in a steady state: the resultant force from its three nearest neighbors is equal in magnitude and opposite in direction to the gravitational force of the cluster. Therefore, the following training strategy is applied to prevent such local optimal solutions." }, { "heading": "3.3.2 ALTERNATING TRAINING AND WEIGHT CONTINUATION", "text": "Alternating Training. To solve the above problem and integrate the goals of clustering and structure preservation into a unified framework, we take an alternating training strategy. Within each epoch, we first jointly optimize L_cluster and L_rank on mini-batches, with the joint loss defined as

L1 = L_AE + α · L_cluster + L_rank    (9)

where α is the weighting factor that balances the effects of clustering and global rank preservation. Then, once per epoch, we optimize the isometry loss L_LIS and L_align on the whole dataset:

L2 = β · L_LIS + L_align    (10)

Weight continuation. At different stages of training, we have different expectations for clustering and structure preservation. At the beginning of training, to successfully decouple the overlapping manifolds, we want L_cluster to dominate and L_LIS to be auxiliary. When the margin between different manifolds is sufficiently pronounced, the weight α for L_cluster can be gradually reduced, while the weight β for L_LIS is gradually increased, shifting the focus to the preservation of local isometry. The whole algorithm is summarized in Algorithm 1 in Appendix A.2.
Three-stage explanation. To explain the training strategy more vividly, the training process can be roughly divided into three stages, as shown in Fig 3. At first, four different manifolds overlap. At Stage 1, L_cluster dominates; thus data points within each manifold converge towards the cluster centers to form spheres, but the local structure of the manifolds is destroyed. At Stage 2, L_rank dominates; thus different manifolds in the latent space move away from each other, increasing the manifold margin and enhancing discriminability. At Stage 3, with L_LIS dominating, the manifolds gradually recover their original local structure from the spherical shape. It is worth noting that the above losses may coexist rather than being completely independent at different stages, but the role played by each loss varies due to the alternating training and weight continuation."
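To make the clustering-oriented objective above concrete, the following is a minimal PyTorch sketch of Eqs. (1)–(3): the Student's t soft assignment, the frequency-balanced target distribution, and the KL loss. The function and tensor names (clustering_loss, z, mu) are our own illustrative choices, not taken from the authors' released code, and the target P is recomputed inline here, whereas Algorithm 1 (Appendix A.2) updates it once per epoch.

```python
import torch

def clustering_loss(z, mu, eps=1e-8):
    """Soft-assignment clustering loss, Eqs. (1)-(3).

    z:  (N, m) embedded points; mu: (C, m) learnable cluster centers.
    """
    # Eq. (1): Student's t similarity between points and centers.
    q = 1.0 / (1.0 + torch.cdist(z, mu).pow(2))
    q = q / q.sum(dim=1, keepdim=True)

    # Eq. (2): sharpened target, balanced by the soft cluster frequencies f_j.
    with torch.no_grad():  # P is treated as a fixed target, no gradient flows through it
        f = q.sum(dim=0)
        p = q.pow(2) / f
        p = p / p.sum(dim=1, keepdim=True)

    # Eq. (3): KL(P || Q), summed over points and clusters as written in the paper.
    return (p * (torch.log(p + eps) - torch.log(q + eps))).sum()
```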
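The structure-oriented terms of Sec. 3.2 and the weight continuation of Sec. 3.3.2 can be sketched analogously. This is a simplified illustration under our own assumptions (Euclidean distances everywhere, latent-space kNN recomputed per call, no duplicate points, every cluster non-empty), not the authors' implementation:

```python
import torch

def rank_loss(mu, v_x, kappa=3.0):
    # Eq. (6): match learnable-center distances in Z to scaled manifold-center distances in X.
    return (torch.cdist(mu, mu) - kappa * torch.cdist(v_x, v_x)).abs().sum()

def isometry_loss(x, z, labels, k=5):
    # Eq. (5): |d_X - d_Z| over latent-space kNN pairs that lie on the same manifold.
    d_x, d_z = torch.cdist(x, x), torch.cdist(z, z)
    knn = d_z.topk(k + 1, largest=False).indices[:, 1:]  # drop self (distance 0)
    same = labels.unsqueeze(1).eq(labels[knn]).float()   # indicator pi(l(x_i) = l(x_j))
    return ((d_x.gather(1, knn) - d_z.gather(1, knn)).abs() * same).sum()

def align_loss(mu, z, labels):
    # Eq. (8): pull the learnable centers toward the actual cluster centers v_j^Z in Z.
    v_z = torch.stack([z[labels == j].mean(0) for j in range(mu.size(0))])
    return (mu - v_z).norm(dim=1).sum()

# Weight continuation of Sec. 4.1: alpha ramps 0.1 -> 0 and beta 0 -> 1 over epochs 0-150;
# L1 (Eq. 9) is optimized per mini-batch, L2 (Eq. 10) once per epoch on all samples.
def loss_weights(epoch, ramp=150):
    return max(0.0, 0.1 * (1 - epoch / ramp)), min(1.0, epoch / ramp)
```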
}, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETUPS", "text": "In this section, the effectiveness of the proposed framework is evaluated in 6 benchmark datasets: MNIST-full, MNIST-test, USPS, Fashion-MNIST, REUTERS-10K and HAR, on which our method is compared with 9 other methods mentioned in Sec 2 in 8 evaluation metrics including metrics designed specifically for clustering and manifold representation learning. The brief descriptions of the datasets are given in Appendix A.3.\nParameters settings. Currently we use MLP architecture for this version and will extend it to ConvAE in the future. The encoder structure is d-500-500-500-2000-10 where d is the dimension of the input data, and the decoder is its mirror. After pretraining, in order to initialize the learnable clustering centers, the t-SNE is applied to transform the latent space Z to 2 dimensions further, and then the K-Means algorithm is run to obtain the label assignments for each data point 2. The centers of each category in the latent space Z are set as initial cluster centers {µj}Cj=1. The batch size is set to 256, the epoch is set to 300, the parameter k for nearest neighbor is set to 5, and the parameter κ\n2Since cluster centers {µj}Cj=1 are learnable and updated in an iterative manner, we believe that a proper initialization is sufficient, and the exploration of initialization methods is beyond the scope of this paper.\nis set to 3 for all datasets. Sensitivity analysis for parameters k and κ is available in Appendix A.12. Besides, Adam optimizer (Kingma & Ba, 2014) with learning rate λ=0.001 is used. As described in Sec 3.3.2, the weight continuation is applied to train the model. The weight parameter α for Lcluster decreases linearly from 0.1 to 0 within epoch 0-150. In contrast, the weight parameter β for LLIS increases linearly from 0 to 1.0 within epoch 0-150. In this paper, each set of experiments is run 5 times with different 5 random seeds, and the results are averaged into the final performance metrics. The implementation uses the PyTorch library running on NVIDIA v100 GPU.\nEvaluation Metrics. Two standard evaluation metrics: Accuracy (ACC) and Normalized Mutual Information (NMI) (Xu et al., 2003) are used to evaluate clustering performance. Besides, six evaluation metrics are adopted in this paper to evaluate the performance of multi-manifold representation learning, including Relative Rank Error (RRE), Trustworthiness (Trust), Continuity (Cont), Root Mean Reconstruction Error (RMRE), Locally Geometric Distortion (LGD) and Cluster Rank Accuracy (CRA). Limited by space, their precise definitions are available in Appendix A.4." }, { "heading": "4.2 EVALUATION OF CLUSTERING", "text": "" }, { "heading": "4.2.1 QUANTITATIVE COMPARISON", "text": "The metrics ACC/NMI of different methods on various datasets are reported in Tab 1. For those comparison methods whose results are not reported or the experimental settings are not clear on some datasets, we run the released code using the hyperparameters provided in their paper with the same random seeds and initialization, then report their average performance, and label them with (*). While ASPC-DA achieves the best performance on three datasets (MNIST-test, MNIST-full, and USPS), its performance gains do not come directly from clustering, but from sophisticated modules such as data augmentation and self-paced learning. Once these modules are removed, there is a very large degradation in its performance. 
For example, with data augmentation removed, ASPC-DA achieves less competitive performance, e.g., an accuracy of 0.931 (vs 0.988) on MNIST-full, 0.813 (vs 0.973) on MNIST-test, and 0.768 (vs 0.982) on USPS. Though ASPC-DA is based on the MLP architecture, its image-based Data Augmentation (DA) cannot be applied directly to vector data, which explains why ASPC has no performance advantage on the vector-based REUTERS-10K and HAR datasets (even compared to DEC and IDEC).
In a fairer comparison (without considering ASPC-DA), we find that DCRL outperforms K-Means and SC-Ncut by a significant margin and surpasses the other seven competing DNN-based algorithms on all datasets except MNIST-test. Even on the MNIST-test dataset, we still rank second, outperforming the third-ranked method by 1.1%. In particular, we obtain the best performance on the Fashion-MNIST and HAR (vector) datasets, and, more notably, our clustering accuracy exceeds the current SOTA methods by 5.1% and 4.9%, respectively." }, { "heading": "4.2.2 GENERALIZABILITY EVALUATION", "text": "Tab 2 demonstrates that a trained DCRL can generalize well to unseen data with high clustering accuracy. Taking MNIST-full as an example, DCRL was trained using 50,000 training samples and then tested on the remaining 20,000 testing samples using the learned model. In terms of the metrics ACC and NMI, our method is optimal for both training and testing samples. More importantly, there is hardly any degradation in the performance of our method on the testing samples compared to the training samples, while all other methods show a significant drop in performance, e.g., DEC from 84.1% to 74.8%. This demonstrates the importance of geometric structure preservation for good generalizability. The testing visualization available in Appendix A.5 shows that DCRL still maintains clear inter-cluster boundaries even on the test samples, which demonstrates the strong generalizability of our method." }, { "heading": "4.2.3 CLUSTERING VISUALIZATION", "text": "The visualization of DCRL and several comparison methods is shown in Fig 4 (visualized using UMAP). From the perspective of clustering, our method is much better than the other methods. Among all methods, only DEC, IDEC, and DCRL hold clear boundaries between different clusters, while the cluster boundaries of the other methods are indistinguishable. Although DEC and IDEC can successfully separate different clusters, they group many data points from different classes into the same cluster. Most importantly, due to the use of the clustering-oriented loss, the embeddings learned by algorithms such as DEC, IDEC, JULE, and DSC (especially DSC) tend to form spheres and disrupt the original topological structure. Instead, our method overcomes these problems and achieves almost perfect separation between different clusters while preserving the local and global structure.
Additionally, the embedding of the latent space during the training process is visualized in Appendix A.6, which is highly consistent with the three-stage explanation of Sec 3.3.2, showing that the clustering-oriented loss does indeed deteriorate the local geometric structure of the latent space, and that the designed L_LIS helps to recover it. In addition, in the above experiments the cluster number C is assumed to be a known prior (which is consistent with the assumptions of almost all deep clustering algorithms). Therefore, we provide an additional experiment to explore what happens when C is larger than the number of true clusters.
It is found that the clusters split, but the different categories still maintain clear boundaries and are not mixed together, somewhat similar to hierarchical clustering. See Appendix A.7 for detailed experimental settings and analysis." }, { "heading": "4.3 EVALUATION OF MULTI-MANIFOLD REPRESENTATION LEARNING", "text": "Although numerous previous works have claimed to bring clustering and representation learning into a unified framework, they all, unfortunately, lack an analysis of the effectiveness of the learned representations. In this paper, we compare DCRL with the other five methods on six evaluation metrics across six datasets. (Limited by space, only the MNIST-full results are provided in Tab 3; the complete results are in Appendix A.8.) The results show that DCRL outperforms all other methods, especially in the CRA metric, where it is not only the best on all datasets but also reaches 1.0. This means that the \"rank\" between different manifolds in the latent space is completely preserved and undamaged, which proves the effectiveness of our global ranking loss L_rank.
Moreover, a statistical analysis is performed in this paper to show the extent to which local and global structure is preserved in the latent space for each algorithm. Limited by space, it is placed in Appendix A.9. Furthermore, we also evaluate whether the learned representations are meaningful through downstream tasks; this experiment is available in Appendix A.10." }, { "heading": "4.4 ABLATION STUDY", "text": "This evaluates the effects of the loss terms and training strategies in DCRL with five sets of experiments: the model without (A) the Structure-oriented Loss (SL), (B) the Clustering-oriented Loss (CL), (C) Weight Continuation (WC), or (D) Alternating Training (AT), and (E) the full model. Limited by space, only the MNIST-full results are provided in Tab 4; results for the other five datasets are in Appendix A.11. After analyzing the results, we can conclude: (1) CL is the most important factor for obtaining good clustering, the lack of which leads to unsuccessful clustering; hence those numbers in the table are not very meaningful and are shown in gray. (2) SL not only brings subtle improvements in clustering but also greatly improves the performance of multi-manifold representation learning. (3) Our training strategies (WC and AT) both improve the performance of clustering and multi-manifold representation learning to some extent, especially on metrics such as RRE, Trust, Cont, and CRA." }, { "heading": "5 CONCLUSION", "text": "The proposed DCRL framework imposes clustering-oriented and structure-oriented constraints to optimize the latent space for simultaneously performing clustering and multi-manifold representation learning with local and global structure preservation. Extensive experiments on image and vector datasets demonstrate that DCRL is not only comparable to state-of-the-art deep clustering algorithms but also able to learn effective and robust manifold representations, which is beyond the capability of those clustering methods that only care about clustering accuracy. Future work will focus on the adaptive determination of the number of manifolds (clusters) and on extending our work to CNN architectures for large-scale datasets."
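Since CRA is the metric the evaluation above leans on most heavily (its exact definition is in Appendix A.4), here is a small self-contained NumPy sketch of it; the function name and the use of Euclidean distances are our assumptions. A value of 1.0 means every pairwise closeness rank between cluster centers survives the mapping from X to Z.

```python
import numpy as np

def cluster_rank_accuracy(v_x, v_z):
    """CRA: fraction of center pairs whose closeness rank is preserved from X to Z.

    v_x: (C, d) cluster centers in the input space X;
    v_z: (C, m) cluster centers in the latent space Z.
    """
    def ranks(v):
        d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)  # pairwise center distances
        return d.argsort(axis=1).argsort(axis=1)                    # rank of center j w.r.t. center i
    return float((ranks(v_x) == ranks(v_z)).mean())                 # averaged over all C*C pairs
```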
}, { "heading": "APPENDIX", "text": "" }, { "heading": "A.1 GRADIENT DERIVATION", "text": "In the paper, we have emphasized time and again that {µj}Cj=1 is a set of learnalbe parameters, which means that we can optimize it while optimizing the network parameter θf . In Eq. (4) of the paper, we have presented the gradient of Lcluster with respect to µj . In addition to Lcluster, both Lrank and Lalign are involving µj . Hence, the detailed derivations for the gradient of Lrank and Lalign with respect to µj are also provided. The gradient of Lrank with respect to each learnalbe cluster center µj can be computed as:\n∂Lrank ∂µj\n= ∂ ∑C i′=1 ∑C j′=1 ∣∣dZ (µi′ , µj′)− κ ∗ dX (vXi′ , vXj′ )∣∣ ∂µj\n= C∑ i′=1 C∑ j′=1\n∂ ∣∣dZ (µi′ , µj′)− κ ∗ dX (vXi′ , vXj )∣∣\n∂µj\nThe Euclidean metric is used for both the input space and the hidden layer space, i.e., dZ (µi′ , µj′) = ‖µi′ − µj′‖. In addition, the symbols are somewhat abused for clear derivation, representing κ ∗ dX ( vXi′ , v X j′ ) with K. Accordingly, Eq. (11) can be further derived as follows:\n∂Lrank ∂µj = C∑ i′=1 C∑ j′=1\n∂ ∣∣dZ (µi′ , µj′)− κ ∗ dX (vXi′ , vXj′ )∣∣\n∂µj\n= C∑ i′=1 C∑ j′=1\n∂ ∣∣ ‖µi′ − µj′‖ −K∣∣\n∂µj\n= C∑ i′=1\n∂ ∣∣ ‖µi′ − µj‖ −K∣∣\n∂µj + C∑ j′=1\n∂ ∣∣ ‖µj − µj′‖ −K∣∣\n∂µj\n= C∑ i′=1 ∂ (‖µi′ − µj‖ −K) ∂µj · ‖µi ′ − µj‖ −K∣∣ ‖µi′ − µj‖ −K∣∣\n+ C∑ j′=1 ∂ (‖µj − µj′‖ −K) ∂µj · ‖µj − µj ′‖ −K∣∣ ‖µj − µj′‖ −K∣∣\n= C∑ i′=1 ∂ ‖µi′ − µj‖ ∂µj · ‖µi ′ − µj‖ −K∣∣ ‖µi′ − µj‖ −K∣∣\n+ C∑ j′=1 ∂ ‖µj − µj′‖ ∂µj · ‖µj − µj ′‖ −K∣∣ ‖µj − µj′‖ −K∣∣\n= C∑ i′=1 µj − µi′ ‖µj − µi′‖ · ‖µj − µi ′‖ −K∣∣ ‖µj − µi′‖ −K∣∣ + C∑ j′=1 µj − µj′ ‖µj − µj′‖ · ‖µj − µj ′‖ −K∣∣ ‖µj − µj′‖ −K∣∣\n= 2 C∑ i′=1 µj − µi′ ‖µj − µi′‖ · ‖µj − µi ′‖ −K∣∣ ‖µj − µi′‖ −K∣∣\n= 2 C∑ i′=1 µj − µi′ ‖µj − µi′‖ · ‖µj − µi′‖ − κ ∗ dX\n( vXi′ , v X j )∣∣ ‖µj − µi′‖ − κ ∗ dX (vXi′ , vXj ) ∣∣ = 2\nC∑ i′=1 µj − µi′ dZ (µj , µi′) · dZ (µj , µi′)− κ ∗ dX\n( vXi′ , v X j )∣∣dZ (µj , µi′)− κ ∗ dX (vXi′ , vXj )∣∣\nThe gradient of Lalign with respect to each learnalbe cluster center µj can be computed as:\n∂Lalign ∂µj\n= ∂ ∑C\nj′=1 ||µj′ − vZj′ || ∂µj\n= C∑ j′=1 ∂||µj′ − vZj′ || ∂µj\n= ∂||µj − vZj ||\n∂µj\n= ∂(µj − vZj ) ∂µj · µj − vZj∥∥µj − vZj ∥∥ = µj − vZj∥∥µj − vZj ∥∥" }, { "heading": "A.2 ALGORITHM", "text": "Algorithm 1 Algorithm for Deep Clustering and Representation Learning" }, { "heading": "Input:", "text": "Input samples: X; Number of clusters: C; Number of batches: B; Number of iterations: E. Output:\nAutoencoder’s weights: θf and θg; Cluster labels {si}Ni=1; Trainable cluster centers {µj} C j=1. 1: Initialize the weight {µj}Cj=1, θf and θg , and obtain initialized soft label assignment {si} N i=1. 2: for epoch ∈ {0,1,· · · ,E} do 3: Compute embedded points {zi}Ni=1 and distribution Q; 4: Update target distribution P ; 5: Compute soft cluster centers { vXi }C i=1 and { vZi }C i=1\n. 6: for batch ∈ {0,1,· · · ,B} do 7: Pick up one batch of samples Xbatch from X; 8: Compute corresponding distribution Qbatch and it’s reconstruction Ybatch; 9: Pick up target distribution batch Pbatch from P ;\n10: Compute loss Lae, Lcluster and Lrank; 11: Update the weight θf , θg and {µj}Cj=1. 12: end for 13: Compute Liso and Lalign on all samples; 14: Update the weight θf and {µj}Cj=1; 15: Assign new soft labels {si}Ni=1. 16: end for 17: return θf , θg , {si}Ni=1, {µj} C j=1." }, { "heading": "A.3 DATASETS", "text": "To show that our method works well with various kinds of datasets, we choose the following six image and vector datasets. 
Some example images are shown in Fig A1, and brief descriptions of the datasets are given in Tab A1.
• MNIST-full (LeCun et al., 1998): The MNIST-full dataset consists of 70,000 handwritten digits of 28 × 28 pixels. Each gray image is reshaped to a 784-dimensional vector.
• MNIST-test (LeCun et al., 1998): MNIST-test is the testing part of the MNIST dataset, containing a total of 10,000 samples.
• USPS³: The USPS dataset is composed of 9,298 gray-scale handwritten digit images with a size of 16×16 pixels.
• Fashion-MNIST (Xiao et al., 2017): The Fashion-MNIST dataset has the same number of images and the same image size as MNIST-full, but it is considerably more complicated. Instead of digits, it consists of various types of fashion products.
• REUTERS-10K: REUTERS (Lewis et al., 2004) is composed of around 810,000 English news stories labeled with a category tree. Four root categories (corporate/industrial, government/social, markets, and economics) are used as labels, and all documents with multiple labels are excluded. Following DEC (Xie et al., 2016), a subset of 10,000 examples is randomly sampled, and the tf-idf features on the 2,000 most frequent words are computed. The sampled dataset is denoted REUTERS-10K.
• HAR: HAR is a time-series dataset consisting of 10,299 sensor samples from a smartphone. It was collected from 30 people performing six different activities: walking, walking upstairs, walking downstairs, sitting, standing, and laying.
³https://cs.nyu.edu/~roweis/data.html
Table A1: Description of datasets.
Dataset | Samples | Categories | Data Size
MNIST-full | 70000 | 10 | 28×28×1
MNIST-test | 10000 | 10 | 28×28×1
USPS | 9298 | 10 | 16×16×1
Fashion-MNIST | 70000 | 10 | 28×28×1
REUTERS-10K | 10000 | 4 | 2000
HAR | 10299 | 6 | 561
Figure A1: Image samples from three datasets: (a) MNIST, (b) USPS, (c) Fashion-MNIST." }, { "heading": "A.4 DEFINITIONS OF PERFORMANCE METRICS", "text": "The following notations are used for the definitions:
d_X(i, j): the pairwise distance between x_i and x_j in the input space X;
d_Z(i, j): the pairwise distance between z_i and z_j in the latent space Z;
N_i^{k,X}: the set of indices of the k-nearest neighbors (kNN) of x_i in the input space X;
N_i^{k,Z}: the set of indices of the k-nearest neighbors (kNN) of z_i in the latent space Z;
r_X(i, j): the rank of the closeness (in Euclidean distance) of x_j to x_i in the input space X;
r_Z(i, j): the rank of the closeness (in Euclidean distance) of z_j to z_i in the latent space Z.
The eight evaluation metrics are defined below:
(1) ACC (Accuracy) measures the accuracy of clustering:

ACC = max_m [ ∑_{i=1}^N 1{l_i = m(s_i)} ] / N

where l_i and s_i are the true and predicted labels for data point x_i, respectively, and m(·) ranges over all possible one-to-one mappings between clusters and label categories.
(2) NMI (Normalized Mutual Information) calculates a normalized measure of similarity between two labelings of the same data:

NMI = I(l; s) / max{H(l), H(s)}

where I(l, s) is the mutual information between the real labels l and the predicted labels s, and H(·) denotes their entropy.
(3) RRE (Relative Rank Error) measures the average change in neighbor ranking between the two spaces X and Z:

RRE = [1 / (k₂ − k₁ + 1)] ∑_{k=k₁}^{k₂} { MR_{X→Z}^k + MR_{Z→X}^k }

where k₁ and k₂ are the lower and upper bounds of the k-NN, and

MR_{X→Z}^k = (1/H_k) ∑_{i=1}^N ∑_{j∈N_i^{k,Z}} |r_X(i,j) − r_Z(i,j)| / r_Z(i,j)
MR_{Z→X}^k = (1/H_k) ∑_{i=1}^N ∑_{j∈N_i^{k,X}} |r_X(i,j) − r_Z(i,j)| / r_X(i,j)

with the normalizing term H_k = N ∑_{l=1}^k |N − 2l| / l.
(4) Trust (Trustworthiness) measures to what
extent the k nearest neighbors of a point are preserved when going from the input space to the latent space:

Trust = [1 / (k₂ − k₁ + 1)] ∑_{k=k₁}^{k₂} [ 1 − 2 / (Nk(2N − 3k − 1)) · ∑_{i=1}^N ∑_{j∈N_i^{k,Z}, j∉N_i^{k,X}} (r_X(i,j) − k) ]

where k₁ and k₂ are the bounds on the number of nearest neighbors.
(5) Cont (Continuity) is defined analogously to Trust, but checks to what extent neighbors are preserved when going from the latent space to the input space:

Cont = [1 / (k₂ − k₁ + 1)] ∑_{k=k₁}^{k₂} [ 1 − 2 / (Nk(2N − 3k − 1)) · ∑_{i=1}^N ∑_{j∉N_i^{k,Z}, j∈N_i^{k,X}} (r_Z(i,j) − k) ]

where k₁ and k₂ are the bounds on the number of nearest neighbors.
(6) d-RMSE (Root Mean Square Error) measures to what extent the two distributions of distances coincide:

d-RMSE = sqrt[ (1/N²) ∑_{i=1}^N ∑_{j=1}^N (d_X(i,j) − d_Z(i,j))² ]

(7) LGD (Locally Geometric Distortion) measures how much corresponding distances between neighboring points differ in two metric spaces and is the primary metric for isometry:

LGD = ∑_{k=k₁}^{k₂} sqrt[ ∑_{i}^{M} ∑_{j∈N_i^{k,(l)}} (d_l(i,j) − d_{l′}(i,j))² / ((k₂ − k₁ + 1) · 2M · (#N_i)) ]

where k₁ and k₂ are the lower and upper bounds of the k-NN.
(8) CRA (Cluster Rank Accuracy) measures the change in the ranks of cluster centers from the input space X to the latent space Z:

CRA = [ ∑_{i=1}^C ∑_{j=1}^C 1(r_X(v_i^X, v_j^X) = r_Z(v_i^Z, v_j^Z)) ] / C²

where C is the number of clusters, v_j^X is the center of the j-th cluster in the input space X, v_j^Z is the center of the j-th cluster in the latent space Z, r_X(v_i^X, v_j^X) denotes the rank of the closeness (in terms of Euclidean distance) of v_i^X to v_j^X in the input space X, and r_Z(v_i^Z, v_j^Z) denotes the rank of the closeness (in terms of Euclidean distance) of v_i^Z to v_j^Z in the latent space Z.
A.5 VISUALIZATION IN GENERALIZABILITY
The visualization results on the testing samples are shown in Fig A2; even for testing samples, our method still shows distinguishable inter-cluster discriminability, while all the other methods, without exception, couple different clusters together.
Figure A2: Visualization of the obtained embeddings on the testing samples, showing the generalization performance of different algorithms on the MNIST-full dataset (panels: (a) AE+K-Means, (b) DEC, (c) IDEC, (d) JULE, (e) DSC, (f) N2D, (g) DCRL (ours)).
A.6 VISUALIZATION IN DIFFERENT STAGES
The embedding of the latent space during the training process is visualized in Fig A3, depicting how both clustering and structure preservation are achieved. We can see that the different clusters initialized by the pretrained autoencoder are closely adjacent. In the early stage of training, under the clustering loss L_cluster and the global ranking loss L_rank, different manifolds are separated from each other, each manifold loses its local structure, and all of them degenerate into spheres. As training progresses, the weight α for L_cluster gradually decreases while the weight β for L_LIS increases, and the optimization gradually shifts its focus from global to local, with each manifold gradually recovering its original geometric structure from the sphere. Moreover, since our local isometry loss L_LIS is constrained within each manifold, the preservation of the local structure does not disrupt the global ranking. Finally, we obtain representations in which cluster boundaries are clearly distinguished, and local and global structures are perfectly preserved.
This shows that the clustering-oriented loss does deteriorate the local geometric structure of the latent space, and that the designed L_LIS helps to recover it.
Figure A3: Clustering visualization at different stages of training on the MNIST-full dataset (epochs 0, 9, 19, 29, 69, 119, 159, 209, 249, 299)." }, { "heading": "A.7 EXPLORATION ON THE ASSUMED CLUSTER NUMBER C", "text": "Taking the MNIST-test dataset as an example, we present the embedding visualization with the assumed number of clusters C being 10, 11, and 12, respectively. We find that when C is larger than the number of true clusters (10), data originally belonging to the same cluster are split, e.g., one cluster is split into two, but the data of different categories still hold clear boundaries and are not mixed together, somewhat similar to hierarchical clustering.
Figure A4: Clustering visualization with different assumed cluster numbers C (10, 11, 12) on the MNIST-test dataset." }, { "heading": "A.8 QUANTITATIVE EVALUATION OF REPRESENTATION LEARNING", "text": "Our method is compared with the other five methods on six evaluation metrics across six datasets. The complete results in Tab A2 demonstrate the superiority of our method, especially on the metrics RRE, Trust, Cont, and CRA. As shown in Tab A2, DCRL outperforms all other methods, especially in the CRA metric, where it is not only the best on all datasets but also reaches 1.0. This means that the \"rank\" between different manifolds in the latent space is completely preserved and undamaged, which proves the effectiveness of our global ranking loss L_rank.
Table A2: Representation learning performance of different algorithms on six datasets.
Dataset | Algorithm | RRE | Trust | Cont | d-RMSE | LGD | CRA
MNIST-full | DEC | 0.09988 | 0.84499 | 0.94805 | 44.8535 | 4.37986 | 0.28
MNIST-full | IDEC | 0.00984 | 0.99821 | 0.97936 | 24.5803 | 1.71484 | 0.33
MNIST-full | JULE | 0.02657 | 0.93675 | 0.98321 | 28.3412 | 2.12955 | 0.27
MNIST-full | DSC | 0.09785 | 0.87315 | 0.92508 | 6.98098 | 1.19886 | 0.23
MNIST-full | N2D | 0.01002 | 0.99243 | 0.98466 | 5.7162 | 0.69946 | 0.21
MNIST-full | DCRL | 0.00567 | 0.99978 | 0.98716 | 5.4986 | 0.69168 | 1.0
MNIST-test | DEC | 0.12800 | 0.81841 | 0.91767 | 14.6113 | 2.29499 | 0.19
MNIST-test | IDEC | 0.01505 | 0.99403 | 0.97082 | 7.4599 | 1.08350 | 0.38
MNIST-test | JULE | 0.04122 | 0.92971 | 0.97208 | 9.4768 | 1.17176 | 0.42
MNIST-test | DSC | 0.10728 | 0.85498 | 0.92254 | 7.1689 | 1.19239 | 0.26
MNIST-test | N2D | 0.01565 | 0.98764 | 0.97572 | 5.0120 | 0.97454 | 0.33
MNIST-test | DCRL | 0.01090 | 0.99811 | 0.97612 | 5.8000 | 0.93394 | 1.0
USPS | DEC | 0.07911 | 0.88871 | 0.94628 | 16.4355 | 1.77848 | 0.31
USPS | IDEC | 0.01043 | 0.99726 | 0.97960 | 13.0573 | 1.11689 | 0.30
USPS | JULE | 0.02972 | 0.98763 | 0.98810 | 14.6324 | 1.43426 | 0.33
USPS | DSC | 0.06319 | 0.9151 | 0.93988 | 8.4412 | 1.02131 | 0.27
USPS | N2D | 0.01337 | 0.98769 | 0.98135 | 8.1961 | 0.54967 | 0.37
USPS | DCRL | 0.00577 | 0.99979 | 0.98701 | 6.4980 | 0.53180 | 1.0
Fashion-MNIST | DEC | 0.04787 | 0.93896 | 0.95450 | 39.3274 | 3.87731 | 0.37
Fashion-MNIST | IDEC | 0.01089 | 0.99683 | 0.97797 | 25.4024 | 1.91385 | 0.27
Fashion-MNIST | JULE | 0.03013 | 0.97732 | 0.97923 | 15.2213 | 1.43642 | 0.43
Fashion-MNIST | DSC | 0.05168 | 0.95013 | 0.96121 | 17.2201 | 1.42091 | 0.36
Fashion-MNIST | N2D | 0.00894 | 0.99062 | 0.98054 | 14.49079 | 1.28180 | 0.26
Fashion-MNIST | DCRL | 0.00836 | 0.99868 | 0.98203 | 13.3788 | 1.33893 | 1.0
REUTERS-10K | DEC | 0.26192 | 0.65518 | 0.80477 | 40.4671 | 4.00423 | 0.63
REUTERS-10K | IDEC | 0.05981 | 0.95840 | 0.90550 | 43.9556 | 2.01365 | 0.75
REUTERS-10K | JULE | - | - | - | - | - | -
REUTERS-10K | DSC | - | - | - | - | - | -
REUTERS-10K | N2D | 0.03827 | 0.97385 | 0.93412 | 36.1042 | 1.69013 | 0.31
REUTERS-10K | DCRL (ours) | 0.03206 | 0.98380 | 0.93802 | 34.5478 | 2.72096 | 1.0
HAR | DEC | 0.09060 | 0.89097 | 0.91766 | 10.0222 | 1.58691 | 0.30
HAR | IDEC | 0.01031 | 0.99433 | 0.98132 | 9.9155 | 0.93736 | 0.39
HAR | JULE | - | - | - | - | - | -
HAR | DSC | - | - | - | - | - | -
HAR | N2D | 0.00841 | 0.99281 | 0.97695 | 8.2326 | 0.64296 | 0.33
HAR | DCRL (ours) | 0.00665 | 0.99895 | 0.98634 | 15.2876 | 0.46189 | 1.0" }, { "heading": "A.9 STATISTICAL 
ANALYSIS", "text": "The statistical analysis is presented to show the extent to which local and global structure is preserved from the input space to the latent space. Taking MNIST-full as an example, the statistical analysis of the global rank-preservation is shown in Fig A5 (a)-(f). For the i-th cluster, if the rank (in terms of Euclidean distance) between it and the j-th cluster is preserved from input space to latent space, then the grid in the i-th row and j-th column is marked as blue, otherwise yellow. As shown in the figure, only our method can fully preserve the global rank between different clusters, while all other methods fail.\nFinally, we perform statistical analysis for the local isometry property of each algorithm. For each sample xi in the dataset, it forms a number of point pairs with its neighborhood samples {(xi, xj)|i = 1, 2, ..., N ;xj ∈ NXi }. We compute the difference in the distance of these point pairs from the input space to the latent space {dZ(xi, xj) − dX(xi, xj)|i = 1, 2, ..., N ;xj ∈ Ni}, and plot it as a histogram. As shown in Fig A5 (g), the curves of DCRL are distributed on both sides of the 0 value, with maximum peak height and minimum peak-bottom width, respectively, which indicates that DCRL achieves the best local isometry. Although IDEC claims that they can preserve the local structure well, there is still a big gap between their results and ours.\n(a) DEC (b) IDEC (c) JULE (d) DSC\n(e) N2D (f) DCRL (g) Local Isometry\nFigure A5: Statistical analysis of different algorithms to compare the capability of global and local structure preservation from the input space to the latent space." }, { "heading": "A.10 QUANTITATIVE EVALUATION OF DOWNSTREAM TASKS", "text": "Numerous deep clustering algorithms have recently claimed to obtain meaningful representations, however, they do not analyze and experiment with the so-called ”meaningful” ones. Therefore, we are interested in whether these proposed methods can indeed learn representations that are useful for downstream tasks. Four different classifiers, including a linear classifier (Logistic Regression; LR), two nonlinear classifiers (MLP, SVM), and a tree-based classifier (Random Forest Classifier; RFC) are used as downstream tasks, all of which use default parameters and default implementations in sklearn (Pedregosa et al., 2011) for a fair comparison. The learned representations are frozen and used as input for training. The classification accuracy evaluated on the test set serves as a metric to evaluate the effectiveness of learned representations. In Tab A3, DCRL outperformed the other methods overall on all six datasets, with MLP, RFC, and LR as downstream tasks. Additionally, we surprisingly find that with MLP and RFC as downstream tasks, all methods other than DCRL do not even match the accuracy of AE on the MNIST-full dataset. Notably, DEC and IDEC show a sharp deterioration in performance on downstream tasks, falling short of even the simplest AEs, again showing that clustering-oriented loss can disrupt the geometry of the data.\nTable A3: Performance of different algorithms in downstream tasks. 
Dataset | Algorithm | MLP | RFC | SVM | LR
MNIST-full | AE | 0.9746 | 0.9652 | 0.9859 | 0.9565
MNIST-full | DEC | 0.8647 | 0.8706 | 0.8707 | 0.8566
MNIST-full | IDEC | 0.9797 | 0.9737 | 0.9852 | 0.9650
MNIST-full | JULE | 0.9802 | 0.9825 | 0.9787 | 0.9743
MNIST-full | DSC | 0.9622 | 0.9501 | 0.9837 | 0.9752
MNIST-full | N2D | 0.9796 | 0.9803 | 0.9799 | 0.9792
MNIST-full | DCRL | 0.9851 | 0.9874 | 0.9869 | 0.9841
MNIST-test | AE | 0.9415 | 0.9420 | 0.9745 | 0.9495
MNIST-test | DEC | 0.8525 | 0.8605 | 0.8725 | 0.8685
MNIST-test | IDEC | 0.9740 | 0.9725 | 0.9845 | 0.9655
MNIST-test | JULE | 0.9775 | 0.9845 | 0.9800 | 0.9825
MNIST-test | DSC | 0.9535 | 0.9740 | 0.9825 | 0.9795
MNIST-test | N2D | 0.9715 | 0.9760 | 0.9725 | 0.9725
MNIST-test | DCRL | 0.9855 | 0.9875 | 0.9865 | 0.9855
USPS | AE | 0.9421 | 0.9469 | 0.9677 | 0.9073
USPS | DEC | 0.8289 | 0.8668 | 0.8289 | 0.8294
USPS | IDEC | 0.9482 | 0.9556 | 0.9656 | 0.9125
USPS | JULE | 0.9576 | 0.9617 | 0.9703 | 0.9476
USPS | DSC | 0.9351 | 0.9572 | 0.9612 | 0.9342
USPS | N2D | 0.9569 | 0.9569 | 0.9569 | 0.9541
USPS | DCRL | 0.9656 | 0.9651 | 0.9604 | 0.9551
Fashion-MNIST | AE | 0.8613 | 0.9932 | 0.8314 | 0.7588
Fashion-MNIST | DEC | 0.6268 | 0.9853 | 0.6377 | 0.6245
Fashion-MNIST | IDEC | 0.8367 | 0.9918 | 0.8607 | 0.7514
Fashion-MNIST | JULE | 0.8541 | 0.9892 | 0.8566 | 0.7723
Fashion-MNIST | DSC | 0.8084 | 0.9823 | 0.8618 | 0.7676
Fashion-MNIST | N2D | 0.8412 | 0.9493 | 0.8230 | 0.7753
Fashion-MNIST | DCRL | 0.8642 | 0.9942 | 0.8468 | 0.7768
REUTERS-10K | AE | 0.9325 | 0.9170 | 0.9375 | 0.8205
REUTERS-10K | DEC | 0.7985 | 0.7880 | 0.8105 | 0.7450
REUTERS-10K | IDEC | 0.9225 | 0.8930 | 0.9280 | 0.7705
REUTERS-10K | JULE | - | - | - | -
REUTERS-10K | DSC | - | - | - | -
REUTERS-10K | N2D | 0.9205 | 0.9080 | 0.9240 | 0.8335
REUTERS-10K | DCRL (ours) | 0.9360 | 0.9185 | 0.9390 | 0.8475
HAR | AE | 0.9181 | 0.9139 | 0.9201 | 0.8849
HAR | DEC | 0.7696 | 0.7847 | 0.7628 | 0.7634
HAR | IDEC | 0.8973 | 0.9031 | 0.9041 | 0.8822
HAR | JULE | - | - | - | -
HAR | DSC | - | - | - | -
HAR | N2D | 0.9138 | 0.9083 | 0.9174 | 0.8799
HAR | DCRL (ours) | 0.9235 | 0.9193 | 0.9293 | 0.8996" }, { "heading": "A.11 MORE ABLATION EXPERIMENTS", "text": "The results of the ablation experiments on the MNIST-full dataset have been presented in Tab 4 in Sec 4.4. Here, we provide five more sets of ablation experiments on the other five datasets. The conclusion is similar (note that the clustering performance of the model without the clustering-oriented loss is very poor, so the \"best\" metric numbers are not meaningful and are shown in gray): (1) CL is very important for obtaining good clustering. (2) SL is beneficial for both clustering and representation learning.
(3) Our training strategies (WC and AT) are very effective in improving metrics such as ACC, RRE, Trust, Cont, and CRA.
Table A4: Ablation study of loss terms and training strategies used in DCRL.
Dataset | Method | ACC/NMI | RRE | Trust | Cont | d-RMSE | LGD | CRA
MNIST-full | w/o SL | 0.976/0.939 | 0.0093 | 0.9967 | 0.9816 | 24.589 | 1.6747 | 0.32
MNIST-full | w/o CL | 0.814/0.736 | 0.0004 | 0.9998 | 0.9990 | 7.458 | 0.0487 | 1.00
MNIST-full | w/o WC | 0.977/0.943 | 0.0065 | 0.9987 | 0.9860 | 5.576 | 0.6968 | 0.98
MNIST-full | w/o AT | 0.978/0.944 | 0.0069 | 0.9986 | 0.9851 | 5.617 | 0.7037 | 0.96
MNIST-full | full model | 0.980/0.946 | 0.0056 | 0.9997 | 0.9871 | 5.498 | 0.6916 | 1.00
MNIST-test | w/o SL | 0.973/0.932 | 0.0146 | 0.9928 | 0.9727 | 7.701 | 1.0578 | 0.31
MNIST-test | w/o CL | 0.773/0.747 | 0.0020 | 0.9994 | 0.9954 | 7.229 | 0.0809 | 1.00
MNIST-test | w/o WC | 0.956/0.904 | 0.0132 | 0.9955 | 0.9735 | 5.470 | 0.9364 | 1.00
MNIST-test | w/o AT | 0.970/0.929 | 0.0118 | 0.9974 | 0.9747 | 5.567 | 0.9404 | 1.00
MNIST-test | full model | 0.972/0.930 | 0.0109 | 0.9981 | 0.9761 | 5.800 | 0.9339 | 1.00
USPS | w/o SL | 0.958/0.902 | 0.0095 | 0.9967 | 0.9812 | 14.609 | 0.9847 | 0.29
USPS | w/o CL | 0.664/0.658 | 0.0020 | 0.9996 | 0.9952 | 2.934 | 0.0687 | 1.0
USPS | w/o WC | 0.956/0.896 | 0.0060 | 0.9991 | 0.9868 | 6.572 | 0.5335 | 1.00
USPS | w/o AT | 0.947/0.885 | 0.0080 | 0.9979 | 0.9833 | 5.960 | 0.4967 | 1.00
USPS | full model | 0.960/0.902 | 0.0057 | 0.9997 | 0.9870 | 6.498 | 0.5318 | 1.00
Fashion-MNIST | w/o SL | 0.706/0.682 | 0.0108 | 0.9964 | 0.9781 | 25.954 | 1.8936 | 0.30
Fashion-MNIST | w/o CL | 0.576/0.569 | 0.0004 | 0.9994 | 0.9995 | 7.654 | 0.0523 | 1.00
Fashion-MNIST | w/o WC | 0.702/0.695 | 0.0084 | 0.9972 | 0.9814 | 13.238 | 1.3474 | 1.00
Fashion-MNIST | w/o AT | 0.708/0.694 | 0.0097 | 0.9975 | 0.9798 | 13.354 | 1.3611 | 1.00
Fashion-MNIST | full model | 0.710/0.685 | 0.0083 | 0.9986 | 0.9820 | 13.378 | 1.3389 | 1.00
REUTERS-10K | w/o SL | 0.819/0.564 | 0.0529 | 0.9610 | 0.9185 | 44.481 | 1.9090 | 0.38
REUTERS-10K | w/o CL | 0.542/0.279 | 0.0277 | 0.9868 | 0.9456 | 37.018 | 2.2294 | 1.00
REUTERS-10K | w/o WC | 0.830/0.583 | 0.0420 | 0.9667 | 0.9361 | 35.302 | 2.8286 | 1.00
REUTERS-10K | w/o AT | 0.825/0.563 | 0.0440 | 0.9650 | 0.9330 | 39.275 | 2.9146 | 1.00
REUTERS-10K | full model | 0.836/0.590 | 0.0320 | 0.9838 | 0.9380 | 34.547 | 2.7209 | 1.00
HAR | w/o SL | 0.835/0.746 | 0.0116 | 0.9944 | 0.9792 | 8.168 | 0.8882 | 0.33
HAR | w/o CL | 0.744/0.615 | 0.0024 | 0.9986 | 0.9948 | 15.060 | 0.2193 | 1.00
HAR | w/o WC | 0.786/0.701 | 0.0130 | 0.9950 | 0.9756 | 15.398 | 0.6171 | 1.00
HAR | w/o AT | 0.834/0.745 | 0.0089 | 0.9965 | 0.9835 | 15.726 | 0.4734 | 1.00
HAR | full model | 0.845/0.758 | 0.0066 | 0.9989 | 0.9863 | 15.287 | 0.4618 | 1.00" }, { "heading": "A.12 PARAMETER SENSITIVITY", "text": "We also evaluated the sensitivity of the parameters k and κ on the MNIST-test dataset; the results are shown in Tab A5. The parameters k and κ are found to have little effect on the clustering performance (ACC/NMI), and some combinations of k and κ even produce better clustering performance than the metrics reported in the main paper. However, the effect of k and κ on representation learning is more pronounced, and different combinations of k and κ may increase or decrease performance. In general, this paper focuses on the design of the algorithm itself, and we have not performed a parameter search to find the best performance.
Table A5: Parameter sensitivity for different parameters k and κ on the MNIST-test dataset.
Parameters | ACC/NMI | RRE | Trust | Cont | d-RMSE | LGD | CRA
k=1, κ=3 | 0.975/0.936 | 0.0125 | 0.9944 | 0.9756 | 5.757 | 0.8868 | 1.00
k=3, κ=3 | 0.973/0.931 | 0.0114 | 0.9970 | 0.9757 | 5.805 | 0.9207 | 1.00
k=5, κ=3 | 0.972/0.930 | 0.0109 | 0.9981 | 0.9761 | 5.800 | 0.9339 | 1.00
k=8, κ=3 | 0.972/0.929 | 0.0104 | 0.9989 | 0.9765 | 5.810 | 0.9476 | 1.00
k=10, κ=3 | 0.972/0.929 | 0.0105 | 0.9990 | 0.9764 | 5.704 | 0.9487 | 1.00
k=5, κ=1 | 0.967/0.912 | 0.0068 | 0.9993 | 0.9845 | 5.409 | 0.2524 | 1.00
k=5, κ=3 | 0.972/0.930 | 0.0109 | 0.9981 | 0.9761 | 5.800 | 0.9339 | 1.00
k=5, κ=5 | 0.972/0.929 | 0.0146 | 0.9964 | 0.9691 | 15.0653 | 1.5719 | 1.00
k=5, κ=8 | 0.972/0.929 | 0.0190 | 0.9943 | 0.9615 | 29.4607 | 2.5410 | 1.00
k=5, κ=10 | 0.972/0.929 | 0.0195 | 0.9951 | 0.9597 | 37.7661 | 3.1434 | 1.00" } ]
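The downstream-task protocol of Appendix A.10 (freeze the learned embeddings, fit the four sklearn classifiers with default parameters, report test accuracy) can be reproduced in a few lines. The function below is our own hedged sketch of that protocol; the variable names are assumed rather than taken from the paper's code.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def downstream_accuracy(z_train, y_train, z_test, y_test):
    """Tab A3 protocol: train default-parameter probes on frozen embeddings."""
    probes = {"MLP": MLPClassifier(), "RFC": RandomForestClassifier(),
              "SVM": SVC(), "LR": LogisticRegression()}
    return {name: accuracy_score(y_test, clf.fit(z_train, y_train).predict(z_test))
            for name, clf in probes.items()}
```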
2020
null
SP:2cf935e397f642fb22a861cb62cb395834eef5b6
[ "In more mathematical fields, theorem provers and similar systems can validate claims made about formal systems. However, many research contributions come in the form of papers, and thus they are never validated in this way. Math researchers can express their contributions in a special purpose language to do this, but that places an additional burden on them to learn this skill." ]
We propose the task of disambiguating symbolic expressions in informal STEM documents in the form of LATEX files – that is, determining their precise semantics and abstract syntax tree – as a neural machine translation task. We discuss the distinct challenges involved and present a dataset with roughly 33,000 entries. We evaluated several baseline models on this dataset, which failed to yield even syntactically valid LATEX before overfitting. Consequently, we describe a methodology using a transformer language model pre-trained on sources obtained from arxiv.org, which yields promising results despite the small size of the dataset. We evaluate our model using a plurality of dedicated techniques, taking the syntax and semantics of symbolic expressions into account.
[ { "affiliations": [], "name": "INFORMAL DOCUMENTS" }, { "affiliations": [], "name": "Dennis Müller" } ]
[ { "authors": [ "REFERENCES Roee Aharoni", "Yoav Goldberg" ], "title": "Towards string-to-tree neural machine translation, 2017", "venue": null, "year": 2017 }, { "authors": [ "Noriko H. Arai", "Takuya Matsuzaki", "Hidenao Iwane", "Hirokazu Anai" ], "title": "Mathematics by machine", "venue": "In Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation,", "year": 2014 }, { "authors": [ "Grzegorz Bancerek", "Czeslaw Bylinski", "Adam Grabowski", "Artur Kornilowicz", "Roman Matuszewski", "Adam Naumowicz", "Karol Pak" ], "title": "The role of the Mizar Mathematical Library for interactive proof development in Mizar", "venue": "J. Autom. Reasoning,", "year": 2018 }, { "authors": [ "S. Buswell", "O. Caprotti", "D. Carlisle", "M. Dewar", "M. Gaetano", "M. Kohlhase" ], "title": "The Open Math Standard, Version 2.0", "venue": "Technical report, The Open Math Society,", "year": 2004 }, { "authors": [ "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Tree-to-tree neural networks for program translation, 2018", "venue": null, "year": 2018 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Quoc V. Le", "Christopher D. Manning" ], "title": "ELECTRA: pre-training text encoders as discriminators rather than generators", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Joe Corneli", "Moritz Schubotz" ], "title": "math.wikipedia.org: A vision for a collaborative semi-formal, language independent math(s) encyclopedia", "venue": "AITP", "year": 2017 }, { "authors": [ "Paul-Olivier Dehaye", "Mihnea Iancu", "Michael Kohlhase", "Alexander Konovalov", "Samuel Lelièvre", "Dennis Müller", "Markus Pfeiffer", "Florian Rabe", "Nicolas M. Thiéry", "Tom Wiesing" ], "title": "Interoperability in the OpenDreamKit project: The math-in-the-middle approach", "venue": "Intelligent Computer Mathematics 2016,", "year": 2016 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "venue": "CoRR, abs/1810.04805,", "year": 2018 }, { "authors": [ "Wenbin Gan", "Xinguo Yu" ], "title": "Automatic understanding and formalization of natural language geometry problems using syntax-semantics models. 2017", "venue": "doi: 10.24507/ijicic.14.01.83. URL http://www.ijicic.org/ijicic-140106.pdf", "year": 2017 }, { "authors": [ "Herman Geuvers", "Matthew England", "Osman Hasan", "Florian Rabe", "Olaf Teschke (eds" ], "title": "Intelligent Computer Mathematics, number 10383 in LNAI, 2017", "venue": null, "year": 2074 }, { "authors": [ "Deyan Ginev", "Heinrich Stamerjohanns", "Bruce R. Miller", "Michael Kohlhase" ], "title": "The latexml daemon: Editable math on the collaborative web", "venue": "Intelligent Computer Mathematics - 18th Symposium, Calculemus 2011, and 10th International Conference,", "year": 2011 }, { "authors": [ "Lisa Grossman" ], "title": "Metric math mistake muffed mars meteorology mission. https://www.wired", "venue": null, "year": 2010 }, { "authors": [ "Thomas C. Hales", "Mark Adams", "Gertrud Bauer", "Dat Tat Dang", "John Harrison", "Truong Le Hoang", "Cezary Kaliszyk", "Victor Magron", "Sean McLaughlin", "Thang Tat Nguyen", "Truong Quang Nguyen", "Tobias Nipkow", "Steven Obua", "Joseph Pleso", "Jason M. 
Rute", "Alexey Solovyev", "An Hoai Thi Ta", "Trung Nam Tran", "Diep Thi Trieu", "Josef Urban", "Ky Khac Vu", "Roland Zumkeller" ], "title": "A formal proof of the Kepler conjecture", "venue": "Forum of Mathematics,", "year": 2017 }, { "authors": [ "J. Harrison" ], "title": "HOL Light: A Tutorial Introduction", "venue": "In Proceedings of the First International Conference on Formal Methods in Computer-Aided Design,", "year": 1996 }, { "authors": [ "J. Harrison" ], "title": "Formal verification at Intel", "venue": "In 18th Annual IEEE Symposium of Logic in Computer Science,", "year": 2003 }, { "authors": [ "John Harrison", "Josef Urban", "Freek Wiedijk" ], "title": "History of interactive theorem proving", "venue": "doi: 10.1016/B978-0-444-51624-4.50004-6. URL https://doi.org/10. 1016/B978-0-444-51624-4.50004-6", "year": 2014 }, { "authors": [ "F. Horozal", "M. Kohlhase", "F. Rabe" ], "title": "Extending MKM Formats at the Statement Level", "venue": "Intelligent Computer Mathematics,", "year": 2012 }, { "authors": [ "Danqing Huang", "Jing Liu", "Chin-Yew Lin", "Jian Yin" ], "title": "Neural math word problem solver with reinforcement learning", "venue": "In Proceedings of the 27th International Conference on Computational Linguistics,", "year": 2018 }, { "authors": [ "Mihnea Iancu" ], "title": "Towards Flexiformal Mathematics. PhD thesis, Jacobs University, Bremen, Germany, 2017", "venue": "URL https://opus.jacobs-university.de/frontdoor/index/index/ docId/721", "year": 2017 }, { "authors": [ "Cezary Kaliszyk", "Florian Rabe" ], "title": "A survey of languages for formalizing mathematics, 2020", "venue": "URL https://arxiv.org/abs/2005.12876", "year": 2005 }, { "authors": [ "Cezary Kaliszyk", "Josef Urban", "Jiří Vyskočil", "Herman Geuvers" ], "title": "Developing corpus-based translation methods between informal and formal mathematics: Project description", "venue": "Intelligent Computer Mathematics,", "year": 2014 }, { "authors": [ "Cezary Kaliszyk", "Josef Urban", "Jiří Vyskočil" ], "title": "Learning to parse on aligned corpora (rough diamond)", "venue": "Interactive Theorem Proving,", "year": 2015 }, { "authors": [ "Cezary Kaliszyk", "Josef Urban", "Jiří Vyskočil" ], "title": "Automating formalization by statistical and semantic parsing of mathematics", "venue": "Interactive Theorem Proving,", "year": 2017 }, { "authors": [ "Cezary Kaliszyk", "Josef Urban", "Jiří Vyskočil" ], "title": "System description: Statistical parsing of informalized Mizar formulas", "venue": "19th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing,", "year": 2017 }, { "authors": [ "M. Kohlhase" ], "title": "OMDoc: An Open Markup Format for Mathematical Documents (Version 1.2)", "venue": "Number 4180 in Lecture Notes in Artificial Intelligence. Springer,", "year": 2006 }, { "authors": [ "M. Kohlhase" ], "title": "Using LATEX as a Semantic Markup Format", "venue": "Mathematics in Computer Science,", "year": 2008 }, { "authors": [ "Michael Kohlhase" ], "title": "The flexiformalist manifesto", "venue": "14th International Workshop on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC", "year": 2012 }, { "authors": [ "Michael Kohlhase" ], "title": "A data model and encoding for a semantic, multilingual terminology of mathematics", "venue": "Intelligent Computer Mathematics 2014,", "year": 2014 }, { "authors": [ "Michael Kohlhase", "Florian Rabe" ], "title": "Experiences from exporting major proof assistant libraries. 
2020", "venue": "URL https://kwarc.info/people/frabe/Research/KR_oafexp_20.pdf", "year": 2020 }, { "authors": [ "Michael Kohlhase", "Thomas Koprucki", "Dennis Müller", "Karsten Tabelow" ], "title": "Mathematical models as research data via flexiformal theory graphs", "venue": "In Geuvers et al. (2017)", "year": 2017 }, { "authors": [ "Michael Kohlhase", "Dennis Müller", "Sam Owre", "Florian Rabe" ], "title": "Making PVS accessible to generic services by interpretation in a universal format", "venue": "Interactive Theorem Proving,", "year": 2017 }, { "authors": [ "Guillaume Lample", "François Charton" ], "title": "Deep learning for symbolic mathematics, 2019", "venue": null, "year": 2019 }, { "authors": [ "Christoph Lange" ], "title": "Enabling Collaboration on Semiformal Mathematical Knowledge by Semantic Web Integration", "venue": "PhD thesis, Jacobs University Bremen, 2011a. URL https://svn.kwarc. info/repos/swim/doc/phd/phd.pdf. Also available as a book Lange", "year": 2011 }, { "authors": [ "Christoph Lange" ], "title": "Enabling Collaboration on Semiformal Mathematical Knowledge by Semantic Web Integration. Number 11 in Studies on the Semantic Web", "venue": "AKA Verlag and IOS Press, Heidelberg and Amsterdam,", "year": 2011 }, { "authors": [ "Tomer Libal", "Alexander Steen" ], "title": "Towards an executable methodology for the formalization of legal texts", "venue": "Logic and Argumentation - Third International Conference,", "year": 2020 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized BERT pretraining approach", "venue": "URL http://arxiv.org/abs/1907.11692", "year": 1907 }, { "authors": [ "Minh-Thang Luong", "Eugene Brevdo", "Rui Zhao" ], "title": "Neural machine translation (seq2seq) tutorial", "venue": "https://github.com/tensorflow/nmt,", "year": 2017 }, { "authors": [ "Takuya Matsuzaki", "Hidenao Iwane", "Hirokazu Anai", "Noriko H. Arai" ], "title": "The most uncreative examinee: A first step toward wide coverage natural language math problem solving", "venue": "In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence,", "year": 2014 }, { "authors": [ "Takuya Matsuzaki", "Takumi Ito", "Hidenao Iwane", "Hirokazu Anai", "Noriko H. Arai" ], "title": "Semantic parsing of pre-university math problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2017 }, { "authors": [ "Takuya Matsuzaki", "Hidenao Iwane", "Munehiro Kobayashi", "Yiyang Zhan", "Ryoya Fukasaku", "Jumma Kudo", "Hirokazu Anai", "Noriko H. Arai" ], "title": "Can an a.i. win a medal in the mathematical olympiad? - benchmarking mechanized mathematics on pre-university problems", "venue": "AI Commun.,", "year": 2018 }, { "authors": [ "Mizar. 
Mizar" ], "title": "http://www.mizar.org, 1973–2006", "venue": "URL http://www.mizar.org", "year": 2006 }, { "authors": [ "Dennis Müller" ], "title": "Mathematical Knowledge Management Across Formal Libraries", "venue": "PhD thesis, Informatics, FAU Erlangen-Nürnberg,", "year": 2019 }, { "authors": [ "Dennis Müller", "Thibault Gauthier", "Cezary Kaliszyk", "Michael Kohlhase", "Florian Rabe" ], "title": "Classification of alignments between concepts of formal mathematical systems", "venue": "In Geuvers et al", "year": 2017 }, { "authors": [ "Dennis Müller", "Florian Rabe", "Claudio Sacerdoti Coen" ], "title": "The Coq Library as a Theory Graph", "venue": "CICM", "year": 2019 }, { "authors": [ "F. Rabe", "M. Kohlhase" ], "title": "A Scalable Module System", "venue": "Information and Computation,", "year": 2013 }, { "authors": [ "Florian Rabe" ], "title": "How to Identify, Translate, and Combine Logics", "venue": "Journal of Logic and Computation,", "year": 2017 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": null, "year": 2019 }, { "authors": [ "David Saxton", "Edward Grefenstette", "Felix Hill", "Pushmeet Kohli" ], "title": "Analysing mathematical reasoning abilities of neural models", "venue": "CoRR, abs/1904.01557,", "year": 2019 }, { "authors": [ "Minjoon Seo", "Hannaneh Hajishirzi", "Ali Farhadi", "Oren Etzioni", "Clint Malcolm" ], "title": "Solving geometry problems: Combining text and diagram interpretation", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Ron Solomon" ], "title": "On finite simple groups and their classification", "venue": "Notices of the AMS,", "year": 1995 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "CoRR, abs/1706.03762,", "year": 2017 }, { "authors": [ "Qingxiang Wang", "Cezary Kaliszyk", "Josef Urban" ], "title": "First experiments with neural translation of informal to formal mathematics", "venue": "11th International Conference on Intelligent Computer Mathematics (CICM 2018),", "year": 2018 }, { "authors": [ "Qingxiang Wang", "Chad E. Brown", "Cezary Kaliszyk", "Josef Urban" ], "title": "Exploration of neural machine translation in autoformalization of mathematics in Mizar", "venue": "Proceedings of the 9th ACM SIGPLAN International Conference on Certified Programs and Proofs,", "year": 2020 }, { "authors": [ "Yan Wang", "Xiaojiang Liu", "Shuming Shi" ], "title": "Deep neural solver for math word problems", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Zichao Yang", "Zhiting Hu", "Chris Dyer", "Eric P. Xing", "Taylor Berg-Kirkpatrick" ], "title": "Unsupervised text style transfer using language models as discriminators, 2019", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Despite huge advancements in machine learning, the task of understanding informal reasoning is still beyond current methods. In fact, it became commonplace that humans annotate informal documents containing reasoning in many domains, e.g. law (Libal & Steen, 2020). Reasoning is most visible in mathematical documents and software specification and as such in the last decades, the formalization of mathematical knowledge, and the verification of formal proofs, has become increasingly popular. By now, dozens of interactive and automated theorem prover systems are available, each providing libraries with up to hundreds of thousands of formalizations of mathematical definitions, theorems, and their proofs written by human mathematicians (Harrison et al., 2014).\nWhile formal methods are still primarily used by computer scientists (e.g. to verify software and hardware, as well as in program synthesis), by now they have also drawn the interest of an increasing number of research mathematicians – primarily thanks to famous problems such as Kepler’s conjecture (Hales et al., 2017) or the classification theorem for finite simple groups (Solomon, 1995), which have successfully been verified using theorem prover systems.\nHowever, while some mathematicians have begun actively adapting formal methods for their work, there is a prohibitively large discrepancy between the way new mathematical results are developed, presented, and published in mathematical practice, and the way they are formalized and implemented in formal systems (Kaliszyk & Rabe, 2020): Most theorem proving systems implement a fixed logical foundation (such as variants of set theory or various kinds of type theories), a surface syntax in which a user declares new definitions and statements in terms of the underlying foundations, and either a tactic language or a language for expressing proof terms (usually on basis of the Curry-Howardcorrespondence in a typed λ-calculus) that allow for declaring proofs. Consequently, the process of formalizing new content in a formal system resembles programming much more than it does developing informal proofs.\nThis discrepancy results in severe challenges for traditional mathematicians: Formal systems are difficult to learn and use, even if one is well acquainted with the (informal) mathematics involved. They require learning dedicated formal languages resembling programming languages, declaring content on a level of detail that is prohibitive for beginners even for “obvious” conclusions, and their\nlibraries are difficult to grasp without already being familiar with the system’s language, conventions and functionalities. Due to the required level of detail, knowledge of the existing libraries is crucial when formalizing new content. Furthermore, many “intuitively valid” arguments can not be easily expressed in terms of a logical foundation in the first place, and knowing how to deal with those requires familiarity with the logical foundation involved and lots of practice.\nConsequently, the utility of formalizing mathematical results can be too easily (and too often is) dismissed in light of the additional time and work required for non-experts. This is despite the fact that many services available for formal mathematics are already enabled by semi-formal (or flexiformal) representations, such as semantic annotations in natural language texts, or formal representations containing opaque informal expressions (see e.g. 
Kohlhase (2013); Lange (2011a); Iancu (2017); Kohlhase et al. (2017a); Corneli & Schubotz (2017); Dehaye et al. (2016)). Therefore, we need to invest in methods for bridging the gap between informal mathematical practice and (semi-)formal mathematics. One way to do so is to investigate autoformalization, the task of (semi-automatically) converting existing informal mathematical presentations to (increasingly) formal representations.
Notably, these issues extend beyond pure mathematics to other STEM (science, technology, engineering and math) fields, where the formal verification (or lack thereof) of results can have direct real-world implications. Examples include an infamous and costly error in the floating-point unit of Intel processors (Harrison, 2003) and several human failures to adequately convert between SI and imperial units, most famously in NASA's Mars orbiter (Grossman). In fact, the former has already established formal verification as a vital tool in hardware design (Harrison, 2003).
Two observations motivate the research presented here:
1. The vast majority of STEM researchers can be assumed to be comfortable with using LaTeX; any integration of formal methods in a LaTeX development environment (e.g. via new packages or IDE integration) would consequently lower the entry barrier significantly.
2. The task of going from purely informal mathematical texts to fully formal representations of the contained knowledge is best done via a separation of concerns, by focussing on individual subtasks (such as disambiguating symbolic expressions, parsing natural language, and translating it to a formal foundation) using dedicated tools for each.
In this paper, we discuss specifically the task of disambiguating symbolic expressions, i.e. associating all symbols in an expression with their precise semantics, in LaTeX documents as a machine learning task, using sTeX semantically annotated LaTeX (Kohlhase, 2008). The contributions are threefold:
1. We discuss the details of disambiguating symbolic expressions in informal STEM documents as a neural machine translation task,
2. we present a new dataset specifically for this task, based on the existing SMGLoM library of sTeX macros (see Subsection 2.2), and
3. we present a methodology (using transformer language models) that allows us to achieve positive results on our dataset.
We previously evaluated several baseline NMT models (such as Luong et al. (2017); Vaswani et al. (2017) and a plain character-based sequence-to-sequence model), which all failed to yield meaningful results due to our dataset being considerably smaller than is required for traditional NMT models.1" }, { "heading": "2 PRELIMINARIES", "text": "By disambiguating, we mean the task of transforming a sequence of symbols (representing a mathematical formula) into an abstract syntax tree and associating each leaf in the tree with a unique identifier specifying the precise semantics of the corresponding symbol.
While this might superficially seem an easy task, closer consideration shows that even obvious-seeming statements such as "a + b" can in fact correspond to a multitude of possible disambiguations: a and b can be variables or previously defined constants, whereas + can represent e.g. addition on multiple different number spaces, generic ring or vector space operations, or string concatenation.
In order to adequately disambiguate expressions generically, it is, therefore, necessary to take the context in which the expression occurs into account.
1All code and data relevant to this paper are available at https://gl.kwarc.info/dmueller/fifom.
In this paper, we consider informal documents in LaTeX specifically, which we will disambiguate with the sTeX package, using semantic identifiers provided by the SMGLoM library. This eventually enables various formal knowledge management services (such as type/proof checking) provided by the MMT system." }, { "heading": "2.1 STEX", "text": "Kohlhase proposed sTeX (Kohlhase, 2008), a package for annotating LaTeX documents with structural and formal semantics which is today used by multiple groups formalizing mathematics in various systems. In particular, sTeX is based on OMDOC (Kohlhase, 2006), an extension of OpenMath (Buswell et al., 2004) which is foundation-agnostic in the sense that it does not favor a specific foundation (such as type or set theories) over any other. This approach is consequently best suited for semantifying informal documents, where foundations are often unspecified, left implicit or switched fluently. For example, category-theoretic and set-theoretic formulations are often used interchangeably in algebraic settings, whereas type theories are generally favored for computational aspects and formal systems.
Figure 1 shows example sTeX macros and their usage in various stages. Relevant for this paper is primarily the \symdef command, which introduces a new mathematical concept (e.g. \nattimes in Figure 1). It takes as arguments a macro name (e.g. nattimes), a symbolic notation (last argument) and optionally an OMDOC-name (e.g. multiplication), arity (e.g. [1], which may be flexary) and notational precedence (e.g. p=600, for automatic bracketing). It generates a unique identifier for the concept being declared (based on the provided OMDOC-name), and a new LaTeX macro (e.g. \nattimes) for referring to the symbol. Alternative notational variants for symbols can be introduced via \symvariant, which are used as options to the macro (e.g. \nattimes[cdot]).
In addition to being valid LaTeX, compilable via pdflatex, sTeX documents can be transformed to OMDOC using the LaTeXML software (Ginev et al., 2011), yielding a formally disambiguated representation of the document and in particular the symbolic expressions therein, on the basis of the macros provided by \symdefs. LaTeXML also heuristically attempts to disambiguate non-sTeX symbols, e.g. by considering "=" and "+" as infix notations for generic equality and addition operators, respectively." }, { "heading": "2.2 SMGLOM", "text": "The SMGLoM (Kohlhase, 2014; the semantic multilingual glossary of mathematics) is a library of hundreds of sTeX modules containing mathematical concepts and definitions. It is separated into signature modules (using the modsig environment, see Figure 1) containing only symbol declarations, and natural language modules (using the mhmodnl environment, here exemplary for English) that serve as dictionary entries for these, in which the semantics of the symbols are described in a semi-formal manner. The second row of Figure 1 shows an SMGLoM entry." }, { "heading": "2.3 MMT", "text": "sTeX itself is integrated with, and shares an underlying OMDOC ontology with, the MMT system (Rabe & Kohlhase, 2013; Horozal et al., 2012; Rabe, 2017), a foundation-independent meta-framework and API for knowledge management services.
This integration makes the generic services provided by MMT, e.g. type checking, library management/browsing, and translation, available to informal mathematical texts. Using alignments (Müller, 2019; Müller et al., 2017), OMDOC expressions can be translated between different libraries, languages and foundations. This allows e.g. for translating (originally) sTeX content to a typed setting in order to check expressions and run type inference.
Additionally, several theorem prover libraries have been translated to OMDOC and integrated into the MMT system, e.g. Kohlhase et al. (2017b); Müller et al. (2019) (for a detailed overview, see Müller (2019) and Kohlhase & Rabe (2020)). If these integrations are extended to enable exporting from MMT as well, this could (in conjunction with natural language processing) enable verifying informal mathematics imported via sTeX using external state-of-the-art theorem prover systems." }, { "heading": "3 STATE OF THE ART", "text": "Various papers over the last years have, explicitly or implicitly, attempted to extract formal information from informal documents using machine learning. These fall into two categories:
Firstly, there are projects that attempt to fully formalize informal mathematical documents using machine learning techniques, using the surface language of some theorem prover system directly as a target. In Kaliszyk et al. (2017a; 2015; 2014), the Flyspeck project (Hales et al., 2017), the formalization of Kepler's theorem, was used as a basis for a parallel dataset in order to translate from informal mathematics to HOL Light (Harrison, 1996) syntax. Kaliszyk et al. (2017b); Wang et al. (2018; 2020) target the Mizar language (Mizar) instead, using the Journal of Formalized Mathematics (JFM) as data, an informal representation of the formal Mizar Mathematical Library (Bancerek et al., 2018).
While these projects achieved impressive results given the ambitious nature of the task, their success rate is naturally limited by the involved models having to solve several tasks at once (see the second observation in Section 1), including ours. Additionally, by going to a fully formal language (and logical foundation) immediately, the result does not preserve the narrative presentation of the input document, effectively losing (for us) valuable information in the process. Consequently, our task and the results obtained on it are not directly comparable to these projects.
Secondly, various projects have aimed to solve informally presented mathematical problems of various kinds. These include Arai et al. (2014); Matsuzaki et al. (2014; 2017; 2018) on pre-university math problems, Saxton et al. (2019) and Lample & Charton (2019) on high-school level equations, Gan & Yu (2017) and Seo et al. (2015) on geometric problems, and Huang et al. (2018) and Wang et al. (2017) on solving typical high-school word problems.
While this naturally entails disambiguating symbolic expressions, all these projects reduce their domain of applicability to specific areas where all occurring formal symbols are syntactically unambiguous, primarily common arithmetic operations, functions, and relations on real numbers, such that disambiguation reduces to simple parsing of a fixed, small set of a priori known symbols." }, { "heading": "4 TASK DEFINITION", "text": "Definition 4.1. (Disambiguation Task) Let L be a set of LaTeX fragments (i.e.
strings), which we assume are syntactically valid LaTeX in some suitable document context.
A symbolic expression is (for our purposes, simplified) any substring s of some S ∈ L such that s is interpreted by the TeX engine in math mode, e.g. if it is delimited by $, $$ or \[ and \], respectively.
For the purposes of our task, we call S ∈ L fully disambiguated if every symbolic expression occurring in S consists only of:
1. variable names (e.g. n or \mathcal{G}, provided they do not represent specific, definite mathematical objects),
2. sTeX macros introduced via a \symdef declaration in the SMGLoM, or
3. non-semantic commands or characters, such as additional spaces/tabs/linebreaks, purely aesthetic spacing or kerning commands, unnecessary parentheses or clarifying comments (e.g. in under- or overbraces).
Let LsTeX ⊂ L be the subset of fully disambiguated LaTeX fragments. Conversely, let LLaTeX ⊂ L be the set of LaTeX fragments that do not contain any sTeX macros2.
Clearly, for any S ∈ L, there is some LATEX(S) ⊂ LLaTeX such that S and any S′ ∈ LATEX(S) represent the same symbolic presentation, i.e. they generate the same output on pdflatex.
Conversely, we assume that for any S ∈ L there is a set sTEX(S) ⊂ LsTeX such that 1. LATEX(S) = LATEX(S′) for all S′ ∈ sTEX(S) (i.e. they have the same symbolic presentation) and 2. all S′ ∈ sTEX(S) capture the intended semantics of S, i.e. the author of S, were they to know the SMGLoM library sufficiently well, would agree that S′ is a correctly fully disambiguated variant of S.
Our goal is to learn a function f : L → L such that for any S ∈ L we have f(S) ∈ sTEX(S).
Example 4.1. Consider the sentence from the SMGLoM
Multiplication $\cdot$ computes the product $a\cdot b$ (also written as $ab$ or $a\times b$) of natural numbers $a$ and $b$.
The last two symbolic expressions ($a$ and $b$) only consist of variable names, and are thus considered fully disambiguated already.
The first one ($\cdot$) refers to the multiplication operator on natural numbers, which in sTeX is represented as \nattimesOp; the remaining symbolic expressions are all multiplications on natural numbers applied to the variables a and b with different notations, represented in sTeX via \nattimes with various options.
We expect the target function f on this input sentence to output
Multiplication $\nattimesOp$ computes the product $\nattimes[cdot]{a,b}$ (also written as $\nattimes{a,b}$ or $\nattimes[x]{a,b}$) of natural numbers $a$ and $b$.
2Note that LLaTeX and LsTeX are not disjoint
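For reference, the following is a hypothetical signature-module fragment of the kind that could back the \nattimes macro used in this example, assembled from the \symdef conventions described in Subsection 2.1. The exact option syntax, as well as the \assoc helper for flexary notation, are abbreviations/assumptions here and may differ from the actual sTeX interface:

\begin{modsig}{natarith}
  % declares a flexary multiplication on natural numbers with
  % OMDOC-name "multiplication" and precedence 600 for bracketing;
  % #1 is the (flexary) argument list
  \symdef[name=multiplication]{nattimes}[1]{\assoc[p=600]{\times}{#1}}
  % notation variant selectable as \nattimes[cdot]{...}
  \symvariant{nattimes}{cdot}{\assoc[p=600]{\cdot}{#1}}
\end{modsig}

With such a (hypothetical) declaration, $\nattimes{a,b}$ would produce the presentation "a × b" while contributing the unique identifier of the multiplication symbol to the generated OMDOC.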
" }, { "heading": "5 DATASETS", "text": "We have two datasets of sTeX content:
1. The SMGLoM3, which introduces precisely those macros that we want to be learned by a model. Unfortunately, it provides relatively few symbols and hence can only cover a small part of informal documents even in theory. Additionally, apart from some rudimentary concepts such as logical connectives or basic arithmetic functions, the SMGLoM library references the majority of symbols only once (in the corresponding dictionary entry). This is unlike most other formal systems, where all symbols need to be typed or defined formally when being declared, which naturally leads to a significant number of references to previously declared symbols.
2. The MiKoMH4 repository of lecture notes by Michael Kohlhase (the author of sTeX), which is heavily biased towards subjects in computer science, covering only a small part of SMGLoM entries, and often introducing local \symdefs.
Notably, while the translation from source to target language is difficult, the reverse translation (from sTeX to plain LaTeX) is easy: Since sTeX macros internally expand (ultimately) to the plain notational representation as basic LaTeX, translating from the target to the source language amounts to merely expanding sTeX macros. This allows for easily generating a parallel dataset from a set of documents in the target language.
To obtain such a parallel corpus for supervised learning, we take the individual LaTeX files in those repositories and do the following (a sketch of these steps in code is given below):
1. We separate the documents into small fragments of (on average) 500 character lengths, which we consider to be the sentences in LsTeX. Symbolic expressions occur preferably at the end of a sentence, based on the assumption that preceding text provides a more meaningful context for disambiguation. Sentences that do not contain symbolic expressions are ignored.
2. In each sentence S = SsTeX ∈ LsTeX, we perform some standardization function which e.g. removes non-semantic macros and ensures that macro arguments are always braced, in order to minimize author bias.
3. We extract all symbolic expressions (msTeX,i)i≤nS in S and expand all sTeX macros in them, resulting in (mLaTeX,i)i≤nS (where nS is the number of symbolic expressions in S). Analogously, we expand all sTeX macros in S itself, yielding SLaTeX ∈ LLaTeX.
Each entry in our dataset then consists of a 4-tuple (SLaTeX, SsTeX, (mLaTeX,i)i≤nS, (msTeX,i)i≤nS). In total, we obtain 911 entries from SMGLoM and 9200 entries from MiKoMH.
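A minimal sketch of these three steps follows. The normalize and expand_stex callables are stand-ins for the standardization step and for the macro expansion, which in our setting is performed via the sTeX package/LaTeXML rather than reimplemented by hand:

import re

def make_corpus_entry(s_stex, normalize, expand_stex):
    # Build one 4-tuple (S_LaTeX, S_sTeX, (m_LaTeX,i), (m_sTeX,i))
    # from a fully disambiguated fragment S_sTeX.
    s_stex = normalize(s_stex)                  # step 2: standardization
    m_stex = re.findall(r"\$[^$]+\$", s_stex)   # step 3: symbolic expressions
    if not m_stex:
        return None                             # step 1: skip text-only sentences
    m_latex = [expand_stex(m) for m in m_stex]  # expand macros per expression
    s_latex = expand_stex(s_stex)               # and for the whole sentence
    return s_latex, s_stex, m_latex, m_stex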
Synthesizing Training Data In order to augment our datasets for supervised learning, we opted to exploit the MMT integration to synthesize additional training data.
For that, we aligned SMGLoM symbols with declarations in a strongly typed MMT archive, namely the Math-in-the-Middle (MitM) library (Müller, 2019). This allows us to randomly generate well-typed (and hence syntactically well-formed) terms in a typed setting, translate these along alignments to sTeX expressions and subsequently generate surrounding verbalizations.
The generating algorithm takes as input a set of symbols Sym (e.g. all MitM symbols for which an alignment to SMGLoM exists) and a starting symbol s ∈ Sym (e.g. nattimes; binary multiplication on natural numbers). It returns a random well-typed formal expression t which is guaranteed to contain s. Afterwards, it is verbalized as an sTeX sentence using natural language fragments (a detailed description of the algorithm is given in Appendix A).
The synthesized sTeX sentences are then treated as above to augment our parallel training corpus.
As an evaluation dataset, we developed sTeX documents based on selected fragments of introductory sections from mathematics lecture notes, primarily containing basics such as set operations, number spaces, examples for proofs by induction, basic combinatorics, and definitions of common algebraic structures, containing 161 symbolic expressions in total.
3https://gl.mathhub.info/smglom
4https://gl.mathhub.info/MiKoMH
Importantly, these documents were written by hand, with a focus on featuring multiple symbols with the same symbolic representation, primarily the usual arithmetic operations on different number spaces.
Of the ≈ 100 SMGLoM symbols used therein, 92 were aligned with corresponding symbols in the MitM library and used as input symbols for synthesizing sentences, with 250 sentences per starting symbol (so as not to drown out the non-synthesized sentences), yielding 23,000 additional sentences.
Unlike the training datasets, the evaluation document was translated to plain LaTeX manually using the PDF as a reference, in order to avoid possible spurious patterns in automatically expanded sTeX.
6 STEX-ANNOTATING WITH MACHINE LEARNING AS AN NMT TASK
In the course of our experiments, we considered our disambiguation task as a machine translation (NMT) problem, the models for which have proven to be quite effective even beyond natural language translations (Clark et al., 2020). In fact, the autoformalization projects mentioned in Section 3, which are spiritually closest to our task, all used NMT models with positive results. There are, however, several aspects that distinguish a LaTeX-to-sTeX translation from similar translation tasks, which significantly affect the applicability of existing tools and hence our methodology.
First, unlike for the most popular formal systems, there is no large library of formalizations for the translation target. This leaves us with only a small dataset that (for the reasons outlined in Section 5) does not represent well the general distribution we would like to learn.
Second, translation is only relevant for specific fragments of an input text, namely the symbolic expressions; for the surrounding natural language texts, translation should be the identity. Nevertheless, the surrounding text usually contains critical information for disambiguation; e.g. without the surrounding context, it is impossible to disambiguate an expression a + b, since the symbol "+" could refer to any of dozens of addition operations.
Finally, depending on perspective, the domain language is a proper subset of the target language; or rather (since we want to avoid ambiguous expressions in sTeX) the domain and target language share both a basic grammar as well as a large amount of vocabulary (namely LLaTeX ∩ LsTeX) which e.g. subsumes natural English. For the domain language, large datasets are easily obtainable.
Our task could also be considered a text style transfer task; e.g. Yang et al. (2019) use pre-trained language models for text style transfer, roughly similar to (but more sophisticated than) our approach. While the datasets used therein are still considerably larger than ours, this might be a promising avenue for future improvements over our model." }, { "heading": "7 METHODOLOGY", "text": "Notably, sTeX macros reflect the syntax tree of an expression, so that on symbolic expressions alone, the representation of the target sequences is naturally analogous to those chosen in string-to-tree translations (Aharoni & Goldberg, 2017). Plain LaTeX however is not naturally amenable to a tree-structured representation, making tree-to-tree approaches (Chen et al., 2018) not easily applicable to our dataset.
Initial experiments using standard, dedicated NMT models with full sentences as input/output quickly proved to be ineffective due to the size of the training corpus, which was too small to cause these models to even generate syntactically correct LaTeX (e.g.
knowing to balance pairs of brackets) before overfitting on the training data. This makes it difficult to compare our approach to an informative baseline model.
Transformer language models (e.g. Devlin et al. (2018); Liu et al. (2019); Radford (2018); Radford et al. (2019); Clark et al. (2020)) allow us to leverage huge available corpora of plain LaTeX documents to train a model to "understand" both basic LaTeX syntax and mathematical terminology. Using those, we consequently do not need to rely on our small dataset for this base-level understanding. We can then approach learning sTeX annotations as a downstream task on a pre-trained transformer model. Consequently, we pre-trained a GPT2 (Radford et al., 2019) model on a large portion of available LaTeX sources of scientific papers from the preprint repository arxiv.org (6,673,950 entries of length 1,024 tokens). The model was trained from scratch in order to use a dedicated tokenizer trained on LaTeX directly (byte-level tokenizer; vocabulary size 32,000) rather than natural language alone.
In order to leverage the pretrained model for both source and target language5, we subsequently opted to fine-tune the GPT2 model on inputs of the form
SLaTeX <s> mLaTeX <s> msTeX <s>,
where <s> is a single-token separator.6 For example, for Figure 1 the training data contains fragments (normalized) such as:
Multiplication $\cdot$ computes the product $a\cdot b$ (also written as $ab$ or $a\times b$) of natural numbers $a$ and $b$. <s> $a\cdot b$ <s> $\nattimes[cdot]{a,b}$ <s>
We then use text generation on inputs of the form SLaTeX <s> mLaTeX <s> for translating, and stop generating after encountering <s>.
5Initial experiments using the pretrained model as the encoder component only showed improvements over randomly initialized encoder-decoder models, but ultimately still proved unsuitable due to the small dataset size.
6Inspired by http://jalammar.github.io/illustrated-gpt2/#part-3-beyond-language-modeling
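For concreteness, here is a sketch of this input format and of the translation step, written against the Hugging Face transformers API purely for illustration; the checkpoint path is a placeholder, and we assume (as in our setup) that <s> is a single token of the custom tokenizer:

from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("path/to/arxiv-gpt2")  # placeholder path
model = GPT2LMHeadModel.from_pretrained("path/to/arxiv-gpt2")  # placeholder path
SEP = "<s>"
sep_id = tok.convert_tokens_to_ids(SEP)

def training_example(s_latex, m_latex, m_stex):
    # one fine-tuning string per symbolic expression in a sentence
    return f"{s_latex} {SEP} {m_latex} {SEP} {m_stex} {SEP}"

def translate(s_latex, m_latex, max_new_tokens=128):
    prompt = f"{s_latex} {SEP} {m_latex} {SEP} "
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens,
                         eos_token_id=sep_id)  # stop at the separator
    text = tok.decode(out[0, ids.shape[1]:])   # decode only the continuation
    return text.split(SEP)[0].strip()          # the predicted m_sTeX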
By using one entry per symbolic expression, we obtain a dataset of 121,368 examples. The GPT2 model was fine-tuned on these for five epochs, resulting in an average training loss of 0.04 and yielding promising results on the evaluation set (see below). This approach has the following advantages:
1. It allows for using large datasets of generic LaTeX documents to learn basic syntactic rules and semantics of mathematical expressions beyond our small sTeX datasets.
2. We conjecture that this approach makes the model less sensitive to spurious patterns in the synthesized part of our dataset.
3. Adding new symbols to the SMGLoM and aligning them to (new or existent) symbols in the MitM library allows for immediately synthesizing training data, obviating the need to first obtain large amounts of data using the new symbol before the model can learn to use it.
4. The mere pretrained GPT2 model can be trained on additional downstream tasks, e.g. introducing macros for referencing mathematical concepts in natural language fragments." }, { "heading": "8 EVALUATION AND RESULTS", "text": "The traditional evaluation metrics (loss during evaluation, perplexity, BLEU) are somewhat difficult and/or meaningless to apply in our situation, since 1. the returned tokens and the provided label tokens might differ in semantically irrelevant ways (e.g. $a+b$ vs. $a + b$), and 2. loss/perplexity would be evaluated during a forward pass in a next-token prediction task on a token-by-token basis, which would retroactively "correct" errors in prediction that would otherwise yield a completely wrong result.
Consequently, we opted for a plurality of evaluation strategies. Let SF be the returned sentence of our model on an input SLaTeX with the correct label SsTeX. Then on our evaluation set we get:
1. SF ∈ L for 96.9% of inputs,
2. SLaTeX ∈ LATEX(SF) for 64.0% of inputs,
3. SF ∈ LsTeX for 60.2% of inputs, and
4. SF = SsTeX for 47.2% of inputs.
In comparison, using traditional NMT models such as Luong et al. (2017); Vaswani et al. (2017) we effectively obtained 0% success rates for all of the above. Additional evaluation techniques exploiting the MMT integration are described in Appendix B.
Figure 2 shows a few examples where our model "failed" in interesting ways. As the first and fourth examples show, the model seems to consistently fail to replace "=" by the intended macro \eq, a failure that LaTeXML can recover when converting to OMDOC, but one that also regularly occurs in the training data. Similarly, \ldots often leads to wrong translations: The first example shows that the model simply dropped \ldots, using a generic set constructor macro \set rather than \setdots, the one specifically intended for sets ending in ellipses.
In the second example, the model seems to introduce a nonsensical additional argument for the \foral macro. Notably, the expression ∀x ∈ A.P can also be achieved using the dedicated macro \foralS{x}{A}{P}. Seemingly, the model chose the macro \foral, and the arguments for the \foralS macro, yielding a wrong translation that generates a wrong pdf output, while being "semantically almost correct".
In the third example, the model confuses the macro \setst (for set comprehension) with a more complex macro \bsetst (for set comprehension with a complex pattern on the left side). Additionally, it confuses \sseteq (for inclusive subsets x ⊆ A) with \sset (for generic subsets x ⊂ A), duplicating the first argument and moving the intended argument A outside the scope of the macro.
Example four is interesting in that the model correctly identifies the arithmetic operations as those on the natural numbers, but spuriously inserts an additive term \natplus{...,4,5}; this is likely an artifact from the left-hand side of the equation. Interestingly, these kinds of artifacts occur more than once in our evaluation set." }, { "heading": "9 CONCLUSION", "text": "We have proposed the task of disambiguating symbolic expressions in informal STEM documents and defined this task formally. This allows for annotating informal documents semantically, and further processing them using tools that support such annotated documents (e.g. MMT). We discussed the specificity of this task and what separates it from other NMT problems. We developed a dataset for this task and presented an approach that yields promising results, especially in light of the size of the dataset. In particular, the presented approach points to the efficacy of using transformer models pretrained on generic LaTeX documents.
In the future, we plan to combine the proposed symbolic disambiguation approach with an autoformalization framework. This way we aim to achieve better results for end-to-end formalization of informal mathematical documents. Furthermore, more promising results for the currently proposed task could be obtained by reintegrating the proposed models into an encoder-decoder NMT model."
}, { "heading": "ACKNOWLEDGMENTS", "text": "The first author and this work were supported by a postdoc fellowship of the German Academic Exchange Service (DAAD).\nThe second author is supported by ERC starting grant no. 714034 SMART" }, { "heading": "A SYNTHESIZING TRAINING DATA", "text": "The generating algorithm takes as input a set of symbols Sym (e.g. all MitM-symbols for which an alignment to SMGLoM exists) and a starting symbol s ∈ Sym (e.g. nattimes; binary multiplication on natural numbers). The algorithm then proceeds as follows:\n1. If s : T has a (simple or dependent) function type, we fill in the required arguments. For s =nattimes, our type is T =Nat→Nat→Nat, hence we need to find two arguments s1, s2 of type Nat. For each si of required type Ti we proceed as follows:\n(a) With probability pvar, we introduce a new variable v : Ti from a list of allowed variable names (which include variants such as a, a′, a0 etc.) and let si := v. (b) With probability pfun, we pick a symbol f ∈ Sym with a function type with return type Ti (e.g. for Ti =Nat, we can pick natplus). In that case, we let s := f , recurse, and set si as the result. (c) With probability pconst = 1 − pvar − pfun, we pick a constant symbol c ∈ Sym of type Ti (e.g. for Ti =Nat we can pick 0) and return si := c.\nIn order to avoid stack overflows, we reduce pfun in each iteration by a certain factor < 1. As to not overuse certain symbols, we scale pfun and pconst with the number of respectively suitable symbols available; if Sym contains no suitable function or constant symbols, we let pfun = 0 (and/or pconst = 0, respectively).\n2. If s : T does not have a function type (or all its parameters have been filled in 1.), then s is well-typed and we return s with probability 1− pup. With probability pup, we instead pick a new symbol sf ∈ S of some function type such that some i-th parameter type of sf is T . In that case, we let si := s and s := sf and recurse. Again, in order to avoid stack overflows we reduce pup by some factor with each iteration.\nThe algorithm also takes subtyping into account, e.g. whenever a term of type Real is required, terms of type Int or Nat are used with some probability.\nIn order to obtain a sentence in the sense of Section 5 providing context for disambiguation, we first translate t along alignments to SMGLoM (using a random \\symvariant), collect the set V of all free variables of t and verbalize their types. For that, we associate each type with a set of verbalizations from which we choose randomly to produce a sentence that introduces the variables before using them in the generated expression. Figure 3 shows a few example verbalizations for a variable x of type Nat and generated sentences for the input symbol s =realuminus; the negation on real numbers.\nThe verbalizations are categorized as prefixed (e.g. “a natural number n”) or suffixed (e.g. “n a natural number”), and singular or plural, and picked according to the number of variables of the same type and the surrounding sentence, which is also picked at random (e.g. “Assume we have ...” uses prefixed, whereas “Let ...” uses suffixed)." }, { "heading": "B EVALUATION TACTICS", "text": "For every LATEX input SLATEX, expected label SsTEX and returned sentence SR, we employ the following strategies, the results of which are summarized in Figure 4:\nislatex We parse SR into an AST. Success implies that SR is syntactically valid LATEX. This might fail for “minor” reasons such as a missing closing bracket. 
" }, { "heading": "B EVALUATION TACTICS", "text": "For every LaTeX input SLaTeX, expected label SsTeX and returned sentence SR, we employ the following strategies, the results of which are summarized in Figure 4:
islatex We parse SR into an AST. Success implies that SR is syntactically valid LaTeX. This might fail for "minor" reasons such as a missing closing bracket. It might yield false positives in cases where macros (not explicitly considered by our parser) occurring in SR have a wrong number of arguments. All subsequent evaluation strategies require islatex to succeed.
stexcheck We heuristically check whether SR is in LsTeX; unlike islatex, this requires that all sTeX macros occurring in SR have the right number of arguments. Success does not tell us that the input has been disambiguated correctly, but does imply that it has been disambiguated at all. False negatives can occur if SR (and thus likely SLaTeX as well) contains complex variable names, or if SR contains e.g. an equality symbol "=" instead of the corresponding sTeX macro, which LaTeXML could recover.
eval_latex All sTeX macros occurring in SR are expanded and SR is normalized as described in Section 5. The result is string-compared to SLaTeX. Success thus implies that the notational presentation in the PDF output of SLaTeX and SR will coincide. False negatives can occur due to minor differences, e.g. in not strictly necessary brackets.
omdoc SR is translated to OMDOC using LaTeXML and imported into MMT. Success guarantees syntactic well-formedness of SR. Since both the LaTeXML-OMDOC export and the subsequent MMT import are somewhat brittle, this can easily lead to false negatives.
translated The import from omdoc is translated to the typed MitM library. This entails that all symbols used in SR are aligned with MitM symbols and SR is amenable to formal knowledge management services.
inferred The translation to MitM obtained from translated is type-checked by MMT by having its type inferred. Success guarantees that SR is well-typed. Notably, if SR is a mere variable (e.g. the expression $n$), it does not actually have an inferrable type, but succeeds trivially. This accounts for 60 of the entries in our evaluation set, i.e. 37%.
provided_stex Both the expected label SsTeX and SR are normalized and string-compared. Success implies that SR is definitely the correct translation. False negatives can easily occur, however, due to non-semantic differences between SsTeX and SR, such as bracketing, nested applications in SR (e.g. $\natplus{\natplus{a,b},c}$ vs. $\natplus{a,b,c}$), etc.
stex_as_omdoc SsTeX is translated to OMDOC via LaTeXML and directly compared to the OMDOC term obtained from omdoc. Like provided_stex, success implies that SR is correct, but it is more fault-tolerant with respect to the precise syntax of SR, while being less fault-tolerant due to the issues mentioned in omdoc.
The first three evaluations can always be applied; of the remaining ones, all but provided_stex require a working installation of LaTeXML and its sTeX plugin. The last two require a known correct translation.
A detailed log file on our evaluation document with the individual results for each input and evaluation is available in the associated git repository." } ]
2021
null
SP:c7ca1f4cc4801fa55cf90d97980f49bf144a1a4c
[ "The paper presents an algorithm for offline meta-learning, where tasks are drawn from a distribution and presented to a learner sequentially, the objective being to use accumulated knowledge in order to facilitate the learning of new tasks. The algorithm is motivated from the PAC-Bayes theory of generalization, extended in recent years to the meta-learning setup, and provides a new localized approach where the prior used for a novel task is allowed to depend on the data from the new task in addition to all previous tasks. They make use of a previously proposed local learning method (LCC) that leads to extra flexibility and to a tightening of the meta-learning bounds, and provide an algorithm based on these bounds from which a learning algorithm is derived based on minimization of the upper bound. Empirical results are provided demonstrating the efficacy of the method and comparing to other recent approaches. " ]
Meta-learning methods learn the meta-knowledge among various training tasks and aim to promote the learning of new tasks under the task similarity assumption. Such meta-knowledge is often represented as a fixed distribution; this, however, may be too restrictive to capture various specific task information because the discriminative patterns in the data may change dramatically across tasks. In this work, we aim to equip the meta learner with the ability to model and produce task-specific meta-knowledge and, accordingly, present a localized meta-learning framework based on the PAC-Bayes theory. In particular, we propose a Local Coordinate Coding (LCC) based prior predictor that allows the meta learner to generate local meta-knowledge for specific tasks adaptively. We further develop a practical algorithm with deep neural networks based on the bound. Empirical results on real-world datasets demonstrate the efficacy of the proposed method.
[]
[ { "authors": [ "Gregory Griffin", "Alex Holub", "Pietro Perona" ], "title": "Caltech-256 object category dataset", "venue": null, "year": 2007 }, { "authors": [ "Benjamin Guedj" ], "title": "A primer on pac-bayesian learning", "venue": "arXiv preprint arXiv:1901.05353,", "year": 2019 }, { "authors": [ "Guy Lever", "François Laviolette", "John Shawe-Taylor" ], "title": "Under review as a conference paper at ICLR", "venue": null, "year": 2021 }, { "authors": [ "Sebastian Thrun", "Lorien Pratt" ], "title": "Learning to learn", "venue": "Springer Science & Business Media,", "year": 2012 } ]
[ { "heading": null, "text": "Meta-learning methods learn the meta-knowledge among various training tasks1 and aim to promote the learning of new tasks under the task similarity assumption.2 Such meta-knowledge is often represented as a fixed distribution; this, however,3 may be too restrictive to capture various specific task information because the4 discriminative patterns in the data may change dramatically across tasks. In this5 work, we aim to equip the meta learner with the ability to model and produce6 task-specific meta knowledge and, accordingly, present a localized meta-learning7 framework based on the PAC-Bayes theory. In particular, we propose a Local8 Coordinate Coding (LCC) based prior predictor that allows the meta learner to9 generate local meta-knowledge for specific tasks adaptively. We further develop a10 practical algorithm with deep neural network based on the bound. Empirical results11 on real-world datasets demonstrate the efficacy of the proposed method.12\n1 INTRODUCTION13\nTask 1 Instances\nTask 2 Instances\nGlobal Meta-knowledge Localized Meta-knowledge\nAdaptation\nRecent years have seen a resurgence of interest in the14 field of meta-learning, or learning-to-learn (Thrun15 & Pratt, 2012), especially for empowering deep neu-16 ral networks with the capability of fast adapting to17 unseen tasks just as humans (Finn et al., 2017; Ravi18 & Larochelle, 2017). More concretely, the neural19 networks are trained from a sequence of datasets,20 associated with different tasks sampled from a meta-21 distribution (also called task environment (Baxter,22 2000; Maurer, 2005)). The principal aim of meta23 learner is to extract transferable meta-knowledge24 from observed tasks and facilitate the learning of25 new tasks sampled from the same meta-distribution.26 The performance is measured by the generalization27\ntasks: distinguishing motorcycle versus bicycle and distinguishing motorcycle versus car. Intuitively,45 each task uses distinct discriminative patterns and thus the desired meta-knowledge is required46 to extract these patterns simultaneously. It could be a challenging problem to represent it with a47 global hyperposterior since the most significant patterns in the first task could be irrelevant or even48 detrimental to the second task. Figure schematically illustrates this notion. Therefore, customized49 meta-knowledge such that the patterns are most discriminative for a given task is urgently desired.50 Can the meta-knowledge be adaptive to tasks? How can one achieve it? Intuitively, we could51 implement this idea by reformulating the meta-knowledge as a maping function. Leveraging the52 samples in the target task, the meta model produces tasks specific meta-knowledge.53\nNaturally yet interestingly, one can see quantitatively how customized prior knowledge improves54 generalization capability, in light of the PAC-Bayes literature on the data distribution dependent-priors55 (Catoni, 2007; Parrado-Hernández et al., 2012; Dziugaite & Roy, 2018). Specifically, PAC-Bayes56 bounds control the generalization error of Gibbs Classifiers. They usually depend on a tradeoff57 between the empirical error of the posterior Q and a KL-divergence term KL(Q‖P ), where P is the58 prior. Since this KL-divergence term forms part of the generalization bound and is typically large in59 standard PAC-Bayes approaches (Lever et al., 2013), the choice of posterior is constrained by the60 need to minimize the KL-divergence between prior P and posterior Q. 
Thus, choosing an appropriate61 prior for each task which is close to the related posterior could yield improved generalization bounds.62 This encourages the study of data distribution-dependent priors for the PAC-Bayes analysis and gives63 rise to principled approaches to localized PAC-Bayes analysis. Previous related work are mainly64 discussed in Appendix A.65\nInspired by this, we propose a Localized Meta-Learning (LML) framework by formulating meta-66 knowledge as a conditional distribution over priors. Given task data distribution, we allow a meta67 learner to adaptively generate an appropriate prior for a new task. The challenges of developing this68 model are three-fold. First, the task data distribution is not explicitly given, and our only perception69 for it is via the associated sample set. Second, it should be permutation invariant — the output of70 model should not change under any permutation of the elements in the sample set. Third, the learned71 model could be used for solving unseen tasks. To address these problems, we further develop a prior72 predictor using Local Coordinate Coding (LCC)(Yu et al., 2009). In particular, if the classifier in73 each task is specialized to a parametric model, e.g. deep neural network, the proposed LCC-based74 prior predictor predicts base model parameters using the task sample set. The main contributions75 include: (1) A localized meta-learning framework which provides a means to tighten the original76 PAC-Bayes meta-learning bound (Pentina & Lampert, 2014; Amit & Meir, 2018) by minimizing77 the task-complexity term by choosing data-dependent prior; (2) An LCC-based prior predictor, an78 implementation of conditional hyperposterior, which generates local meta-knowledge for specific79 task; (3) A practical algorithm for probabilistic deep neural networks by minimizing the bound80 (though the optimization method can be applied to a large family of differentiable models); (4)81 Experimental results which demonstrate improved performance over meta-learning method in this82 field.83\n2 PRELIMINARIES84\nOur prior predictor was implemented by Local Coordinate Coding (LCC). The LML framework85 was inspired by PAC-Bayes theory for meta learning. In this section we briefly review the related86 definitions and formulations.87\n2.1 LOCAL COORDINATE CODING88\nDefinition 1. (Lipschitz Smoothness Yu et al. (2009).) A function f(x) in Rd is a (α, β)-Lipschitz89 smooth w.r.t. a norm ‖ · ‖ if ‖f(x)−f(x′)‖ ≤ α‖x−x′‖ and ‖f(x′)−f(x)−∇f(x)>(x′−x)‖ ≤90 β‖x− x′‖2.91 Definition 2. (Coordinate Coding Yu et al. (2009).) A coordinate coding is a pair (γ, C), where92 C ⊂ Rd is a set of anchor points(bases), and γ is a map of x ∈ Rd to [γu(x)]u∈C ∈ R|C| such that93 ∑\nu γu(x) = 1. It induces the following physical approximation of x in Rd : x̄ = ∑ u∈C γu(x)u.94\nDefinition 3. (Latent Manifold Yu et al. (2009).) A subsetM ⊂ Rd is called a smooth manifold with an intrinsic dimension |C| := dM if there exists a constant cM such that given any x ∈ M,\nthere exists |C| anchor points u1(x), . . . ,u|C|(x) ∈ Rd so that ∀x′ ∈M:\ninf γ∈R|C| ‖x′ − x− |C|∑ j=1 γjuj(x)‖2 ≤ cM‖x′ − x‖22,\nwhere γ = [γ1, . . . , γ|C|]> are the local codings w.r.t. the anchor points.b95\nDefinition 2 and 3 imply that any point in Rd can be expressed as a linear combination of a set96 of anchor points. Later, we will show that a high dimensional nonlinear prior predictor can be97 approximated by a simple linear function w.r.t. 
" }, { "heading": "2.2 PAC-BAYES REGULAR META-LEARNING", "text": "In order to present the advances proposed in this paper, we recall some definitions in PAC-Bayes theory for single-task learning and meta-learning (Catoni, 2007; Baxter, 2000; Pentina & Lampert, 2014; Amit & Meir, 2018). In the context of classification, we assume all tasks share the same input space X, output space Y, space of classifiers (hypotheses) H ⊂ {h : X → Y} and loss function ℓ : Y × Y → [0, 1]. The meta learner observes n tasks in the form of sample sets S_1, ..., S_n. The number of samples in task i is denoted by m_i. Each observed task i consists of a set of i.i.d. samples S_i = {(x_j, y_j)}_{j=1}^{m_i}, which is drawn from a data distribution, S_i ∼ D_i^{m_i}. Following the meta-learning setup in (Baxter, 2000), we assume that each data distribution D_i is generated i.i.d. from the same meta-distribution τ. Let h(x) be the prediction for x; the goal of each task is to find a classifier h that minimizes the expected loss E_{(x,y)∼D} ℓ(h(x), y). Since the underlying 'true' data distribution D_i is unknown, the base learner receives a finite set of samples S_i and produces an "optimal" classifier h = A_b(S_i) with a learning algorithm A_b(·) that will be used to predict the labels of unseen inputs.
PAC-Bayes theory studies the properties of randomized classifiers, called Gibbs classifiers. Let Q be a posterior distribution over H. To make a prediction, the Gibbs classifier samples a classifier h ∈ H according to Q and then predicts a label with the chosen h. The expected error under data distribution D and the empirical error on the sample set S are then given by averaging over the distribution Q, namely er(Q) = E_{h∼Q} E_{(x,y)∼D} ℓ(h(x), y) and êr(Q) = E_{h∼Q} (1/m) ∑_{j=1}^m ℓ(h(x_j), y_j), respectively.
In the context of meta-learning, the goal of the meta learner is to extract meta-knowledge contained in the observed tasks that will be used as prior knowledge for learning new tasks. In each task, the prior knowledge P is in the form of a distribution over classifiers H. The base learner produces a posterior Q = A_b(S, P) over H based on a sample set S and a prior P. All tasks are learned through the same learning procedure. The meta learner treats the prior P itself as a random variable and assumes the meta-knowledge is in the form of a distribution over all possible priors. Let the hyperprior P be an initial distribution over priors; the meta learner uses the observed tasks to adjust its original hyperprior P into a hyperposterior Q during the learning process. Given this, the quality of the hyperposterior Q is measured by the expected task error of learning new tasks using priors generated from it, which is formulated as:
er(Q) = E_{P∼Q} E_{(D,m)∼τ, S∼D^m} er(Q = A_b(S, P)). (1)
Accordingly, the empirical counterpart of the above quantity is given by:
êr(Q) = E_{P∼Q} (1/n) ∑_{i=1}^n êr(Q = A_b(S_i, P)). (2)
" }, { "heading": "2.3 PAC-BAYES REGULAR META-LEARNING BOUND WITH GAUSSIAN RANDOMIZATION", "text": "Based on the above definitions, Pentina & Lampert (2014) and Amit & Meir (2018) present regular meta-learning PAC-Bayes generalization bounds w.r.t. the hyperposterior Q. Notably, the proof technique in Amit & Meir (2018) allows incorporating different single-task bounds. Considering the benefit of Catoni's bound (Catoni, 2007) (the minimization problem derived from the bound is a simple linear combination of the empirical risk plus a regularizer), we instantiate here a regular meta-learning bound with Gaussian randomization based on it. To make a fair comparison, we will later adopt the same Catoni bound to analyze the proposed LML framework. Particularly, the classifier h is parameterized as h_w with w ∈ R^{d_w}. The prior and posterior are distributions over the set of all possible parameters w. We choose both the prior P and the posterior Q to be spherical Gaussians, i.e. P = N(w^P, σ_w² I_{d_w}) and Q = N(w^Q, σ_w² I_{d_w}).
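A one-line reminder of why this parameterization is convenient: for spherical Gaussians with a shared variance, the PAC-Bayes KL term reduces to a scaled squared distance between the means, which is exactly the form the complexity terms in the bounds below take:
KL( N(w^Q, σ_w² I_{d_w}) ‖ N(w^P, σ_w² I_{d_w}) ) = ‖w^Q − w^P‖² / (2σ_w²).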
Moreover, this complexity term could be inevitably large and incur large generalization error.150\nNote that wQ is the mean of hyperposterior Q and this complexity term naturally indicates the151 divergence between the mean of prior wPi sampled from the hyperposterior Q and the mean of152 posterior wQi in each task. Therefore, we propose to adaptively choose the mean of prior w P i153\naccording to task i. It is obvious that the complexity term vanishes if we set wPi = w Q i , but the prior154 Pi in each task has to be chosen independently of the sample set Si. Fortunately, the PAC-Bayes155\ntheorem allows us to choose prior upon the data distribution Di. Therefore, we propose a prior156 predictor Φ : Dm → wP which receives task data distribution Dm and outputs the mean of prior157 wP . In this way, the generated priors could focus locally on those regions of model parameters that158 are of particular interest in solving specific tasks.159\nParticularly, the prior predictor is parameterized as Φv with v ∈ Rdv . We assume v to be a random160 variable distributed first according to the hyperprior P , which we reformulate as N (0, σ2vIdv), and161 later according to hyperposteriorQ, which we reformulate asN (vQ, σ2vIdv). Given a new task i, we162 first sample v from hyperposterior N (vQ, σ2vIdv) and estimate the mean of prior wPi by leveraging163 prior predictor wPi = Φv(D m i ). Then, the base learner utilizes the sample set Si and the prior164\nPi = N (wPi , σ2wIdw) to produce a mean posterior w Q i = Ab(Si, Pi), as shown in Figure 2(right).165\nTo make wP close to wQ in each task, what properties are the prior predictor is expected to exhibit?166 Importantly, it is required to (i) uncover the tight relationship between the sample set and model167 parameters. Intuitively, features and parameters yield similar local and global structures in their168 respective spaces in the classification problem. Features in the same category tend to be spatially169 clustered together while maintaining the separation between different classes. Take linear classifiers170 as an example, let wk be the parameters w.r.t. category k, the separability between classes is171 implemented as x · wk, which also explicitly encourages intra-class compactness. A reasonable172 choice of wk is to maximize the inner product distance with the input features in the same category173 and minimize the distance with the input features of the non-belonging categories. Besides, the prior174 predictor should be (ii) category-agnostic since it will be used continuously as new tasks and hence175 new categories become available. Lastly, it should be (iii) invariant under permutations of its inputs.176\n3.2 LCC-BASED PRIOR PREDICTOR177\nThere exists many implementations, such as set transformer (Lee et al., 2018), relation network (Rusu178 et al., 2019), task2vec(Achille et al., 2019), that satisfy the above conditions. We follow the idea of179 nearest class mean classifier (Mensink et al., 2013), which represents class parameter by averaging180 its feature embeddings. This idea has been explored in transductive few-shot learning problems (Snell181 et al., 2017; Qiao et al., 2018). Snell et al. (2017) learn a metric space across tasks such that when182 represented in this embedding, prototype (centroid) of each class can be used for label prediction183 in the new task. Qiao et al. 
In summary, the classification problem of each task is transformed into a generic metric-learning problem that is shared across tasks. Once this mapping has been learned on observed tasks, the structure-preserving property allows it to generalize easily to new tasks. Formally, consider each task as a $K$-class classification problem, and denote the parameters of the classifier in task $i$ as $\mathbf{w}_i = [\mathbf{w}_i[1], \ldots, \mathbf{w}_i[k], \ldots, \mathbf{w}_i[K]]$. The prior predictor for class $k$ can be defined as
$$
\mathbf{w}^P_i[k] = \Phi_{\mathbf{v}}(\mathcal{D}^{m_{ik}}_{ik}) = \mathbb{E}_{S_{ik} \sim \mathcal{D}^{m_{ik}}_{ik}} \frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \phi_{\mathbf{v}}(\mathbf{x}_j), \tag{5}
$$
where $\phi_{\mathbf{v}}(\cdot): \mathbb{R}^d \rightarrow \mathbb{R}^{d_w}$ is the feature embedding function, $m_{ik}$ is the number of samples belonging to category $k$, and $S_{ik}$ and $\mathcal{D}_{ik}$ are the sample set and data distribution for category $k$ in task $i$. We call this function the expected prior predictor. Since the data distribution $\mathcal{D}_{ik}$ is unknown and our only insight into $\mathcal{D}_{ik}$ is through the sample set $S_{ik}$, we approximate the expected prior predictor by its empirical counterpart. Note that if the prior predictor is relatively stable to perturbations of the sample set, the generated prior still reflects the underlying task data distribution rather than the data, resulting in a generalization bound that still holds, perhaps with smaller probability (Dziugaite & Roy, 2018). Formally, the empirical prior predictor is defined as
$$
\hat{\mathbf{w}}^P_i[k] = \hat{\Phi}_{\mathbf{v}}(S_{ik}) = \frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \phi_{\mathbf{v}}(\mathbf{x}_j). \tag{6}
$$
Although we could implement the embedding function $\phi_{\mathbf{v}}(\cdot)$ with a multilayer perceptron (MLP), both the input $\mathbf{x} \in \mathbb{R}^d$ and the model parameters $\mathbf{w} \in \mathbb{R}^{d_w}$ are high-dimensional, making the empirical prior predictor $\hat{\Phi}_{\mathbf{v}}(\cdot)$ difficult to learn. Inspired by the local coordinate coding (LCC) method, if the anchor points are sufficiently localized, the embedding $\phi_{\mathbf{v}}(\mathbf{x}_j)$ can be approximated by a linear function w.r.t. a set of codings $[\gamma_{\mathbf{u}}(\mathbf{x}_j)]_{\mathbf{u} \in C}$. Accordingly, we propose an LCC-based prior predictor, defined as
$$
\bar{\mathbf{w}}^P_i[k] = \bar{\Phi}_{\mathbf{v}}(S_{ik}) = \frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \sum_{\mathbf{u} \in C} \gamma_{\mathbf{u}}(\mathbf{x}_j)\, \phi_{\mathbf{v}}(\mathbf{u}), \tag{7}
$$
where $\phi_{\mathbf{v}}(\mathbf{u}) \in \mathbb{R}^{d_w}$ is the embedding of the corresponding anchor point $\mathbf{u} \in C$. As such, the parameters of the LCC-based prior predictor w.r.t. category $k$ can be represented as $\mathbf{v}_k = [\phi_{\mathbf{v}_k}(\mathbf{u}_1), \phi_{\mathbf{v}_k}(\mathbf{u}_2), \ldots, \phi_{\mathbf{v}_k}(\mathbf{u}_{|C|})]$. Lemma 1 quantifies the approximation error between the empirical prior predictor and the LCC-based prior predictor.

Lemma 1. (Empirical Prior Predictor Approximation) Given the definitions of $\hat{\mathbf{w}}^P_i[k]$ and $\bar{\mathbf{w}}^P_i[k]$ in Eq. (6) and Eq. (7), let $(\gamma, C)$ be an arbitrary coordinate coding on $\mathbb{R}^d$ and let $\phi_{\mathbf{v}}(\cdot)$ be an $(\alpha, \beta)$-Lipschitz smooth function. Then for all $\mathbf{x} \in \mathbb{R}^d$,
$$
\|\hat{\mathbf{w}}^P_i[k] - \bar{\mathbf{w}}^P_i[k]\| \leq \mathcal{O}_{\alpha,\beta}(\gamma, C), \tag{8}
$$
where $\mathcal{O}_{\alpha,\beta}(\gamma, C) = \frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \big(\alpha \|\mathbf{x}_j - \bar{\mathbf{x}}_j\| + \beta \sum_{\mathbf{u} \in C} \|\bar{\mathbf{x}}_j - \mathbf{u}\|^2\big)$ and $\bar{\mathbf{x}}_j = \sum_{\mathbf{u} \in C} \gamma_{\mathbf{u}}(\mathbf{x}_j)\, \mathbf{u}$.

See Appendix D.1 for the proof. Lemma 1 shows that a good LCC-based prior predictor should make $\mathbf{x}$ close to its physical approximation $\bar{\mathbf{x}}$ and should be localized. The complexity of the LCC coding scheme depends on the number of anchor points $|C|$.
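As a concrete illustration of Eq. (7), here is a minimal PyTorch sketch of the LCC-based prior predictor; it assumes the codings $\gamma$ of the class-$k$ samples over the anchor set and the anchor embeddings $\phi_{\mathbf{v}}(\mathbf{u})$ are already available, and all names are ours, not the paper's:

```python
# Sketch (ours) of the LCC-based prior predictor in Eq. (7).
import torch

def lcc_prior_predictor(gamma: torch.Tensor, anchor_emb: torch.Tensor) -> torch.Tensor:
    """gamma: (m_ik, |C|) codings of class-k samples, rows sum to 1;
    anchor_emb: (|C|, d_w) embeddings phi_v(u) of the anchor points.
    Returns the class-k prior mean w_bar^P_i[k] of shape (d_w,)."""
    # Each sample's embedding is approximated by its coding-weighted
    # combination of anchor embeddings; the prior is their average.
    return (gamma @ anchor_emb).mean(dim=0)
```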
We follow the optimization method of Yu et al. (2009) to find the coordinate coding $(\gamma, C)$; it is presented in Appendix B.

3.3 PAC-BAYES LOCALIZED META-LEARNING BOUND WITH GAUSSIAN RANDOMIZATION

To derive a PAC-Bayes generalization bound for localized meta-learning, we first bound the approximation error between the expected prior predictor and the LCC-based prior predictor.

Lemma 2. Given the definitions of $\mathbf{w}^P$ and $\bar{\mathbf{w}}^P$ in Eq. (5) and (7), let $\mathcal{X}$ be a compact set with radius $R$, i.e., $\forall \mathbf{x}, \mathbf{x}' \in \mathcal{X}$, $\|\mathbf{x} - \mathbf{x}'\| \leq R$. For any $\delta \in (0, 1]$, with probability $\geq 1 - \delta$ we have
$$
\|\mathbf{w}^P - \bar{\mathbf{w}}^P\|^2 \leq \sum_{k=1}^{K} \Big(\frac{\alpha R}{\sqrt{m_{ik}}} \Big(1 + \sqrt{\tfrac{1}{2} \log \tfrac{1}{\delta}}\Big) + \mathcal{O}_{\alpha,\beta}(\gamma, C)\Big)^2.
$$

See Appendix D.2 for the proof. Lemma 2 shows that the approximation error between the expected prior predictor and the LCC-based prior predictor depends on (i) the concentration of the prior predictor and (ii) the quality of the LCC coding scheme. The first term implies that the number of samples per category should be larger for better approximation, which is consistent with results on estimating the center of mass (Cristianini & Shawe-Taylor, 2004). Based on Lemma 2, using the same Catoni's bound, we have the following PAC-Bayes LML bound.

Theorem 2. Consider the localized meta-learning framework. Given the hyperprior $\mathcal{P} = \mathcal{N}(\mathbf{0}, \sigma_v^2 I_{d_v})$, then for any hyperposterior $\mathcal{Q}$, any $c_1, c_2 > 0$ and any $\delta \in (0, 1]$, with probability $\geq 1 - \delta$ we have
$$
\begin{aligned}
er(\mathcal{Q}) \leq\ & c'_1 c'_2\, \hat{er}(\mathcal{Q}) + \Big(\sum_{i=1}^{n} \frac{c'_1 c'_2}{2 c_2 n m_i \sigma_v^2} + \frac{c'_1}{2 c_1 n \sigma_v^2}\Big) \|\mathbf{v}^{\mathcal{Q}}\|^2 + \sum_{i=1}^{n} \frac{c'_1 c'_2}{c_2 n m_i \sigma_w^2} \big\|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_i)\big\|^2 \\
& + \sum_{i=1}^{n} \frac{c'_1 c'_2}{c_2 n m_i \sigma_w^2} \Bigg(\frac{1}{\sigma_w^2} \sum_{k=1}^{K} \Big(\frac{\alpha R}{\sqrt{m_{ik}}} \Big(1 + \sqrt{\tfrac{1}{2} \log \tfrac{4n}{\delta}}\Big) + \mathcal{O}_{\alpha,\beta}(\gamma, C)\Big)^2 + d_w K \Big(\frac{\sigma_v}{\sigma_w}\Big)^2 \Bigg) \\
& + \sum_{i=1}^{n} \frac{c'_1 c'_2}{c_2 n m_i \sigma_w^2} \log \frac{4n}{\delta} + \frac{c'_1}{2 c_1 n \sigma_v^2} \log \frac{2}{\delta},
\end{aligned} \tag{9}
$$
where $c'_1 = \frac{c_1}{1 - e^{-c_1}}$ and $c'_2 = \frac{c_2}{1 - e^{-c_2}}$. To get a better understanding, we further simplify the notation and obtain
$$
er(\mathcal{Q}) \leq c'_1 c'_2\, \hat{er}(\mathcal{Q}) + \Big(\sum_{i=1}^{n} \frac{c'_1 c'_2}{2 c_2 n m_i \sigma_v^2} + \frac{c'_1}{2 c_1 n \sigma_v^2}\Big) \|\mathbf{v}^{\mathcal{Q}}\|^2 + \underbrace{\sum_{i=1}^{n} \frac{c'_1 c'_2}{c_2 n m_i \sigma_w^2} \big\|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_i)\big\|^2}_{\text{task-complexity}} + \mathrm{const}(\alpha, \beta, R, \delta, n, m_i, \sigma_v, \sigma_w, c_1, c_2). \tag{10}
$$

See Appendix D.3 for the proof. As in the regular meta-learning bound of Theorem 1, the expected task error $er(\mathcal{Q})$ is bounded by the empirical task error $\hat{er}(\mathcal{Q})$ plus task-complexity and environment-complexity terms. The main innovation here is to exploit the potential to choose the mean of the prior $\mathbf{w}^P$ adaptively, based on the task data $S$. Intuitively, if the LCC-based prior predictor is chosen appropriately, it narrows the divergence between the mean of the prior $\mathbf{w}^P_i$ sampled from the hyperposterior $\mathcal{Q}$ and the mean of the posterior $\mathbf{w}^Q_i$ in each task. The bound can therefore be tighter than those of regular meta-learning (Pentina & Lampert, 2014; Amit & Meir, 2018). Our empirical study in Section 4 illustrates that the algorithm derived from this bound reduces the task-complexity and thus achieves better performance than methods derived from regular meta-learning bounds.

When choosing the number of anchor points $|C|$, there is a trade-off between the accuracy and the simplicity of the prior predictor. Increasing $|C|$ increases the expressive power of $\bar{\Phi}_{\mathbf{v}}(\cdot)$ and reduces the task-complexity term $\|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S)\|^2$; at the same time, it increases the environment-complexity term $\|\mathbf{v}^{\mathcal{Q}}\|^2$ and loosens the bound. If we set $|C| = 1$, the framework degenerates to regular meta-learning.

3.4 LOCALIZED META-LEARNING ALGORITHM

Since the bound in (9) holds uniformly w.r.t. $\mathcal{Q}$, the guarantees of Theorem 2 also hold for the learned hyperposterior $\mathcal{Q} = \mathcal{N}(\mathbf{v}^{\mathcal{Q}}, \sigma_v^2 I_{d_v})$, so the mean of the prior $\mathbf{w}^P$ sampled from the learned hyperposterior works well for future tasks. The PAC-Bayes localized meta-learning bound (9) can be compactly written as $\sum_{i=1}^{n} \mathbb{E}_{\mathbf{v}}\, \hat{er}_i(Q_i = A_b(S_i, P)) + \alpha_1 \|\mathbf{v}^{\mathcal{Q}}\|^2 + \sum_{i=1}^{n} \frac{\alpha_2}{m_i} \|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_i)\|^2$, where $\alpha_1, \alpha_2 > 0$ are hyperparameters. For task $i$, the learning algorithm $A_b(\cdot)$ can be formulated as $\mathbf{w}^{\star}_i = \arg\min_{\mathbf{w}^Q_i} \mathbb{E}_{\mathbf{v}}\, \hat{er}_i(Q_i = \mathcal{N}(\mathbf{w}^Q_i, \sigma_w^2 I_{d_w}))$. To make a fair comparison and to guarantee that the benefit of the proposed LML does not come from an improved optimization method, we follow the same learning algorithm as Amit & Meir (2018). Specifically, we jointly optimize the parameters $\mathbf{v}$ of the LCC-based prior predictor and the classifier parameters $\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_n$ of each task:
$$
\arg\min_{\mathbf{v}, \mathbf{w}_1, \ldots, \mathbf{w}_n} \sum_{i=1}^{n} \mathbb{E}_{\mathbf{v}}\, \hat{er}_i(\mathbf{w}_i) + \alpha_1 \|\mathbf{v}^{\mathcal{Q}}\|^2 + \sum_{i=1}^{n} \frac{\alpha_2}{m_i} \big\|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_i)\big\|^2. \tag{11}
$$
We can optimize $\mathbf{v}$ and $\mathbf{w}$ via mini-batch SGD; the details of the localized meta-learning algorithm are given in Appendix F. The expectation over the Gaussian distribution and its gradient can be efficiently estimated using the re-parameterization trick (Kingma & Welling, 2014; Rezende et al., 2014). For example, to sample $\mathbf{w}$ from the posterior $Q = \mathcal{N}(\mathbf{w}^Q, \sigma_w^2 I_{d_w})$, we first draw $\boldsymbol{\xi} \sim \mathcal{N}(\mathbf{0}, I_{d_w})$ and then apply the deterministic function $\mathbf{w}^Q + \boldsymbol{\xi} \odot \boldsymbol{\sigma}$, where $\odot$ is an element-wise multiplication.
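A schematic sketch, under our own simplifying assumptions, of one re-parameterized update of objective (11) is shown below. `prior_pred` and `task_loss` are hypothetical placeholders standing for $\bar{\Phi}_{\mathbf{v}}(\cdot)$ and the per-task cross-entropy; we approximate the expectations with a single sample and pass the mean of $\mathbf{v}$ to the prior predictor instead of sampling it:

```python
# Sketch (ours): one backward pass of the LML objective (11) using the
# re-parameterization trick for w ~ N(w^Q_i, sigma_w^2 I).
import torch

def lml_step(v_mean, w_means, tasks, prior_pred, task_loss,
             sigma_w=0.01, alpha1=0.01, alpha2=0.01):
    """tasks: list of (S_i, m_i); v_mean, w_means[i]: leaf tensors."""
    obj = alpha1 * v_mean.pow(2).sum()                      # ||v^Q||^2 term
    for i, (S_i, m_i) in enumerate(tasks):
        # one-sample re-parameterized estimate of E er_i
        w = w_means[i] + sigma_w * torch.randn_like(w_means[i])
        obj = obj + task_loss(w, S_i)
        # task-complexity regularizer ||w^Q_i - Phi_bar(S_i)||^2
        obj = obj + (alpha2 / m_i) * (w_means[i] - prior_pred(v_mean, S_i)).pow(2).sum()
    obj.backward()                                          # then take an SGD/Adam step
    return obj.detach()
```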
4 EXPERIMENTS

Datasets and Setup. We use CIFAR-100 and Caltech-256 in our experiments. CIFAR-100 (Krizhevsky, 2009) contains 60,000 images from 100 fine-grained categories and 20 coarse-level categories. As in Zhou et al. (2018), we use 64, 16, and 20 classes for meta-training, meta-validation, and meta-testing, respectively. Caltech-256 (Griffin et al., 2007) has 30,607 color images from 256 classes; we similarly split the dataset into 150, 56, and 50 classes for meta-training, meta-validation, and meta-testing. We consider the 5-way classification problem: each task is generated by randomly sampling 5 categories, and each category contains 50 samples. The base model uses the convolutional architecture of Finn et al. (2017), which consists of 4 convolutional layers, each with 32 filters, and a fully-connected layer on top mapping to the number of classes. Since high-dimensional data often lie on low-dimensional manifolds, we utilize an auto-encoder to extract the semantic information of the image data and then construct the LCC scheme on the embeddings. The parameters of the prior predictor and the base model are randomly perturbed in the form of Gaussian distributions.

We design two meta-learning environment settings to validate the efficacy of the proposed method. The first uses a pre-trained base model as an initialization, where the feature extractor is trained on all meta-training classes (64-class classification in the CIFAR-100 case). The second uses random initialization. We compare the proposed LML method with ML-PL (Pentina & Lampert, 2014), ML-AM (Amit & Meir, 2018), and ML-A, which is derived from Theorem 1. For all these methods, we use their main theorems on the generalization upper bound to derive the objective of the algorithm. To ensure a fair comparison, all approaches adopt the same network architecture and pre-trained feature extractor (more details can be found in Appendix E).
Results. In Figure 3, we show the average test error of learning a new task as a function of the number of training tasks, together with the standard deviation, in the two settings (with and without a pre-trained feature extractor). The performance of all methods continually improves as the number of training tasks increases, consistent with the generalization bounds: the complexity terms converge to zero as more tasks are observed. ML-A consistently outperforms ML-PL and ML-AM, since the single-task bound used in Theorem 1 (ML-A) converges at the rate $O(\frac{1}{m})$ while the bounds of ML-PL and ML-AM converge at the rate $O(\frac{1}{\sqrt{m}})$; this demonstrates the importance of using a tight generalization bound. Moreover, the proposed LML significantly outperforms the baselines, which validates the effectiveness of the LCC-based prior predictor and confirms that it is a more suitable representation of meta-knowledge than the traditional global hyperposterior used in ML-A, ML-AM, and ML-PL. Finally, we observe that all methods do better with a pre-trained feature extractor than with random initialization. The pre-trained feature extractor can be regarded as a data-dependent hyperprior: it is closer to the hyperposterior than a randomly initialized hyperprior, so it reduces the environment-complexity term and improves generalization performance.

In Figure 4(b), we show the divergence between the mean of the generated prior $\mathbf{w}^P$ from the meta model and the mean of the learned posterior $\mathbf{w}^Q$ for LML and ML-A. This further validates the effectiveness of the LCC-based prior predictor, which narrows this divergence term and thus tightens the bound. In Figure 4(a), we vary the number of anchor points $|C|$ in the LCC scheme from 4 to 256; the optimal value is around 64 in both datasets. This indicates that LML is sensitive to $|C|$, which affects the quality of the LCC-based prior predictor and hence the performance of LML.

5 CONCLUSION

This work contributes a novel localized meta-learning framework from both the theoretical and computational perspectives. To tailor meta-knowledge to each individual task, we formulate the meta model as a mapping function that leverages the samples of the target task and produces task-specific meta-knowledge as a prior. Quantitatively, this idea provides a means to theoretically tighten the PAC-Bayes meta-learning generalization bound. We propose an LCC-based prior predictor to output localized meta-knowledge using task information, and further develop a practical algorithm with deep neural networks by minimizing the generalization bound. An interesting topic for future work is to explore other principles for constructing the prior predictor and to apply the localized meta-learning framework to more realistic scenarios where tasks are sampled non-i.i.d. from an environment. Another challenging problem is to extend our techniques to derive localized meta-learning algorithms for regression and reinforcement learning problems.

REFERENCES

Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless C. Fowlkes, Stefano Soatto, and Pietro Perona. Task2vec: Task embedding for meta-learning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6430–6439, 2019.
Ron Amit and Ron Meir. Meta-learning by adjusting priors based on extended PAC-Bayes theory. In International Conference on Machine Learning, pp. 205–214, 2018.

Maria-Florina Balcan, Mikhail Khodak, and Ameet Talwalkar. Provable guarantees for gradient-based meta-learning. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, California, USA, pp. 424–433, 2019.

Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198, 2000.

Olivier Catoni. PAC-Bayesian supervised classification: The thermodynamics of statistical learning. Institute of Mathematical Statistics Lecture Notes - Monograph Series 56. IMS, Beachwood, OH, 2007.

Nello Cristianini and John Shawe-Taylor. Kernel Methods for Pattern Analysis, volume 173. Cambridge University Press, Cambridge, 2004.

Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, and Massimiliano Pontil. Incremental learning-to-learn with statistical guarantees. In Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, UAI 2018, Monterey, California, USA, pp. 457–466, 2018a.

Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, and Massimiliano Pontil. Learning to learn around a common mean. In Advances in Neural Information Processing Systems, pp. 10169–10179, 2018b.

Giulia Denevi, Carlo Ciliberto, Riccardo Grazzi, and Massimiliano Pontil. Learning-to-learn stochastic gradient descent with biased regularization. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, California, USA, pp. 1566–1575, 2019.

Gintare Karolina Dziugaite and Daniel M. Roy. Data-dependent PAC-Bayes priors via differential privacy. In Advances in Neural Information Processing Systems, pp. 8430–8441, 2018.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 1126–1135, 2017.

Tomer Galanti, Lior Wolf, and Tamir Hazan. A theoretical framework for deep transfer learning. Information and Inference: A Journal of the IMA, 5(2):159–209, 2016.

Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. 2007.

Benjamin Guedj. A primer on PAC-Bayesian learning. arXiv preprint arXiv:1901.05353, 2019.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 2015.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, 2014.

Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.

Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. arXiv preprint arXiv:1810.00825, 2018.

Guy Lever, François Laviolette, and John Shawe-Taylor. Tighter PAC-Bayes bounds through distribution-dependent priors. Theoretical Computer Science, 473:4–28, 2013.
Andreas Maurer. Algorithmic stability and meta-learning. Journal of Machine Learning Research, 6(Jun):967–994, 2005.

Thomas Mensink, Jakob Verbeek, Florent Perronnin, and Gabriela Csurka. Distance-based image classification: Generalizing to new classes at near-zero cost. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2624–2637, 2013.

Emilio Parrado-Hernández, Amiran Ambroladze, John Shawe-Taylor, and Shiliang Sun. PAC-Bayes bounds with data dependent priors. Journal of Machine Learning Research, 13(Dec):3507–3531, 2012.

Anastasia Pentina and Christoph Lampert. A PAC-Bayesian bound for lifelong learning. In International Conference on Machine Learning, pp. 991–999, 2014.

Siyuan Qiao, Chenxi Liu, Wei Shen, and Alan L. Yuille. Few-shot image recognition by predicting parameters from activations. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, pp. 7229–7238, 2018.

Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, 2017.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, pp. 1278–1286, 2014.

Omar Rivasplata, Csaba Szepesvari, John S. Shawe-Taylor, Emilio Parrado-Hernandez, and Shiliang Sun. PAC-Bayes bounds for stable algorithms with instance-dependent priors. In Advances in Neural Information Processing Systems, pp. 9214–9224, 2018.

Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 2019.

Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077–4087, 2017.

Sebastian Thrun and Lorien Pratt. Learning to Learn. Springer Science & Business Media, 2012.

Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pp. 3630–3638, 2016.

Risto Vuorio, Shao-Hua Sun, Hexiang Hu, and Joseph J. Lim. Toward multimodal model-agnostic meta-learning. arXiv preprint arXiv:1812.07172, 2018.

Xin Wang, Fisher Yu, Ruth Wang, Trevor Darrell, and Joseph E. Gonzalez. TAFE-Net: Task-aware feature embeddings for low shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1831–1840, 2019.

Kai Yu, Tong Zhang, and Yihong Gong. Nonlinear learning using local coordinate coding. In Advances in Neural Information Processing Systems, pp. 2223–2231, 2009.

Fengwei Zhou, Bin Wu, and Zhenguo Li. Deep meta-learning: Learning to learn in the concept space. arXiv preprint arXiv:1802.03596, 2018.

Luisa M. Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast context adaptation via meta-learning. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, California, USA, pp. 7693–7702, 2019.
Supplementary Materials for Localized Meta-Learning: A PAC-Bayes Analysis for Meta-Learning Beyond Global Prior

This supplementary document contains a discussion of previous work, the technical proofs of the theoretical results, and details of the experiments. It is structured as follows. Appendix A gives a detailed discussion of previous work. Appendix B presents the optimization method for LCC. Appendix C presents the notation for the prior predictor. Appendix D gives the proofs of the main results: Appendices D.1 and D.2 bound the approximation error of the LCC-based prior predictor against the empirical and expected prior predictors, respectively, and are used in the proof of Theorem 2; Appendix D.3 proves the PAC-Bayes generalization bound of localized meta-learning (Theorem 2), and Appendix D.4 proves the PAC-Bayes generalization bound of regular meta-learning (Theorem 1). Details of the experiments and further empirical results are presented in Appendix E. Finally, we summarize the localized meta-learning algorithm in Appendix F.

A RELATED WORK

Meta-Learning. The meta-learning literature commonly considers the empirical task error by directly optimizing a loss of the meta learner across the training tasks. This has recently been applied successfully in a variety of models for few-shot learning (Ravi & Larochelle, 2017; Snell et al., 2017; Finn et al., 2017; Vinyals et al., 2016). Although Vuorio et al. (2018), Rusu et al. (2019), Zintgraf et al. (2019), and Wang et al. (2019) consider task adaptation when using meta-knowledge for specific tasks, none of these works is based on generalization error bounds, in contrast to ours. Meta-learning in the online setting has regained attention recently (Denevi et al., 2018b;a; 2019; Balcan et al., 2019), where online-to-batch conversion results can imply generalization bounds. Galanti et al. (2016) analyze transfer learning in neural networks with PAC-Bayes tools. Most related to our work are Pentina & Lampert (2014) and Amit & Meir (2018), which provide PAC-Bayes generalization bounds for the meta-learning framework. In contrast, neither work provides a principled way to derive localized meta-knowledge for specific tasks.

Localized PAC-Bayes Learning. There is a prosperous line of research on learning priors to improve PAC-Bayes bounds (Catoni, 2007; Guedj, 2019). Parrado-Hernández et al. (2012) showed that priors can be learned by splitting the available training data into two parts, one for learning the prior and one for learning the posterior. Lever et al. (2013) bounded the KL divergence by a term independent of the data distribution and derived an expression for the overall optimal prior, i.e., the prior distribution yielding the smallest bound value. More recently, Rivasplata et al. (2018) bounded the KL divergence by investigating the stability of the hypothesis, and Dziugaite & Roy (2018) optimized the prior term in a differentially private way. In summary, these methods construct quantities that reflect the underlying data distribution, rather than the sample set, and then choose the prior P based on these quantities. These works, however, are only applicable to single-task problems and cannot transfer knowledge across tasks in the meta-learning setting.

B OPTIMIZATION OF LCC

We minimize the inequality in (8) to obtain a set of anchor points. As in Yu et al. (2009), we simplify the localization error term by assuming $\bar{\mathbf{x}} = \mathbf{x}$, and then optimize the following objective:
$$
\arg\min_{\gamma, C} \sum_{i=1}^{n} \sum_{\mathbf{x}_j \in S_i} \alpha \|\mathbf{x}_j - \bar{\mathbf{x}}_j\|^2 + \beta \sum_{\mathbf{u} \in C} \|\mathbf{x}_j - \mathbf{u}\|^2 \quad \text{s.t. } \forall \mathbf{x}, \sum_{\mathbf{u} \in C} \gamma_{\mathbf{u}}(\mathbf{x}) = 1, \tag{12}
$$
where $\bar{\mathbf{x}} = \sum_{\mathbf{u} \in C} \gamma_{\mathbf{u}}(\mathbf{x})\, \mathbf{u}$. In practice, we update $C$ and $\gamma$ by alternately solving a LASSO problem and a least-squares regression problem, respectively.
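The following is a simplified sketch of this alternating scheme; rather than an exact LASSO and least-squares solver, it takes alternating Adam steps on the two blocks and projects the codings onto the sum-to-one constraint, so it should be read as an approximation of the procedure described above (all hyperparameter values are our assumptions, and we require $|C| \le N$):

```python
# Sketch (ours): alternating optimization of Eq. (12) with gradient steps.
import torch

def lcc_fit(X, n_anchors=64, steps=200, lr=1e-2, alpha=1.0, beta=0.01):
    """X: (N, d) data; returns codings gamma (N, n_anchors) and anchors C."""
    N, d = X.shape
    C = X[torch.randperm(N)[:n_anchors]].clone().requires_grad_(True)
    gamma = torch.full((N, n_anchors), 1.0 / n_anchors, requires_grad=True)
    opt_g = torch.optim.Adam([gamma], lr=lr)
    opt_c = torch.optim.Adam([C], lr=lr)
    for step in range(steps):
        opt_g.zero_grad(); opt_c.zero_grad()
        recon = alpha * (X - gamma @ C).pow(2).sum()                  # ||x - x_bar||^2
        local = beta * (X.unsqueeze(1) - C.unsqueeze(0)).pow(2).sum() # locality term
        (recon + local).backward()
        (opt_g if step % 2 == 0 else opt_c).step()                    # alternate blocks
        with torch.no_grad():       # crude projection onto the sum-to-one constraint,
            gamma.data.clamp_(min=1e-6)  # assuming nonnegative codings
            gamma.data /= gamma.data.sum(dim=1, keepdim=True)
    return gamma.detach(), C.detach()
```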
C NOTATIONS

Let $\phi_{\mathbf{v}}(\cdot): \mathbb{R}^d \rightarrow \mathbb{R}^{d_w}$ be the feature embedding function, $m_{ik}$ the number of samples belonging to category $k$, and $S_{ik}$ and $\mathcal{D}_{ik}$ the sample set and data distribution for category $k$ in task $i$, respectively. The expected prior predictor w.r.t. class $k$ in task $i$ is defined as
$$
\mathbf{w}^P_i[k] = \Phi_{\mathbf{v}}(\mathcal{D}^{m_{ik}}_{ik}) = \mathbb{E}_{S_{ik} \sim \mathcal{D}^{m_{ik}}_{ik}} \frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \phi_{\mathbf{v}}(\mathbf{x}_j).
$$
The empirical prior predictor w.r.t. class $k$ in task $i$ is defined as
$$
\hat{\mathbf{w}}^P_i[k] = \hat{\Phi}_{\mathbf{v}}(S_{ik}) = \frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \phi_{\mathbf{v}}(\mathbf{x}_j).
$$
The LCC-based prior predictor w.r.t. class $k$ in task $i$ is defined as
$$
\bar{\mathbf{w}}^P_i[k] = \bar{\Phi}_{\mathbf{v}}(S_{ik}) = \frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \sum_{\mathbf{u} \in C} \gamma_{\mathbf{u}}(\mathbf{x}_j)\, \phi_{\mathbf{v}}(\mathbf{u}).
$$

D THEORETICAL RESULTS

D.1 PROOF OF LEMMA 1

This lemma bounds the error between the empirical prior predictor $\hat{\mathbf{w}}^P_i[k]$ and the LCC-based prior predictor $\bar{\mathbf{w}}^P_i[k]$.

Lemma 1. Given the definitions of $\hat{\mathbf{w}}^P_i[k]$ and $\bar{\mathbf{w}}^P_i[k]$ in Eq. (6) and Eq. (7), let $(\gamma, C)$ be an arbitrary coordinate coding on $\mathbb{R}^d$ and $\phi_{\mathbf{v}}$ an $(\alpha, \beta)$-Lipschitz smooth function. Then for all $\mathbf{x} \in \mathbb{R}^d$,
$$
\|\hat{\mathbf{w}}^P_i[k] - \bar{\mathbf{w}}^P_i[k]\| \leq \frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \Big(\alpha \|\mathbf{x}_j - \bar{\mathbf{x}}_j\| + \beta \sum_{\mathbf{u} \in C} \|\bar{\mathbf{x}}_j - \mathbf{u}\|^2\Big) = \mathcal{O}_{\alpha,\beta}(\gamma, C), \tag{13}
$$
where $\bar{\mathbf{x}}_j = \sum_{\mathbf{u} \in C} \gamma_{\mathbf{u}}(\mathbf{x}_j)\, \mathbf{u}$.

Proof. Let $\bar{\mathbf{x}}_j = \sum_{\mathbf{u} \in C} \gamma_{\mathbf{u}}(\mathbf{x}_j)\, \mathbf{u}$. We have
$$
\begin{aligned}
\|\hat{\Phi}_{\mathbf{v}}(S_{ik}) - \bar{\Phi}_{\mathbf{v}}(S_{ik})\| &\leq \frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \Big\|\phi_{\mathbf{v}}(\mathbf{x}_j) - \sum_{\mathbf{u} \in C} \gamma_{\mathbf{u}}(\mathbf{x}_j)\, \phi_{\mathbf{v}}(\mathbf{u})\Big\| \\
&\leq \frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \Big(\|\phi_{\mathbf{v}}(\mathbf{x}_j) - \phi_{\mathbf{v}}(\bar{\mathbf{x}}_j)\| + \Big\|\sum_{\mathbf{u} \in C} \gamma_{\mathbf{u}}(\mathbf{x}_j) \big(\phi_{\mathbf{v}}(\mathbf{u}) - \phi_{\mathbf{v}}(\bar{\mathbf{x}}_j)\big)\Big\|\Big) \\
&= \frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \Big(\|\phi_{\mathbf{v}}(\mathbf{x}_j) - \phi_{\mathbf{v}}(\bar{\mathbf{x}}_j)\| + \Big\|\sum_{\mathbf{u} \in C} \gamma_{\mathbf{u}}(\mathbf{x}_j) \big(\phi_{\mathbf{v}}(\mathbf{u}) - \phi_{\mathbf{v}}(\bar{\mathbf{x}}_j) - \nabla \phi_{\mathbf{v}}(\bar{\mathbf{x}}_j)(\mathbf{u} - \bar{\mathbf{x}}_j)\big)\Big\|\Big) \\
&\leq \frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \Big(\|\phi_{\mathbf{v}}(\mathbf{x}_j) - \phi_{\mathbf{v}}(\bar{\mathbf{x}}_j)\| + \sum_{\mathbf{u} \in C} |\gamma_{\mathbf{u}}(\mathbf{x}_j)|\, \big\|\phi_{\mathbf{v}}(\mathbf{u}) - \phi_{\mathbf{v}}(\bar{\mathbf{x}}_j) - \nabla \phi_{\mathbf{v}}(\bar{\mathbf{x}}_j)(\mathbf{u} - \bar{\mathbf{x}}_j)\big\|\Big) \\
&\leq \frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \Big(\alpha \|\mathbf{x}_j - \bar{\mathbf{x}}_j\| + \beta \sum_{\mathbf{u} \in C} \|\bar{\mathbf{x}}_j - \mathbf{u}\|^2\Big) = \mathcal{O}_{\alpha,\beta}(\gamma, C).
\end{aligned}
$$
The first two inequalities hold by the triangle inequality. The equality holds since $\sum_{\mathbf{u} \in C} \gamma_{\mathbf{u}}(\mathbf{x}_j) = 1$ for all $\mathbf{x}_j$, so the first-order term $\nabla \phi_{\mathbf{v}}(\bar{\mathbf{x}}_j) \sum_{\mathbf{u} \in C} \gamma_{\mathbf{u}}(\mathbf{x}_j)(\mathbf{u} - \bar{\mathbf{x}}_j) = 0$ can be subtracted. The last inequality uses the assumption of $(\alpha, \beta)$-Lipschitz smoothness of $\phi_{\mathbf{v}}(\cdot)$. This implies the desired bound. ∎

This lemma demonstrates that the quality of the LCC approximation is bounded by two terms: the first term, $\|\mathbf{x}_j - \bar{\mathbf{x}}_j\|$, indicates that $\mathbf{x}$ should be close to its physical approximation $\bar{\mathbf{x}}$; the second term, $\|\bar{\mathbf{x}}_j - \mathbf{u}\|^2$, implies that the coding should be localized. According to the Manifold Coding Theorem in Yu et al. (2009), if the data points $\mathbf{x}$ lie on a compact smooth manifold $\mathcal{M}$, then for any $\epsilon > 0$ there exist anchor points $C \subset \mathcal{M}$ and a coding $\gamma$ such that
$$
\frac{1}{m_{ik}} \sum_{\mathbf{x}_j \in S_{ik}} \Big(\alpha \|\mathbf{x}_j - \bar{\mathbf{x}}_j\| + \beta \sum_{\mathbf{u} \in C} \|\bar{\mathbf{x}}_j - \mathbf{u}\|^2\Big) \leq \big[\alpha c_{\mathcal{M}} + (1 + 5\sqrt{d_{\mathcal{M}}})\beta\big]\, \epsilon^2. \tag{14}
$$
This shows that the approximation error of local coordinate coding depends on the intrinsic dimension of the manifold rather than on the dimension of the input.

D.2 PROOF OF LEMMA 2

In order to prove Lemma 2, we first introduce a relevant theorem.

Theorem 3. (Vector-valued extension of McDiarmid's inequality; Rivasplata et al., 2018) Let $X_1, \ldots, X_m \in \mathcal{X}$ be independent random variables and $f: \mathcal{X}^m \rightarrow \mathbb{R}^{d_w}$ a vector-valued mapping. If, for all $i \in \{1, \ldots, m\}$ and all $\mathbf{x}_1, \ldots, \mathbf{x}_m, \mathbf{x}'_i \in \mathcal{X}$, the function $f$ satisfies
$$
\sup_{\mathbf{x}_i, \mathbf{x}'_i} \|f(\mathbf{x}_{1:i-1}, \mathbf{x}_i, \mathbf{x}_{i+1:m}) - f(\mathbf{x}_{1:i-1}, \mathbf{x}'_i, \mathbf{x}_{i+1:m})\| \leq c_i, \tag{15}
$$
then $\mathbb{E}\|f(X_{1:m}) - \mathbb{E}[f(X_{1:m})]\| \leq \sqrt{\sum_{i=1}^{m} c_i^2}$, and for any $\delta \in (0, 1)$, with probability $\geq 1 - \delta$,
$$
\|f(X_{1:m}) - \mathbb{E}[f(X_{1:m})]\| \leq \sqrt{\sum_{i=1}^{m} c_i^2} + \sqrt{\frac{\sum_{i=1}^{m} c_i^2}{2} \log\Big(\frac{1}{\delta}\Big)}. \tag{16}
$$
The above theorem indicates that bounded differences in norm imply the concentration of $f(X_{1:m})$ around its mean in norm, i.e., $\|f(X_{1:m}) - \mathbb{E}[f(X_{1:m})]\|$ is small with high probability. We now bound the error between the expected prior predictor $\mathbf{w}^P_i$ and the empirical prior predictor $\hat{\mathbf{w}}^P_i$.

Lemma 3. Given the definitions of $\mathbf{w}^P_i[k]$ and $\hat{\mathbf{w}}^P_i[k]$ in (5) and (6), let $\mathcal{X}$ be a compact set with radius $R$, i.e., $\forall \mathbf{x}, \mathbf{x}' \in \mathcal{X}$, $\|\mathbf{x} - \mathbf{x}'\| \leq R$. For any $\delta \in (0, 1]$, with probability $\geq 1 - \delta$ we have
$$
\|\mathbf{w}^P_i[k] - \hat{\mathbf{w}}^P_i[k]\| \leq \frac{\alpha R}{\sqrt{m_{ik}}} \Big(1 + \sqrt{\tfrac{1}{2} \log\big(\tfrac{1}{\delta}\big)}\Big). \tag{17}
$$

Proof. By the definition of $\hat{\Phi}_{\mathbf{v}}(\cdot)$ in (6), for all points $\mathbf{x}_1, \ldots, \mathbf{x}_{j-1}, \mathbf{x}_{j+1}, \ldots, \mathbf{x}_{m_{ik}}, \mathbf{x}'_j$ in the sample set $S_{ik}$,
$$
\sup_{\mathbf{x}_j, \mathbf{x}'_j} \|\hat{\Phi}_{\mathbf{v}}(\mathbf{x}_{1:j-1}, \mathbf{x}_j, \mathbf{x}_{j+1:m_{ik}}) - \hat{\Phi}_{\mathbf{v}}(\mathbf{x}_{1:j-1}, \mathbf{x}'_j, \mathbf{x}_{j+1:m_{ik}})\| = \frac{1}{m_{ik}} \sup_{\mathbf{x}_j, \mathbf{x}'_j} \|\phi_{\mathbf{v}}(\mathbf{x}_j) - \phi_{\mathbf{v}}(\mathbf{x}'_j)\| \leq \frac{1}{m_{ik}} \sup_{\mathbf{x}_j, \mathbf{x}'_j} \alpha \|\mathbf{x}_j - \mathbf{x}'_j\| \leq \frac{\alpha R}{m_{ik}}, \tag{18}
$$
where the first inequality follows from the Lipschitz smoothness of $\phi_{\mathbf{v}}(\cdot)$ and the second from the definition of the domain $\mathcal{X}$. Applying Theorem 3, for any $\delta \in (0, 1]$, with probability $\geq 1 - \delta$,
$$
\|\mathbf{w}^P_i[k] - \hat{\mathbf{w}}^P_i[k]\| = \|\hat{\Phi}_{\mathbf{v}}(S_{ik}) - \mathbb{E}[\hat{\Phi}_{\mathbf{v}}(S_{ik})]\| \leq \frac{\alpha R}{\sqrt{m_{ik}}} \Big(1 + \sqrt{\tfrac{1}{2} \log\big(\tfrac{1}{\delta}\big)}\Big). \tag{19}
$$
This implies the bound. ∎

Lemma 3 shows that the bounded difference of the function $\hat{\Phi}_{\mathbf{v}}(\cdot)$ implies its concentration, which can further be used to bound the difference between the LCC-based prior predictor $\bar{\mathbf{w}}^P_i[k]$ and the expected prior predictor $\mathbf{w}^P_i[k]$. We now bound the error between the expected prior predictor $\mathbf{w}^P_i$ and the LCC-based prior predictor $\bar{\mathbf{w}}^P_i$.

Lemma 2. Given the definitions of $\mathbf{w}^P_i$ and $\bar{\mathbf{w}}^P_i$ in (5) and (7), let $\mathcal{X}$ be a compact set with radius $R$, i.e., $\forall \mathbf{x}, \mathbf{x}' \in \mathcal{X}$, $\|\mathbf{x} - \mathbf{x}'\| \leq R$. For any $\delta \in (0, 1]$, with probability $\geq 1 - \delta$ we have
$$
\|\mathbf{w}^P_i - \bar{\mathbf{w}}^P_i\|^2 \leq \sum_{k=1}^{K} \Big(\frac{\alpha R}{\sqrt{m_{ik}}} \big(1 + \sqrt{\tfrac{1}{2} \log\big(\tfrac{1}{\delta}\big)}\big) + \mathcal{O}_{\alpha,\beta}(\gamma, C)\Big)^2. \tag{20}
$$

Proof. By the definitions of $\mathbf{w}^P$, $\bar{\mathbf{w}}^P$, and $\hat{\mathbf{w}}^P$,
$$
\begin{aligned}
\|\mathbf{w}^P_i - \bar{\mathbf{w}}^P_i\|^2 &= \sum_{k=1}^{K} \|\mathbf{w}^P_i[k] - \bar{\mathbf{w}}^P_i[k]\|^2 = \sum_{k=1}^{K} \|\mathbb{E}[\hat{\Phi}_{\mathbf{v}}(S_{ik})] - \hat{\Phi}_{\mathbf{v}}(S_{ik}) + \hat{\Phi}_{\mathbf{v}}(S_{ik}) - \bar{\Phi}_{\mathbf{v}}(S_{ik})\|^2 \\
&\leq \sum_{k=1}^{K} \Big(\|\mathbb{E}[\hat{\Phi}_{\mathbf{v}}(S_{ik})] - \hat{\Phi}_{\mathbf{v}}(S_{ik})\|^2 + \|\hat{\Phi}_{\mathbf{v}}(S_{ik}) - \bar{\Phi}_{\mathbf{v}}(S_{ik})\|^2 + 2\, \|\mathbb{E}[\hat{\Phi}_{\mathbf{v}}(S_{ik})] - \hat{\Phi}_{\mathbf{v}}(S_{ik})\|\, \|\hat{\Phi}_{\mathbf{v}}(S_{ik}) - \bar{\Phi}_{\mathbf{v}}(S_{ik})\|\Big). \tag{21}
\end{aligned}
$$
Substituting Lemma 3 and Lemma 1 into the above inequality, we can derive
$$
\mathbb{P}_{S_{ik} \sim \mathcal{D}^{m_{ik}}_{ik}} \Bigg\{\|\mathbf{w}^P - \bar{\mathbf{w}}^P\|^2 \leq \sum_{k=1}^{K} \Big(\frac{\alpha R}{\sqrt{m_{ik}}} \big(1 + \sqrt{\tfrac{1}{2} \log\big(\tfrac{1}{\delta}\big)}\big) + \mathcal{O}_{\alpha,\beta}(\gamma, C)\Big)^2\Bigg\} \geq 1 - \delta. \tag{22}
$$
This gives the assertion. ∎

Lemma 2 shows that the approximation error between the expected prior predictor and the LCC-based prior predictor depends on the number of samples in each category and on the quality of the LCC coding scheme.

D.3 PROOF OF THEOREM 2

Theorem 2 (restated). Let $Q = \mathcal{N}(\mathbf{w}^Q, \sigma_w^2 I_{d_w})$ be the posterior of the base learner and $P = \mathcal{N}(\bar{\Phi}_{\mathbf{v}}(S), \sigma_w^2 I_{d_w})$ the prior. The mean of the prior is produced by the LCC-based prior predictor $\bar{\Phi}_{\mathbf{v}}(S)$ of Eq. (7), whose parameter $\mathbf{v}$ is sampled from the hyperposterior $\mathcal{Q} = \mathcal{N}(\mathbf{v}^{\mathcal{Q}}, \sigma_v^2 I_{d_v})$ of the meta learner. Given the hyperprior $\mathcal{P} = \mathcal{N}(\mathbf{0}, \sigma_v^2 I_{d_v})$, for any hyperposterior $\mathcal{Q}$, any $c_1, c_2 > 0$ and any $\delta \in (0, 1]$, with probability $\geq 1 - \delta$ we have the bound (9), restated here:
$$
\begin{aligned}
er(\mathcal{Q}) \leq\ & c'_1 c'_2\, \hat{er}(\mathcal{Q}) + \Big(\sum_{i=1}^{n} \frac{c'_1 c'_2}{2 c_2 n m_i \sigma_v^2} + \frac{c'_1}{2 c_1 n \sigma_v^2}\Big) \|\mathbf{v}^{\mathcal{Q}}\|^2 + \sum_{i=1}^{n} \frac{c'_1 c'_2}{c_2 n m_i \sigma_w^2} \big\|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_i)\big\|^2 \\
& + \sum_{i=1}^{n} \frac{c'_1 c'_2}{c_2 n m_i \sigma_w^2} \Bigg(\frac{1}{\sigma_w^2} \sum_{k=1}^{K} \Big(\frac{\alpha R}{\sqrt{m_{ik}}} \big(1 + \sqrt{\tfrac{1}{2} \log \tfrac{4n}{\delta}}\big) + \mathcal{O}_{\alpha,\beta}(\gamma, C)\Big)^2 + d_w K \Big(\frac{\sigma_v}{\sigma_w}\Big)^2 \Bigg) \\
& + \sum_{i=1}^{n} \frac{c'_1 c'_2}{c_2 n m_i \sigma_w^2} \log \frac{4n}{\delta} + \frac{c'_1}{2 c_1 n \sigma_v^2} \log \frac{2}{\delta},
\end{aligned} \tag{23}
$$
where $c'_1 = \frac{c_1}{1 - e^{-c_1}}$ and $c'_2 = \frac{c_2}{1 - e^{-c_2}}$.
We can simplify the notation and obtain
$$
er(\mathcal{Q}) \leq c'_1 c'_2\, \hat{er}(\mathcal{Q}) + \Big(\sum_{i=1}^{n} \frac{c'_1 c'_2}{2 c_2 n m_i \sigma_v^2} + \frac{c'_1}{2 c_1 n \sigma_v^2}\Big) \|\mathbf{v}^{\mathcal{Q}}\|^2 + \sum_{i=1}^{n} \frac{c'_1 c'_2}{c_2 n m_i \sigma_w^2} \big\|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_i)\big\|^2 + \mathrm{const}(\alpha, \beta, R, \delta, n, m_i). \tag{24}
$$

Proof. Our proof contains two steps. First, we bound the error within the observed tasks due to observing a limited number of samples; then we bound the error at the task-environment level due to observing a finite number of tasks. Both steps use Catoni's classical PAC-Bayes bound (Catoni, 2007) to measure the error. We state it here in general notation.

Theorem 4. (Classical PAC-Bayes bound, general notation) Let $\mathcal{X}$ be a sample space, $\mathbb{X}$ a distribution over $\mathcal{X}$, and $\mathcal{F}$ a hypothesis space of functions over $\mathcal{X}$. Define a loss function $g(f, X): \mathcal{F} \times \mathcal{X} \rightarrow [0, 1]$ and let $X_1^G \triangleq \{X_1, \ldots, X_G\}$ be a sequence of $G$ independent random variables distributed according to $\mathbb{X}$. Let $\pi$ be a prior distribution over $\mathcal{F}$ (which must not depend on the samples $X_1, \ldots, X_G$). For any $\delta \in (0, 1]$, the following bound holds uniformly for all posterior distributions $\rho$ over $\mathcal{F}$ (even sample-dependent ones):
$$
\mathbb{P}_{X_1^G \sim_{\text{i.i.d.}} \mathbb{X}} \Bigg\{\mathbb{E}_{X \sim \mathbb{X}}\, \mathbb{E}_{f \sim \rho}\, g(f, X) \leq \frac{c}{1 - e^{-c}} \Bigg[\frac{1}{G} \sum_{g=1}^{G} \mathbb{E}_{f \sim \rho}\, g(f, X_g) + \frac{KL(\rho \| \pi) + \log \frac{1}{\delta}}{G \cdot c}\Bigg], \forall \rho\Bigg\} \geq 1 - \delta. \tag{25}
$$

First step. We use Theorem 4 to bound the generalization error in each of the observed tasks. Let $i \in \{1, \ldots, n\}$ index the tasks. For task $i$, we make the following substitutions in Catoni's PAC-Bayes bound. Specifically, $X_g \triangleq (\mathbf{x}_{ij}, y_{ij})$ and $G \triangleq m_i$ denote the samples, and $\mathbb{X} \triangleq \mathcal{D}_i$ denotes the data distribution. We instantiate the hypothesis with a hierarchical model $f \triangleq (\mathbf{v}, \mathbf{w})$, where $\mathbf{v} \in \mathbb{R}^{d_v}$ and $\mathbf{w} \in \mathbb{R}^{d_w}$ are the parameters of the meta learner (prior predictor) $\Phi_{\mathbf{v}}(\cdot)$ and the base learner $h(\cdot)$, respectively. The loss function considers only the base learner: $g(f, X) \triangleq \ell(h_{\mathbf{w}}(\mathbf{x}), y)$. The prior over the model parameters is $\pi \triangleq (\mathcal{P}, P) \triangleq (\mathcal{N}(\mathbf{0}, \sigma_v^2 I_{d_v}), \mathcal{N}(\mathbf{w}^P, \sigma_w^2 I_{d_w}))$, a Gaussian distribution (hyperprior of the meta learner) centered at $\mathbf{0}$ and a Gaussian distribution (prior of the base learner) centered at $\mathbf{w}^P$, respectively. We set the posterior to $\rho \triangleq (\mathcal{Q}, Q) \triangleq (\mathcal{N}(\mathbf{v}^{\mathcal{Q}}, \sigma_v^2 I_{d_v}), \mathcal{N}(\mathbf{w}^Q, \sigma_w^2 I_{d_w}))$, a Gaussian distribution (hyperposterior of the meta learner) centered at $\mathbf{v}^{\mathcal{Q}}$ and a Gaussian distribution (posterior of the base learner) centered at $\mathbf{w}^Q$. By Theorem 4, the generalization bound holds for any posterior distribution, including the one generated in our localized meta-learning framework. Specifically, we first sample $\mathbf{v}$ from the hyperposterior $\mathcal{N}(\mathbf{v}^{\mathcal{Q}}, \sigma_v^2 I_{d_v})$ and estimate $\mathbf{w}^P$ with the expected prior predictor, $\mathbf{w}^P = \Phi_{\mathbf{v}}(\mathcal{D})$. The base learner algorithm $A_b(S, P)$ uses the sample set $S$ and the prior $P = \mathcal{N}(\mathbf{w}^P, \sigma_w^2 I_{d_w})$ to produce a posterior $Q = A_b(S, P) = \mathcal{N}(\mathbf{w}^Q, \sigma_w^2 I_{d_w})$. We then sample the base learner parameters $\mathbf{w}$ from the posterior $\mathcal{N}(\mathbf{w}^Q, \sigma_w^2 I_{d_w})$ and compute the incurred loss $\ell(h_{\mathbf{w}}(\mathbf{x}), y)$. On the whole, the meta-learning algorithm $A_m(S_1, \ldots, S_n, \mathcal{P})$ observes a series of tasks $S_1, \ldots, S_n$ and adjusts its hyperprior $\mathcal{P} = \mathcal{N}(\mathbf{v}^P, \sigma_v^2 I_{d_v})$ into the hyperposterior $\mathcal{Q} = A_m(S_1, \ldots, S_n, \mathcal{P}) = \mathcal{N}(\mathbf{v}^{\mathcal{Q}}, \sigma_v^2 I_{d_v})$.
The KL divergence between the prior $\pi$ and the posterior $\rho$ is computed as
$$
\begin{aligned}
KL(\rho \| \pi) &= \mathbb{E}_{f \sim \rho} \log \frac{\rho(f)}{\pi(f)} = \mathbb{E}_{\mathbf{v} \sim \mathcal{N}(\mathbf{v}^{\mathcal{Q}}, \sigma_v^2 I_{d_v})}\, \mathbb{E}_{\mathbf{w} \sim \mathcal{N}(\mathbf{w}^Q, \sigma_w^2 I_{d_w})} \log \frac{\mathcal{N}(\mathbf{v}^{\mathcal{Q}}, \sigma_v^2 I_{d_v})\, \mathcal{N}(\mathbf{w}^Q, \sigma_w^2 I_{d_w})}{\mathcal{N}(\mathbf{0}, \sigma_v^2 I_{d_v})\, \mathcal{N}(\mathbf{w}^P, \sigma_w^2 I_{d_w})} \\
&= \frac{1}{2\sigma_v^2} \|\mathbf{v}^{\mathcal{Q}}\|^2 + \mathbb{E}_{\mathbf{v} \sim \mathcal{N}(\mathbf{v}^{\mathcal{Q}}, \sigma_v^2 I_{d_v})} \frac{1}{2\sigma_w^2} \|\mathbf{w}^Q - \mathbf{w}^P\|^2. \tag{26}
\end{aligned}
$$
In our localized meta-learning framework, to make $KL(Q \| P)$ small, the center $\mathbf{w}^P$ of the prior distribution is generated by the expected prior predictor $\mathbf{w}^P = \Phi_{\mathbf{v}}(\mathcal{D})$. However, the data distribution $\mathcal{D}$ is unknown and our only insight into it is through the sample set $S$, so we approximate the expected prior predictor $\Phi_{\mathbf{v}}(\mathcal{D})$ with the LCC-based prior predictor $\bar{\mathbf{w}}^P = \bar{\Phi}_{\mathbf{v}}(S)$. Writing $\mathbb{E}_{\mathbf{v}}$ for $\mathbb{E}_{\mathbf{v} \sim \mathcal{N}(\mathbf{v}^{\mathcal{Q}}, \sigma_v^2 I_{d_v})}$ for convenience, we have
$$
\begin{aligned}
\mathbb{E}_{\mathbf{v}} \frac{1}{2\sigma_w^2} \|\mathbf{w}^Q - \mathbf{w}^P\|^2 &= \mathbb{E}_{\mathbf{v}} \frac{1}{2\sigma_w^2} \big[\|\mathbf{w}^Q - \bar{\mathbf{w}}^P\|^2 + \|\bar{\mathbf{w}}^P - \mathbf{w}^P\|^2 + 2\, (\mathbf{w}^Q - \bar{\mathbf{w}}^P)^\top (\bar{\mathbf{w}}^P - \mathbf{w}^P)\big] \\
&\leq \mathbb{E}_{\mathbf{v}} \frac{1}{2\sigma_w^2} \big[\|\mathbf{w}^Q - \bar{\mathbf{w}}^P\|^2 + \|\bar{\mathbf{w}}^P - \mathbf{w}^P\|^2 + 2\, \|\mathbf{w}^Q - \bar{\mathbf{w}}^P\|\, \|\bar{\mathbf{w}}^P - \mathbf{w}^P\|\big] \\
&\leq \frac{1}{\sigma_w^2} \mathbb{E}_{\mathbf{v}} \|\mathbf{w}^Q - \bar{\Phi}_{\mathbf{v}}(S)\|^2 + \frac{1}{\sigma_w^2} \mathbb{E}_{\mathbf{v}} \|\bar{\mathbf{w}}^P - \mathbf{w}^P\|^2. \tag{27}
\end{aligned}
$$
Since $\bar{\mathbf{w}}^P_i = \bar{\Phi}_{\mathbf{v}}(S_i) = [\bar{\Phi}_{\mathbf{v}}(S_{i1}), \ldots, \bar{\Phi}_{\mathbf{v}}(S_{ik}), \ldots, \bar{\Phi}_{\mathbf{v}}(S_{iK})]$, we have
$$
\begin{aligned}
\mathbb{E}_{\mathbf{v}} \|\mathbf{w}^Q_i - \bar{\Phi}_{\mathbf{v}}(S_i)\|^2 &= \sum_{k=1}^{K} \mathbb{E}_{\mathbf{v}} \|\mathbf{w}^Q_i[k] - \bar{\Phi}_{\mathbf{v}}(S_{ik})\|^2 \\
&= \sum_{k=1}^{K} \Big(\mathbb{E}_{\mathbf{v}} \|\mathbf{w}^Q_i[k]\|^2 - 2\, (\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i[k])^\top \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_{ik}) + \|\bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_{ik})\|^2 + \mathbb{V}_{\mathbf{v}}[\|\bar{\Phi}_{\mathbf{v}}(S_{ik})\|]\Big) \\
&= \sum_{k=1}^{K} \Big(\|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i[k] - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_{ik})\|^2 + \frac{d_v}{|C|} \sigma_v^2\Big) = \|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_i)\|^2 + d_w K \sigma_v^2, \tag{28}
\end{aligned}
$$
where $\mathbb{V}_{\mathbf{v}}[\|\bar{\Phi}_{\mathbf{v}}(S_{ik})\|]$ denotes the variance of $\|\bar{\Phi}_{\mathbf{v}}(S_{ik})\|$ and the last equality uses the fact that $d_v = |C|\, d_w$. Combining with Lemma 2, for any $\delta' \in (0, 1]$, with probability $\geq 1 - \delta'$ we have
$$
\mathbb{E}_{\mathbf{v}} \frac{1}{2\sigma_w^2} \|\mathbf{w}^Q_i - \mathbf{w}^P_i\|^2 \leq \frac{1}{\sigma_w^2} \|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_i)\|^2 + d_w K \Big(\frac{\sigma_v}{\sigma_w}\Big)^2 + \frac{1}{\sigma_w^2} \sum_{k=1}^{K} \Big(\frac{\alpha R}{\sqrt{m_{ik}}} \big(1 + \sqrt{\tfrac{1}{2} \log \tfrac{1}{\delta'}}\big) + \mathcal{O}_{\alpha,\beta}(\gamma, C)\Big)^2. \tag{29}
$$
Then, according to Theorem 4, for each observed task $i = 1, \ldots, n$ and any $\delta_i/2 > 0$,
$$
\mathbb{P}_{S_i \sim \mathcal{D}_i^{m_i}} \Bigg\{\mathbb{E}_{(\mathbf{x},y) \sim \mathcal{D}_i}\, \mathbb{E}_{\mathbf{v}}\, \mathbb{E}_{\mathbf{w} \sim \mathcal{N}(\mathbf{w}^Q, \sigma_w^2 I_{d_w})}\, \ell(h_{\mathbf{w}}(\mathbf{x}), y) \leq \frac{c_2}{1 - e^{-c_2}} \cdot \frac{1}{m_i} \sum_{j=1}^{m_i} \mathbb{E}_{\mathbf{v}}\, \mathbb{E}_{\mathbf{w}}\, \ell(h_{\mathbf{w}}(\mathbf{x}_j), y_j) + \frac{1}{(1 - e^{-c_2})\, m_i} \Big(\frac{1}{2\sigma_v^2} \|\mathbf{v}^{\mathcal{Q}}\|^2 + \mathbb{E}_{\mathbf{v}} \frac{1}{2\sigma_w^2} \|\mathbf{w}^Q_i - \mathbf{w}^P_i\|^2 + \log \frac{2}{\delta_i}\Big), \forall \mathcal{Q}\Bigg\} \geq 1 - \frac{\delta_i}{2}. \tag{30}
$$
Setting $\delta' = \delta_i/2$ and combining with inequality (29), we obtain
$$
\mathbb{P}_{S_i \sim \mathcal{D}_i^{m_i}} \Bigg\{\mathbb{E}_{(\mathbf{x},y) \sim \mathcal{D}_i}\, \mathbb{E}_{\mathbf{v}}\, \mathbb{E}_{\mathbf{w}}\, \ell(h_{\mathbf{w}}(\mathbf{x}), y) \leq \frac{c_2}{1 - e^{-c_2}} \cdot \frac{1}{m_i} \sum_{j=1}^{m_i} \mathbb{E}_{\mathbf{v}}\, \mathbb{E}_{\mathbf{w}}\, \ell(h_{\mathbf{w}}(\mathbf{x}_j), y_j) + \frac{1}{(1 - e^{-c_2})\, m_i} \Big(\frac{1}{2\sigma_v^2} \|\mathbf{v}^{\mathcal{Q}}\|^2 + \frac{1}{\sigma_w^2} \|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_i)\|^2 + \log \frac{2}{\delta_i} + d_w K \big(\tfrac{\sigma_v}{\sigma_w}\big)^2 + \frac{1}{\sigma_w^2} \sum_{k=1}^{K} \Big(\frac{\alpha R}{\sqrt{m_{ik}}} \big(1 + \sqrt{\tfrac{1}{2} \log \tfrac{2}{\delta_i}}\big) + \mathcal{O}_{\alpha,\beta}(\gamma, C)\Big)^2\Big), \forall \mathcal{Q}\Bigg\} \geq 1 - \delta_i. \tag{31}
$$
Using the notation of Section 3, the above bound can be simplified as
$$
\mathbb{P}_{S_i \sim \mathcal{D}_i^{m_i}} \Bigg\{\mathbb{E}_{\mathbf{v},\, \mathbf{w}^P = \Phi_{\mathbf{v}}(\mathcal{D}),\, P_i = \mathcal{N}(\mathbf{w}^P, \sigma_w^2 I_{d_w})}\, er(A_b(S_i, P_i)) \leq \frac{c_2}{1 - e^{-c_2}}\, \mathbb{E}_{\mathbf{v},\, \mathbf{w}^P = \Phi_{\mathbf{v}}(\mathcal{D}),\, P_i}\, \hat{er}(A_b(S_i, P_i)) + \frac{1}{(1 - e^{-c_2})\, m_i} \Big(\frac{1}{2\sigma_v^2} \|\mathbf{v}^{\mathcal{Q}}\|^2 + \frac{1}{\sigma_w^2} \|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_i)\|^2 + \log \frac{2}{\delta_i} + d_w K \big(\tfrac{\sigma_v}{\sigma_w}\big)^2 + \frac{1}{\sigma_w^2} \sum_{k=1}^{K} \Big(\frac{\alpha R}{\sqrt{m_{ik}}} \big(1 + \sqrt{\tfrac{1}{2} \log \tfrac{2}{\delta_i}}\big) + \mathcal{O}_{\alpha,\beta}(\gamma, C)\Big)^2\Big), \forall \mathcal{Q}\Bigg\} \geq 1 - \delta_i. \tag{32}
$$

Second step. Next, we bound the error due to observing a limited number of tasks from the environment. We reuse Theorem 4 with the following substitutions. The samples are $(\mathcal{D}_i, m_i, S_i)$, $i = 1, \ldots, n$, where $(\mathcal{D}_i, m_i)$ are sampled from the same meta distribution $\tau$ and $S_i \sim \mathcal{D}_i^{m_i}$. The hypothesis is parameterized as $\Phi_{\mathbf{v}}(\mathcal{D})$ with meta-learner parameter $\mathbf{v}$. The loss function is $g(f, X) \triangleq \mathbb{E}_{(\mathbf{x},y) \sim \mathcal{D}}\, \mathbb{E}_{\mathbf{w} \sim \mathcal{N}(\mathbf{w}^Q, \sigma_w^2 I_{d_w})}\, \ell(h_{\mathbf{w}}(\mathbf{x}), y)$, where $\mathbf{w}^Q = A_b(S_i, P_i)$.
Let $\pi \triangleq \mathcal{N}(\mathbf{0}, \sigma_v^2 I_{d_v})$ be the prior over the meta-learner parameters. Then the following holds for any $\delta_0 > 0$:
$$
\mathbb{P}_{(\mathcal{D}_i, m_i) \sim \tau,\, S_i \sim \mathcal{D}_i^{m_i},\, i=1,\ldots,n} \Bigg\{\mathbb{E}_{(\mathcal{D}, m) \sim \tau}\, \mathbb{E}_{S \sim \mathcal{D}^m}\, \mathbb{E}_{\mathbf{v}}\, \mathbb{E}_{\mathbf{w}}\, \mathbb{E}_{(\mathbf{x},y) \sim \mathcal{D}}\, \ell(h_{\mathbf{w}}(\mathbf{x}), y) \leq \frac{c_1}{1 - e^{-c_1}} \cdot \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\mathbf{v}}\, \mathbb{E}_{\mathbf{w}}\, \mathbb{E}_{(\mathbf{x},y) \sim \mathcal{D}_i}\, \ell(h_{\mathbf{w}}(\mathbf{x}), y) + \frac{1}{(1 - e^{-c_1})\, n} \Big(\frac{1}{2\sigma_v^2} \|\mathbf{v}^{\mathcal{Q}}\|^2 + \log \frac{1}{\delta_0}\Big), \forall \mathcal{Q}\Bigg\} \geq 1 - \delta_0. \tag{33}
$$
Using the terms of Section 3, the above bound can be simplified as
$$
\mathbb{P}_{(\mathcal{D}_i, m_i) \sim \tau,\, S_i \sim \mathcal{D}_i^{m_i},\, i=1,\ldots,n} \Bigg\{er(\mathcal{Q}) \leq \frac{c_1}{1 - e^{-c_1}} \cdot \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\mathbf{v},\, \mathbf{w}^P = \Phi_{\mathbf{v}}(\mathcal{D}),\, P_i = \mathcal{N}(\mathbf{w}^P, \sigma_w^2 I_{d_w})}\, er(A_b(S_i, P_i)) + \frac{1}{(1 - e^{-c_1})\, n} \Big(\frac{1}{2\sigma_v^2} \|\mathbf{v}^{\mathcal{Q}}\|^2 + \log \frac{1}{\delta_0}\Big), \forall \mathcal{Q}\Bigg\} \geq 1 - \delta_0. \tag{34}
$$
Finally, we bound the probability of the intersection of the events in (32) and (34) by the union bound. For any $\delta > 0$, set $\delta_0 \triangleq \frac{\delta}{2}$ and $\delta_i \triangleq \frac{\delta}{2n}$ for $i = 1, \ldots, n$. We have
$$
\begin{aligned}
\mathbb{P}_{(\mathcal{D}_i, m_i) \sim \tau,\, S_i \sim \mathcal{D}_i^{m_i},\, i=1,\ldots,n} \Bigg\{er(\mathcal{Q}) \leq\ & \frac{c_1 c_2}{(1 - e^{-c_1})(1 - e^{-c_2})} \cdot \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\mathbf{v},\, \mathbf{w}^P = \Phi_{\mathbf{v}}(\mathcal{D}),\, P_i}\, \hat{er}(A_b(S_i, P_i)) \\
& + \frac{c_1}{1 - e^{-c_1}} \cdot \frac{1}{n} \sum_{i=1}^{n} \frac{1}{(1 - e^{-c_2})\, m_i} \Big(\frac{1}{2\sigma_v^2} \|\mathbf{v}^{\mathcal{Q}}\|^2 + \frac{1}{\sigma_w^2} \|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_i)\|^2 + \log \frac{4n}{\delta} \\
& \qquad + \frac{1}{\sigma_w^2} \sum_{k=1}^{K} \Big(\frac{\alpha R}{\sqrt{m_{ik}}} \big(1 + \sqrt{\tfrac{1}{2} \log \tfrac{4n}{\delta}}\big) + \mathcal{O}_{\alpha,\beta}(\gamma, C)\Big)^2 + d_w K \big(\tfrac{\sigma_v}{\sigma_w}\big)^2\Big) \\
& + \frac{1}{(1 - e^{-c_1})\, n} \Big(\frac{1}{2\sigma_v^2} \|\mathbf{v}^{\mathcal{Q}}\|^2 + \log \frac{2}{\delta}\Big), \forall \mathcal{Q}\Bigg\} \geq 1 - \delta.
\end{aligned} \tag{35}
$$
We can further simplify the notation and obtain
$$
\mathbb{P}_{(\mathcal{D}_i, m_i) \sim \tau,\, S_i \sim \mathcal{D}_i^{m_i},\, i=1,\ldots,n} \Bigg\{er(\mathcal{Q}) \leq c'_1 c'_2\, \hat{er}(\mathcal{Q}) + \Big(\sum_{i=1}^{n} \frac{c'_1 c'_2}{2 c_2 n m_i \sigma_v^2} + \frac{c'_1}{2 c_1 n \sigma_v^2}\Big) \|\mathbf{v}^{\mathcal{Q}}\|^2 + \sum_{i=1}^{n} \frac{c'_1 c'_2}{c_2 n m_i \sigma_w^2} \|\mathbb{E}_{\mathbf{v}} \mathbf{w}^Q_i - \bar{\Phi}_{\mathbf{v}^{\mathcal{Q}}}(S_i)\|^2 + \mathrm{const}(\alpha, \beta, R, \delta, n, m_i), \forall \mathcal{Q}\Bigg\} \geq 1 - \delta, \tag{36}
$$
where $c'_1 = \frac{c_1}{1 - e^{-c_1}}$ and $c'_2 = \frac{c_2}{1 - e^{-c_2}}$. This completes the proof. ∎

D.4 PROOF OF THEOREM 1

Theorem 1 (restated). Let $Q = \mathcal{N}(\mathbf{w}^Q, \sigma_w^2 I_{d_w})$ be the posterior of the base learner and $P = \mathcal{N}(\mathbf{w}^P, \sigma_w^2 I_{d_w})$ the prior, whose mean is sampled from the hyperposterior $\mathcal{Q} = \mathcal{N}(\mathbf{w}^{\mathcal{Q}}, \sigma_w^2 I_{d_w})$ of the meta learner. Given the hyperprior $\mathcal{P} = \mathcal{N}(\mathbf{0}, \sigma_w^2 I_{d_w})$, for any hyperposterior $\mathcal{Q}$, any $c_1, c_2 > 0$ and any $\delta \in (0, 1]$, with probability $\geq 1 - \delta$ we have
$$
er(\mathcal{Q}) \leq c'_1 c'_2\, \hat{er}(\mathcal{Q}) + \Big(\sum_{i=1}^{n} \frac{c'_1 c'_2}{2 c_2 n m_i \sigma_w^2} + \frac{c'_1}{2 c_1 n \sigma_w^2}\Big) \|\mathbf{w}^{\mathcal{Q}}\|^2 + \sum_{i=1}^{n} \frac{c'_1 c'_2}{2 c_2 n m_i \sigma_w^2} \|\mathbb{E}_{\mathbf{w}^P} \mathbf{w}^Q_i - \mathbf{w}^{\mathcal{Q}}\|^2 + \sum_{i=1}^{n} \frac{c'_1 c'_2}{c_2 n m_i \sigma_w^2} \Big(\frac{1}{2} + \log \frac{2n}{\delta}\Big) + \frac{c'_1}{c_1 n \sigma_w^2} \log \frac{2}{\delta}, \tag{37}
$$
where $c'_1 = \frac{c_1}{1 - e^{-c_1}}$ and $c'_2 = \frac{c_2}{1 - e^{-c_2}}$.

Proof. Instead of generating the mean of the prior with a prior predictor, the vanilla meta-learning framework directly produces the mean of the prior $\mathbf{w}^P$ by sampling from the hyperposterior $\mathcal{Q} = \mathcal{N}(\mathbf{w}^{\mathcal{Q}}, \sigma_w^2 I_{d_w})$. The base learner algorithm $A_b(S, P)$ then uses the sample set $S$ and the prior $P = \mathcal{N}(\mathbf{w}^P, \sigma_w^2 I_{d_w})$ to produce a posterior $Q = A_b(S, P) = \mathcal{N}(\mathbf{w}^Q, \sigma_w^2 I_{d_w})$. As in the two-step proof of Theorem 2, we first obtain an intra-task bound using Theorem 4. For any $\delta_i > 0$, we have
$$
\mathbb{P}_{S_i \sim \mathcal{D}_i^{m_i}} \Bigg\{\mathbb{E}_{(\mathbf{x},y) \sim \mathcal{D}_i}\, \mathbb{E}_{\mathbf{w}^P \sim \mathcal{N}(\mathbf{w}^{\mathcal{Q}}, \sigma_w^2 I_{d_w})}\, \mathbb{E}_{\mathbf{w} \sim \mathcal{N}(\mathbf{w}^Q, \sigma_w^2 I_{d_w})}\, \ell(h_{\mathbf{w}}(\mathbf{x}), y) \leq \frac{c_2}{1 - e^{-c_2}} \cdot \frac{1}{m_i} \sum_{j=1}^{m_i} \mathbb{E}_{\mathbf{w}^P}\, \mathbb{E}_{\mathbf{w}}\, \ell(h_{\mathbf{w}}(\mathbf{x}_j), y_j) + \frac{1}{(1 - e^{-c_2})\, m_i} \Big(\frac{1}{2\sigma_w^2} \|\mathbf{w}^{\mathcal{Q}}\|^2 + \mathbb{E}_{\mathbf{w}^P_i \sim \mathcal{N}(\mathbf{w}^{\mathcal{Q}}, \sigma_w^2 I_{d_w})} \frac{1}{2\sigma_w^2} \|\mathbf{w}^Q_i - \mathbf{w}^P_i\|^2 + \log \frac{1}{\delta_i}\Big), \forall \mathcal{Q}\Bigg\} \geq 1 - \delta_i. \tag{38}
$$
The term $\mathbb{E}_{\mathbf{w}^P_i \sim \mathcal{N}(\mathbf{w}^{\mathcal{Q}}, \sigma_w^2 I_{d_w})} \frac{1}{2\sigma_w^2} \|\mathbf{w}^Q_i - \mathbf{w}^P_i\|^2$ can be simplified as
$$
\mathbb{E}_{\mathbf{w}^P_i \sim \mathcal{N}(\mathbf{w}^{\mathcal{Q}}, \sigma_w^2 I_{d_w})} \frac{1}{2\sigma_w^2} \|\mathbf{w}^Q_i - \mathbf{w}^P_i\|^2 = \frac{1}{2\sigma_w^2} \Big(\mathbb{E}_{\mathbf{w}^P} \|\mathbf{w}^Q_i\|^2 - 2\, (\mathbb{E}_{\mathbf{w}^P} \mathbf{w}^Q_i)^\top \mathbf{w}^{\mathcal{Q}} + \|\mathbf{w}^{\mathcal{Q}}\|^2 + \mathbb{V}_{\mathbf{w}^P_i}[\|\mathbf{w}^P_i\|]\Big) = \frac{1}{2\sigma_w^2} \Big(\|\mathbb{E}_{\mathbf{w}^P} \mathbf{w}^Q_i - \mathbf{w}^{\mathcal{Q}}\|^2 + \sigma_w^2\Big), \tag{39}
$$
where $\mathbb{V}_{\mathbf{w}^P_i}[\|\mathbf{w}^P_i\|]$ denotes the variance of $\|\mathbf{w}^P_i\|$. We then obtain an inter-task bound. For any $\delta_0 > 0$, we have
$$
\mathbb{P}_{(\mathcal{D}_i, m_i) \sim \tau,\, S_i \sim \mathcal{D}_i^{m_i},\, i=1,\ldots,n} \Bigg\{\mathbb{E}_{(\mathcal{D}, m) \sim \tau}\, \mathbb{E}_{S \sim \mathcal{D}^m}\, \mathbb{E}_{\mathbf{w}^P \sim \mathcal{N}(\mathbf{w}^{\mathcal{Q}}, \sigma_w^2 I_{d_w})}\, \mathbb{E}_{\mathbf{w} \sim \mathcal{N}(\mathbf{w}^Q, \sigma_w^2 I_{d_w})}\, \mathbb{E}_{(\mathbf{x},y) \sim \mathcal{D}}\, \ell(h_{\mathbf{w}}(\mathbf{x}), y) \leq \frac{c_1}{1 - e^{-c_1}} \cdot \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\mathbf{w}^P}\, \mathbb{E}_{\mathbf{w}}\, \mathbb{E}_{(\mathbf{x},y) \sim \mathcal{D}_i}\, \ell(h_{\mathbf{w}}(\mathbf{x}), y) + \frac{1}{(1 - e^{-c_1})\, n} \Big(\frac{1}{2\sigma_w^2} \|\mathbf{w}^{\mathcal{Q}}\|^2 + \log \frac{1}{\delta_0}\Big), \forall \mathcal{Q}\Bigg\} \geq 1 - \delta_0. \tag{40}
$$
For any $\delta > 0$, set $\delta_0 \triangleq \frac{\delta}{2}$ and $\delta_i \triangleq \frac{\delta}{2n}$ for $i = 1, \ldots, n$. Using the union bound, we finally get
$$
\begin{aligned}
\mathbb{P}_{(\mathcal{D}_i, m_i) \sim \tau,\, S_i \sim \mathcal{D}_i^{m_i},\, i=1,\ldots,n} \Bigg\{er(\mathcal{Q}) \leq\ & \frac{c_1 c_2}{(1 - e^{-c_1})(1 - e^{-c_2})} \cdot \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\mathbf{w}^P \sim \mathcal{N}(\mathbf{w}^{\mathcal{Q}}, \sigma_w^2 I_{d_w}),\, P_i = \mathcal{N}(\mathbf{w}^P, \sigma_w^2 I_{d_w})}\, \hat{er}(A_b(S_i, P_i)) \\
& + \frac{c_1}{1 - e^{-c_1}} \cdot \frac{1}{n} \sum_{i=1}^{n} \frac{1}{(1 - e^{-c_2})\, m_i} \Big(\frac{1}{2\sigma_w^2} \|\mathbf{w}^{\mathcal{Q}}\|^2 + \frac{1}{2\sigma_w^2} \|\mathbb{E}_{\mathbf{w}^P} \mathbf{w}^Q_i - \mathbf{w}^{\mathcal{Q}}\|^2 + \frac{1}{2} + \log \frac{2n}{\delta}\Big) \\
& + \frac{1}{(1 - e^{-c_1})\, n} \Big(\frac{1}{2\sigma_w^2} \|\mathbf{w}^{\mathcal{Q}}\|^2 + \log \frac{2}{\delta}\Big), \forall \mathcal{Q}\Bigg\} \geq 1 - \delta.
\end{aligned} \tag{41}
$$
Similarly, we can further simplify the notation and obtain
$$
\mathbb{P}_{(\mathcal{D}_i, m_i) \sim \tau,\, S_i \sim \mathcal{D}_i^{m_i},\, i=1,\ldots,n} \Bigg\{er(\mathcal{Q}) \leq c'_1 c'_2\, \hat{er}(\mathcal{Q}) + \Big(\sum_{i=1}^{n} \frac{c'_1 c'_2}{2 c_2 n m_i \sigma_w^2} + \frac{c'_1}{2 c_1 n \sigma_w^2}\Big) \|\mathbf{w}^{\mathcal{Q}}\|^2 + \sum_{i=1}^{n} \frac{c'_1 c'_2}{2 c_2 n m_i \sigma_w^2} \|\mathbb{E}_{\mathbf{w}^P} \mathbf{w}^Q_i - \mathbf{w}^{\mathcal{Q}}\|^2 + \mathrm{const}(\delta, n, m_i), \forall \mathcal{Q}\Bigg\} \geq 1 - \delta, \tag{42}
$$
where $c'_1 = \frac{c_1}{1 - e^{-c_1}}$ and $c'_2 = \frac{c_2}{1 - e^{-c_2}}$. This completes the proof. ∎

E DETAILS OF EXPERIMENTS

While the theorems consider a bounded loss, we use an unbounded loss in our experiments; we can still obtain theoretical guarantees on a variant of the loss clipped to [0, 1]. Besides, in practice the loss function is almost always smaller than one.

E.1 DATA PREPARATION

We used 5-way 50-shot classification setups, where each task instance involves classifying images from 5 different categories sampled randomly from one of the meta-sets. We did not employ any data augmentation or feature averaging during meta-training, nor any data apart from the corresponding training and validation meta-sets.

E.2 NETWORK ARCHITECTURE

Auto-Encoder for LCC. For CIFAR-100, the encoder has 7 layers with 16-32-64-64-128-128-256 channels. Each convolutional layer is followed by a LeakyReLU activation and a batch normalization layer. The 1st, 3rd, and 5th layers have stride 1 and kernel size (3, 3); the 2nd, 4th, and 6th layers have stride 2 and kernel size (4, 4); the 7th layer has stride 1 and kernel size (4, 4). The decoder is the same as the encoder with the layers in reverse order. The input is resized to 32 × 32. For Caltech-256, the encoder has 5 layers with 32-64-128-256-256 channels. Each convolutional layer is followed by a LeakyReLU activation and a batch normalization layer. The first 4 layers have stride 2 and kernel size (4, 4); the last layer has stride 1 and kernel size (6, 6). The decoder is the same as the encoder with the layers in reverse order. The input is resized to 96 × 96.

Base Model. The network used for the classification task is a small CNN with 4 convolutional layers, each with 32 filters, and a linear output layer, similar to Finn et al. (2017). Each convolutional layer is followed by a batch normalization layer, a LeakyReLU layer, and a max-pooling layer. For CIFAR-100, the input is resized to 32 × 32; for Caltech-256, the input is resized to 96 × 96.

E.3 OPTIMIZATION

Auto-Encoder for LCC. As optimizer we used Adam (Kingma & Ba, 2015) with β1 = 0.9 and β2 = 0.999. The initial learning rate is 1 × 10−4, the number of epochs is 100, and the batch size is 512.

LCC Training. We alternately train the coefficients and bases of LCC with Adam (β1 = 0.9, β2 = 0.999). Specifically, for both datasets, we alternately update the coefficients for 60 steps and then the bases for 60 steps. The number of training epochs is 3, the number of bases is 64, and the batch size is 256.

Pre-Training of Feature Extractor. We use a 64-way classification task in CIFAR-100 and a 150-way classification task in Caltech-256 to pre-train the feature embedding only on the meta-training dataset. For both CIFAR-100 and Caltech-256, an L2 regularization term of 5e−4 was used. We used the Adam optimizer with initial learning rate 1 × 10−3, β1 = 0.9, and β2 = 0.999. The number of epochs is 50 and the batch size is 512.
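For concreteness, a minimal PyTorch sketch of the base classifier described in Appendix E.2 is given below; the kernel size, padding, and pooling stride are our assumptions, since they are not stated here:

```python
# Sketch (ours) of the 4-conv base model: Conv(32) -> BatchNorm ->
# LeakyReLU -> MaxPool, repeated four times, plus a linear head.
import torch.nn as nn

def conv_block(in_ch, out_ch=32):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(),
        nn.MaxPool2d(2),
    )

class BaseModel(nn.Module):
    def __init__(self, in_ch=3, n_classes=5, in_size=32):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(in_ch), conv_block(32), conv_block(32), conv_block(32)
        )
        feat_dim = 32 * (in_size // 16) ** 2   # four 2x poolings
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))
```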
Meta-Training. We use the cross-entropy loss as in Amit & Meir (2018). Although this is inconsistent with the bounded-loss setting of our theoretical framework, we can still obtain a guarantee on a variant of the loss clipped to [0, 1]; in practice, the loss is almost always smaller than one. For CIFAR-100 and Caltech-256, the number of epochs is 12 in the meta-training phase and 40 in the meta-testing phase, and the batch size is 32 for both datasets. As optimizer we used Adam with β1 = 0.9 and β2 = 0.999. In the setting with a pre-trained base model, the learning rate is 1 × 10−5 for the convolutional layers and 5 × 10−4 for the linear output layer; in the setting without a pre-trained base model, the learning rate is 1 × 10−3 for the convolutional layers and 5 × 10−3 for the linear output layer. The confidence parameter is δ = 0.1. The variance hyperparameters for the prior predictor and base model are σw = σv = 0.01. The hyperparameters α1, α2 in LML and ML-A are set to 0.01.

E.4 MORE EXPERIMENTAL RESULTS

We also compare with two typical meta-learning few-shot learning methods: MAML (Finn et al., 2017) and MatchingNet (Vinyals et al., 2016). Both methods use the Adam optimizer with initial learning rate 0.0001. In the meta-training phase, we randomly split the samples of each class into a support set (5 samples) and a query set (45 samples). The number of epochs is 100. For MAML, the learning rate of the inner update is 0.01.

In Figure 5, we show the average test error of learning a new task as a function of the number of training tasks, together with the standard deviation, in the two settings (with and without a pre-trained feature extractor). All PAC-Bayes baselines outperform MAML and MatchingNet. Note that MAML and MatchingNet adopt the episodic training paradigm to solve the few-shot learning problem: the meta-training process requires millions of tasks, each containing limited samples, which is not the case in our experiments. Scarce tasks in meta-training lead to severe meta-overfitting. In our method, the learned prior serves both as an initialization of the base model and as a regularizer that restricts the solution space in a soft manner while allowing variation based on the specific task data. It yields a model with smaller error than its unbiased counterpart when applied to a similar task.

F PSEUDO CODE

Algorithm 1 Localized Meta-Learning (LML) algorithm
Input: data sets of observed tasks: S_1, ..., S_n.
Output: learned prior predictor Φ̄ parameterized by v.
Initialize v ∈ R^{d_v} and w_i ∈ R^{d_w} for i = 1, ..., n.
Construct the LCC scheme (γ, C) from the whole training data by optimizing Eq. (12).
while not converged do
    for each task i ∈ {1, ..., n} do
        Sample a random mini-batch from the data S'_i ⊂ S_i.
        Approximate E_v êr_i(w_i) using S'_i.
    end for
    Compute the objective in (11), i.e., J ← Σ_{i=1}^n E_v êr_i(w_i) + α_1 ||v^Q||^2 + Σ_{i=1}^n (α_2 / m_i) ||E_v w^Q_i − Φ̄_{v^Q}(S_i)||^2.
    Evaluate the gradient of J w.r.t. {v, w_1, ..., w_n} using backpropagation.
    Take an optimization step.
end while
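A rough Python rendering of Algorithm 1, composed from the earlier sketches (`lcc_fit` and `lml_step` above) and therefore sharing their assumptions, could look as follows; mini-batch sampling within tasks is omitted for brevity, and `prior_pred` is a placeholder that is assumed to implement Eq. (7) on top of the fitted (γ, C):

```python
# Sketch (ours): outer loop of Algorithm 1 using the earlier sketches.
import torch

def train_lml(datasets, v_dim, w_dim, prior_pred, task_loss, epochs=12, lr=5e-4):
    """datasets: list of (S_i, m_i) with S_i a tensor of task-i inputs."""
    X_all = torch.cat([S for S, _ in datasets])
    gamma, C = lcc_fit(X_all)          # LCC scheme from Eq. (12); consumed by prior_pred
    v_mean = torch.zeros(v_dim, requires_grad=True)
    w_means = [torch.zeros(w_dim, requires_grad=True) for _ in datasets]
    opt = torch.optim.Adam([v_mean, *w_means], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        lml_step(v_mean, w_means, datasets, prior_pred, task_loss)  # backprop of (11)
        opt.step()
    return v_mean, w_means
```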
2020
null
SP:f24872ae71c6883964f865312805f1c969e97d2c
[ "The authors propose a modification of the option-critic algorithm for hierarchical reinforcement learning. The proposed algorithm modifies how the termination conditions of the options are improved by experience. Specifically, the algorithm aims to maximize the mutual information between the options and their termination states. The authors develop an optimization scheme for achieving this objective and provide empirical results in a number of domains" ]
In this paper, we study the problem of autonomously discovering temporally abstracted actions, or options, for exploration in reinforcement learning. For learning diverse options suitable for exploration, we introduce the infomax termination objective defined as the mutual information between options and their corresponding state transitions. We derive a scalable optimization scheme for maximizing this objective via the termination condition of options, yielding the InfoMax Option Critic (IMOC) algorithm. Through illustrative experiments, we empirically show that IMOC learns diverse options and utilizes them for exploration. Moreover, we show that IMOC scales well to continuous control tasks.
[]
[ { "authors": [ "P. Bacon", "J. Harb", "D. Precup" ], "title": "The option-critic architecture", "venue": "In Proceedings of the ThirtyFirst AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "A.G. Barto", "R.S. Sutton", "C.W. Anderson" ], "title": "Neuronlike adaptive elements that can solve difficult learning control problems", "venue": "IEEE Trans. Syst. Man Cybern.,", "year": 1983 }, { "authors": [ "A.G. Barto", "G.D. Konidaris", "C.M. Vigorito" ], "title": "Behavioral hierarchy: Exploration and representation", "venue": "Computational and Robotic Models of the Hierarchical Organization of Behavior,", "year": 2013 }, { "authors": [ "E. Brunskill", "L. Li" ], "title": "Pac-inspired option discovery in lifelong reinforcement learning", "venue": "In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing,", "year": 2014 }, { "authors": [ "P.S. Castro", "D. Precup" ], "title": "Using bisimulation for policy transfer in mdps", "venue": "Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2010 }, { "authors": [ "Y. Duan", "X. Chen", "R. Houthooft", "J. Schulman", "P. Abbeel" ], "title": "Benchmarking deep reinforcement learning for continuous control", "venue": "Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "B. Eysenbach", "A. Gupta", "J. Ibarz", "S. Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "R. Fruit", "A. Lazaric" ], "title": "Exploration-exploitation in mdps with options", "venue": "In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "K. Gregor", "D.J. Rezende", "D. Wierstra" ], "title": "Variational intrinsic control", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "J. Harb", "P. Bacon", "M. Klissarov", "D. Precup" ], "title": "When waiting is not an option: Learning options with a deliberation cost. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence", "venue": null, "year": 2018 }, { "authors": [ "A. Harutyunyan", "W. Dabney", "D. Borsa", "N. Heess", "R. Munos", "D. Precup" ], "title": "The termination critic", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "M. Jaderberg", "V. Mnih", "W.M. Czarnecki", "T. Schaul", "J.Z. Leibo", "D. Silver", "K. Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "A. Jain", "K. Khetarpal", "D. Precup" ], "title": "Safe option-critic: Learning safety in the option-critic", "venue": "architecture. CoRR,", "year": 2018 }, { "authors": [ "Y. Jinnai", "J.W. Park", "M.C. Machado", "G.D. Konidaris" ], "title": "Exploration in reinforcement learning with deep covering options", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "A. Jonsson", "A.G. Barto" ], "title": "Causal graph based decomposition of factored mdps", "venue": "J. Mach. Learn. Res.,", "year": 2006 }, { "authors": [ "K. 
Khetarpal", "M. Klissarov", "M. Chevalier-Boisvert", "P. Bacon", "D. Precup" ], "title": "Options of interest: Temporal abstraction with interest functions", "venue": "In The Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "M. Klissarov", "P. Bacon", "J. Harb", "D. Precup" ], "title": "Learnings options end-to-end for continuous action", "venue": "tasks. CoRR,", "year": 2017 }, { "authors": [ "A.S. Klyubin", "D. Polani", "C.L. Nehaniv" ], "title": "All else being equal be empowered", "venue": "Advances in Artificial Life, 8th European Conference,", "year": 2005 }, { "authors": [ "G.D. Konidaris", "A.G. Barto" ], "title": "Building portable options: Skill transfer in reinforcement learning", "venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence,", "year": 2007 }, { "authors": [ "M.C. Machado", "M.G. Bellemare", "M.H. Bowling" ], "title": "A laplacian framework for option discovery in reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "T.A. Mann", "S. Mannor" ], "title": "Scaling up approximate value iteration with options: Better policies with fewer iterations", "venue": "In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing,", "year": 2014 }, { "authors": [ "T.A. Mann", "S. Mannor", "D. Precup" ], "title": "Approximate value iteration with temporally extended actions", "venue": "J. Artif. Intell. Res.,", "year": 2015 }, { "authors": [ "V. Mnih", "A.P. Badia", "M. Mirza", "A. Graves", "T.P. Lillicrap", "T. Harley", "D. Silver", "K. Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "A.W. Moore" ], "title": "Efficient memory-based learning for robot control", "venue": "Technical Report UCAMCL-TR-209,", "year": 1990 }, { "authors": [ "I. Osband", "B.V. Roy", "D.J. Russo", "Z. Wen" ], "title": "Deep exploration via randomized value functions", "venue": "J. Mach. Learn. Res.,", "year": 2019 }, { "authors": [ "C. Salge", "C. Glackin", "D. Polani" ], "title": "Approximation of empowerment in the continuous domain", "venue": "Advances in Complex Systems,", "year": 2013 }, { "authors": [ "A.M. Saxe", "J.L. McClelland", "S. Ganguli" ], "title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", "venue": "2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "J. Schulman", "S. Levine", "P. Abbeel", "M.I. Jordan", "P. Moritz" ], "title": "Trust region policy optimization", "venue": "Proceedings of the 32nd International Conference on Machine Learning, ICML 2015,", "year": 2015 }, { "authors": [ "J. Schulman", "P. Moritz", "S. Levine", "M.I. Jordan", "P. Abbeel" ], "title": "High-dimensional continuous control using generalized advantage estimation", "venue": "CoRR, abs/1506.02438,", "year": 2015 }, { "authors": [ "J. Schulman", "F. Wolski", "P. Dhariwal", "A. Radford", "O. Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "CoRR, abs/1707.06347,", "year": 2017 }, { "authors": [ "A. Sharma", "S. Gu", "S. Levine", "V. Kumar", "K. 
Hausman" ], "title": "Dynamics-aware unsupervised discovery of skills", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ö. Simsek", "A.G. Barto" ], "title": "Skill characterization based on betweenness", "venue": "Advances in Neural Information Processing Systems 21, Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems,", "year": 2008 }, { "authors": [ "S.P. Singh", "A.G. Barto", "N. Chentanez" ], "title": "Intrinsically motivated reinforcement learning", "venue": "In Advances in Neural Information Processing Systems 17 [Neural Information Processing Systems,", "year": 2004 }, { "authors": [ "R. Sutton", "A. Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": null, "year": 2018 }, { "authors": [ "R.S. Sutton" ], "title": "Generalization in reinforcement learning: Successful examples using sparse coarse coding", "venue": "Advances in Neural Information Processing Systems 8,", "year": 1995 }, { "authors": [ "R.S. Sutton", "D. Precup", "S.P. Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artif. Intell.,", "year": 1999 }, { "authors": [ "T. Tieleman", "G. Hinton" ], "title": "Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "E. Todorov", "T. Erez", "Y. Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "C.M. Vigorito", "A.G. Barto" ], "title": "Intrinsically motivated hierarchical skill learning in structured environments", "venue": "IEEE Trans. Auton. Ment. Dev.,", "year": 2010 }, { "authors": [ "C. Apps", "D. Silver" ], "title": "Grandmaster level in StarCraft II using multi-agent reinforcement learning", "venue": "Nature, 575(7782):350–354,", "year": 2019 }, { "authors": [ "R. Williams", "J. Peng" ], "title": "Function optimization using connectionist reinforcement learning algorithms", "venue": "Connection Science, 3:241–,", "year": 1991 }, { "authors": [ "R.J. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Mach. Learn.,", "year": 1992 }, { "authors": [ "Y. Wu", "E. Mansimov", "R.B. Grosse", "S. Liao", "J. Ba" ], "title": "Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation", "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "R. Zhao", "P. Abbeel", "S. Tiomkin" ], "title": "Efficient online estimation of empowerment for reinforcement learning", "venue": "CoRR, abs/2007.07356,", "year": 2020 }, { "authors": [ "B OMITTED PROOFS B" ], "title": "PROOF OF PROPOSITION 1 First, we repeat the assumption 2 in Harutyunyan et al. (2019). Assumption 1. The distribution d(·|o) over the starting states of an option o under policy μ is independent of its termination condition β", "venue": null, "year": 2019 }, { "authors": [ "Schulman" ], "title": "We used ReLU as an activator for all hidden layers and initialized networks by the orthogonal (Saxe et al., 2014) initialization in all experiments. Unless otherwise noted, we used the default parameters in PyTorch (Paszke", "venue": "HYPERPARAMETERS", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Abstracting a course of action as a higher-level action, or an option (Sutton et al., 1999), is a key ability for reinforcement learning (RL) agents in several aspects, including exploration. In RL problems, an agent learns to approximate an optimal policy only from experience, given no prior knowledge. This leads to the necessity of exploration: an agent needs to explore the poorly known states for collecting environmental information, sometimes sacrificing immediate rewards. For statistical efficiency, it is important to explore the state space in a deep and directed manner, rather than taking uniformly random actions (Osband et al., 2019). Options can represent such directed behaviors by capturing long state jumps from their starting regions to terminating regions. It has been shown that well-defined options can facilitate exploration by exploiting an environmental structure (Barto et al., 2013) or, more generally, by reducing decision steps (Fruit and Lazaric, 2017).\nA key requirement for such explorative options is diversity. If all options have the same terminating region, they will never encourage exploration. Instead, options should lead to a variety of regions for encouraging exploration. However, automatically discovering diverse options in a scalable, online manner is challenging due to two difficulties: generalization and data limitation. Generalization with function approximation (Sutton, 1995) is important for scaling up RL methods to large or continuous domains. However, many existing option discovery methods for exploration are graph-based (e.g., Machado et al. (2017)) and incompatible with function approximation, except for that by Jinnai et al. (2020). Discovering options online in parallel with polices requires us to work with limited data sampled from the environment and train the model for evaluating the diversity in a data-efficient manner.\nTo address these difficulties, we introduce the infomax termination objective defined as the mutual information (MI) between options and their corresponding state transitions. This formulation reflects a simple inductive bias: for encouraging exploration, options should terminate in a variety of regions per starting regions. Thanks to the information-theoretical formulation, this objective is compatible with function approximation and scales up to continuous domains. A key technical contribution of this paper is the optimization scheme for maximizing this objective. Specifically, we employ a simple classification model over options as a critic for termination conditions, which makes our method data-efficient and tractable in many domains.\nThe paper is organized as follows. After introducing background and notations, we present the infomax termination objective and derive a practical optimization scheme using the termination gradient theorem (Harutyunyan et al., 2019). We then implement the infomax objective on the option-critic architecture (OC) (Bacon et al., 2017) with algorithmic modifications, yielding the InfoMax Option Critic (IMOC) algorithm. Empirically, we show that (i) IMOC improves exploration in structured environments, (ii) IMOC improves exporation in lifelong learning, (iii) IMOC is scalable\nto MuJoCo continuous control tasks, and (iv) the options learned by IMOC are diverse and meaningful. We then relate our method to other option-learning methods and the empowerment concept (Klyubin et al., 2005), and finally give concluding remarks." 
}, { "heading": "2 BACKGROUND AND NOTATION", "text": "We assume the standard RL setting in the Markov decision process (MDP), following Sutton and Barto (2018). An MDPM consists of a tuple (X ,A, p, r, γ), where X is the set of states, A is the set of actions, p : X ×A×X → [0, 1] is the state transition function, r : X ×A → [rmin, rmax] is the reward function, and 0 ≤ γ ≤ 1 is the discount factor. A policy is a probability distribution over actions conditioned on a state x, π : X ×A → [0, 1]. For simplicity, we consider the episodic setting where each episode ends when a terminal state xT is reached. In this setting, the goal of an RL agent is to approximate a policy that maximizes the expected discounted cumulative reward per episode:\nJRL(π) = Eπ,x0 [ T−1∑ t=0 γtRt ] , (1)\nwhere Rt = r(xt, at) is the reward received at time t, and x0 is the initial state of the episode. Relatedly, we define the action-value function Qπ(xt, at) def = Ext,at,π [∑T−1 t′=t γ t′−tRt′ ] and the\nstate-value function V π(xt) def = Ext,π ∑ a π(a|xt)Qπ(xt, a).\nAssuming that π is differentiable by the policy parameters θπ , a simple way to maximize the objective (1) is the policy gradient method (Williams, 1992) that estimates the gradient by:\n∇θπJRL(π) = Eπ,xt [ ∇θπ log π(at|xt)Â(xt, at) ] , (2)\nwhere Â(xt, at) is the estimation of the advantage function Aπ(xt, at) def = Qπ(xt, at)− V π(xt). A common choice of Â(xt, at) is N -step TD error ∑N i=0 γ iRt+i + γ N V̂ (xt+N )− V̂ (xt), where N is a fixed rollout length (Mnih et al., 2016)." }, { "heading": "2.1 OPTIONS FRAMEWORK", "text": "Options (Sutton et al., 1999) provide a framework for representating temporally abstracted actions in RL. An option o ∈ O consists of a tuple (Io, βo, πo), where Io ⊆ X is the initiation set, βo : X → [0, 1] is a termination function with βo(x) denoting the probability that option o terminates in state x, and πo is intra-option policy. Following related studies (Bacon et al., 2017; Harutyunyan et al., 2019), we assume that Io = X and learn only βo and πo. Letting xs denote an option-starting state and xf denote an option-terminating state, we can write the option transition function as:\nP o(xf |xs) = βo(xf )Ixf=xs + (1− βo(xs)) ∑ x pπ o (x|xs)P o(xf |x), (3)\nwhere I is the indicator function and pπo is the policy-induced transition function pπo(x′|x) def=∑ a∈A π\no(a|x)p(x′|x, a). We assume that all options eventually terminate so that P o is a valid probability distribution over xf , following Harutyunyan et al. (2019).\nTo present option-learning methods, we define two option-value functions: QO and UO, where QO is the option-value function denoting the value of selecting an option o at a state xt defined by QO(xt, o) def = Eπ,β,µ [∑T−1 t′=t γ t′−tRt ] . Analogously to Qπ and V π, we let VO denote the\nmarginalized option-value function VO(x) def = ∑ o µ(o|x)QO(x, o), where µ(o|xs) : X ×O → [0, 1] is the policy over options. Function UO(x, o) def = (1− βo(x))QO(x, o) + βo(x)VO(x) is called the option-value function upon arrival (Sutton et al., 1999) and denotes the value of reaching a state xt with o and not having selected the new option." }, { "heading": "2.2 OPTION CRITIC ARCHITECTURE", "text": "OC (Bacon et al., 2017) provides an end-to-end algorithm for learning πo and βo in parallel. 
To optimize πo, OC uses the intra-option policy gradient method that is the option-conditional version\nof the gradient estimator (2), ∇θπoJRL(πo) = E [ ∇θπo log πo(at|xt)Âo(xt, at) ] , where Âo is an estimation of the option-conditional advantage Aπ o .\nFor optimizing βo, OC directly maximizes QO using the estimated gradient: ∇θβoQO(x, o) = γE [ −∇θβoβ o(x) ( QO(x, o)− VO(x) )] . (4)\nIntuitively, this decreases the termination probability βo(x) when holding an o is advantageous, i.e., QO(x) − VO(x) is positive, and vice versa. Our method basically follows OC but has a different objective for learning βo." }, { "heading": "2.3 TERMINATION CRITIC", "text": "Recently proposed termination critic (TC) (Harutyunyan et al., 2019) optimizes βo by maximizing the information-theoretic objective called predictability:\nJTC(P o) = −H(Xf |o), (5)\nwhere H denotes entropy and Xf is the random variable denoting the option-terminating states. Maximizing −H(Xf |o) makes the terminating region of an option smaller and more predictable. In other words, we can compress terminating regions by optimizing the objective (5). To derivate this objective by the beta parameters θβo , Harutyunyan et al. (2019) introduced the termination gradient theorem: Theorem 1. Let βo be parameterized with a sigmoid function and `βo denote the logit of βo. We have\n∇θβP o(xf |xs) = ∑ x P o(x|xs)∇θβ `βo(x)(Ixf=x − P o(xf |x)), (6)\nLeveraging the theorem 1, TC performs gradient ascent using the estimated gradient:\n∇θβoJ TC(P o) = −Exs,x,xf [ ∇θβ `βo(x)βo(x) (( logP oµ(x)− logP oµ(xf ) ) + ( 1−\nP o(xf |xs)P oµ(x) P oµ(xf )P o(x|xs)\n))] .\nwhere P oµ(x) is the marginalized distribution of option-terminating states.\nContrary to the termination objective of OC (4), this objective does not depend on state values, making learned options robust against the reward structure of the environment. Our method is inspired by TC and optimizes a similar information-theoretic objective, not for predictability but for diversity. Also, our infomax objective requires an estimation of p̂(o|xs, xf ) instead of the option transition model P o(xf |xs), which makes our method tractable in more environments." }, { "heading": "3 INFOMAX OPTION CRITIC", "text": "We now present the key idea behind the InfoMax Option Critic (IMOC) algorithm. We first formulate the infomax termination objective based on the MI maximization, then derive a practical gradient estimation for maximizing this objective on βo, utilizing the termination gradient theorem 1.\nTo evaluate the diversity of options, we use the MI between options and option-terminating states conditioned by option-starting states:\nJ IMOC = I(Xf ;O|Xs) = H(Xf |Xs)−H(Xf |Xs, O), (7)\nwhere I denotes conditional MI I(A;B|Z) = H(A|Z) −H(A|B,Z), Xs is the random variable denoting an option-starting state, and O is the random variable denoting an option. We call this objective the infomax termination objective. Let us interpret Xf |Xs as the random variable denoting a state transition induced by an option. Then maximizing the MI (7) (i) diversifies a state transition Xf |Xs and (ii) makes an option-conditional state transition Xf |Xs, o more deterministic. Note that the marginalized MI I(Xf ;O) also makes sense in that it prevents the terminating region of each option from being too broad, as predictability (5) does. 
However, in this study, we focus on the conditional objective since it is easier to optimize.\nTo illustrate the limitation of infomax options, we conducted an analysis in a toy four-state deterministic chain environment, which has four states and two deterministic actions (go left and go right) per each state. Since deriving the exact solution is computationally difficult, we searched options that maximize H(Xf |Xs) from deterministic options that has deterministic option-policies and termination functions (thus has the minimum H(Xf |Xs, O)). Among multiple solutions, Figure 1 shows two interesting instances of deterministic infomax options when |O| = 2. The left options enable diverse behaviors per state, although they fail to capture long-term behaviors generally favorable in the literature (e.g., Mann et al. (2015)). On the other hand, the right options enable relatively long, two step state transitions, but they are the same and the rightmost state and the next one. Furthermore, an agent can be caught in a small loop that consists of the leftmost state and the next one. This example shows that (i) we can obtain short and diverse options with only a few options, (ii) to obtain long and diverse options, we need sufficiently many options, and (iii) an agent can be caught in a small loop with only a few options, failing to visit diverse states. As we show in Appendix A, this ’small loop’ problem cannot happen with four options. Thus, the number of options is important when we are to maximize the MI (7) and a limitation of this method. However, in experiments, we show that we can practically learn diverse options with relatively small number of options.\nFor maximizing the MI by gradient ascent, we now derive the gradient of the infomax termination objective (7). First, we estimate the gradient of the objective using the option transition model P o and marginalized option-transition model P (xf |xs) = ∑ o µ(o|xs)P o(xf |xs).\nProposition 1. Let βo be parameterized with a sigmoid function. Given a trajectory τ = xs, . . . , x, . . . , xf sampled by πo and βo, we can obtain unbiased estimations of∇θβH(Xf |Xs) and ∇θβH(Xf |Xs, O) by\n∇θβH(Xf |Xs) = Exs,x,xf ,o [ −∇θβ `βo(x)βo(x) ( logP (x|xs)− logP (xf |xs) )] (8)\n∇θβH(Xf |Xs, O) = Exs,x,xf ,o [ −∇θβ `βo(x)βo(x) ( logP o(x|xs)− logP o(xf |xs) )] (9)\nwhere `βo(x) denotes the logit of βo(x).\nNote that the additional term βo is necessary because x is not actually a terminating state. The proof follows section 4 in Harutyunyan et al. (2019) and is given in Appendix B.1.\nThe estimated gradient of the infomax termination objective (7) can now be written as:\n∇θβI(Xf ;O|Xs) = ∇θβH(Xf |Xs)−∇θβH(Xf |Xs, O) = Exs,x,xf ,o [ −∇θβ `βo(x)βo(x) (logP (x|xs)− logP (xf |xs)− (logP o(x|xs)− logP o(xf |xs))) ] ,\n(10)\nwhich means that we can optimize this objective by estimating P o and P . However, estimating the probability over the state space can be difficult, especially when the state space is large, as common in the deep RL setting. Hence, we reformulate the gradient using Bayes’ rule in a similar way as Gregor et al. (2017). The resulting term consists of the reverse option transition p(o|xs, xf ) that denotes the probability of having an o given a state transition xs, xf .\nProposition 2. We now have\n∇θβI(Xf ;O|Xs) =∇θβH(Xf |Xs)−∇θβH(Xf |Xs, O) =Exs,x,xf ,o [ ∇θβ `βo(x)βo(x) ( log p(o|xs, x)− log p(o|xs, xf ) )] (11)\nThe proof is given in Appendix B.2. 
In the following sections, we estimate the gradient (11) by learning a classification model over options $\hat{p}(o|x_s, x_f)$ from sampled option transitions." }, { "heading": "4 ALGORITHM", "text": "In this section, we introduce modifications for adapting OC (Bacon et al., 2017) to our infomax termination objective. Specifically, we implement IMOC on top of Advantage-Option Critic (AOC), a synchronous variant of A2OC (Harb et al., 2018), yielding the Advantage-Actor InfoMax Option Critic (A2IMOC) algorithm. To stably estimate $\hat{p}(o|x_s, x_f)$ for updating $\theta_\beta$, we sample recent option transitions $o, x_s, x_f$ from a replay buffer (see Appendix C.1). We follow AOC for optimizing option-policies except for the following modifications, and we give a full description of A2IMOC in Appendix C.1. In continuous control experiments, we also used Proximal Policy InfoMax Option Critic (PPIMOC), an implementation of IMOC based on PPO (Schulman et al., 2017). We give the details of PPIMOC in Appendix C.2.
Upgoing Option-Advantage Estimation Previous studies (e.g., Harb et al. (2018)) estimated the advantage $\hat{A}^{o_t}(x_t)$ ignoring the future rewards after the current option $o_t$ terminates. Since a longer rollout length often helps speed up learning (Sutton and Barto, 2018), it is preferable to extend this estimation to use all available future rewards. However, future rewards after option termination heavily depend on the selected option, often leading to underestimation of $\hat{A}^o$. Thus, to effectively use future rewards, we introduce an upgoing option-advantage estimation (UOAE). Let $t + k$ denote the time step where the current option $o_t$ terminates in a sampled trajectory. Then, UOAE estimates the advantage by:
$$\hat{A}^o_{UOAE} = -Q_\mathcal{O}(x_t, o_t) + \begin{cases} \sum_{i=0}^{k} \gamma^i R_{t+i} + \underbrace{\max\big(\sum_{j=k}^{N} \gamma^j R_{t+j},\ \gamma^k V_\mathcal{O}(x_{t+k})\big)}_{\text{upgoing estimation}} & (k < N) \\ \sum_{i=0}^{N} \gamma^i R_{t+i} + \gamma^N U_\mathcal{O}(x_{t+N}, o_t) & \text{(otherwise)} \end{cases} \qquad (12)$$
Similar to the upgoing policy update (Vinyals et al., 2019), the idea is to be optimistic about the future rewards after option termination by taking the maximum with $V_\mathcal{O}$.
Policy Regularization Based on Mutual Information To perform MI maximization not only on termination functions but also on option-policies, we introduce a policy regularization based on the maximization of the conditional MI $I(A; O|X_s)$, where $A$ is the random variable denoting an action. This MI can be interpreted as a local approximation of the infomax objective (7), assuming that each action leads to different terminating regions. Although optimizing the infomax termination objective diversifies option-policies implicitly, we found that this regularization helps learn diverse option-policies reliably. Letting $\pi_\mu$ denote the marginalized policy $\pi_\mu(a|x) \stackrel{\text{def}}{=} \sum_o \mu(o|x) \pi^o(a|x)$, we write $I(A; O|X_s)$ as:
$$I(A; O|X_s) = \mathcal{H}(A|X_s) - \mathcal{H}(A|O, X_s) = \mathbb{E}_{x_s}[\mathcal{H}(\pi_\mu(x_s))] - \mathbb{E}_{x_s, o}[\mathcal{H}(\pi^o(x_s))].$$
We use this regularization with the entropy bonus (maximization of $\mathcal{H}(\pi^o)$) common in policy gradient methods (Williams and Peng, 1991; Mnih et al., 2016) and write the overall regularization term as
$$c_{H_\mu} \mathcal{H}(\pi_\mu(x)) + c_H \mathcal{H}(\pi^o(x)), \qquad (13)$$
where $c_{H_\mu}$ and $c_H$ are the weights of each regularization term. Note that we add this regularization term not only on option-starting states but on all sampled states. This introduces some bias, which we did not find to be harmful when $c_{H_\mu}$ is reasonably small. To approximate $\mathcal{H}(\pi_\mu)$, we employ $\hat{\mu}$, the empirical estimation of $\mu$. Using $\hat{\mu}$, $\mathcal{H}(\pi_\mu)$ is computed by $\pi_\mu(a|x) \approx \sum_o \hat{\mu}(o|x) \pi^o(a|x)$ for discrete action spaces and approximated by the Monte Carlo method for continuous action spaces. We show the details in Appendix C.1."
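As an illustration of the discrete-action case of the regularizer (13), the following is a minimal PyTorch sketch. It assumes per-state option probabilities from the empirical estimate and one action-logit vector per option; for brevity the option entropy is averaged over all options, whereas the paper applies it to the executed option.

```python
import torch

def mi_policy_regularizer(mu_hat, action_logits, c_h_mu, c_h):
    """Discrete-action sketch of the regularization term in Eq. (13).

    mu_hat:        (B, O) empirical estimate mu_hat(o|x).
    action_logits: (B, O, A) logits of each option-policy pi^o(a|x).
    """
    pi_o = action_logits.softmax(dim=-1)                  # pi^o(a|x)
    pi_mu = (mu_hat.unsqueeze(-1) * pi_o).sum(dim=1)      # sum_o mu_hat(o|x) pi^o(a|x)
    h_mu = -(pi_mu * (pi_mu + 1e-8).log()).sum(dim=-1)    # H(pi_mu(x))
    h_o = -(pi_o * (pi_o + 1e-8).log()).sum(dim=-1)       # H(pi^o(x)) per option
    return c_h_mu * h_mu.mean() + c_h * h_o.mean()        # added to the PG objective
```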
}, { "heading": "5 EXPERIMENTS", "text": "We conducted a series of experiments to show two use cases of IMOC: exploration in structured environments and exploration for lifelong learning (Brunskill and Li, 2014). In this section, we used four options for all option-learning methods and compared the number of options in Appendix D.7." }, { "heading": "5.1 SINGLE TASK LEARNING IN STRUCTURED ENVIRONMENTS", "text": "We consider two ’Four Rooms’ domains, where diverse options are beneficial for utilizing environmental structures.\nGridworld Four Rooms with Suboptimal Goals First, we tested IMOC in a variant of the classical Four Rooms Gridworld (Sutton et al., 1999) with suboptimal goals. An agent is initially placed at the upper left room and receives a positive reward only at goal states: two closer goals with +1 reward and the farthest goal with +2 reward, as shown in Figure 2a. The episode ends when an agent reaches one of the goals. The optimal policy is aiming the farthest goal in the lower right room without converging to suboptimal goals. Thus, an agent is required to learn multimodal behaviors leading to multiple goals, which options can help. In this environment, we compared A2IMOC with A2C (Mnih et al., 2016; Wu et al., 2017), AOC, and our tuned version of AOC (our AOC) with all enhancements presented in section 4 to highlight the effectiveness of the termination objective among all of our improvements.1 We show the progress of average cumulative rewards over ten trials in Figure 2b. A2IMOC performed the best and found the optimal goal in most trials. AOC and our AOC also occasionally found the optimal goal, while A2C overfitted to either of the suboptimal goals through all trials.\nFigure 3 illustrates learned option-polices and termination functions of each compared method.2 Terminating regions learned with A2IMOC is diverse. For example, option 0 mainly terminates in the right rooms while option 3 terminates in the left rooms. We see that termination regions learned by A2IMOC are diverse and clearly separated per each option. Although all option policies converged to the same near optimal one, we show that A2IMOC diversifies option-policies at the beginning of learning in Appendix D.4. On the other hand, terminating regions learned with AOC overlap each other, and notably, option 3 has no terminating region. We assume this is because the loss function (4) decreases the terminating probability when the advantage is positive. We can see the same tendency in our AOC, although we cannot see the vanishment of the terminating regions.\n1Note that we did not include ACTC (Harutyunyan et al., 2019) for comparison since we failed to reliably reproduce the reported results with our implementation of ACTC.\n2Note that we choose the best model from multiple trials for visualization throughout the paper.\n(a) MuJoCo Point Four Rooms. The agent is an orange ball and there are three goals: green goals are suboptimal (+0.5) and the red one in optimal (+1.0).\n0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 Total Environment Steps 1e6\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nCu m\nul at\niv e\nRe wa\nrd s\nMethod PPIMOC PPOC OurPPOC PPO\n(b) Performance progression.\nanalysis, we observed the same tendency in learned options as the Gridworld experiment, where the details are given in Appendix D.5." 
}, { "heading": "5.2 SINGLE TASK LEARNING IN CLASSICAL CONTINUOUS CONTROL", "text": "Additionally, we test PPIMOC on two classical, hard-exploration control problems: Mountain Car (Barto et al., 1983) and Cartpole swingup (Moore, 1990). In Mountain Car, PPIMOC and our PPOC successfully learns to reache the goal, while PPO and PPOC converged to run around the start posisition. In Cartpole swing up, PPIMOC and our PPOC performed better than PPO and PPOC, but still failed to learn stable behaviors." }, { "heading": "5.3 EXPLORATION FOR LIFELONG LEARNING", "text": "As another interesting application of IMOC, we consider the lifelong learning setting. Specifically, we tested IMOC in ’Point Billiard’ environment. In this environment, an agent receives a positive reward only when the blue objective ball reaches the goal, pushed by the agent (orange ball). Figure 6a shows all four configurations of Point Billiard that we used. There are four goals: green goals with +0.5 reward and a red one with +1.0 reward. The positions of four goals move clockwise after 1M environmental steps and agents need to adapt to the new positions of goals. We compared PPIMOC with PPO, PPOC, and our PPOC in this environment. Figure 6b shows the progress of average cumulative rewards over five trials. Both PPIMOC and our PPOC performed the best and adapted to all reward transitions. On the other hand, PPO and PPOC struggle to adapt to the second transition, where the optimal goal moves behind the agent. The ablation study given in Appendix D.6 shows that UOAE (12) works effectively in this task. However, without UOAE, PPIMOC still outperformed PPO. Thus, we argue that having diverse terminating regions itself is beneficial for adapting to new reward functions in environments with subgoals." }, { "heading": "6 RELATED WORK", "text": "Options for Exploration Options (Sutton et al., 1999) in RL are widely studied for many applications, including speeding up planning (Mann and Mannor, 2014) and transferring skills (Konidaris and Barto, 2007; Castro and Precup, 2010). However, as discussed by Barto et al. (2013), their benefits for exploration are less well recognized. Many existing methods focused on discovering subgoals that effectively decompose the problem then use such subgoals for encouraging exploration. Subgoals are discovered based on various properties, including graphical features of state transitions (Simsek and Barto, 2008; Machado et al., 2017; Jinnai et al., 2019) and causality (Jonsson and Barto, 2006; Vigorito and Barto, 2010). In contrast, our method directly optimizes termination functions instead of discovering subgoals, capturing environmental structures implicitly. From a theoretical perspective, Fruit and Lazaric (2017) analyzed that good options can improve the exploration combined with state-action visitation bonuses. Using infomax options with visitation bonuses would be an interesting future direction.\nEnd-to-end learning of Options While many studies attempted to learn options and option-policies separately, Bacon et al. (2017) proposed OC to train option-policies and termination functions in parallel. OC has been extended with various types of inductive biases, including deliberation cost (Harb et al., 2018), interest (Khetarpal et al., 2020), and safety (Jain et al., 2018). Our study is directly inspired by an information-theoretic approach presented by Harutyunyan et al. 
(2019), as we noted in section 2.
Mutual Information and Skill Learning MI often appears in the literature on intrinsically motivated (Singh et al., 2004) reinforcement learning, as a driver of goal-directed behaviors. A well-known example is empowerment (Klyubin et al., 2005; Salge et al., 2013), which is obtained by maximizing the MI between $k$ sequential actions and the resulting state, $I(a_t, \ldots, a_{t+k}; x_{t+k}|x_t)$. Some works (Mohamed and Rezende, 2015; Zhao et al., 2020) implemented lower-bound maximization of empowerment as intrinsic rewards for RL agents, encouraging goal-directed behaviors in the absence of extrinsic rewards. We can interpret our objective $I(X_f; O|X_s)$ as empowerment between limited action sequences and states corresponding to options. Gregor et al. (2017) employed this interpretation and introduced a method for maximizing the variational lower bound of this MI via option-policies, using the same model as our $\hat{p}$, while we aim to maximize the MI via termination functions. MI is also used for the intrinsically motivated discovery of skills, assuming that diversity is important to acquire useful skills. Eysenbach et al. (2019) proposed to maximize the MI between skills and states, $I(O; X)$, extended to the conditional one $I(O; X'|X)$ by Sharma et al. (2020). Although our study shares the same motivation for using MI as these methods, i.e., diversifying sub-policies, the process of MI maximization is significantly different: our method optimizes termination functions, while their methods optimize conditional policies by using MI as intrinsic rewards." }, { "heading": "7 CONCLUSION", "text": "We presented a novel end-to-end option learning algorithm, InfoMax Option Critic (IMOC), which uses the infomax termination objective to diversify options. Empirically, we showed that IMOC improves exploration in structured environments and for lifelong learning, even in continuous control tasks. We also quantitatively showed the diversity of learned options. An interesting future direction would be combining our method for learning termination conditions with other methods for learning option-policies, e.g., by using MI as intrinsic rewards. A limitation of the infomax objective presented in this study is that it requires on-policy data for training. Hence, another interesting line of future work is extending IMOC to off-policy option discovery." }, { "heading": "A MORE ANALYSIS ON DETERMINISTIC CHAIN EXAMPLE", "text": "Figure 7 shows infomax options with three options and four options in the four-state deterministic chain example. Among multiple solutions, we selected options with an absorbing state per option (i.e., $\beta^o(x) = 1.0$ for only one $x$), which are partially the same as the right options in Figure 1. With four options, $\Pr(x_f|x_s) = 0.25$ for all $x_f$ and $x_s$; thus $\mathcal{H}(X_f|X_s)$ is the maximum. This example shows that we need sufficiently many options for maximizing the MI; otherwise an agent can be caught in a small loop, as we described in Section 3." }, { "heading": "B OMITTED PROOFS", "text": "" }, { "heading": "B.1 PROOF OF PROPOSITION 1", "text": "First, we repeat Assumption 2 in Harutyunyan et al. (2019).
Assumption 1. The distribution $d_\mu(\cdot|o)$ over the starting states of an option $o$ under policy $\mu$ is independent of its termination condition $\beta^o$.
Note that this assumption does not strictly hold since ε-Greedy option selection depends on $\beta^o$ via $Q_\mathcal{O}$. However, since this dependency is not so strong, we found that $\beta^o$ reliably converged in our experiments.
Lemma 1.
Assume that the distribution $d_\mu(\cdot|o)$ over the starting states of an option $o$ under policy $\mu$ is independent of $\beta^o$. Then the following equations hold:
$$\nabla_{\theta_\beta} \mathcal{H}(X_f|X_s) = -\sum_{x_s, o} d_\mu(x_s, o) \sum_{x} P^o(x|x_s)\, \nabla_{\theta_\beta} \ell_{\beta^o}(x) \Big[\log P(x|x_s) + 1 - \sum_{x_f} P^o(x_f|x) \big(\log P(x_f|x_s) + 1\big)\Big] \qquad (14)$$
$$\nabla_{\theta_\beta} \mathcal{H}(X_f|X_s, O) = -\sum_{x_s, o} d_\mu(x_s, o) \sum_{x} P^o(x|x_s)\, \nabla_{\theta_\beta} \ell_{\beta^o}(x) \Big[\log P^o(x|x_s) + 1 - \sum_{x_f} P^o(x_f|x) \big(\log P^o(x_f|x_s) + 1\big)\Big] \qquad (15)$$
Sampling $x_s, x, x_f, o$ from $d_\mu$ and $P^o$,
$$\nabla_{\theta_\beta} \mathcal{H}(X_f|X_s) = \mathbb{E}_{x_s, x, x_f, o}\big[-\nabla_{\theta_\beta} \ell_{\beta^o}(x)\, \beta^o(x) \big(\log P(x|x_s) - \log P(x_f|x_s)\big)\big] \qquad (16)$$
$$\nabla_{\theta_\beta} \mathcal{H}(X_f|X_s, O) = \mathbb{E}_{x_s, x, x_f, o}\big[-\nabla_{\theta_\beta} \ell_{\beta^o}(x)\, \beta^o(x) \big(\log P^o(x|x_s) - \log P^o(x_f|x_s)\big)\big]. \qquad (17)$$" }, { "heading": "Proof of Lemma 1", "text": "Proof. First, we prove Equation (14). Let $d_\mu(x_s)$ denote the probability distribution over $x_s$ under the policy $\mu$, or the marginal distribution of $d_\mu(x_s|o)$, and let $d_\mu(x_s, o)$ denote the joint distribution of $x_s$ and $o$. Then, we have:
$$\begin{aligned}
\nabla_{\theta_\beta} \mathcal{H}(X_f|X_s) &= -\nabla_{\theta_\beta} \sum_{x_s} d_\mu(x_s) \sum_{x_f} P(x_f|x_s) \log P(x_f|x_s) \\
&= -\sum_{x_s} d_\mu(x_s) \sum_{x_f} \Big(\nabla_{\theta_\beta} P(x_f|x_s) \log P(x_f|x_s) + P(x_f|x_s) \frac{\nabla_{\theta_\beta} P(x_f|x_s)}{P(x_f|x_s)}\Big) \\
&= -\sum_{x_s} d_\mu(x_s) \sum_{x_f} \nabla_{\theta_\beta} P(x_f|x_s) \big(\log P(x_f|x_s) + 1\big) \\
&= -\sum_{x_s} d_\mu(x_s) \sum_{x_f} \sum_{o} \mu(o|x_s)\, \nabla_{\theta_\beta} P^o(x_f|x_s) \big(\log P(x_f|x_s) + 1\big) \quad \text{(apply Theorem 1, Eq. (6))} \\
&= -\sum_{x_s} d_\mu(x_s) \sum_{x_f} \sum_{o} \mu(o|x_s) \sum_{x} P^o(x|x_s)\, \nabla_{\theta_\beta} \ell_{\beta^o}(x) \big(\mathbb{I}_{x_f = x} - P^o(x_f|x)\big) \big(\log P(x_f|x_s) + 1\big) \\
&= -\sum_{x_s} d_\mu(x_s) \sum_{o} \mu(o|x_s) \sum_{x} P^o(x|x_s)\, \nabla_{\theta_\beta} \ell_{\beta^o}(x) \sum_{x_f} \big(\mathbb{I}_{x_f = x} - P^o(x_f|x)\big) \big(\log P(x_f|x_s) + 1\big) \\
&= -\sum_{x_s, o} d_\mu(x_s, o) \sum_{x} P^o(x|x_s)\, \nabla_{\theta_\beta} \ell_{\beta^o}(x) \Big[\log P(x|x_s) + 1 - \sum_{x_f} P^o(x_f|x) \big(\log P(x_f|x_s) + 1\big)\Big],
\end{aligned}$$
where $d_\mu(x_s, o)$, $P^o(x|x_s)$, and $P^o(x_f|x)$ can all be estimated by sampling. Sampling $x_s, x, x_f, o$, we get (16).
Then we prove Equation (15).
$$\begin{aligned}
\nabla_{\theta_\beta} \mathcal{H}(X_f|X_s, O) &= -\nabla_{\theta_\beta} \sum_{x_s, o} d_\mu(x_s, o) \sum_{x_f} P^o(x_f|x_s) \log P^o(x_f|x_s) \\
&= -\sum_{x_s, o} d_\mu(x_s, o) \sum_{x_f} \Big(\nabla_{\theta_\beta} P^o(x_f|x_s) \log P^o(x_f|x_s) + P^o(x_f|x_s) \frac{\nabla_{\theta_\beta} P^o(x_f|x_s)}{P^o(x_f|x_s)}\Big) \\
&= -\sum_{x_s, o} d_\mu(x_s, o) \sum_{x_f} \nabla_{\theta_\beta} P^o(x_f|x_s) \big(\log P^o(x_f|x_s) + 1\big) \quad \text{(apply Theorem 1, Eq. (6))} \\
&= -\sum_{x_s, o} d_\mu(x_s, o) \sum_{x_f} \sum_{x} P^o(x|x_s)\, \nabla_{\theta_\beta} \ell_{\beta^o}(x) \big(\mathbb{I}_{x_f = x} - P^o(x_f|x)\big) \big(\log P^o(x_f|x_s) + 1\big) \\
&= -\sum_{x_s, o} d_\mu(x_s, o) \sum_{x} P^o(x|x_s)\, \nabla_{\theta_\beta} \ell_{\beta^o}(x) \sum_{x_f} \big(\mathbb{I}_{x_f = x} - P^o(x_f|x)\big) \big(\log P^o(x_f|x_s) + 1\big) \\
&= -\sum_{x_s, o} d_\mu(x_s, o) \sum_{x} P^o(x|x_s)\, \nabla_{\theta_\beta} \ell_{\beta^o}(x) \Big[\log P^o(x|x_s) + 1 - \sum_{x_f} P^o(x_f|x) \big(\log P^o(x_f|x_s) + 1\big)\Big]
\end{aligned}$$
Sampling $x_s, x, x_f, o$, we get (17)." }, { "heading": "B.2 PROOF OF PROPOSITION 2", "text": "Proof. First, we have that:
$$\log P^o(x_f|x_s) - \log P(x_f|x_s) = \log \frac{P^o(x_f|x_s)}{P(x_f|x_s)} = \log \frac{\Pr(x_f|x_s, o)}{P(x_f|x_s)} = \log \frac{\Pr(x_s, x_f, o)}{\Pr(x_s, o)} \frac{\Pr(x_s)}{\Pr(x_s, x_f)} = \log \frac{\Pr(x_s, x_f, o)}{\Pr(x_s, x_f) \Pr(o|x_s)} = \log \frac{p(o|x_s, x_f) \Pr(x_s, x_f)}{\Pr(x_s, x_f) \Pr(o|x_s)} = \log \frac{p(o|x_s, x_f)}{\mu(o|x_s)}$$
Using this equation, we can rewrite equation (10) as:
$$\begin{aligned}
\nabla_{\theta_\beta} I(X_f; O|X_s) &= \nabla_{\theta_\beta} \mathcal{H}(X_f|X_s) - \nabla_{\theta_\beta} \mathcal{H}(X_f|X_s, O) \\
&= \mathbb{E}_{x_s, x, x_f, o}\big[-\nabla_{\theta_\beta} \ell_{\beta^o}(x)\, \beta^o(x) \big(\log P(x|x_s) - \log P(x_f|x_s) - \log P^o(x|x_s) + \log P^o(x_f|x_s)\big)\big] \\
&= \mathbb{E}_{x_s, x, x_f, o}\Big[\nabla_{\theta_\beta} \ell_{\beta^o}(x)\, \beta^o(x) \Big(\big(\log P^o(x|x_s) - \log P(x|x_s)\big) - \big(\log P^o(x_f|x_s) - \log P(x_f|x_s)\big)\Big)\Big] \\
&= \mathbb{E}_{x_s, x, x_f, o}\Big[\nabla_{\theta_\beta} \ell_{\beta^o}(x)\, \beta^o(x) \Big(\log \frac{p(o|x_s, x)}{\mu(o|x_s)} - \log \frac{p(o|x_s, x_f)}{\mu(o|x_s)}\Big)\Big] \\
&= \mathbb{E}_{x_s, x, x_f, o}\big[\nabla_{\theta_\beta} \ell_{\beta^o}(x)\, \beta^o(x) \big(\log p(o|x_s, x) - \log p(o|x_s, x_f)\big)\big]
\end{aligned}$$
C IMPLEMENTATION DETAILS" }, { "heading": "C.1 THE WHOLE ALGORITHM OF A2IMOC", "text": "Algorithm 1 shows a full description of A2IMOC. It follows the architecture of A2C (Mnih et al., 2016; Wu et al., 2017) and has multiple synchronous actors and a single learner. At each optimization step, we update πo, QO, and βo from online trajectories collected by actors. We update p̂(o|xs, xf) for estimating the gradient (11) and µ̂(o|xs) for entropy regularization (13).
To learn p̂ and µ̂ stably, we maintain a replay buffer $B_\mathcal{O}$ that stores option-transitions, implemented by a LIFO queue. Note that using older $o, x_s, x_f$ sampled from the buffer can introduce some bias in the learned p̂ and µ̂, since they depend on the current πo and βo. However, we found that this is not harmful when the capacity of the replay buffer is reasonably small.
We also add maximization of the entropy of βo to the loss function to prevent the termination probability from saturating at zero or one. The full objective of βo is then written as:
$$\log \hat{p}(o|x_s, x) - \log \hat{p}(o|x_s, x_f) + c_{H_\beta} \mathcal{H}(\beta^o(x)),$$
where $c_{H_\beta}$ is the weight of the entropy bonus.
C.2 IMPLEMENTATION OF PPIMOC
For continuous control tasks, we introduce PPIMOC on top of PPO, with the following modifications to A2IMOC.
Algorithm 1 Advantage-Actor InfoMax Option Critic (A2IMOC)
1: Given: Initial option-value QO, option-policy πo, and termination function βo.
2: Let BO be a replay buffer for storing option-transitions.
3: for k = 1, ... do
4: for i = 1, 2, ..., N do . Collect experiences from environment
5: Sample termination variable bi from βoi(xi)
6: if bi = 1 then
7: Store option transition xs, xf, oi to BO
8: end if
9: Choose next option oi+1 by ε-Greedy
10: Receive reward Ri and state xi+1, taking ai ∼ πoi+1(xi)
11: end for
12: for all xi in the trajectory do . Train option-policy, option-values, and termination function
13: Update πo(ai|xi) with PG via the UOAE advantage (12) and policy regularization (13)
14: Update QO(xi, o) via the optimistic TD error (12)
15: if oi has already terminated then
16: Update βo(xi) via (11) and the maximization of cHβH(βo(xi))
17: end if
18: end for
19: Train p̂ and µ̂ by option-transitions sampled from BO
20: end for
Upgoing Option-Advantage Estimation for GAE To use an upgoing option-advantage estimation (12) with the Generalized Advantage Estimator (GAE) (Schulman et al., 2015b) common with PPO, we introduce an upgoing general option advantage estimation (UGOAE). Letting δ denote the TD error corresponding to the marginalized option-state values, $\delta_t = R_t + \gamma V_\mathcal{O}(x_{t+1}) - V_\mathcal{O}(x_t)$, we write the GAE for the marginalized policy $\pi_\mu$ as $\hat{A}^o_\mu = \sum_{i=0}^{N} (\gamma\lambda)^i \delta_{t+i}$, where $\lambda$ is a coefficient. Supposing that $o_t$ terminates at the $t + k$ step and letting $\delta^o$ denote the TD error corresponding to an option-state value, $\delta^o_t = R_t + \gamma Q_\mathcal{O}(x_{t+1}, o) - Q_\mathcal{O}(x_t, o)$, we formulate UGOAE by:
$$\hat{A}^o_{UGOAE} = \begin{cases} \sum_{i=0}^{k} (\gamma\lambda)^i \delta^o_{t+i} + \underbrace{\max\big(\sum_{i=k+1}^{N} (\gamma\lambda)^i \delta_{t+i},\ 0\big)}_{\text{upgoing estimation}} & (k < N) \\ \sum_{i=0}^{N-1} (\gamma\lambda)^i \delta^o_{t+i} + (\gamma\lambda)^N \big(R_{t+N} + \gamma U_\mathcal{O}(x_{t+N+1}, o) - Q_\mathcal{O}(x_{t+N}, o)\big) & \text{(otherwise)} \end{cases} \qquad (18)$$
The idea is the same as UOAE (12) and is optimistic about the advantage after option termination.
Clipped βo Loss In our preliminary experiments, we found that performing multiple steps of optimization on the gradient (11) led to destructively large updates and resulted in the saturation of βo to zero or one. Hence, to perform PPO-style multiple updates on βo, we introduce a clipped loss for βo (a code sketch of this clipped update is given at the end of the appendix):
$$\nabla_{\theta_\beta} \mathrm{clip}\big(\ell_{\beta^o}(x) - \ell_{\beta^o_{\mathrm{old}}}(x), -\epsilon_\beta, \epsilon_\beta\big)\, \beta^o_{\mathrm{old}}(x) \big(\log p(o|x_s, x) - \log p(o|x_s, x_f)\big), \qquad (19)$$
where $\epsilon_\beta$ is a small coefficient, $\beta^o_{\mathrm{old}}$ is the βo before the update, and $\mathrm{clip}(x, -\epsilon, \epsilon) = \max(-\epsilon, \min(\epsilon, x))$. Clipping makes the gradient zero when βo is sufficiently different from $\beta^o_{\mathrm{old}}$, inhibiting too large updates." }, { "heading": "D EXPERIMENTAL DETAILS", "text": "" }, { "heading": "D.1 NETWORK ARCHITECTURE", "text": "Figure 8 illustrates the neural network architecture used in our experiments.
In Gridworld experiments, we used the same state encoder for all networks, and we found that this is effective for diversifying πo as an auxiliary loss (Jaderberg et al., 2017). However, in MuJoCo experiments, we found that sharing the state encoder can hurt the policy update because the magnitude of the βo loss is larger even when the clipped loss (19) is used. As a remedy, we used two encoders in MuJoCo experiments: one for πo and QO, and the other for βo, p̂, and µ̂.
In Gridworld experiments, we represented a state as an image and encoded it by a convolutional layer with 16 filters of size 4×4 with stride 1, followed by a convolutional layer with 16 filters of size 2×2 with stride 1, followed by a fully connected layer with 128 units. In MuJoCo experiments, we encode the state by two fully connected layers with 64 units. πo is parameterized as a Gaussian distribution with separate networks for standard deviations per option, similar to Schulman et al. (2015a). We used ReLU as the activation for all hidden layers and initialized networks by orthogonal (Saxe et al., 2014) initialization in all experiments. Unless otherwise noted, we used the default parameters in PyTorch (Paszke et al., 2019) 1.5.0." }, { "heading": "D.2 HYPERPARAMETERS", "text": "When evaluating agents, we used ε-Greedy for selecting options with $\epsilon_{\mathrm{opt}}$ and did not use deterministic evaluation (i.e., an agent samples actions from πo) in all experiments. We show the algorithm-specific hyperparameters of A2IMOC in Table 1. In Gridworld experiments, we used ε = 0.1 for AOC. Our AOC implementation is based on the released code3 and uses the truncated N-step advantage. Other parameters of AOC and A2C are the same as A2IMOC. We also show the hyperparameters of PPIMOC in Table 2. For PPOC, we used $c_{\mu_{\mathrm{ent}}} = 0.001$ for the weight of the entropy H(µ(x)). Our PPOC implementation is based on the released code4 and uses the N-step (not truncated) GAE for computing the advantage. PPOC and PPO share all other parameters with PPIMOC.
3https://github.com/jeanharb/a2oc_delib
4https://github.com/mklissa/PPOC" }, { "heading": "D.3 ENVIRONMENTAL DETAILS", "text": "In the Gridworld experiment, an agent can select four actions: go up, go down, go left, and go right. With probability 0.1, the agent takes a uniformly random action. If the agent reaches one of the goals, it receives a +1.0 or +2.0 reward. Otherwise, an action penalty of −0.002 is given. The maximum episode length is 100.
MuJoCo Point environments are implemented based on “PointMaze” in rllab (Duan et al., 2016) with some modifications, mainly around collision detection. The maximum episode length is 1000. In the Four Rooms task, an agent receives a +0.5 or +1.0 reward when it reaches a goal. Otherwise, an action penalty of −0.0001 is given. This reward structure is the same in the Billiard task: an agent receives a goal reward when the object ball reaches a goal; otherwise it receives a penalty." }, { "heading": "D.4 EARLY OPTION-POLICIES LEARNED IN GRIDWORLD FOUR ROOMS", "text": "Figure 9 shows early option-policies and termination probabilities in the Gridworld Four Rooms experiment. We can see that A2IMOC learned the most diverse option-policies." }, { "heading": "D.5 QUALITATIVE ANALYSIS OF POINT FOUR ROOMS EXPERIMENT", "text": "Figure 10 shows the visualizations of learned option-policies and termination functions in MuJoCo Point Four Rooms, averaged over 100 uniformly sampled states per position. Arrows show the expected moving directions, computed from rotations and option-policies.
Terminating regions and option-policies learned with PPIMOC are diverse. For example, option 1 tends to go down while option 2 tends to go right. In the sampled trajectory of PPIMOC, we can see that it mainly used option 1 but occasionally switched to option 0 and option 2 for reaching the goal, and switched to option 3 around the goal. In contrast, for reaching the goal, PPOC only used option 3, which does not terminate in any region. Options learned by our PPOC are almost the same: the termination probability is high around the upper-left corner and the option-policies point downward." }, { "heading": "D.6 ABLATION STUDIES", "text": "We conducted ablation studies with three variants of A2IMOC/PPIMOC:
• cHµ = 0: Do not use the policy regularization based on MI (13).
• N-step Advantage: Use the N-step advantage or N-step GAE instead of UOAE (12) or UGOAE (18).
• Truncated N-step Advantage: Compute the advantage ignoring future rewards instead of using UOAE or UGOAE.
Figure 11 shows all results in three tasks. We can see that UOAE is effective in all tasks, since both the N-step advantage and the truncated N-step advantage performed worse than UOAE. The policy regularization based on MI (13) is effective only in the Point Billiard lifelong learning task." }, { "heading": "D.7 NUMBER OF OPTIONS", "text": "Figure 12 shows the performance of IMOC with varying numbers of options. Two options performed worse in all experiments, and we need four or more options to make use of IMOC. However, when we increase the number of options to six and eight, we do not see any performance improvement over four options, despite our analysis in Appendix A that we need sufficiently many options to cover the state space. This would be an interesting problem for future work." } ]
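As promised in Appendix C.2, here is a minimal PyTorch sketch of the clipped termination update (19). It is an illustrative surrogate whose gradient matches the clipped gradient above, not the authors' released code; the tensor shapes and the snapshotting of the "old" logits are our assumptions.

```python
import torch

def clipped_beta_loss(beta_logits, old_beta_logits, log_p_mid, log_p_final, eps_beta):
    """PPO-style clipped surrogate for the termination update, cf. Eq. (19).

    beta_logits:     current logits l_{beta^o}(x), carrying gradient.
    old_beta_logits: logits snapshotted before the PPO epoch loop (no gradient).
    log_p_mid/final: log p_hat(o | x_s, x) and log p_hat(o | x_s, x_f).
    """
    delta = torch.clamp(beta_logits - old_beta_logits, -eps_beta, eps_beta)
    coef = (torch.sigmoid(old_beta_logits) * (log_p_mid - log_p_final)).detach()
    # clamp zeroes the gradient once the logit has moved more than eps_beta away
    return -(delta * coef).mean()
```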
2,020
null
SP:72e513837413282d60e4e2ab71276a0f7856e87e
[ "The paper proposes a variation of interaction networks (IN) called region proposal interaction networks. The key idea is to have a richer object-centric feature representation using ROI-Pooling to encode the objects for prediction and use convolution operators to help the IN handle the change in the dimensionality of the feature representation. The paper is well-written and is evaluated on several popular benchmarks. The proposed variations seem to have a considerable effect on the performance to offer significant gains on challenging benchmarks." ]
Learning long-term dynamics models is the key to understanding physical common sense. Most existing approaches to learning dynamics from visual input sidestep long-term predictions by resorting to rapid re-planning with short-term models. This not only requires such models to be super accurate but also limits them to tasks where an agent can continuously obtain feedback and take action at each step until completion. In this paper, we aim to leverage the ideas from success stories in visual recognition tasks to build object representations that can capture inter-object and object-environment interactions over a long range. To this end, we propose Region Proposal Interaction Networks (RPIN), which reason about each object’s trajectory in a latent region-proposal feature space. Thanks to the simple yet effective object representation, our approach outperforms prior methods by a significant margin, both in terms of prediction quality and the ability to plan for downstream tasks, and also generalizes well to novel environments. Code, pre-trained models, and more visualization results are available at our Website.
[ { "affiliations": [], "name": "Haozhi Qi" }, { "affiliations": [], "name": "Xiaolong Wang" }, { "affiliations": [], "name": "Deepak Pathak" } ]
[ { "authors": [ "Pulkit Agrawal", "Ashvin V Nair", "Pieter Abbeel", "Jitendra Malik", "Sergey Levine" ], "title": "Learning to poke by poking: Experiential learning of intuitive physics", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Frank Allgöwer", "Alex Zheng" ], "title": "Nonlinear model predictive control", "venue": null, "year": 2012 }, { "authors": [ "Mohammad Babaeizadeh", "Chelsea Finn", "Dumitru Erhan", "Roy H Campbell", "Sergey Levine" ], "title": "Stochastic variational video prediction", "venue": "ICLR, 2017", "year": 2017 }, { "authors": [ "Anton Bakhtin", "Laurens van der Maaten", "Justin Johnson", "Laura Gustafson", "Ross Girshick" ], "title": "Phyre: A new benchmark for physical reasoning", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Peter Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Jimenez Rezende" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "In NeurIPS, 2016", "year": 2016 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks. arXiv, 2018", "venue": null, "year": 2018 }, { "authors": [ "Kiran S Bhat", "Steven M Seitz", "Jovan Popović", "Pradeep K Khosla" ], "title": "Computing the physical parameters of rigid-body motion from video", "venue": "In ECCV,", "year": 2002 }, { "authors": [ "Apratim Bhattacharyya", "Mateusz Malinowski", "Bernt Schiele", "Mario Fritz" ], "title": "Long-term image boundary extrapolation", "venue": "arXiv, 2016", "year": 2016 }, { "authors": [ "Marcus A Brubaker", "Leonid Sigal", "David J Fleet" ], "title": "Estimating contact dynamics", "venue": "In ICCV. IEEE,", "year": 2009 }, { "authors": [ "Yuri Burda", "Harri Edwards", "Deepak Pathak", "Amos Storkey", "Trevor Darrell", "Alexei A. Efros" ], "title": "Large-scale study of curiosity-driven learning", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Eduardo F Camacho", "Carlos Bordons Alba" ], "title": "Model predictive control", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Michael B Chang", "Tomer Ullman", "Antonio Torralba", "Joshua B Tenenbaum" ], "title": "A compositional object-based approach to learning physical dynamics", "venue": "ICLR, 2016", "year": 2016 }, { "authors": [ "Baoyang Chen", "Wenmin Wang", "Jinzhuo Wang" ], "title": "Video imagination from a single image with transformation generation", "venue": "In ACMMM Workshop,", "year": 2017 }, { "authors": [ "Kenneth James Williams Craik" ], "title": "The nature of explanation", "venue": "CUP Archive,", "year": 1952 }, { "authors": [ "Jifeng Dai", "Haozhi Qi", "Yuwen Xiong", "Yi Li", "Guodong Zhang", "Han Hu", "Yichen Wei" ], "title": "Deformable convolutional networks", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Marc Deisenroth", "Carl E Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In ICML,", "year": 2011 }, { "authors": [ "Emily Denton", "Rob Fergus" ], "title": "Stochastic video generation with a learned prior", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Frederik Ebert", "Sudeep Dasari", "Alex X Lee", "Sergey Levine", "Chelsea Finn" ], "title": "Robustness via retrying: Closed-loop robotic manipulation with self-supervised learning. 
arXiv, 2018a", "venue": null, "year": 2018 }, { "authors": [ "Frederik Ebert", "Chelsea Finn", "Sudeep Dasari", "Annie Xie", "Alex Lee", "Sergey Levine" ], "title": "Visual foresight: Model-based deep reinforcement learning for vision-based robotic control. arXiv, 2018b", "venue": null, "year": 2018 }, { "authors": [ "Sebastien Ehrhardt", "Aron Monszpart", "Niloy J Mitra", "Andrea Vedaldi" ], "title": "Learning a physical long-term predictor", "venue": "arXiv, 2017", "year": 2017 }, { "authors": [ "Chelsea Finn", "Sergey Levine" ], "title": "Deep visual foresight for planning robot motion", "venue": "In ICRA,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Ian Goodfellow", "Sergey Levine" ], "title": "Unsupervised learning for physical interaction through video prediction", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Katerina Fragkiadaki", "Pulkit Agrawal", "Sergey Levine", "Jitendra Malik" ], "title": "Learning visual predictive models of physics for playing billiards", "venue": null, "year": 2015 }, { "authors": [ "Rohit Girdhar", "Laura Gustafson", "Aaron Adcock", "Laurens van der Maaten" ], "title": "Forward prediction for physical reasoning", "venue": "arXiv, 2020", "year": 2020 }, { "authors": [ "Ross Girshick" ], "title": "Fast R-CNN", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Georgia Gkioxari", "Jitendra Malik", "Justin Johnson" ], "title": "Mesh R-CNN", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Oliver Groth", "Fabian Fuchs", "Ingmar Posner", "Andrea Vedaldi" ], "title": "Shapestacks: Learning vision-based physical intuition for generalised object stacking", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Radek Grzeszczuk", "Demetri Terzopoulos", "Geoffrey Hinton" ], "title": "Neuroanimator: Fast neural network emulation and control of physics-based models", "venue": "In SIGGRAPH,", "year": 1998 }, { "authors": [ "Jessica Hamrick", "Peter Battaglia", "Joshua B Tenenbaum" ], "title": "Internal physics models guide probabilistic judgments about object dynamics", "venue": "In COGSCI,", "year": 2011 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask R-CNN", "venue": "In ICCV, 2017", "year": 2017 }, { "authors": [ "Michael Janner", "Sergey Levine", "William T Freeman", "Joshua B Tenenbaum", "Chelsea Finn", "Jiajun Wu" ], "title": "Reasoning about physical interactions with object-oriented prediction and planning", "venue": "ICLR, 2019", "year": 2019 }, { "authors": [ "Dinesh Jayaraman", "Frederik Ebert", "Alexei A Efros", "Sergey Levine" ], "title": "Time-agnostic prediction: Predicting predictable video frames", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Xu Jia", "Bert De Brabandere", "Tinne Tuytelaars", "Luc V Gool" ], "title": "Dynamic filter networks", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Zhaoyin Jia", "Andrew C Gallagher", "Ashutosh Saxena", "Tsuhan Chen" ], "title": "3d reasoning from blocks to stability", "venue": null, "year": 2015 }, { "authors": [ "Michael I Jordan", "David E Rumelhart" ], "title": "Forward models: Supervised learning with a distal teacher", "venue": "Cognitive science,", "year": 1992 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "ICLR,", "year": 2014 }, { "authors": [ "Thomas Kipf", "Elise van der Pol", "Max Welling" ], "title": "Contrastive 
learning of structured world models", "venue": "ICLR, 2020", "year": 2020 }, { "authors": [ "James R Kubricht", "Keith J Holyoak", "Hongjing Lu" ], "title": "Intuitive physics: Current research and controversies", "venue": "Trends in cognitive sciences,", "year": 2017 }, { "authors": [ "Alex X Lee", "Richard Zhang", "Frederik Ebert", "Pieter Abbeel", "Chelsea Finn", "Sergey Levine" ], "title": "Stochastic adversarial video prediction", "venue": "arXiv, 2018", "year": 2018 }, { "authors": [ "Adam Lerer", "Sam Gross", "Rob Fergus" ], "title": "Learning physical intuition of block towers by example", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Wenbin Li", "Seyedmajid Azimi", "Aleš Leonardis", "Mario Fritz" ], "title": "To fall or not to fall: A visual approach to physical stability prediction. arXiv, 2016a", "venue": null, "year": 2016 }, { "authors": [ "Wenbin Li", "Aleš Leonardis", "Mario Fritz" ], "title": "Visual stability prediction and its application to manipulation", "venue": "AAAI, 2016b", "year": 2016 }, { "authors": [ "Yunzhu Li", "Jiajun Wu", "Jun-Yan Zhu", "Joshua B Tenenbaum", "Antonio Torralba", "Russ Tedrake" ], "title": "Propagation networks for model-based control under partial observation", "venue": "In ICRA,", "year": 2019 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollár", "Ross Girshick", "Kaiming He", "Bharath Hariharan", "Serge Belongie" ], "title": "Feature pyramid networks for object detection", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Ziwei Liu", "Raymond A Yeh", "Xiaoou Tang", "Yiming Liu", "Aseem Agarwala" ], "title": "Video frame synthesis using deep voxel flow", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": null, "year": 2016 }, { "authors": [ "Michael McCloskey" ], "title": "Intuitive physics", "venue": "Scientific american,", "year": 1983 }, { "authors": [ "Roozbeh Mottaghi", "Hessam Bagherinezhad", "Mohammad Rastegari", "Ali Farhadi" ], "title": "Newtonian scene understanding: Unfolding the dynamics of objects in static images", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Roozbeh Mottaghi", "Mohammad Rastegari", "Abhinav Gupta", "Ali Farhadi" ], "title": "what happens if...” learning to predict the effect of forces in images", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Alejandro Newell", "Kaiyu Yang", "Jia Deng" ], "title": "Stacked hourglass networks for human pose estimation", "venue": "In ECCV, 2016", "year": 2016 }, { "authors": [ "Junhyuk Oh", "Xiaoxiao Guo", "Honglak Lee", "Richard L Lewis", "Satinder Singh" ], "title": "Action-conditional video prediction using deep networks in atari games", "venue": "In NeurIPS,", "year": 2015 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": null, "year": 2017 }, { "authors": [ "Deepak Pathak", "Parsa Mahmoudieh", "Guanghao Luo", "Pulkit Agrawal", "Dian Chen", "Yide Shentu", "Evan Shelhamer", "Jitendra Malik", "Alexei A Efros", "Trevor Darrell" ], "title": "Zero-shot visual imitation", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": 
"U-net: Convolutional networks for biomedical image segmentation", "venue": "In MICCAI,", "year": 2015 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Jonathan Godwin", "Tobias Pfaff", "Rex Ying", "Jure Leskovec", "Peter W Battaglia" ], "title": "Learning to simulate complex physics with graph networks", "venue": null, "year": 2020 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Network,", "year": 2009 }, { "authors": [ "Russell Stewart", "Stefano Ermon" ], "title": "Label-free supervision of neural networks with physics and domain knowledge", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Sergey Tulyakov", "Ming-Yu Liu", "Xiaodong Yang", "Jan Kautz" ], "title": "Mocogan: Decomposing motion and content for video generation", "venue": null, "year": 2017 }, { "authors": [ "Rishi Veerapaneni", "John D Co-Reyes", "Michael Chang", "Michael Janner", "Chelsea Finn", "Jiajun Wu", "Joshua B Tenenbaum", "Sergey Levine" ], "title": "Entity abstraction in visual model-based reinforcement learning", "venue": "CoRL,", "year": 2019 }, { "authors": [ "Ruben Villegas", "Jimei Yang", "Seunghoon Hong", "Xunyu Lin", "Honglak Lee" ], "title": "Decomposing motion and content for natural video sequence prediction. ICLR, 2017a", "venue": null, "year": 2017 }, { "authors": [ "Ruben Villegas", "Jimei Yang", "Yuliang Zou", "Sungryull Sohn", "Xunyu Lin", "Honglak Lee" ], "title": "Learning to generate long-term future via hierarchical prediction", "venue": null, "year": 2017 }, { "authors": [ "Carl Vondrick", "Hamed Pirsiavash", "Antonio Torralba" ], "title": "Generating videos with scene dynamics", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Niklas Wahlström", "Thomas B Schön", "Marc Peter Deisenroth" ], "title": "From pixels to torques: Policy learning with deep dynamical models", "venue": null, "year": 2015 }, { "authors": [ "Jacob Walker", "Carl Doersch", "Abhinav Gupta", "Martial Hebert" ], "title": "An uncertain future: Forecasting from static images using variational autoencoders", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Jacob Walker", "Kenneth Marino", "Abhinav Gupta", "Martial Hebert" ], "title": "The pose knows: Video forecasting by generating pose futures", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Nicholas Watters", "Daniel Zoran", "Theophane Weber", "Peter Battaglia", "Razvan Pascanu", "Andrea Tacchetti" ], "title": "Visual interaction networks: Learning a physics simulator from video", "venue": "In NeurIPS, 2017", "year": 2017 }, { "authors": [ "Jiajun Wu", "Ilker Yildirim", "Joseph J Lim", "Bill Freeman", "Josh Tenenbaum" ], "title": "Galileo: Perceiving physical object properties by integrating a physics engine with deep learning", "venue": "NeurIPS,", "year": 2015 }, { "authors": [ "Jiajun Wu", "Joseph J Lim", "Hongyi Zhang", "Joshua B Tenenbaum", "William T Freeman" ], "title": "Physics 101: Learning physical object properties from unlabeled videos", "venue": "In BMVC,", "year": 2016 }, { "authors": [ "Jiajun Wu", "Erika Lu", "Pushmeet Kohli", "Bill Freeman", "Josh Tenenbaum" ], "title": "Learning to see physics via visual de-animation", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Tianfan Xue", "Jiajun Wu", "Katherine Bouman", "Bill Freeman" ], "title": "Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Tian 
Ye", "Xiaolong Wang", "James Davidson", "Abhinav Gupta" ], "title": "Interpretable intuitive physics model", "venue": null, "year": 2018 }, { "authors": [ "Yufei Ye", "Maneesh Singh", "Abhinav Gupta", "Shubham Tulsiani" ], "title": "Compositional video prediction", "venue": "ICCV, 2019", "year": 2019 }, { "authors": [ "Kexin Yi", "Chuang Gan", "Yunzhu Li", "Pushmeet Kohli", "Jiajun Wu", "Antonio Torralba", "Joshua B Tenenbaum" ], "title": "Clevrer: Collision events for video representation and reasoning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "OM", "CVP" ], "title": "We choose the hourglass network (Newell et al., 2016) as the image feature extractor. Given an input image, the hourglass network firstly applies a 7×7 stride-2 convolution, three residual", "venue": null, "year": 2016 }, { "authors": [ "Ye" ], "title": "2019), we randomly sample 100 outputs from our model, and select the best (in terms of the distance to ground-truth) of them as our model’s output. B PLANNING DETAILS Given an initial state (represented by an image) and a goal, we aim to produce an action that can lead to the goal", "venue": null, "year": 2019 }, { "authors": [ "imagination Fragkiadaki" ], "title": "Firstly, we select a candidate action a from a candidate action set A. Then we generate the input images I. For SimB, we train a forward model take the initial image and the action embedding to generate object positions for the next 3 steps. Then we use the simulator to generate the 3 images. For PHYRE, we convert the action to the red ball using the simulator", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "As argued by Kenneth Craik, if an organism carries a model of external reality and its own possible actions within its head, it is able to react in a much fuller, safer and more competent manner to emergencies which face it (Craik, 1952). Indeed, building prediction models has been long studied in computer vision and intuitive physics. In computer vision, most approaches make predictions in pixel-space (Denton & Fergus, 2018; Lee et al., 2018; Ebert et al., 2018b; Jayaraman et al., 2019; Walker et al., 2016), which ends up capturing the optical flow (Walker et al., 2016) and is difficult to generalize to long-horizon. In intuitive physics, a common approach is to learn the dynamics directly in an abstracted state space of objects to capture Newtonian physics (Battaglia et al., 2016; Chang et al., 2016; Sanchez-Gonzalez et al., 2020). However, the states end up being detached from raw sensory perception. Unfortunately, these two extremes have barely been connected. In this paper, we argue for a middle-ground to treat images as a window into the world, i.e., objects exist but can only be accessed via images. Images are neither to be used for predicting pixels nor to be isolated from dynamics. We operationalize it by learning to extract a rich state representation directly from images and build dynamics models using the extracted state representations.\nIt is difficult to make predictions, especially about the future — Niels Bohr\nContrary to Niels Bohr, predictions are, in fact, easy if made only for the short-term. Predictions that are indeed difficult to make and actually matter are the ones made over the long-term. Consider the example of “Three-cushion Billiards” in Figure 1. The goal is to hit the cue ball in such a way that it touches the other two balls and contacts the wall thrice before hitting the last ball. This task is extremely challenging even for human experts because the number of successful trajectories is very sparse. Do players perform classical Newtonian physics calculations to obtain the best action before each shot, or do they just memorize the solution by practicing through exponentially many configurations? Both extremes are not impossible, but often impractical. Players rather build a physical understanding by experience (McCloskey, 1983; Kubricht et al., 2017) and plan by making intuitive, yet accurate predictions in the long-term.\nLearning such a long-term prediction model is arguably the “Achilles’ heel” of modern machine learning methods. Current approaches on learning physical dynamics of the world cleverly side-step the long-term dependency by re-planning at each step via model-predictive control (MPC) (Allgöwer & Zheng, 2012; Camacho & Alba, 2013). The common practice is to train short-term dynamical models (usually 1-step) in a simulator. However, small errors in short-term predictions can accumulate\nover time in MPC. Hence, in this work, we focus primarily on the long-term aspect of prediction by just considering environments, such as the three-cushion billiards example or the PHYRE (Bakhtin et al., 2019) in Figure 1, where an agent is allowed to take only one action in the beginning so as to preclude any scope of re-planning.\nHow to learn an accurate dynamics model has been a popular research topic for years. 
Recently, a series of works has tried to represent video frames using object-centric representations (Battaglia et al., 2016; Watters et al., 2017; Chang et al., 2016; Janner et al., 2019; Ye et al., 2019; Kipf et al., 2020). However, these methods either operate directly in the state space or ignore the environment information, neither of which is practical in real-world scenarios. In contrast, our objective is to build a data-driven prediction model that can both: (a) model long-term interactions over time to plan successfully for new instances, and (b) work from raw visual input in complex real-world environments. Therefore, the question we ask is: how do we extract such an effective and flexible object representation and perform long-term predictions?
We propose the Region Proposal Interaction Network (RPIN), which contains two key components. First, we leverage the region-of-interest pooling (RoIPooling) operator (Girshick, 2015) to extract object feature maps from the frame-level feature. Object feature extraction based on region proposals has achieved huge success in computer vision (Girshick, 2015; He et al., 2017; Dai et al., 2017; Gkioxari et al., 2019), and yet it is surprisingly under-explored in the field of intuitive physics. By using RoIPooling, each object feature contains not only its own information but also the context of the environment. Second, we extend the Interaction Network and propose Convolutional Interaction Networks, which perform interaction reasoning on the extracted RoI features. Interaction Networks were originally proposed by Battaglia et al. (2016), where the interaction reasoning is conducted via MLPs. By changing MLPs to convolutions, we can effectively utilize the spatial information of an object and make accurate future predictions of object locations and shape changes.
Notably, our approach is simple, yet it outperforms state-of-the-art methods on both simulated and real datasets. In Section 5, we thoroughly evaluate our approach across four datasets to study scientific questions related to: a) prediction quality, b) generalization to time horizons longer than training, c) generalization to unseen configurations, and d) planning ability for downstream tasks. Our method reduces the prediction error by 75% in the complex PHYRE environment and achieves state-of-the-art performance on the PHYRE reasoning benchmark." }, { "heading": "2 RELATED WORK", "text": "Physical Reasoning and Intuitive Physics. Learning models that can predict the changing dynamics of the scene is the key to building physical common sense. Such models date back to "NeuroAnimator" (Grzeszczuk et al., 1998) for simulating articulated objects. Several methods in recent years have leveraged deep networks to build data-driven models of intuitive physics (Bhattacharyya et al., 2016; Ehrhardt et al., 2017; Fragkiadaki et al., 2015; Chang et al., 2016; Stewart & Ermon, 2017). However, these methods either require access to the underlying ground-truth state space or do not scale to long-range prediction due to the absence of interaction reasoning. A more generic yet explicit approach has been to leverage graph neural networks (Scarselli et al., 2009) to capture interactions between entities in a scene (Battaglia et al., 2018; Chang et al., 2016). Closest to our approach are interaction models that scale to pixels and reason about object interaction (Watters et al., 2017; Ye et al., 2019).
However, these approaches either reason about object crops with no surrounding context or can only deal with a predetermined number and ordering of objects. Concurrent work (Girdhar et al., 2020) studies using prediction for physical reasoning, but their prediction model operates either in the state space or in the pixel space.
Other common ways to measure physical understanding are to predict judgments about the future given a scene image, e.g., predicting the stability of a configuration (Groth et al., 2018; Jia et al., 2015; Lerer et al., 2016; Li et al., 2016a;b). Several hybrid methods take a data-driven approach to estimate Newtonian parameters from raw images (Brubaker et al., 2009; Wu et al., 2016; Bhat et al., 2002; Wu et al., 2015), or model Newtonian physics via latent variables to predict motion trajectories in images (Mottaghi et al., 2016a;b; Ye et al., 2018). An extreme example is to use an actual simulator to do inference over objects (Hamrick et al., 2011). The reliance on explicit Newtonian physics makes these methods infeasible on real-world data and in un-instrumented settings. In contrast, we take into account the context around each object via RoIPooling and explicitly model object interactions with each other and with the environment without relying on Newtonian physics; hence, our method scales easily to real videos for long-range prediction.
Video Prediction. Instead of modeling physics from raw images, an alternative is to treat visual reasoning as an image translation problem. This approach has been adopted in the line of work that falls under video prediction. The most common theme is to leverage latent-variable models for predicting the future (Lee et al., 2018; Denton & Fergus, 2018; Babaeizadeh et al., 2017). Predicting pixels is difficult, so several methods leverage auxiliary information like background/foreground (Villegas et al., 2017a; Tulyakov et al., 2017; Vondrick et al., 2016), optical flow (Walker et al., 2016; Liu et al., 2017), appearance transformations (Jia et al., 2016; Finn et al., 2016; Chen et al., 2017; Xue et al., 2016), etc. These inductive biases help over short intervals but do not capture the long-range behavior needed in several scenarios, like playing billiards, due to the lack of explicit reasoning. Some approaches can scale to relatively longer horizons but are domain-specific, e.g., a pre-defined human-pose space (Villegas et al., 2017b; Walker et al., 2017). In contrast, our goal is to model long-term interactions not only for prediction but also to facilitate planning for downstream tasks.
Learning Dynamics Models. Unlike video prediction, dynamics models take actions into account when predicting the future, and are also known as forward models (Jordan & Rumelhart, 1992). Learning these forward dynamics models from images has recently become popular in robotics for both specific tasks (Wahlström et al., 2015; Agrawal et al., 2016; Oh et al., 2015; Finn et al., 2016) and exploration (Pathak et al., 2017; Burda et al., 2019). In contrast to these methods, where a deep network directly predicts the whole outcome, we leverage our proposed region-proposal interaction module to capture each object's trajectory explicitly, learning long-range forward dynamics as well as video prediction models.
Planning via Learned Models. Leveraging models to plan is the standard approach in control for obtaining task-specific behavior. A common approach is to re-plan after each action via Model Predictive Control (Allgöwer & Zheng, 2012; Camacho & Alba, 2013; Deisenroth & Rasmussen, 2011).
Scaling such models and planning in a high-dimensional space is a challenging problem. With deep learning, several approaches have shown promising results on real-world robotic tasks (Finn et al., 2016; Finn & Levine, 2017; Agrawal et al., 2016; Pathak et al., 2018). However, the horizon of these approaches is still very short, and re-planning over the long term drifts away in practice. Some methods try to alleviate this issue via object modeling (Janner et al., 2019; Li et al., 2019) or skip connections (Ebert et al., 2018a) but assume the models are trained with state-action pairs. In contrast to prior works, where a short-range dynamics model is unrolled in time, we learn our long-range models from passive data and then couple them with short-range forward models to infer actions during planning." }, { "heading": "3 REGION PROPOSAL INTERACTION NETWORKS", "text": "Our model takes N video frames and the corresponding object bounding boxes as inputs, and outputs the objects' bounding boxes and masks for the future T timesteps. The overall model structure is illustrated in Figure 2.
For each frame, we first extract image features using a ConvNet. Then we apply RoIPooling (Girshick, 2015; He et al., 2017) to obtain object-centric visual features. These features are then forwarded to our Convolutional Interaction Networks (CIN) to perform interaction reasoning between objects and are used to predict future object bounding boxes and masks. The whole pipeline is trained end-to-end by minimizing the loss between the predicted outputs and the ground truth." }, { "heading": "3.1 OBJECT-CENTRIC REPRESENTATION", "text": "We apply the hourglass network (Newell et al., 2016) to extract image features. Given an input RGB image I ∈ R^{3×H×W}, the hourglass network first downsamples the input via a 2-strided convolution and 2-strided max-pooling layers, and then refines the representation with U-Net-like encoder-decoder modules (Ronneberger et al., 2015). The hourglass network provides features with a large receptive field and a fine spatial resolution (4 times smaller than the input image), both of which are crucial for accurately modeling object-object and object-environment interactions. We denote the number of output channels of the feature map by d.
On top of this feature map, we use RoIPooling (Girshick, 2015) to extract d × h × w object-centric features. RoIPooling takes the feature map and an object bounding box as input; the region corresponding to the bounding box on the feature map is cropped and resized to a fixed spatial size (denoted h × w). We use x_i^t to denote the feature of the i-th object at the t-th timestep. Such a feature representation differs from previous methods in two respects: 1) it is extracted from image features with a large receptive field, which provides plenty of context information around the object; 2) it is a 3-dimensional feature map rather than a single vector. It can therefore represent the objects' shapes, while a vector representation cannot because the spatial dimensions are flattened. For concreteness, a minimal sketch of this feature extraction is given below.
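The sketch below is an illustrative reconstruction rather than the authors' released code: it assumes a PyTorch implementation, `hourglass` is a hypothetical stand-in for the stride-4 backbone of Newell et al. (2016), and torchvision's roi_pool (roi_align would work equally well) plays the role of RoIPooling.
```python
# Minimal sketch of the object-centric feature extraction (Section 3.1).
# Assumptions: PyTorch/torchvision; `hourglass` is a hypothetical stand-in
# for the hourglass backbone; h = w = 5 is an arbitrary illustrative choice.
import torch
from torchvision.ops import roi_pool

def extract_object_features(hourglass, images, boxes, hw=(5, 5)):
    # images: (B, 3, H, W); boxes: list of B tensors, each (m, 4) holding
    # per-object (x1, y1, x2, y2) in input-image coordinates.
    feat = hourglass(images)  # (B, d, H/4, W/4): large receptive field, stride 4
    # spatial_scale=0.25 maps image-space boxes onto the stride-4 feature map;
    # each RoI is cropped and resized to a fixed d x h x w feature map.
    return roi_pool(feat, boxes, output_size=hw, spatial_scale=0.25)
    # -> (total number of objects in the batch, d, h, w)
```
Because the RoI is cut out of a feature map with a large receptive field, each pooled feature already carries information about the surroundings of the object, unlike an image-patch crop.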
" }, { "heading": "3.2 CONVOLUTIONAL INTERACTION NETWORKS", "text": "To better utilize the spatial information of our object feature maps, we propose Convolutional Interaction Networks, an extension of Interaction Network operators to 3-dimensional tensors. In this section, we first briefly review the original Interaction Network (IN) (Battaglia et al., 2016; Watters et al., 2017) and then introduce the proposed Convolutional Interaction Networks (CIN).
The original Interaction Network is a general-purpose, data-driven method to model and predict physical dynamics. It takes the feature vectors of m objects at timestep t, X = {x_1^t, x_2^t, ..., x_m^t | x_i^t ∈ R^d}, and performs object reasoning f_O as well as relational reasoning f_R on these features. Specifically, the update rule of the object features can be written as:
e_i^t = f_A( f_O(x_i^t) + Σ_{j≠i} f_R(x_i^t, x_j^t) ),  z_i^t = f_Z(x_i^t, e_i^t),  x_i^{t+1} = f_P(z_i^t, z_i^{t−1}, ..., z_i^{t−k}).  (1)
In the above equation, f_A computes the combined effect of the object and relational reasoning results, f_Z combines the original object state with this reasoning effect, and f_P predicts the future state from one or more previous object states. In IN, each of f_O, f_R, f_A, f_Z, f_P is instantiated as a fully-connected layer with learnable weights.
Convolutional Interaction Networks. The input of CIN is m object feature maps at timestep t: X = {x_1^t, x_2^t, ..., x_m^t | x_i^t ∈ R^{d×h×w}}. The high-level update rule is the same as in IN, but the key difference is that we use convolutions to instantiate f_O, f_R, f_A, f_Z, f_P. Such an instantiation is crucial for utilizing the spatial information encoded in our object feature maps and for effectively reasoning about future object states. Specifically, we have
f_R(x_i^t, x_j^t) = W_R ∗ [x_i^t, x_j^t],  f_O(x_i^t) = W_O ∗ x_i^t,  f_A(x_i^t) = W_A ∗ x_i^t,  f_Z(x_i^t, e_i^t) = W_Z ∗ [x_i^t, e_i^t],  (2)
f_P(z_i^t, z_i^{t−1}, ..., z_i^{t−k}) = W_P ∗ [z_i^t, z_i^{t−1}, ..., z_i^{t−k}].  (3)
These functions can be plugged into Equation 1 to recover the full update. In the above equations, ∗ denotes the convolution operator, and [·, ·] denotes concatenation along the channel dimension. W_R, W_Z ∈ R^{d×2d×3×3}, W_O, W_A ∈ R^{d×d×3×3}, and W_P ∈ R^{d×kd×3×3} are learnable convolution kernels of size 3 × 3. We also add a ReLU activation after each of the convolutional operators." }, { "heading": "3.3 LEARNING REGION PROPOSAL INTERACTION NETWORKS (RPIN)", "text": "Our model predicts the future bounding box and (optionally) mask of each object. Given the predicted feature x_i^{t+1}, we use a simple two-layer MLP decoder to estimate its bounding box coordinates and mask. The bounding box decoder takes a flattened object feature map as input; it first projects the feature to a d-dimensional vector and then outputs a 4-d vector representing the center location and size of the box. The mask decoder has the same architecture but 21×21 output channels, representing a 21×21 binary mask inside the corresponding bounding box. We use the ℓ2 loss for bounding box prediction. For mask prediction, we use a spatial cross-entropy loss that sums the cross-entropy values over the 21×21 predicted positions. The objective can be written as:
L_p = Σ_{t=1}^{T} λ_t Σ_{i=1}^{n} ( ||B̂_i^{t+1} − B_i^{t+1}||_2^2 + CE(M̂_i^{t+1}, M_i^{t+1}) ).  (4)
We use a discounted loss during training (Watters et al., 2017) to mitigate the effect of inaccurate predictions at the early training stage, where λ_t is the discount factor. A short illustrative sketch of one CIN step is given below.
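The following is a minimal PyTorch-style sketch of a single CIN step, reconstructed from Eqs. (1)-(3); it is not the authors' released implementation, and details such as the exact placement of ReLUs and the number of stacked past states consumed by f_P are assumptions.
```python
import torch
import torch.nn as nn

class CINStep(nn.Module):
    # One Convolutional Interaction Network step (Eqs. 1-3): every reasoning
    # function is a 3x3 convolution, with a ReLU after each convolution.
    def __init__(self, d, k):
        super().__init__()
        self.f_R = nn.Conv2d(2 * d, d, 3, padding=1)   # W_R: relational reasoning
        self.f_O = nn.Conv2d(d, d, 3, padding=1)       # W_O: object reasoning
        self.f_A = nn.Conv2d(d, d, 3, padding=1)       # W_A: effect aggregation
        self.f_Z = nn.Conv2d(2 * d, d, 3, padding=1)   # W_Z: state update
        self.f_P = nn.Conv2d(k * d, d, 3, padding=1)   # W_P: prediction from the
        self.act = nn.ReLU()                           # k most recent z states

    def forward(self, x, z_hist):
        # x: (m, d, h, w) object feature maps at time t;
        # z_hist: list of k-1 earlier z tensors, each (m, d, h, w).
        m = x.shape[0]
        z = []
        for i in range(m):
            rel = sum(self.act(self.f_R(torch.cat([x[i:i+1], x[j:j+1]], dim=1)))
                      for j in range(m) if j != i)
            e = self.act(self.f_A(self.act(self.f_O(x[i:i+1])) + rel))      # Eq. 1
            z.append(self.act(self.f_Z(torch.cat([x[i:i+1], e], dim=1))))
        z = torch.cat(z, dim=0)
        x_next = self.act(self.f_P(torch.cat([z] + z_hist, dim=1)))         # Eq. 3
        return x_next, z
```
The decoders of Section 3.3 then read bounding boxes and masks off x_next; rolling this step forward T times and applying the discounted objective of Eq. (4) trains the whole pipeline end-to-end.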
" }, { "heading": "4 EXPERIMENTAL SETUP", "text": "Datasets. We evaluate our method's prediction performance on four different datasets and demonstrate the ability to perform downstream physical reasoning and planning tasks on two of them. We briefly introduce the four datasets below; full dataset details are in the appendix.
PHYRE: We use the BALL-tier of the PHYRE benchmark (Bakhtin et al., 2019). In this dataset, we set T = 5. We treat all moving balls or jars as objects and the other, static bodies as background. The benchmark provides two evaluation settings: 1) within-task generalization (PHYRE-W), where the test environments contain the same object categories but with different sizes and positions; and 2) cross-task generalization (PHYRE-C), where the test environments contain objects and context never seen during training. We report prediction results on the official fold 0 and physical reasoning performance averaged over 10 folds.
ShapeStacks (SS): This dataset contains multiple stacked objects (cubes, cylinders, or balls) (Ye et al., 2019). In this dataset, we set T = 15. We evaluate all baselines and our method following the protocol of Ye et al. (2019), with uncertainty estimation incorporated (see the appendix for details).
Real World Billiards (RealB): We collect "Three-cushion Billiards" videos of professional games with different viewpoints, downloaded from YouTube. There are 62 training videos with 18,306 frames and 5 test videos with 1,995 frames. The bounding box annotations are from an off-the-shelf ResNet-101 FPN detector (Lin et al., 2017) pretrained on COCO (Lin et al., 2014) and fine-tuned on a subset of 30 images from our dataset. We manually filtered out wrong detections. In this dataset, we set T = 20.
Simulation Billiards (SimB): We create a simulated billiards environment in which three differently colored balls of radius 2 are randomly placed in a 64×64 image. At the starting point, one ball is moving with a randomly sampled velocity. We generate 1,000 video sequences for training and 1,000 video sequences for testing, with 100 frames per sequence. We also evaluate the ability to generalize to more balls and differently sized balls in the experiments section. In this dataset, we set T = 20.
For PHYRE and SS, it is possible to infer the future from just the initial configuration, so we set N = 1 on these two datasets. For SimB and RealB, the objects are moving on a flat table, so we cannot infer the future trajectories from a single image; in this setting, we set N = 4 to infer the velocity/acceleration of the objects. We set k = N on all datasets because we only have access to N features when making the prediction at t = N + 1. We predict object masks for PHYRE and ShapeStacks, and only bounding boxes for the billiards datasets.
Baselines. There is a large body of work studying how to estimate objects' states and predict their dynamics from raw image inputs. These methods fall into the following categories:
VIN: Instead of using object-centric spatial pooling to extract object features, VIN uses a ConvNet to globally encode an image into a fixed d×m dimensional vector, with different channels of the vector assigned to different objects (Kipf et al., 2020; Watters et al., 2017). This approach requires specifying a fixed number of objects and a fixed mapping between feature channels and object identity, which makes it impossible to generalize to different numbers of objects and different appearances.
Object Masking (OM): This approach takes one image and m object bounding boxes or masks as input (Wu et al., 2017; Veerapaneni et al., 2019; Janner et al., 2019). For each proposal, only the pixels inside the object proposal are kept while the others are set to 0, leading to m masked images.
This approach assumes no background information is needed and thus fails to predict accurate trajectories in complex environments such as PHYRE. It also costs m times the computational resources.
CVP: The object feature is extracted by cropping the object image patch and forwarding it to an encoder (Ye et al., 2019; Yi et al., 2020). Since the object features are extracted directly from the raw image patches, the context information is again ignored. We re-implement CVP's feature extraction method within our framework and show that we can reproduce their results in Section A.2." }, { "heading": "5 EVALUATION RESULTS: PREDICTION, GENERALIZATION, AND PLANNING", "text": "We organize this section and analyze our results around four scientific questions, related to prediction quality, generalization ability across time and environment configurations, and the ability to plan actions for downstream tasks." }, { "heading": "5.1 HOW ACCURATE IS THE PREDICTED DYNAMICS?", "text": "To evaluate how well the world dynamics is modeled, we first report the average prediction errors on the test split over the same time horizon the model is trained on, i.e., t ∈ [0, T_train]. The prediction error is calculated as the squared ℓ2 distance between the predicted and ground-truth object centers. The results are shown in Table 1 (left half).
First, we show the effectiveness of our proposed RoI features by comparing VIN, OM, CVP, and Ours (IN) in Table 1. These four entries use the same backbone network and interaction network modules; only the visual encoder is changed. Among them, VIN cannot even be trained on PHYRE-W since it cannot handle a varying number of objects. The OM method performs slightly better than the other baselines since it also explicitly models objects via instance masking. CVP cannot produce reasonable results on any of the datasets except SS. The reason is that in the PHYRE and billiards environments, cropped images are unaware of the environment and of the relative positions and poses of different objects, making accurate prediction impossible. In SS, since the objects are large and very close to each other, the cropped image regions already provide enough context. In contrast, our RoI features implicitly encode the context and environment information, and our method performs much better than all baselines. On the very challenging PHYRE dataset, our prediction error is only 1/4 that of the best baseline. On the other three, easier datasets, the gap is not as large since the environments are less complicated, but our method still achieves more than 10% improvement. These results clearly demonstrate the advantage of using rich state representations.
Second, we show the effectiveness of our proposed Convolutional Interaction Network by comparing Ours (IN) and Ours (CIN) in Table 1. With every other component the same, changing the vector-form representation to spatial feature maps and using convolutions to model the interactions further improves performance by 10%–40%. This result shows that our Convolutional Interaction Network better utilizes the spatial information encoded in the object feature maps." }, { "heading": "5.2 DOES LEARNED MODEL GENERALIZE TO LONGER HORIZON THAN TRAINING?", "text": "Generalizing to longer horizons is a challenging task. In this section, we compare performance when predicting trajectories beyond the training horizon. In Table 1 (right half), we report the prediction error for t ∈ [T_train, 2×T_train].
The results in this setting are consistent with what we found in Section 5.1. Our method still achieves the best performance among all baselines. Specifically, on all datasets except SimB, we reduce the error by more than 30%. On SimB, the improvement is not as significant because the interaction with the environment only involves bouncing off the boundaries (see Figure 4 (c) and (d)). Meanwhile, changing IN to CIN further improves performance. This again validates our hypothesis that the key to accurate long-term feature prediction is a rich state representation extracted from the image." }, { "heading": "5.3 DOES LEARNED MODEL GENERALIZE TO UNSEEN CONFIGURATIONS?", "text": "The general applicability of RoI features has been extensively verified in the computer vision community. As one of the benefits, our method can generalize to novel environment configurations without any modifications or online training. We test this claim on several novel environments unseen during training. Specifically, we construct: 1) a simulated billiards dataset containing 5 balls of radius 2 (SimB-5); 2) PHYRE-C, where the test tasks are not seen during training; and 3) ShapeStacks with 4 stacked blocks (SS-4). The results are shown in Table 2.
Since VIN needs a fixed number of objects as input, it cannot generalize to a different number of objects; we therefore do not report its performance on SimB-5, PHYRE-C, and SS-4. In the SimB-5 and PHYRE-C settings, where generalization to different numbers of objects and different appearances is required, our method reduces the prediction error by 75%. In SS-4, the improvement is not as significant as in the previous two settings because, as mentioned above, cropped images may provide enough context on this simpler dataset." }, { "heading": "5.4 HOW WELL CAN THE LEARNED MODEL BE USED FOR PLANNING ACTIONS?", "text": "The advantage of using a general-purpose, task-agnostic prediction model is that it can support downstream planning and reasoning tasks. In this section, we evaluate our prediction model on the recently proposed and challenging PHYRE benchmark (Bakhtin et al., 2019) and on simulated billiards planning tasks.
PHYRE. We use the BALL-tier of the environment. In this task, we need to place one red ball at a certain location such that the green ball touches another blue/purple object (see Figure 1, right, for an example). Bakhtin et al. (2019) train a classification network whose inputs are the first image and a candidate action and whose output is whether the action leads to success. Such a method does not utilize dynamics information. In contrast, we can train a classifier on top of the predicted objects' features so that it utilizes dynamics information and makes more accurate classifications.
During training, we use the same prediction model as described in the previous sections, except that T_train = 10, and attach an extra classifier to the objects' features. Specifically, we concatenate each object's features at the 0th, 3rd, 6th, and 9th timesteps and pass them through two fully-connected layers to get a feature trajectory for each object. Then we take the average of all the objects' features and pass it through one fully-connected layer to get a score indicating whether this placement solves the task. We minimize the cross-entropy loss between the score and the ground-truth label indicating whether the action is a success. A minimal sketch of this classifier head is given below.
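This sketch is an illustrative reconstruction of the head described above, assuming PyTorch; the hidden width and the exact layer arrangement are assumptions, as the text only specifies the selected timesteps, the two-layer per-object MLP, and the final scoring layer.
```python
import torch
import torch.nn as nn

class TaskSuccessHead(nn.Module):
    # Classifier on top of predicted object features for PHYRE (Section 5.4).
    # `hidden` is an assumed width; the timesteps (0, 3, 6, 9) follow the text.
    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.traj = nn.Sequential(              # two fully-connected layers
            nn.Linear(4 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.score = nn.Linear(hidden, 1)       # one fully-connected layer

    def forward(self, obj_feats):
        # obj_feats: (m, T, feat_dim) flattened predicted features per object.
        x = obj_feats[:, [0, 3, 6, 9]].flatten(1)  # concat the four timesteps
        x = self.traj(x).mean(dim=0)               # per-object MLP, then average
        return self.score(x)                       # logit: does the action solve it?

# Training sketch: binary cross-entropy against the success label, e.g.
# loss = nn.functional.binary_cross_entropy_with_logits(logit, label)
```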
Note that, different from previous work (Bakhtin et al., 2019; Girdhar et al., 2020), our model does not need to convert the input image to a 7-channel segmentation map, since object information is already utilized by our object-centric representation. During testing, we enumerate the first 10,000 actions from the pre-computed action set of Bakhtin et al. (2019) and render the red ball on the initial image as our prediction model's input. Our final output is a list of actions sorted by the model's output score.
We report the AUCCESS metric on the official 10 folds of train/test splits for both the within-task and cross-task generalization settings. The results are in Table 3. Our method achieves about an 8-point improvement over the strong DQN baseline (Bakhtin et al., 2019) in the within-task generalization setting. In the more challenging cross-task generalization setting, where the environments may not be seen during training, our method is 6 points higher than DQN.
SimB Planning. We consider two tasks in the SimB environment: 1) Billiards Target State. Given an initial configuration and a final configuration after 40 timesteps, the goal is to find one action that leads to the target configuration. We report the smallest distance between the trajectory over timesteps 35-45 and the target position. 2) Billiards Hitting. Given the initial configuration, the goal is to find an action that hits the other two balls within 50 timesteps.
We first train a forward model that takes an image and an action as input, predicts the first 4 object positions, and renders them to images. The rendered images are then passed into our prediction model. We score each action according to the similarity between the generated trajectory and the goal state, and the action with the highest score is selected. The results are shown in Table 4. Our method achieves the best performance on all tasks. The full planning algorithm and implementation details are included in the appendix." }, { "heading": "5.5 QUALITATIVE RESULTS", "text": "In Figure 3, we show qualitative prediction results for our method as well as for the OM and CVP baselines. In Figure 4, we compare our predictions with the ground truth on the other three datasets. More results and videos are available on our website." }, { "heading": "6 CONCLUSIONS", "text": "In this paper, we leverage modern computer vision techniques to propose Region Proposal Interaction Networks for physical interaction reasoning with visual inputs. We show that our general yet simple method achieves a significant improvement and can generalize across both simulated and real-world environments for long-range prediction and planning. We believe this method may serve as a good benchmark for developing future methods in the field of learning intuitive physics, as well as for their application to real-world robotics." }, { "heading": "ACKNOWLEDGEMENT", "text": "This work is supported in part by DARPA MCS and DARPA LwLL. We would like to thank the members of BAIR for fruitful discussions and comments." }, { "heading": "B PLANNING DETAILS", "text": "Given an initial state (represented by an image) and a goal, we aim to produce an action that can lead to the goal from the initial state. Our planning algorithm works in a similar way to visual imagination (Fragkiadaki et al., 2015): first, we select a candidate action a from a candidate action set A; then we generate the input images I.
For SimB, we train a forward model that takes the initial image and the action embedding and generates object positions for the next 3 steps; we then use the simulator to generate the 3 corresponding images. For PHYRE, we convert the action to the red ball using the simulator and obtain the initial image. After that, we run our model as described in Section 5.4, and the score of each action is simply the classifier's output. We then select the action with the maximum score.
We introduce the action set for each task in Section B.1 and how to design the distance functions in Section B.2. A summary of our algorithm is given in Algorithm 1.
B.1 CANDIDATE ACTION SETS
For simulated billiards, the action is 3-dimensional. The first two dimensions stand for the direction of the force, and the last dimension stands for the magnitude of the force. During planning, we enumerate over 5 different magnitudes and 12 different angles, leading to 60 possible actions. All initial conditions are guaranteed to have a solution.
For PHYRE, the action is also 3-dimensional. The first two dimensions stand for the location at which to place the red ball, and the last dimension stands for the radius of the ball. Following Bakhtin et al. (2019), we use the first 10k actions provided by the benchmark.
B.2 DISTANCE FUNCTION
Init-End State Error. Denote the given target locations of the m objects by y ∈ R^{m×2}. We use the following distance function, which measures the distance between the final rollout locations and the target locations:
D = Σ_{i=1}^{m} Σ_{j=1}^{2} (p̂_{T,i,j} − y_{i,j})^2  (5)
Hitting Accuracy. Denote the given initial locations of the m objects by x ∈ R^{m×2}, and suppose we apply force to object i′. We use the following distance function, which prefers a larger moving distance for the objects other than i′:
D = −min_{i≠i′} Σ_{j=1}^{2} (p̂_{T,i,j} − x_{i,j})^2  (6)
PHYRE task. Since we already train a classifier to predict whether the predicted trajectory leads to a successful solution of the current task, at test time we can directly use it to score whether the current action will lead to a successful solution. The distance function is simply the negative output score.
B.3 PLANNING ALGORITHM
Algorithm 1: Planning Algorithm for Simulated Billiards and PHYRE
Input: candidate actions A = {a_i}, initial state x, end state y (optional)
Output: action a*
Initialize D* = ∞
for a in A do
    I = Simulation(x, a)
    p̂ = PredictionModel(I)
    compute D according to the task, as in B.2
    if D < D* then
        D* = D; a* = a
    end
end" }, { "heading": "C PHYRE 10 FOLD RESULTS", "text": "To enable future work to compare with our method, we provide AUCCESS scores for all folds in Table 5. The numbers for RAND and DQN are taken from Bakhtin et al. (2019)." } ]
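For concreteness, Algorithm 1 (Appendix B.3) admits a direct rendering in Python. This is an illustrative sketch only, not released code: the callables `simulate`, `prediction_model`, and `distance` are hypothetical stand-ins for the simulator rendering step, the trained RPIN model, and the task-specific distances of Appendix B.2.
```python
# Illustrative Python rendering of Algorithm 1 (Appendix B.3). All three
# helper callables are hypothetical stand-ins passed in by the caller.
def plan(candidate_actions, init_state, simulate, prediction_model, distance,
         goal=None):
    best_action, best_dist = None, float("inf")    # D* initialized to infinity
    for action in candidate_actions:
        frames = simulate(init_state, action)      # render model inputs I
        traj = prediction_model(frames)            # predicted object trajectories
        d = distance(traj, init_state, goal)       # Eq. (5), Eq. (6), or -score
        if d < best_dist:
            best_dist, best_action = d, action
    return best_action
```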
2021
LEARNING LONG-TERM VISUAL DYNAMICS WITH REGION PROPOSAL INTERACTION NETWORKS